Met police accused of using hackers to access protesters’ emails
April 5, 2017
Exclusive: Watchdog investigates claim that secretive unit worked with Indian police to obtain campaigners’ passwords
An anonymous letter claimed the Scotland Yard unit accessed activists’ email accounts for ‘a number of years’.
The police watchdog is investigating allegations that a secretive Scotland Yard unit used hackers to illegally access the private emails of hundreds of political campaigners and journalists.
The allegations were made by an anonymous individual who says the unit worked with Indian police, who in turn used hackers to illegally obtain the passwords of the email accounts of the campaigners, and some reporters and press photographers.
The person, who says he or she previously worked for the intelligence unit that monitors the activities of political campaigners, detailed their concerns in a letter to the Green party peer Jenny Jones. The peer passed on the allegations to the Independent Police Complaints Commission (IPCC), which is investigating.
According to the writer of the letter, hacked passwords were passed to the Metropolitan police unit, which then regularly checked the emails of the campaigners and the media to gather information. The letter to Jones listed the passwords of environmental campaigners, four of whom were from Greenpeace. Several confirmed they matched the ones they had used to open their emails.
The letter said: “For a number of years the unit had been illegally accessing the email accounts of activists. This has largely been accomplished because of the contact that one of the officers had developed with counterparts in India who in turn were using hackers to obtain email passwords.”
Jones said: “There is more than enough to justify a full-scale criminal investigation into the activities of these police officers and referral to a public inquiry. I have urged the Independent Police Complaints Commission to act quickly to secure further evidence and to find out how many people were victims of this nasty practice.”
The letter also alleges that emails of reporters and photographers, including two working for the Guardian, were monitored. A spokesperson for the Guardian said: “Allegations that the Metropolitan police has accessed the email accounts of Guardian journalists are extremely concerning and we expect a full and thorough investigation into these claims.”
The IPCC has for several months been investigating claims that the national domestic extremism and disorder intelligence unit shredded a large number of documents over a number of days in May 2014.
Last month the IPCC said it had uncovered evidence suggesting the documents had been destroyed despite a specific instruction that files should be preserved to be examined by a judge-led public inquiry into the undercover policing of political groups.
The letter claimed that the shredding “has been happening for some time and on a far greater scale than the IPCC seems to be aware of”. The author added that “the main reason for destroying these documents is that they reveal that [police] officers were engaged in illegal activities to obtain intelligence on protest groups”.
The letter to Jones lists 10 individuals, alongside specific passwords that they used to access their email accounts. Lawyers at Bindmans, who are representing Jones, contacted six on the list and, after outlining the allegations, asked them to volunteer their passwords.
Five of them gave the identical password that had been identified in the letter. The sixth gave a password that was almost the same. The remaining four on the list have yet to be approached or cannot be traced.
Colin Newman has for two decades volunteered to help organise mainly local Greenpeace protests which he says were publicised to the media. He used the password specified in the letter for his private email account between the late 1990s and last year.
Newman said he felt “angry and violated, especially for the recipients”. He added: “I am open about my actions as I make a stand and am personally responsible for those, but it is not fair and just that others are scrutinised.
“I am no threat. There is no justification for snooping in private accounts unless you have a reason to do so, and you have the authority to do that.”
He said he had been cautioned by the police once, for trespassing on the railway during a protest against coal about two years ago.
Another on the list was Cat Dorey who has worked for Greenpeace, both as an employee and a volunteer, since 2001. She said all the protests she had been involved in were non-violent.
The password specified in the letter sent to Jones had been used for emails that contained private information about her family and friends.
She said: “Even though Greenpeace UK staff, volunteers, and activists were always warned to assume someone was listening to our phone conversations or reading our emails, it still came as a shock to find out I was being watched by the police. It’s creepy to think of strangers reading my personal emails.”
In 2005, she was part of a group of Greenpeace protesters who were sentenced to 80 hours of community service after installing solar panels on the home of the then deputy prime minister, John Prescott, in a climate change demonstration.
According to the letter, the “most sensitive side of the work was monitoring the email accounts of radical journalists who reported on activist protests (as well as sympathetic photographers) including at least two employed by the Guardian newspaper”. None were named.
Investigators working for the IPCC have met Jones twice with her lawyer, Jules Carey, and have asked to interview the peer. An IPCC spokesperson said: “After requesting and receiving a referral by the Metropolitan police service, we have begun an independent investigation related to anonymous allegations concerning the accessing of personal data. We are still assessing the scope of the investigation and so we are not able to comment further.”
The letter’s writer said he or she had spoken out about the “serious abuse of power” because “over the years, the unit had evolved into an organisation that had little respect for the law, no regard for personal privacy, encouraged highly immoral activity and, I believe, is a disgrace”.
In recent years, the unit has monitored thousands of political activists, drawing on information gathered by undercover officers and informants as well as from open sources such as websites. Police chiefs say they need to keep track of a wide pool of activists to identify the small number who commit serious crime to promote their cause.
But the unit has come in for criticism after it was revealed to be compiling files on law-abiding campaigners, including John Catt, a 91-year-old pensioner with no criminal record, as well as senior members of the Green party including the MP Caroline Lucas.
The Metropolitan police said the IPCC had made it “aware of anonymous allegations concerning the accessing of personal data, and requested the matters were referred to them by the MPS. This was done. The MPS is now aware that the IPCC are carrying out an independent investigation.”
Tuesday 21 March 2017 16.35 GMT Last modified on Wednesday 22 March 2017 00.50 GMT
Find this story at 22 March 2017
© 2017 Guardian News and Media Limited
The letter I received about alleged police hacking shows how at risk we all are
April 5, 2017
The whistleblower lists damning claims of spying on innocent individuals by a secretive Scotland Yard unit. It’s now vital that we hold the police to account
‘When the police act with impunity all of our private lives are put at risk’
As the only Green party peer I receive a lot of post to my office in the House of Lords. Rarely, though, do I open letters like the one that has been revealed. The anonymous writer alleged that there was a secretive unit within Scotland Yard that has used hackers to illegally access the emails of campaigners and journalists. It included a list of 10 people and the passwords to their email accounts.
As soon as I read the first sentence of the letter, I knew the content would be astonishing. When some aspects of the letter were corroborated by lawyers and by those on the list, I was convinced that we owed it to this brave whistleblower to hold the police to account.
The list of allegations is lengthy. It includes illegal hacking of emails, using an Indian-based operation to do the dirty work, shredding documents and using sex as a tool of infiltration. And these revelations matter to all of us. None of us knows whether the police organised for our emails to be hacked, but all of us know the wide range of personal information that our emails contain. It might be medical conditions, family arguments, love lives or a whole range of drug- or alcohol-related misdemeanours.
When the police act with impunity, all of our private lives are put at risk. Whether you’re involved in a local campaign against library closures, a concerned citizen worried about air pollution or someone working for a charity – who’s to say that officers won’t be spying on the emails you send? The police put me on the domestic extremism database during the decade when I was on the Metropolitan Police Authority signing off their budgets and working closely with officers on the ground to fight crimes such as road crime and illegal trafficking. If someone in my position – no criminal record and on semi-friendly terms with the Met commissioner – can end up on the database, then you can too.
The truth is that without the bravery and professionalism of two serving police officers who have blown the whistle on state snooping I would know nothing about my files, and those of other campaigners, being shredded by the Domestic Extremism Unit. We would have had no suspicion that those files had been shredded to cover up the illegal hacking of personal and work e-mails by the police.
Please don’t fall for the old establishment lie that the problem is a few rotten apples. This alleged criminality is the result of a deliberate government policy of using the police and security services to suppress dissent and protest in order to protect company profits and the status quo. Such an approach inevitably leads to police officers overstepping the mark as they feel emboldened by those at the top levels of government and an immunity from prosecution provided by senior officers keen to please the people who decide their budgets.
The police don’t always act as neutral agents of the law. We know that the Thatcher government’s determination to break the miners’ strike led to the Orgreave confrontation in 1984. There are still allegations about the links between the police and those running blacklisting databases that led to hundreds of construction workers being condemned to unemployment and poverty.
And don’t mistake this for a partisan attack on Conservative politicians. Theresa May has forced through the draconian Investigatory Powers Act, but the Labour party too has been timid at best in opposing this snoopers’ charter. Indeed it was the Blair government that left a legacy of draconian public order laws, and which broadly defined the anti-terrorism legislation upon which an edifice of modern surveillance powers has been constructed.
Many are unaware that joining an anti-fracking group, or going on a demonstration, could get you labelled a domestic extremist, photographed, questioned and followed for months or even years – without ever having been convicted of a crime.
It’s only by speaking out against these intrusions that we are able to challenge this rotten culture of impunity. After all, it was David Cameron who gave us the Hillsborough inquiry and Theresa May who set up the Pitchford inquiry into undercover officers. Politicians don’t always do things for good reasons, but they do respond to public pressure.
Change is possible, but in the meantime, we should be doing everything we can to make it hard for the police to spy on us. Use encryption, two-step email security and other precautions suggested by organisations such as Liberty. Don’t stop saying what you think, or working to make the world a better place, but do assume that the police will be working to protect the companies, banks or energy companies that you want to challenge.
It isn’t how things should be, but the evidence shows that is the way things are.
A campaign to get the police out of the lives of environmentalists and social justice campaigners is a good start, but it will fail unless it reaches out – starting by working with those in the Muslim community intimidated by Prevent.
Above all, we must convince the middle ground of society that everyone will be safer if the security services focus on what we all want them to do – stopping terrorists and serious criminals. This is not unreasonable, and the starting point is a change to the legislation so that it narrows the definition of terrorism to exclude the nonviolent, noisy and rebellious.
Wednesday 22 March 2017 15.23 GMT Last modified on Wednesday 22 March 2017 17.29 GMT
Were the hackers who broke into the DNC’s email really Russian?
April 5, 2017
The question of whether political operative Roger Stone helped Russian hackers break into the email of Democratic politicians, to some people, invites another: Who says the hackers were Russian?
The FBI does, and so do several U.S. intelligence agencies, as they’ve declared repeatedly over the past five months. But among private-sector computer security companies, not everybody thinks the case is proven.
“I have no problem blaming Russia for what they do, which is a lot,” said Jeffrey Carr of the international cybersecurity company Taia Global Inc. “I just don’t want to blame them for things we don’t know that they did. It may turn out that they’re guilty, but we are very short on evidence here.”
As Carr notes, the FBI never examined the servers that were hacked at the Democratic National Committee. Instead, the DNC used the private computer security company CrowdStrike to detect and repair the penetrations.
“All the forensic work on those servers was done by CrowdStrike, and everyone else is relying on information they provided,” said Carr. “And CrowdStrike was the one to declare this the work of the Russians.”
The CrowdStrike argument relies heavily on the fact that remnants of a piece of malware known as AGENT-X were found in the DNC computers. AGENT-X collects and transmits hacked files to rogue computers.
“AGENT-X has been around for ages and ages, and its use has always been attributed to the Russian government, a theory that’s known in the industry as ‘exclusive use,’” Carr said. “The problem with exclusive use is that it’s completely false. Unlike a bomb or an artillery shell, malware doesn’t detonate on impact and destroy itself.
“You can recover it, reverse-engineer it, and reuse it. The U.S. government learned a lesson about that when it created the Stuxnet computer worm to destroy Iran’s nuclear program. Stuxnet survived and now other people have it.”
Carr said he is aware of at least two working copies of AGENT-X outside Russian hands. One is in the possession of a group of Ukrainian hackers he has spoken with, and the other is with an American cybersecurity company. “And if an American security company has it, you can be certain other people do, too,” he said.
There’s growing doubt in the computer security industry about CrowdStrike’s theories about AGENT-X and Russian hackers, Carr said, including some critical responses to a CrowdStrike report on Russian use of the malware to disable Ukrainian artillery.
“This is a close-knit community and criticizing a member to the outside world is kind of like talking out of turn,” Carr said. “I’ve been repeatedly criticized for speaking out in public about whether the hacking was really done by the Russians. But this has to be made public, has to be addressed, and has to be acknowledged by the House and Senate Intelligence Committees.”
MARCH 24, 2017 7:00 AM
BY GLENN GARVIN
Find this story at 24 March 2017
Did the Russians Really Hack the DNC?
April 5, 2017
Russia, we are told, breached the servers of the Democratic National Committee (DNC), swiped emails and other documents, and released them to the public, to alter the outcome of the U.S. presidential election.
How substantial is the evidence backing these assertions?
Hired by the Democratic National Committee to investigate unusual network activity, the security firm Crowdstrike discovered two separate intrusions on DNC servers. Crowdstrike named the two intruders Cozy Bear and Fancy Bear, in an allusion to what it felt were Russian sources. According to Crowdstrike, “Their tradecraft is superb, operational security second to none,” and “both groups were constantly going back into the environment” to change code and methods and switch command and control channels.
On what basis did Crowdstrike attribute these breaches to Russian intelligence services? The security firm claims that the techniques used were similar to those deployed in past hacking operations that have been attributed to the same actors, while the profile of previous victims “closely mirrors the strategic interests of the Russian government”. Furthermore, it appeared that the intruders were unaware of each other’s presence in the DNC system. “While you would virtually never see Western intelligence agencies going after the same target without de-confliction for fear of compromising each other’s operations,” Crowdstrike reports, “in Russia this is not an uncommon scenario.”
Those may be indicators of Russian government culpability. But then again, perhaps not. As for the point about separate intruders, each operating independently of the other, that would seem more likely to indicate that the sources have nothing in common.
Each of the two intrusions acted as an advanced persistent threat (APT), which is an attack that resides undetected on a network for a long time. The goal of an APT is to exfiltrate data from the infected system rather than inflict damage. Several names have been given to these two actors, and most commonly Fancy Bear is known as APT28, and Cozy Bear as APT29.
The fact that many of the techniques used in the hack resembled, in varying degrees, past attacks attributed to Russia may not necessarily carry as much significance as we are led to believe. Once malware is deployed, it tends to be picked up by cybercriminals and offered for sale or trade on Deep Web black markets, where anyone can purchase it. Exploit kits are especially popular sellers. Quite often, the code is modified for specific uses. Security specialist Josh Pitts demonstrated how easy that process can be, downloading and modifying nine samples of the OnionDuke malware, which is thought to have first originated with the Russian government. Pitts reports that this exercise demonstrates “how easy it is to repurpose nation-state code/malware.” 
In another example, when SentinelOne Research discovered the Gyges malware in 2014, it reported that it “exhibits similarities to Russian espionage malware,” and is “designed to target government organizations. It comes as no surprise to us that this type of intelligence agency-grade malware would eventually fall into cybercriminals’ hands.” The security firm explains that Gyges is an “example of how advanced techniques and code developed by governments for espionage are effectively being repurposed, modularized and coupled with other malware to commit cybercrime.”
Attribution is hard, cybersecurity specialists often point out. “Once an APT is released into the wild, its spread isn’t controlled by the attacker,” writes Mark McArdle. “They can’t prevent someone from analyzing it and repurposing it for their own needs.” Adapting malware “is a well-known reality,” he continues. “Finding irrefutable evidence that links an attacker to an attack is virtually unattainable, so everything boils down to assumptions and judgment.” 
Security Alliance regards security firm FireEye’s analysis that tied APT28 to the Russian government as based “largely on circumstantial evidence.” FireEye’s report “explicitly disregards targets that do not seem to indicate sponsorship by a nation-state,” having excluded various targets because they are “not particularly indicative of a specific sponsor’s interests.”  FireEye reported that the APT28 “victim set is narrow,” which helped lead it to the conclusion that it is a Russian operation. Cybersecurity consultant Jeffrey Carr reacts with scorn: “The victim set is narrow because the report’s authors make it narrow! In fact, it wasn’t narrowly targeted at all if you take into account the targets mentioned by other cybersecurity companies, not to mention those that FireEye deliberately excluded for being ‘not particularly indicative of a specific sponsor’s interests’.” 
FireEye’s report from 2014, on which much of the DNC Russian attribution is based, found that 89 percent of the APT28 software samples it analyzed were compiled during regular working hours in St. Petersburg and Moscow. 
But compile times, like language settings, can be easily altered to mislead investigators. Mark McArdle wonders, “If we think about the very high level of design, engineering, and testing that would be required for such a sophisticated attack, is it reasonable to assume that the attacker would leave these kinds of breadcrumbs? It’s possible. But it’s also possible that these things can be used to misdirect attention to a different party. Potentially another adversary. Is this evidence the result of sloppiness or a careful misdirection?” 
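To make McArdle's point concrete, here is a minimal sketch of how trivially a compile timestamp can be forged. A Windows PE binary stores its build time as a 32-bit Unix timestamp in the COFF file header (8 bytes past the `PE\0\0` signature, per the documented format); the header bytes below are synthetic, not taken from any real sample.

```python
import struct
from datetime import datetime, timezone

# The TimeDateStamp field sits 8 bytes past the "PE\0\0" signature:
# signature (4) + Machine (2) + NumberOfSections (2), then the timestamp.
# Nothing protects those four bytes, so build times are weak evidence.

def read_build_time(pe_block: bytes) -> int:
    """Return the TimeDateStamp from a block starting at the PE signature."""
    assert pe_block[:4] == b"PE\x00\x00", "not a PE signature"
    return struct.unpack_from("<I", pe_block, 8)[0]

def forge_build_time(pe_block: bytes, new_ts: int) -> bytes:
    """Overwrite the four timestamp bytes; no other change is required."""
    patched = bytearray(pe_block)
    struct.pack_into("<I", patched, 8, new_ts)
    return bytes(patched)

# Synthetic header: x86-64 machine type, 3 sections, an arbitrary stamp.
header = b"PE\x00\x00" + struct.pack("<HHI", 0x8664, 3, 1_400_000_000) + b"\x00" * 12

print(datetime.fromtimestamp(read_build_time(header), timezone.utc))
# Re-stamp the "build" to fall inside someone else's working hours.
forged = forge_build_time(header, 1_400_050_800)
print(datetime.fromtimestamp(read_build_time(forged), timezone.utc))
```

Four bytes changed with no toolchain involved: an analyst reading the forged stamp would place the build several hours later, in whatever time zone the forger wanted to suggest.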
“If the guys are really good,” says Chris Finan, CEO of Manifold Technology, “they’re not leaving much evidence or they’re leaving evidence to throw you off the scent entirely.”  How plausible is it that Russian intelligence services would fail even to attempt such a fundamental step?
James Scott of the Institute for Critical Infrastructure Technology points out that the very vulnerability of the DNC servers constitutes a muddied basis on which to determine attribution. “Attribution is less exact in the case of the DNC breach because the mail servers compromised were not well-secured; the organization of a few hundred personnel did not practice proper cyber-hygiene; the DNC has a global reputation and is a valuable target to script kiddies, hacktivists, lone-wolf cyber-threat actors, cyber-criminals, cyber-jihadists, hail-mary threats, and nation-state sponsored advanced persistent threats; and because the malware discovered on DNC systems were well-known, publicly disclosed, and variants could be purchased on Deep Web markets and forums.”
Someone, or some group, operating under the pseudonym of Guccifer 2.0, claimed to be a lone actor in hacking the DNC servers. It is unclear what relation – if any – Guccifer 2.0 has to either of the two APT attacks on the DNC. In a PDF file that Guccifer 2.0 sent to Gawker.com, metadata indicated that it was last saved by someone with a username in Cyrillic letters. During the conversion of the file from Microsoft Word to PDF, invalid hyperlink error messages were automatically generated in the Russian language.
This would seem to present rather damning evidence. But who is Guccifer 2.0? A Russian government operation? A private group? Or a lone hacktivist? In the poorly secured DNC system, there were almost certainly many infiltrators of various stripes. Nor can it be ruled out that the metadata indicators were intentionally generated in the file to misdirect attribution. The two APT attacks have been noted for their sophistication, and these mistakes – if that is what they are – seem amateurish. Changing the language setting on a computer takes a matter of seconds, and doing so would be standard procedure for advanced cyber-warriors. On the other hand, sloppiness on the part of developers is not entirely unknown, although one would expect a nation-state to enforce strict software and document-handling procedures and to implement rigorous review processes.
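The kind of metadata at issue here is ordinary, editable text. The sketch below illustrates this with an Office-style document, whose zip archive carries author fields in `docProps/core.xml`; the Cyrillic name used is a made-up placeholder (the actual Guccifer 2.0 username is not reproduced here), and the point is simply that such fields hold whatever string the producing software chooses to write.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Office documents are zip archives; author and "last modified by" names
# live in docProps/core.xml as plain text that any tool can write. The
# name below is an illustrative placeholder, not real forensic data.

CP = "http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
DC = "http://purl.org/dc/elements/1.1/"

CORE_XML = f"""<?xml version="1.0" encoding="UTF-8"?>
<cp:coreProperties xmlns:cp="{CP}" xmlns:dc="{DC}">
  <dc:creator>Иван Иванов</dc:creator>
  <cp:lastModifiedBy>Иван Иванов</cp:lastModifiedBy>
</cp:coreProperties>"""

def make_doc(core_xml: str) -> bytes:
    """Build a minimal document archive containing only the metadata part."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("docProps/core.xml", core_xml)
    return buf.getvalue()

def last_modified_by(doc: bytes) -> str:
    """Read back the 'last modified by' string an investigator would see."""
    with zipfile.ZipFile(io.BytesIO(doc)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    return root.findtext(f"{{{CP}}}lastModifiedBy")

doc = make_doc(CORE_XML)
print(last_modified_by(doc))  # whatever string the producer embedded
```

Because the field is written, not measured, it can equally reflect carelessness or deliberate misdirection, which is exactly the ambiguity the paragraph above describes.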
At any rate, the documents posted to the Guccifer 2.0 blog do not necessarily originate from the same source as those published by WikiLeaks. Certainly, none of the documents posted to WikiLeaks possess the same metadata issues. And one hacking operation does not preclude another, let alone an insider leak.
APT28 relied on XTunnel, repurposed from open source code that is available to anyone, to open network ports and siphon data. The interesting thing about the software is its failure to match the level of sophistication claimed for APT28. The strings in the code quite transparently indicate its intent, with no attempt at obfuscation.  It seems an odd oversight for a nation-state operation, in which plausible deniability would be essential, to overlook that glaring point during software development.
Command-and-control servers remotely issue malicious commands to infected machines. Oddly, for such a key component of the operation, the command-and-control IP address in both attacks was hard-coded in the malware. This seems like another inexplicable choice, given that the point of an advanced persistent threat is to operate for an extended period without detection. A more suitable approach would be to use a Domain Name System (DNS) address, which is a decentralized computer naming system. That would provide a more covert means of identifying the command-and-control server.  Moreover, one would expect that address to be encrypted. Using a DNS address would also allow the command-and-control operation to easily move to another server if its location is detected, without the need to modify and reinstall the code.
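The trade-off described above can be sketched in a few lines. The domain and IP addresses are RFC 5737/2606 documentation placeholders, not real infrastructure, and a plain dictionary stands in for a real DNS resolver such as `socket.gethostbyname`.

```python
import socket

# A hard-coded C2 address is fixed at compile time: if the server is
# seized, every infected machine needs a new binary. A DNS-based implant
# ships a name instead and resolves it at run time.

HARD_CODED_C2 = "203.0.113.7"  # baked into the binary (placeholder IP)

def locate_c2(resolve=socket.gethostbyname, domain="c2.example.com"):
    """DNS-style implant: resolve the C2 name each time it is needed."""
    return resolve(domain)

records = {"c2.example.com": "203.0.113.7"}  # stand-in DNS zone
first = locate_c2(records.get)

# The server's location is detected; the operator re-points one DNS
# record and every existing infection silently follows.
records["c2.example.com"] = "198.51.100.9"
second = locate_c2(records.get)
print(first, "->", second)
```

That one-record change, invisible to the infected hosts, is why a hard-coded address is such an odd choice for an operation built around long-term stealth.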
One of the IP addresses is claimed to be a “well-known APT 28” command-and-control address, while the second is said to be linked to Russian military intelligence.  The first address points to a server located in San Jose, California, and is operated by a server hosting service.  The second server is situated in Paris, France, and owned by another server hosting service.  Clearly, these are servers that have been compromised by hackers. It is customary for hackers to route their attacks through vulnerable computers. The IP addresses of compromised computers are widely available on the Deep Web, and typically a hacked server will be used by multiple threat actors. These two particular servers may or may not have been regularly utilized by Russian Intelligence, but they were not uniquely so used. Almost certainly, many other hackers would have used the same machines, and it cannot be said that these IP addresses uniquely identify an infiltrator. Indeed, the second IP address is associated with the common Trojan viruses Agent-APPR and Shunnael. 
“Everyone is focused on attribution, but we may be missing the bigger truth,” says Joshua Corman, director of the Cyber Statecraft Initiative at the Atlantic Council. “[T]he level of sophistication required to do this hack was so low that nearly anyone could do it.” 
In answer to critics, the Department of Homeland Security and the FBI issued a joint analysis report, which presented “technical details regarding the tools and infrastructure used” by Russian intelligence services “to compromise and exploit networks” associated with the U.S. election, U.S. government, political, and private sector entities. The report code-named these activities “Grizzly Steppe.” 
For a document that purports to offer strong evidence on behalf of U.S. government allegations of Russian culpability, it is striking how weak and sloppy the content is. Included in the report is a list of every threat group ever said to be associated with the Russian government, most of which are unrelated to the DNC hack. It appears that various governmental organizations were asked to send a list of Russian threats, and then an official lacking IT background compiled that information for the report, and the result is a mishmash of threat groups, software, and techniques. “PowerShell backdoor,” for instance, is a method used by many hackers, and in no way describes a Russian operation.
Indeed, one must take the list on faith, because nowhere in the document is any evidence provided to back up the claim of a Russian connection. And as the majority of items on the list are unrelated to the DNC hack, one wonders what the point is. But it bears repeating: even where software can be traced to Russian origination, it does not necessarily indicate exclusive usage. Jeffrey Carr explains: “Once malware is deployed, it is no longer under the control of the hacker who deployed it or the developer who created it. It can be reverse-engineered, copied, modified, shared and redeployed again and again by anyone.” Carr quotes security firm ESET in regard to the Sednit group, one of the items on the report’s list, and which is another name for APT28: “As security researchers, what we call ‘the Sednit group’ is merely a set of software and the related infrastructure, which we can hardly correlate with any specific organization.” Carr points out that X-Agent software, which is said to have been utilized in the DNC hack, was easily obtained by ESET for analysis. “If ESET could do it, so can others. It is both foolish and baseless to claim, as Crowdstrike does, that X-Agent is used solely by the Russian government when the source code is there for anyone to find and use at will.” 
The salient impression given by the government’s report is how devoid of evidence it is. For that matter, the majority of the content is taken up by what security specialist John Hinderaker describes as “pedestrian advice to IT professionals about computer security.” As for the report’s indicators of compromise (IoC), Hinderaker characterizes these as “tools that are freely available and IP addresses that are used by hackers around the world.” 
In conjunction with the report, the FBI and Department of Homeland Security provided a list of IP addresses they identified with Russian intelligence services.  Wordfence analyzed the IP addresses as well as a PHP malware script provided by the Department of Homeland Security. In analyzing the source code, Wordfence discovered that the software used was P.A.S., version 3.1.0. It then found that the website that manufactures the malware had a site country code indicating that it is Ukrainian. The current version of the P.A.S. software is 4.1.1, which is much newer than that used in the DNC hack, and the latest version has changed “quite substantially.” Wordfence notes that not only is the software “commonly available,” but also that it would be reasonable to expect “Russian intelligence operatives to develop their own tools or at least use current malicious tools from outside sources.” To put it plainly, Wordfence concludes that the malware sample “has no apparent relationship with Russian intelligence.” 
Wordfence also analyzed the government’s list of 876 IP addresses included as indicators of compromise. The sites are widely dispersed geographically, and of those with a known location, the United States has the largest number. A large number of the IP addresses belong to low-cost server hosting companies. “A common pattern that we see in the industry,” Wordfence states, “is that accounts at these hosts are compromised and those hacked sites are used to launch attacks around the web.” Fifteen percent of the IP addresses are currently Tor exit nodes. “These exit nodes are used by anyone who wants to be anonymous online, including malicious actors.” 
If one also takes into account the IP addresses that not only point to current Tor exits, but also those that once belonged to Tor exit nodes, then these comprise 42 percent of the government’s list.  “The fact that so many of the IPs are Tor addresses reveals the true sloppiness of the report,” concludes network security specialist Jerry Gamblin. 
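The overlap test behind the Wordfence and Gamblin figures amounts to a set intersection: the published indicator IPs against a snapshot of known Tor exit addresses. The sketch below uses RFC 5737 documentation addresses as toy data; the percentages it produces come from that toy data, not from the actual Grizzly Steppe list.

```python
# Intersect a published indicator-of-compromise list with a set of known
# Tor exit-node IPs. All addresses are documentation placeholders.

ioc_ips = {
    "192.0.2.10", "192.0.2.11", "198.51.100.20", "198.51.100.21",
    "203.0.113.30", "203.0.113.31", "203.0.113.32",
}
tor_exits = {"192.0.2.10", "198.51.100.20", "203.0.113.30"}  # exit-list snapshot

def tor_share(iocs: set, exits: set) -> float:
    """Fraction of the indicator list that is also a Tor exit node."""
    return len(iocs & exits) / len(iocs)

share = tor_share(ioc_ips, tor_exits)
print(f"{share:.0%} of these indicators are Tor exits")
```

A large Tor fraction undercuts the list's value as evidence, since any anonymous user of those exits, malicious or not, would surface the same addresses.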
Cybersecurity analyst Robert Graham was particularly blistering in his assessment of the government’s report, characterizing it as “full of garbage.” The report fails to tie the indicators of compromise to the Russian government. “It contains signatures of viruses that are publicly available, used by hackers around the world, not just Russia. It contains a long list of IP addresses from perfectly normal services, like Tor, Google, Dropbox, Yahoo, and so forth. Yes, hackers use Yahoo for phishing and malvertising. It doesn’t mean every access of Yahoo is an ‘indicator of compromise’.” Graham compared the list of IP addresses against those accessed by his web browser and found two matches. “No,” he continues. “This doesn’t mean I’ve been hacked. It means I just had a normal interaction with Yahoo. It means the Grizzly Steppe IoCs are garbage.” Graham goes on to point out that “what really happened” with the supposed Russian hack into the Vermont power grid “is that somebody just checked their Yahoo email, thereby accessing one of the same IP addresses I did. How they get from the facts (one person accessed Yahoo email) to the story (Russians hacked power grid)” is U.S. government “misinformation.”
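Graham’s sanity check is straightforward to reproduce: intersect a published indicator list with the addresses one’s own machine has contacted. The sketch below is an illustration of that idea, not Graham’s actual tooling, and the addresses in it are made up.

```python
# Sketch of a Graham-style sanity check: intersect a published list of
# "indicators of compromise" with the addresses a machine has actually
# contacted. As Graham notes, a match proves nothing by itself -- it may
# just mean someone checked their Yahoo mail.

def parse_ips(lines):
    """Collect one IP address per line, skipping blanks and '#' comments."""
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}

def ioc_overlap(ioc_lines, traffic_lines):
    """Return the IoC addresses that also appear in observed traffic."""
    return parse_ips(ioc_lines) & parse_ips(traffic_lines)

# Toy illustration (all addresses are documentation-range placeholders):
iocs = ["# published IoC list", "203.0.113.7", "198.51.100.22"]
seen = ["198.51.100.22", "192.0.2.1"]
print(ioc_overlap(iocs, seen))  # the single shared address
```

The point of the exercise is exactly Graham’s: because the list mixes in addresses from ordinary shared services, an overlap with perfectly innocent traffic is expected, not incriminating.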
The indicators of compromise, in Graham’s assessment, were “published as a political tool, to prove they have evidence pointing to Russia.” As for the P.A.S. web shell, it is “used by hundreds if not thousands of hackers, mostly associated with Russia, but also throughout the rest of the world.” Relying on the government’s sample for attribution is problematic: “Just because you found P.A.S. in two different places doesn’t mean it’s the same hacker.” A web shell “is one of the most common things hackers use once they’ve broken into a server,” Graham observes. 
Although cybersecurity analyst Robert M. Lee is inclined to accept the government’s position on the DNC hack, he feels the joint analysis report “reads like a poorly done vendor intelligence report stringing together various aspects of attribution without evidence.” The report’s list “detracts from the confidence because of the interweaving of unrelated data.” The information presented is not sourced, he adds. “It’s a random collection of information and in that way, is mostly useless.” Indeed, the indicators of compromise have “a high rate of false positives for defenders that use them.” 
Among the government’s list of Russian actors are Energetic Bear and Crouching Yeti, two names for the same threat group. In its analysis, Kaspersky Lab found that most of the group’s victims “fall into the industrial/machinery building sector,” and that it is “not currently possible to determine the country of origin.” Although the group is listed in the government’s report, the report does not suggest it played a part in the DNC hack. But it does serve as an example of the uncertainty surrounding government claims about Russian hacking operations in general.
CosmicDuke is one of the software packages listed as tied to Russia. SecureList, however, finds that unlike the software’s predecessor, CosmicDuke targets those who traffic in “controlled substances, such as steroids and hormones.” One possibility is that CosmicDuke is used by law enforcement agencies, while another possibility “is that it’s simply available in the underground and purchased by various competitors in the pharmaceutical business to spy on each other.” In either case, whether or not the software is utilized by the Russian government, there is a broader base for its use. 
The intent of the joint analysis report was to provide evidence of Russian state responsibility for the DNC hack. But nowhere does it do so. Mere assertions are meant to persuade. How much evidence does the government have? The Democratic Party claims that the FBI never requested access to DNC servers.  The FBI, for its part, says it made “multiple requests” for access to the DNC servers and was repeatedly turned down.  Either way, it is a remarkable admission. In a case like this, the FBI would typically conduct its own investigation. Was the DNC afraid the FBI might come to a different conclusion than the DNC-hired security firm Crowdstrike? The FBI was left to rely on whatever evidence Crowdstrike chose to supply. During its analysis of DNC servers, Crowdstrike reports that it found evidence of APT28 and APT29 intrusions within two hours. Did it stop there, satisfied with what it had found? Or did it continue to explore whether additional intrusions by other actors had taken place?
In an attempt to further inflame the hysteria generated from accusations of Russian hacking, the Office of the Director of National Intelligence published a declassified version of a document briefed to U.S. officials. The information was supplied by the CIA, FBI, and National Security Agency, and was meant to cement the government’s case. Not surprisingly, the report received a warm welcome in the mainstream media, but what is notable is that it offers not a single piece of evidence to support its claim of “high confidence” in assessing that Russia hacked the DNC and released documents to WikiLeaks. Instead, the bulk of the report is an unhinged diatribe against Russian-owned RT media. The content is rife with inaccuracies and absurdities. Among the heinous actions RT is accused of are running “anti-fracking programming, highlighting environmental issues and the impacts on health issues,” airing a documentary on Occupy Wall Street, and hosting third-party candidates during the 2012 election.
The report would be laughable, were it not for the fact that it is being played up for propaganda effect, bypassing logic and appealing directly to unexamined emotion. The 2016 election should have been a wake-up call for the Democratic Party. Instead, predictably enough, no self-examination has taken place, as the party doubles down on the neoliberal policies that have impoverished tens of millions and backs the military interventions that have sown so much death and chaos. Instead of thoughtful analysis, the party is lashing out and blaming Russia for its loss to an opponent that even a weak candidate would have beaten handily.
Mainstream media start with the premise that the Russian government was responsible, despite a lack of convincing evidence. They then leap to the fallacious conclusion that because Russia hacked the DNC, only it could have leaked the documents.
So, did the Russian government hack the DNC and feed documents to WikiLeaks? There are really two questions here: who hacked the DNC, and who released the DNC documents? These are not necessarily the same. An earlier intrusion into German parliament servers was blamed on the Russians, yet the release of documents to WikiLeaks is thought to have originated from an insider. Had the Russians hacked into the DNC, it might have been to gather intelligence, while another actor released the documents. But it is far from certain that Russian intelligence services had anything to do with the intrusions. Julian Assange says that he did not receive the DNC documents from a nation-state. It has been pointed out that Russia could have used a third party to pass along the material. Fair enough, but former UK diplomat Craig Murray asserts: “I know who the source is… It’s from a Washington insider. It’s not from Russia.”
There are too many inconsistencies and holes in the official story. In all likelihood, there were multiple intrusions into DNC servers, not all of which have been identified. The public ought to be wary of quick claims of attribution. It requires a long and involved process to arrive at a plausible identification, and in many cases the source can never be determined. As Jeffrey Carr explains, “It’s important to know that the process of attributing an attack by a cybersecurity company has nothing to do with the scientific method. Claims of attribution aren’t testable or repeatable because the hypothesis is never proven right or wrong.” 
Russia-bashing is in full swing, and there does not appear to be any letup in sight. We are plunging headlong into a new Cold War, riding on a wave of propaganda-induced hysteria. The self-serving claims fueling this campaign need to be challenged every step of the way. Surrendering to evidence-free emotional appeals would only serve those who arrogantly advocate confrontation and geopolitical domination.
 Dmitri Alperovitch, “Bears in the Midst: Intrusion into the Democratic National Committee,” Crowdstrike blog, June 15, 2016.
 Josh Pitts, “Repurposing OnionDuke: A Single Case Study Around Reusing Nation-state Malware,” Black Hat, July 21, 2015.
 Udi Shamir, “The Case of Gyges, the Invisible Malware,” SentinelOne, July 2014.
 Mark McArdle, “’Whodunnit?’ Why the Attribution of Hacks like the Recent DNC Hack is so Difficult,” Esentire, July 28, 2016.
 “The Usual Suspects: Faith-Based Attribution and its Effects on the Security Community,” October 21, 2016.
 Jeffrey Carr, “The DNC Breach and the Hijacking of Common Sense,” June 20, 2016.
 “APT28: A Window into Russia’s Cyber Espionage Operations?” FireEye, October 27, 2014.
 Mark McArdle, “’Whodunnit?’ Why the Attribution of Hacks like the Recent DNC Hack is so Difficult,” Esentire, July 28, 2016.
 Patrick Howell O’Neill, “Obama’s Former Cybersecurity Advisor Says Only ‘Idiots’ Want to Hack Russia Back for DNC Breach,” The Daily Dot, July 29, 2016.
 James Scott, Sr., “It’s the Russians! … or is it? Cold War Rhetoric in the Digital Age,” ICIT, December 13, 2016.
 Sam Biddle and Gabrielle Bluestone, “This Looks like the DNC’s Hacked Trump Oppo File,” Gawker, June 15, 2016.
Dan Goodin, “’Guccifer’ Leak of DNC Trump Research Has a Russian’s Fingerprints on It,” Ars Technica, June 16, 2016.
 Pat Belcher, “Tunnel of Gov: DNC Hack and the Russian XTunnel,” Invincea, July 28, 2016.
 Seth Bromberger, “DNS as a Covert Channel within Protected Networks,” National Electric Sector Cyber Security Organization, January 25, 2011.
 Thomas Rid, “All Signs Point to Russia Being Behind the DNC Hack,” Motherboard, July 25, 2016.
 Paul Roberts, “Security Pros Pan US Government Report on Russian Hacking,” The Security Ledger, December 30, 2016.
 “Grizzly Steppe – Russian Malicious Cyber Activity,” JAR-16-20296, National Cybersecurity & Communications Integration Center, Federal Bureau of Investigation, December 29, 2016.
 Jeffrey Carr, “FBI/DHS Joint Analysis Report: A Fatally Flawed Effort,” Jeffrey Carr/Medium, December 30, 2016.
 John Hinderaker, “Is ‘Grizzly Steppe’ Really a Russian Operation?” Powerline, December 31, 2016.
 Mark Maunder, “US Govt Data Shows Russia Used Outdated Ukrainian PHP Malware,” Wordfence, December 30, 2016.
 Mark Maunder, “US Govt Data Shows Russia Used Outdated Ukrainian PHP Malware,” Wordfence, December 30, 2016.
 Micah Lee, “The U.S. Government Thinks Thousands of Russian Hackers May be Reading my Blog. They Aren’t,” The Intercept, January 4, 2017.
 Jerry Gamblin, “Grizzly Steppe: Here’s My IP and Hash Analysis,” A New Domain, January 2, 2017.
 Robert Graham, “Dear Obama, from Infosec,” Errata Security, January 3, 2017.
 Robert Graham, “Some Notes on IoCs,” Errata Security, December 29, 2016.
 Robert M. Lee, “Critiques of the DHS/FBI’s Grizzly Steppe Report,” Robert M. Lee blog, December 30, 2016.
 “Energetic Bear – Crouching Yeti,” Kaspersky Lab Global Research and Analysis Team, July 31, 2014.
 “Miniduke is back: Nemesis Gemina and the Botgen Studio,” Securelist, July 3, 2014.
 Ali Watkins, “The FBI Never Asked for Access to Hacked Computer Servers,” Buzzfeed, January 4, 2017.
 “James Comey: DNC Denied FBI Direct Access to Servers During Russia Hacking Probe,” Washington Times, January 10, 2017.
 “Assessing Russian Activities and Intentions in Recent US Elections,” Office of the Director of National Intelligence, January 6, 2017.
 “Quelle für Enthüllungen im Bundestag Vermutet,” Frankfurter Allgemeine Zeitung, December 17, 2016.
 RT broadcast, January 7, 2017. https://www.youtube.com/watch?v=w3DvaVrRweY
 Jeffrey Carr, “Faith-based Attribution,” Jeffrey Carr/Medium, July 10, 2016.
Gregory Elich is on the Board of Directors of the Jasenovac Research Institute and the Advisory Board of the Korea Policy Institute. He is a member of the Solidarity Committee for Democracy and Peace in Korea, a columnist for Voice of the People, and one of the co-authors of Killing Democracy: CIA and Pentagon Operations in the Post-Soviet Period, published in the Russian language. He is also a member of the Task Force to Stop THAAD in Korea and Militarism in Asia and the Pacific. His website is https://gregoryelich.org
JANUARY 13, 2017
by GREGORY ELICH
Copyright © CounterPunch
HERE’S THE PUBLIC EVIDENCE RUSSIA HACKED THE DNC — IT’S NOT ENOUGH
April 5, 2017
THERE ARE SOME good reasons to believe Russians had something to do with the breaches into email accounts belonging to members of the Democratic party, which proved varyingly embarrassing or disruptive for Hillary Clinton’s presidential campaign. But “good” doesn’t necessarily mean good enough to indict Russia’s head of state for sabotaging our democracy.
There’s a lot of evidence from the attack on the table, mostly detailing how the hack was perpetrated, and possibly the language of the perpetrators. It certainly remains plausible that Russians hacked the DNC, and remains possible that Russia itself ordered it. But the refrain of Russian attribution has been repeated so regularly and so emphatically that it’s become easy to forget that no one has ever truly proven the claim. There is strong evidence indicating that Democratic email accounts were breached via phishing messages, and that specific malware was spread across DNC computers. There’s even evidence that the attackers are the same group that’s been spotted attacking other targets in the past. But again: No one has actually proven that group is the Russian government (or works for it). This remains the enormous inductive leap that’s not been reckoned with, and Americans deserve better.
We should also bear in mind that private security firm CrowdStrike’s frequently cited findings of Russian responsibility were essentially paid for by the DNC, which contracted its services in June. It’s highly unusual for evidence of a crime to be assembled on the victim’s dime. If we’re going to blame the Russian government for disrupting our presidential election — easily construed as an act of war — we need to be damn sure of every single shred of evidence. Guesswork and assumption could be disastrous.
The gist of the Case Against Russia goes like this: The person or people who infiltrated the DNC’s email system and the account of John Podesta left behind clues of varying technical specificity indicating they have some connection to Russia, or at least speak Russian. Guccifer 2.0, the entity that originally distributed hacked materials from the Democratic party, is a deeply suspicious figure who has made statements and decisions that indicate some Russian connection. The website DCLeaks, which began publishing a great number of DNC emails, has some apparent ties to Guccifer and possibly Russia. And then there’s WikiLeaks, which after a long, sad slide into paranoia, conspiracy theorizing, and general internet toxicity has made no attempt to mask its affection for Vladimir Putin and its crazed contempt for Hillary Clinton. (Julian Assange has been stuck indoors for a very, very long time.) If you look at all of this and sort of squint, it looks quite strong indeed, an insurmountable heap of circumstantial evidence too great in volume to dismiss as just circumstantial or mere coincidence.
But look more closely at the above and you can’t help but notice all of the qualifying words: Possibly, appears, connects, indicates. It’s impossible (or at least dishonest) to present the evidence for Russian responsibility for hacking the Democrats without using language like this. The question, then, is this: Do we want to make major foreign policy decisions with a belligerent nuclear power based on suggestions alone, no matter how strong?
What We Know
So far, all of the evidence pointing to Russia’s involvement in the Democratic hacks (DNC, DCCC, Podesta, et al.) comes from either private security firms (like CrowdStrike or FireEye) who sell cyber-defense services to other companies, or independent researchers, some with university affiliations and serious credentials, and some who are basically just Guys on Twitter. Although some of these private firms had proprietary access to DNC computers or files from them, much of the evidence has been drawn from publicly available data like the hacked emails and documents.
Some of the malware found on DNC computers is believed to be the same as that used by two hacking groups believed to be Russian intelligence units, codenamed APT (Advanced Persistent Threat) 28/Fancy Bear and APT 29/Cozy Bear by industry researchers who track them.
The attacker or attackers registered a deliberately misspelled domain name used for email phishing attacks against DNC employees, connected to an IP address associated with APT 28/Fancy Bear.
Malware found on the DNC computers was programmed to communicate with an IP address associated with APT 28/Fancy Bear.
Metadata in a file leaked by “Guccifer 2.0” shows it was modified by a user called, in Cyrillic, “Felix Edmundovich,” a reference to the founder of a Soviet-era secret police force. Another document contained Cyrillic metadata indicating it had been edited with Russian language settings.
Peculiarities in a conversation with “Guccifer 2.0” that Motherboard published in June suggest he is not Romanian, as he originally claimed.
The DCLeaks.com domain was registered by a person using the same email service as the person who registered a misspelled domain used to send phishing emails to DNC employees.
Some of the phishing emails were sent using Yandex, a Moscow-based webmail provider.
A bit.ly link believed to have been used by APT 28/Fancy Bear in the past was also used against Podesta.
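The metadata item in the list above can be checked by anyone with the leaked files: a .docx file is just a ZIP archive, and the docProps/core.xml part inside it records the creator and last-modified-by names. A minimal sketch using only the standard library (the file name is a placeholder):

```python
# Sketch: recovering author metadata of the "Felix Edmundovich" kind from
# an Office Open XML (.docx) file. A .docx is a ZIP archive; the part
# docProps/core.xml holds Dublin Core-style document properties.
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def docx_authorship(path):
    """Return (creator, last_modified_by) from a .docx core-properties part."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    creator = root.findtext("dc:creator", default="", namespaces=NS)
    modifier = root.findtext("cp:lastModifiedBy", default="", namespaces=NS)
    return creator, modifier
```

Note that this cuts both ways for attribution: the same transparency that exposed the Cyrillic name means anyone can set these fields to any value before leaking a document.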
Why That Isn’t Enough
Viewed as a whole, the above evidence looks strong, and maybe even damning. But view each piece on its own, and it’s hard to feel impressed.
For one, a lot of the so-called evidence above is no such thing. CrowdStrike, whose claims of Russian responsibility are perhaps most influential throughout the media, says APT 28/Fancy Bear “is known for its technique of registering domains that closely resemble domains of legitimate organizations they plan to target.” But this isn’t a Russian technique any more than using a computer is a Russian technique — misspelled domains are a cornerstone of phishing attacks all over the world. Is Yandex — the Russian equivalent of Google — some sort of giveaway? Anyone who claimed a hacker must be a CIA agent because they used a Gmail account would be laughed off the internet. We must also acknowledge that just because Guccifer 2.0 pretended to be Romanian, we can’t conclude he works for the Russian government — it just makes him a liar.
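The point that misspelled domains are a generic technique, not a national signature, is easy to see from how defenders screen for them: a simple edit-distance comparison against one’s own legitimate domain, available to any operator anywhere. A minimal sketch (the domains below are invented examples):

```python
# Sketch: flagging look-alike ("typosquatted") domains of the kind used in
# phishing campaigns worldwide. Edit distance is a crude but common screen;
# nothing about the technique is specific to any country's operators.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, legit, max_dist=2):
    """True if domain is suspiciously close to, but not equal to, legit."""
    return domain != legit and edit_distance(domain, legit) <= max_dist

print(looks_like("examp1e.org", "example.org"))  # one-character swap
```

That a phishing crew registered a near-miss domain tells you they ran a phishing campaign, nothing more; the method is a commodity.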
Next, consider the fact that CrowdStrike describes APT 28 and 29 like this:
Their tradecraft is superb, operational security second to none and the extensive usage of “living-off-the-land” techniques enables them to easily bypass many security solutions they encounter. In particular, we identified advanced methods consistent with nation-state level capabilities including deliberate targeting and “access management” tradecraft — both groups were constantly going back into the environment to change out their implants, modify persistent methods, move to new Command & Control channels and perform other tasks to try to stay ahead of being detected.
Compare that description to CrowdStrike’s claim it was able to finger APT 28 and 29, described above as digital spies par excellence, because they were so incredibly sloppy. Would a group whose “tradecraft is superb” with “operational security second to none” really leave behind the name of a Soviet spy chief imprinted on a document it sent to American journalists? Would these groups really be dumb enough to leave Cyrillic comments on these documents? Would these groups that “constantly [go] back into the environment to change out their implants, modify persistent methods, move to new Command & Control channels” get caught precisely because they didn’t make sure to avoid IP addresses they’d been associated with before? It’s very hard to buy the argument that the Democrats were hacked by one of the most sophisticated, diabolical foreign intelligence services in history, and that we know this because they screwed up over and over again.
But how do we even know these oddly named groups are Russian? CrowdStrike co-founder Dmitri Alperovitch himself describes APT 28 as a “Russian-based threat actor” whose modus operandi “closely mirrors the strategic interests of the Russian government” and “may indicate affiliation [with Russia’s] Main Intelligence Department or GRU, Russia’s premier military intelligence service.” Security firm SecureWorks issued a report blaming Russia with “moderate confidence.” What constitutes moderate confidence? SecureWorks said it adopted the “grading system published by the U.S. Office of the Director of National Intelligence to indicate confidence in their assessments. … Moderate confidence generally means that the information is credibly sourced and plausible but not of sufficient quality or corroborated sufficiently to warrant a higher level of confidence.” All of this amounts to a very educated guess, at best.
Even the claim that APT 28/Fancy Bear itself is a group working for the Kremlin is speculative, a fact that’s been completely erased from this year’s discourse. In its 2014 reveal of the group, the high-profile security firm FireEye couldn’t even blame Russia without a question mark in the headline: “APT28: A Window into Russia’s Cyber Espionage Operations?” The blog post itself is remarkably similar to arguments about the DNC hack: technical but still largely speculative, presenting evidence the company “[believes] indicate a government sponsor based in Moscow.” Believe! Indicate! We should know already this is no smoking gun. FireEye’s argument that the malware used by APT 28 is connected to the Russian government is based on the belief that its “developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities.”
As security researcher Jeffrey Carr pointed out in June, FireEye’s 2014 report on APT 28 is questionable from the start:
To my surprise, the report’s authors declared that they deliberately excluded evidence that didn’t support their judgment that the Russian government was responsible for APT28’s activities:
“APT28 has targeted a variety of organizations that fall outside of the three themes we highlighted above. However, we are not profiling all of APT28’s targets with the same detail because they are not particularly indicative of a specific sponsor’s interests.” (emphasis added)
That is the very definition of confirmation bias. Had FireEye published a detailed picture of APT28’s activities including all of their known targets, other theories regarding this group could have emerged; for example, that the malware developers and the operators of that malware were not the same or even necessarily affiliated.
The notion that APT 28 has a narrow focus on American political targets is undermined in another SecureWorks paper, which shows that the hackers have a wide variety of interests: 10 percent of their targets are NGOs, 22 percent are journalists, 4 percent are aerospace researchers, and 8 percent are “government supply chain.” SecureWorks says that only 8 percent of APT 28/Fancy Bear’s targets are “government personnel” of any nationality — hardly the focused agenda described by CrowdStrike.
Truly, the argument that “Guccifer 2.0” is a Kremlin agent or that GRU breached John Podesta’s email only works if you presume that APT 28/Fancy Bear is a unit of the Russian government, a fact that has never been proven beyond any reasonable doubt. According to Carr, “it’s an old assumption going back years to when any attack against a non-financial target was attributed to a state actor.” Without that premise, all we can truly conclude is that some email accounts at the DNC et al. appear to have been broken into by someone, and perhaps they speak Russian. Left ignored is the mammoth difference between Russians and Russia.
Security researcher Claudio Guarnieri put it this way:
[Private security firms] can’t produce anything conclusive. What they produce is speculative attribution that is pretty common to make in the threat research field. I do that same speculative attribution myself, but it is just circumstantial. At the very best it can only prove that the actor that perpetrated the attack is very likely located in Russia. As for government involvement, it can only speculate that it is plausible because of context and political motivations, as well as technical connections with previous (or following attacks) that appear to be perpetrated by the same group and that corroborate the analysis that it is a Russian state-sponsored actor (for example, hacking of institutions of other countries Russia has some geopolitical interests in).
Finally, one can’t be reminded enough that all of this evidence comes from private companies with a direct financial interest in making the internet seem as scary as possible, just as Lysol depends on making you believe your kitchen is crawling with E. coli.
What Does the Government Know?
In October, the Department of Homeland Security and the Office of the Director of National Intelligence released a joint statement blaming the Russian government for hacking the DNC. In it, they state their attribution plainly:
The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from US persons and institutions, including from US political organizations. The recent disclosures of alleged hacked e-mails on sites like DCLeaks.com and WikiLeaks and by the Guccifer 2.0 online persona are consistent with the methods and motivations of Russian-directed efforts. These thefts and disclosures are intended to interfere with the US election process.
What’s missing is any evidence at all. If this federal confidence is based on evidence that’s being withheld from the public for any reason, that’s one thing — secrecy is their game. But if the U.S. Intelligence Community is asking the American electorate to believe them, to accept as true their claim that our most important civic institution was compromised by a longtime geopolitical nemesis, we need them to show us why.
The same goes for the CIA, which is now squaring off directly against Trump, claiming (through leaks to the Washington Post and New York Times) that the Russian government conducted the hacks for the express purpose of helping defeat Clinton. Days later, Senator John McCain agreed with the assessment, deeming it “another form of warfare.” Again, it’s completely possible (and probable, really) that the CIA possesses hard evidence that could establish Russian attribution — it’s their job to have such evidence, and often to keep it secret.
But what we’re presented with isn’t just the idea that these hacks happened, and that someone is responsible, and, well, I guess it’s just a shame. Our lawmakers and intelligence agencies are asking us to react to an attack that is almost military in nature — this is, we’re being told, “warfare.” When a foreign government conducts (or supports) an act of warfare against another country, it’s entirely possible that there will be an equal response. What we’re looking at now is the distinct possibility that the United States will consider military retaliation (digital or otherwise) against Russia, based on nothing but private sector consultants and secret intelligence agency notes. If you care about the country enough to be angry at the prospect of election-meddling, you should be terrified of the prospect of military tensions with Russia based on hidden evidence. You need not look too far back in recent history to find an example of when wrongly blaming a foreign government for sponsoring an attack on the U.S. has tremendously backfired.
We Need the Real Evidence, Right Now
It must be stated plainly: The U.S. intelligence community must make its evidence against Russia public if they want us to believe their claims. The integrity of our presidential elections is vital to the country’s survival; blind trust in the CIA is not. A governmental disclosure like this is also not entirely without precedent: In 2014, the Department of Justice produced a 56-page indictment detailing their exact evidence against a team of Chinese hackers working for the People’s Liberation Army, accused of stealing American trade secrets; each member was accused by name. The 2014 trade secret theft was a crime of much lower magnitude than election meddling, but what the DOJ furnished is what we should demand today from our country’s spies.
If the CIA does show its hand, we should demand to see the evidence that matters (which, according to Edward Snowden, the government probably has, if it exists). I asked Jeffrey Carr what he would consider undeniable evidence of Russian governmental involvement: “Captured communications between a Russian government employee and the hackers,” adding that attribution “should solely be handled by government agencies because they have the legal authorization to do what it takes to get hard evidence.”
Claudio Guarnieri concurred:
All in all, technical circumstantial attribution is acceptable only so far as it is to explain an attack. It most definitely isn’t for the political repercussions that we’re observing now. For that, only documental evidence that is verifiable or intercepts of Russian officials would be convincing enough, I suspect.
Given that the U.S. routinely attempts to intercept the communications of heads of state around the world, it’s not impossible that the CIA or the NSA has exactly this kind of proof. Granted, these intelligence agencies will be loath to reveal any evidence that could compromise the method they used to gather it. But in times of extraordinary risk, with two enormous military powers placed in direct conflict over national sovereignty, we need an extraordinary disclosure. The stakes are simply too high to take anyone’s word for it.
December 14 2016, 5:30 p.m.
US misfires in online fight against Islamic State
February 7, 2017
TAMPA, Fla. (AP) — On any given day at MacDill Air Force Base, web crawlers scour social media for potential recruits to the Islamic State group. Then, in a high-stakes operation to counter the extremists’ propaganda, language specialists employ fictitious identities and try to sway the targets from joining IS ranks.
At least that’s how the multimillion-dollar initiative is being sold to the Defense Department.
A critical national security program known as “WebOps” is part of a vast psychological operation that the Pentagon says is effectively countering an enemy that has used the internet as a devastating tool of propaganda. But an Associated Press investigation found the management behind WebOps is so beset with incompetence, cronyism and flawed data that multiple people with direct knowledge of the program say it’s having little impact.
Several current and former WebOps employees cited multiple examples of civilian Arabic specialists who have little experience in counter-propaganda, cannot speak Arabic fluently and have so little understanding of Islam they are no match for the Islamic State online recruiters.
It’s hard to establish rapport with a potential terror recruit when — as one former worker told the AP — translators repeatedly mix up the Arabic words for “salad” and “authority.” That’s led to open ridicule on social media about references to the “Palestinian salad.”
Four current or former workers told the AP that they had personally witnessed WebOps data being manipulated to create the appearance of success and that they had discussed the problem with many other employees who had seen the same. Yet the companies carrying out the program for the military’s Central Command in Tampa have dodged attempts to implement independent oversight and assessment of the data.
Central Command spokesman Andy Stephens declined repeated requests for information about WebOps and other counter-propaganda programs, which were launched under the Obama administration. And he did not respond to detailed questions the AP sent on Jan. 10.
The AP investigation is based on Defense Department and contractor documents, emails, photographs and interviews with more than a dozen people closely involved with WebOps as well as interviews with nearly two dozen contractors. The WebOps workers requested anonymity due to the sensitive nature of the work and because they weren’t authorized to speak publicly.
The information operations division that runs WebOps is the command’s epicenter for firing back at the Islamic State’s online propaganda machine, which uses the internet to sway public opinion in a swath of the globe that stretches from Central Asia to the Horn of Africa.
Early last year, the government opened bidding on a new counter-propaganda contract — separate from WebOps — that is worth as much as $500 million. Months after the AP started reporting about the bidding process, the Naval Criminal Investigative Service told the AP that it had launched an investigation. NCIS spokesman Ed Buice said the service is investigating a whistleblower’s “allegations of corruption” stemming from how the contract was awarded.
The whistleblower’s complaint alleges multiple conflicts of interest that include division officers being treated to lavish dinners paid for by a contractor. The complaint also alleges routine drinking at the office where classified work is conducted. The drinking was confirmed by multiple contractors, who spoke to AP and described a frat house atmosphere where happy hour started at 3 p.m.
One of the most damning accusations leveled by the whistleblower is against Army Col. Victor Garcia, who led the information operations division until July 2016, when he moved to a new assignment at Special Operations Command, also in Tampa. The whistleblower contended that Garcia steered the contract to a team of vendors that included a close friend’s firm. The whistleblower requested anonymity for fear of professional retribution.
The AP obtained a screen-grab from a Facebook page that shows Garcia and the friend at a tiki bar in Key Largo two weeks before the winning team was officially announced Sept. 30. The photo was also turned over to NCIS investigators by the whistleblower, who said the photo created a “clear impression and perception of impropriety.”
Garcia, a West Point graduate and decorated officer, denied any wrongdoing and described the complaint as “character assassination.” Garcia, who moved to his new post two months before the contract was decided, said he scrupulously avoided any discussions about the contract with both his friend and his former deputy. His former deputy served on the five-member panel that reviewed all of the bids.
“Because I was aware of these conflicts of interest, I intentionally kept myself out of that process — with any of these contract processes,” Garcia said.
The whistleblower is a senior manager at a company that lost its bid for the work. He told AP that he was investigated for attempting to accept kickbacks on an unrelated government contract. He denied the allegations, which were made four years ago, and no charges have been filed in the case.
The problems with the WebOps operation and the personal bonds underpinning the new contract illustrate challenges awaiting President Donald Trump. He has promised to boost military spending by tens of billions of dollars while also cutting waste at the Defense Department and ensuring that contractors aren’t getting sweetheart deals.
Charles Tiefer, a professor at the University of Baltimore’s law school and a government contracting expert, reviewed AP’s findings and called Central Command’s lack of rigorous oversight inexcusable.
“These people should not be wasting the money consigned to defend us against terrorism,” said Tiefer, who served on a bipartisan Commission on Wartime Contracting. The commission reported in 2011 that at least $31 billion was lost to waste and fraud in Iraq and Afghanistan.
“DO YOU SPEAK ARABIC?”
In a large office room filled with cubicles at Central Command, about 120 people, many of them Arabic language specialists, are assigned to fight IS militants on their own turf: the internet.
The WebOps contract is run by Colsa Corp., based in Huntsville, Alabama. A major challenge for Colsa — and contractors working on other national security programs — is finding people who can speak Arabic fluently and can also get security clearances to handle classified material.
The problem, according to six current and former Colsa employees, is that to engage with operatives of the Islamic State, or their potential recruits, you need to be fluent in language, nuance and Islam — and while Colsa has some Arabic experts, those skills are not widely distributed.
“One of the things about jihadis: they are very good in Arabic,” said one specialist who worked on WebOps.
Another former employee said common translation mistakes he personally witnessed, including the “Palestinian salad” example, were the result of the company hiring young people who were faking language abilities.
He mockingly described the conversations between managers and potential hires: “‘Do you speak Arabic?'” he mimicked. “‘Yes. How do you say ‘good morning?’ Oh, you can do that? You are an expert. You are hired.'”
A third specialist said she asked a colleague, who was assigned to analyze material written in Arabic, why he was discarding much of it. The colleague, who was watching a soap opera online, said the material was irrelevant because it was in Farsi or Urdu. But when she checked, it was indeed Arabic. She has since left WebOps to find more meaningful work, she said.
The WebOps Arabic program focuses on Syria, Iraq and Yemen, but for most of the time Colsa has been running it, it has had no Syrian or Yemeni staff, the AP was told in separate interviews with two current employees and one who left recently.
Engaging in theological discussions on social media with people who are well versed in the Quran is not for beginners. Iraq and Syria are riven with sectarian violence between Shiite and Sunni Muslims, who follow different interpretations of Islam. Multiple workers said that WebOps “experts” often trip up on language that is specific to one sect or region.
“People can tell whether you are local, or whether you are Sunni or Shia,” said another former worker, so poorly crafted messages are not effective. He said he left WebOps because he was disgusted with the work.
A number of the workers complained to AP that a large group of staff from Morocco, in North Africa, was often ignorant of Middle Eastern history and culture — or even of the differences between groups the U.S. considers terrorist organizations. The group was so dominant that colleagues jokingly referred to them as “the Moroccan mafia.”
A lot of them “don’t know the difference between Hezbollah and Hamas,” said the employee who left to find more meaningful work. Hezbollah is an Iran-backed Shiite group based in Lebanon. Hamas, based in the Gaza Strip and the West Bank, is the Palestinian branch of the Sunni Muslim Brotherhood.
Cathy Dickens, a vice president for business management and corporate ethics at Colsa Corp., referred questions to CENTCOM, which declined comment.
“YOU SHOULDN’T GRADE YOUR OWN HOMEWORK”
To determine whether WebOps actually dissuades people from becoming radicalized, Colsa’s scoring team analyzes the interactions employees have online and tries to measure whether the subjects’ comments reflect militant views or a more tolerant outlook.
Three former members of its scoring team told the AP they were encouraged by a manager to indicate progress against radicalism in their scoring reports even if they were not making any.
The employee who said she left to find meaningful work recalled approaching a Colsa manager to clarify how the scoring was done shortly after starting her job. She said he told her that the bottom line was “the bread we put on the table for our children.”
The boss told her that the scoring reports should show progress, but not too much, so that the metrics would still indicate a dangerous level of militancy online to justify continued funding for WebOps, she said.
She was shocked. “Until my dying day, I will never forget that moment,” she said.
She, like other former employees, spoke only on condition of anonymity for fear of retribution from Colsa that could affect future employment.
The manager she spoke to declined to comment. AP withheld his name because of security concerns.
Employees and managers routinely inflate counts of interactions with potential terrorist recruits, known as “engagements,” according to multiple workers. Engagements are delivered in tweets or comments posted on social media to lists of people and can also be automated. That automation is at times used to inflate the actual number of engagements, said two former workers, including the one who talked about colleagues faking their language abilities.
The worker who left in disgust explained that a single tweet could be programmed to be sent out to all the followers of a target individually, multiple times. So the targets and their followers get the same tweets tagged to them over and over again.
“You send it like a blind copy. You program it to send a tweet every five minutes to the whole list individually from now until tomorrow,” the former employee said. “Then you see the reports and it says yesterday we sent 5,000 engagements. Often that means one tweet on Twitter.” The person said that he saw managers printing out the skewed reports for weekly briefings with CENTCOM officers. But the volume made it look like the WebOps team’s work was “wow, amazing,” he said.
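The inflation mechanism the former employee describes can be illustrated with a small back-of-the-envelope sketch. The function name and all figures below are illustrative placeholders, not details from the program; the point is only that per-follower, repeated automated delivery lets one tweet be reported as thousands of “engagements”:

```python
# Illustrative sketch of engagement inflation: one tweet, resent on a
# timer to every follower on a target list individually, is counted
# once per follower per repeat in the metrics.

def reported_engagements(num_followers, repeats):
    """Each follower is tagged individually on every repeat, so a single
    message is counted num_followers * repeats times."""
    return num_followers * repeats

# Hypothetical example: 250 followers on the list, the tweet resent
# every five minutes for about 100 minutes (20 repeats).
count = reported_engagements(num_followers=250, repeats=20)
print(count)  # → 5000 "engagements" from one distinct tweet
```

On these invented numbers, a single tweet shows up in a weekly report as 5,000 engagements, matching the order of magnitude the former employee describes.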
Garcia said Colsa has done a good job under his watch, that the data is sufficiently scrutinized and that the program is succeeding.
In 2014, a group of more than 40 Defense Department data specialists came to Tampa to evaluate the program. Their unclassified report, obtained by AP, identified what one of the authors called “serious design flaws.” For instance, the report found that any two analysts were only 69 percent likely to agree on how to score a particular engagement. The author said a rate of 90 percent or higher is required to draw useful conclusions.
The report found that computers would be as accurate or better than analysts, and could evaluate effectiveness more quickly — and cheaply.
What Central Command really needed, the report said, was outside oversight.
“You shouldn’t grade your own homework,” said the author, a former U.S. military officer and data specialist once stationed at Central Command. The author, one of many people who signed off on the report, spoke on condition of anonymity for fear of professional retribution.
He said the report was given to officers, including Garcia, and to Colsa. The author said the suggestions were not implemented and WebOps managers resisted multiple attempts at oversight. The author said that when he directly appealed to Garcia for outside assessment, an officer under Garcia said the effort would cloud the mission.
“The argument was that WebOps was the only program at Central Command that was directly engaging the enemy and that it couldn’t function if its staff was constantly distracted by assessment,” he said. The argument worked, he said, and Colsa was not forced or instructed to accept outside oversight.
Garcia disputed that account but would not elaborate on what steps were taken to address the Defense Department data specialists’ concerns. The Government Accountability Office issued a report in 2015 on WebOps oversight, but it is classified.
Despite the problems behind the scenes at WebOps, Central Command will play a key role in the new $500 million psychological operations campaign against the Islamic State and other groups. The five-year contract was a hefty commitment to “degrade and ultimately defeat extremist organizations,” according to a document detailing the scope of the work. It would run parallel to WebOps.
The request for bids was announced in April. Four separate teams of companies competed for the contract, including one led by defense giant Northrop Grumman.
From the start, competitors complained among themselves that Simon Bergman, an executive with the British advertising firm M&C Saatchi, had an advantage because he was friends with Garcia. Bergman was working with Northrop to prepare the bid.
A former British officer, Bergman was deployed to Iraq while Garcia was there working on psychological operations during the Iraq war. It was well known that the two men were close, and in recent years, contractors often saw Bergman at CENTCOM offices.
In April, defense contractor CACI International held a meeting in Tampa to discuss the bid. Three contractors on the team said a CACI manager warned a roomful of people that Garcia had already told him that he would decide who got the contract. The manager said that Garcia indicated that having Bergman on the team would help.
So in mid-September, when a photo appeared on Facebook showing Garcia and Bergman together in the Florida Keys, it did not look good in the eyes of many contractors. Garcia’s girlfriend captured the old friends inside the Tiki Bar at Gilbert’s Resort in Key Largo. They were on her Facebook page, shoulder-to-shoulder, smiling and giving the thumbs up.
Within days, the photos had been taken down from her page.
Two weeks later, the government announced Northrop had won the contract. Its team included M&C Saatchi, Bergman’s firm.
A panel led by the U.S. General Services Administration chose the winner of the contract. Chris Hamm, a senior GSA acquisition executive, said a five-member team scrutinized the technical merits of the proposals for the contract. That team was led by two GSA officials and included three military officers — one of whom was Marine Corps Lt. Col. Matt Coughlin, who reported directly to Garcia before Garcia left his post. Coughlin is the information operations division’s liaison with contractors.
In an interview with AP, Hamm said the contract award was handled properly.
“The process is designed to avoid bias,” Hamm said.
But several other contractors on losing teams said Coughlin would clearly have been the person on the panel with the most sway, because of both his technical expertise and the fact that he represented CENTCOM. And given Coughlin’s ties with Garcia, they found that troubling.
Garcia said that while the bids were being considered, he stayed away from any discussions of them with Coughlin, his deputy. So he didn’t even realize the award announcement was imminent when he went with Bergman to the Keys.
“I wasn’t involved with the contracting process at all,” Garcia said. “So I had no idea what the timing of the contract was.”
When asked why the photo with Bergman was taken off Facebook, Garcia declined to comment.
Bergman said that his friendship with Garcia, one of many he has with military officers, is irrelevant. He noted that M&C Saatchi was only a subcontractor.
“I don’t see why my relationship with somebody in the military would have any influence over anything,” he said.
The whistleblower complaint, however, filed in December with Central Command’s inspector general, contended the photo of Garcia and Bergman created a “clear impression and perception of impropriety.”
The four-page complaint, now under investigation by NCIS, said the atmosphere at the CENTCOM division, with routine drinking at the office and myriad conflicts of interest, led to an “air of untouchable invincibility.”
Several contractors who spoke to AP, among the nearly two dozen either bidding for work or involved in CENTCOM information operations, said they suspected undue influence in the decision for the $500 million contract. In his complaint, the whistleblower alleges that Garcia told him directly at one point that “any team must include Simon Bergman.”
All the contractors asked for anonymity to discuss sensitive work because they feared repercussions for their companies.
Colsa, the primary WebOps contractor, was not involved in Northrop’s bid. However, nothing prevents Northrop from bringing the company in as a subcontractor.
That’s the plan, said several contractors who have been briefed by Northrop. Such a move would provide ample funding to keep WebOps running for up to five more years.
Associated Press researchers Jennifer Farrar, Rhonda Shafner and Monika Mathur contributed to this report.
By DESMOND BUTLER and RICHARD LARDNER
Jan. 31, 2017
© copyright 2017 Associated Press
Digital Counterinsurgency: How to Marginalize the Islamic State Online
February 7, 2017
The Islamic State, or ISIS, is the first terrorist group to hold both physical and digital territory: in addition to the swaths of land it controls in Iraq and Syria, it dominates pockets of the Internet with relative impunity. But it will hardly be the last. Although there are still some fringe terrorist groups in the western Sahel or other rural areas that do not supplement their violence digitally, it is only a matter of time before they also go online. In fact, the next prominent terrorist organization will be more likely to have extensive digital operations than control physical ground.
Although the military battle against ISIS is undeniably a top priority, the importance of the digital front should not be underestimated. The group has relied extensively on the Internet to market its poisonous ideology and recruit would-be terrorists. According to the International Centre for the Study of Radicalisation and Political Violence, the territory controlled by ISIS now ranks as the place with the highest number of foreign fighters since Afghanistan in the 1980s, with recent estimates putting the total number of foreign recruits at around 20,000, nearly 4,000 of whom hail from Western countries. Many of these recruits made initial contact with ISIS and its ideology via the Internet. Other followers, meanwhile, are inspired by the group’s online propaganda to carry out terrorist attacks without traveling to the Middle East.
ISIS also relies on the digital sphere to wage psychological warfare, which directly contributes to its physical success. For example, before the group captured the Iraqi city of Mosul in June 2014, it rolled out an extensive online campaign with text, images, and videos that threatened the city’s residents with unparalleled death and destruction. Such intimidation makes it easier to bring populations under ISIS’ control and reduces the likelihood of a local revolt.
Foiling ISIS’ efforts on the Internet will thus make the group less successful on the battlefield. To date, however, most digital efforts against ISIS have been too limited, focusing on specific tactics, such as creating counternarratives to extremism, in lieu of generating a comprehensive strategy. Instead of resorting to a single tool, opponents should treat this fight as they would a military confrontation: by waging a broad-scale counterinsurgency.
KNOW YOUR ENEMY
The first step of this digital war is to understand the enemy. Most analyses of ISIS’ online footprint focus on social media. In a Brookings Institution report, J. M. Berger and Jonathon Morgan estimated that in late 2014, 46,000 Twitter accounts openly supported the group. Back then, strategies for fighting ISIS online centered on simply removing such accounts.
Social media platforms are just the tip of the iceberg, however. ISIS’ marketing tools run the gamut from popular public platforms to private chat rooms to encrypted messaging systems such as WhatsApp, Kik, Wickr, Zello, and Telegram. At the other end of the spectrum, digital media production houses such as the Al-Furqaan Foundation and the Al-Hayat Media Center—presumably funded by and answering to ISIS’ central leadership—churn out professional-grade videos and advertisements.
Yet understanding the full extent of ISIS’ marketing efforts without knowing who is behind them is not an actionable insight; it is like understanding how much land the group controls without knowing what kinds of fighters occupy it and how they hold it. An effective counterinsurgency requires comprehending ISIS’ hierarchy. Unlike al Qaeda, which comprises a loose cluster of isolated cells, ISIS resembles something akin to a corporation. On the ground in Iraq and Syria, a highly educated leadership sets its ideological agenda, a managerial layer implements this ideology, and a large rank and file contributes fighters, recruiters, videographers, jihadist wives, and people with every other necessary skill. This hierarchy is replicated online, where ISIS operates as a pyramid consisting of four types of digital fighters.
At the top sits ISIS’ central command for digital operations, which gives orders and provides resources for disseminating content. Although its numbers are small, its operations are highly organized. According to Berger, for example, the origins of most of ISIS’ marketing material on Twitter can be traced to a small set of accounts with strict privacy settings and few followers. By distributing their messages to a limited network outside the public eye, these accounts can avoid being flagged for terms-of-service violations. But the content they issue eventually trickles down to the second tier of the pyramid: ISIS’ digital rank and file.
The U.S. Central Command Twitter feed after it was apparently hacked by people claiming to be Islamic State sympathizers, January 2015. STAFF / REUTERS
This type of fighter may or may not operate offline as well. He and his ilk run digital accounts that are connected to the central command and disseminate material through guerrilla-marketing tactics. In June 2014, for example, Islamic State supporters hijacked trending hashtags related to the World Cup to flood soccer fans with propaganda. Because they operate on the frontline of the digital battlefield, these fighters often find their accounts suspended for terms-of-service violations, and they may therefore keep backup accounts. And to make each new account appear more influential than it really is, they purchase fake followers from social media marketing firms; just $10 can boost one’s follower count by tens of thousands.
Then there are the vast numbers of radical sympathizers across the globe, who constitute ISIS’ third type of digital fighter. Unlike the rank and file, they do not belong to ISIS’ official army, take direct orders from its leadership, or reside in Iraq or Syria. But once drawn into ISIS’ echo chamber by the rank and file, they spend their time helping the group disseminate its radical message and convert people to its cause. These are often the people who identify and engage potential recruits on an individual level, developing online relationships strong enough to result in physical travel. In June, for example, The New York Times documented how a radical Islamist in the United Kingdom met a young woman from Washington State online and convinced her to consider heading to Syria.
Although joining ISIS’ operations in Iraq and Syria may be illegal, spreading extremism online is not. These fighters are masters at taking advantage of their right to free speech, and their strength lies both in their numbers and in their willingness to mimic ISIS’ official line without having to receive direct orders from its leadership.
ISIS’ fourth type of digital fighter is nonhuman: the tens of thousands of fake accounts that automate the dissemination of its content and multiply its message. On Twitter, for example, so-called Twitter bots automatically flood the digital space with retweets of terrorist messages; countless online tutorials explain how to write these relatively simple programs. In comment sections on Facebook, YouTube, and other sites, such automated accounts can monopolize the conversation with extremist propaganda and marginalize moderate voices. This programmable army ensures that whatever content ISIS’ digital central command issues will make its way across as many screens as possible.
RECAPTURING DIGITAL TERRITORY
Much of the debate over how to combat ISIS on the ground has been binary, split between those proposing containment and those insisting on its defeat. The best strategy for fighting it online, however, is something else: marginalization. The result would be something similar to what has happened to the Revolutionary Armed Forces of Colombia, or FARC, the narcoterrorist group that grabbed headlines throughout the 1990s for its high-profile kidnappings and savage guerrilla warfare. Today, the group has been neither disbanded nor entirely defeated, but its ranks have largely been driven into the jungle.
Along the same lines, ISIS will be neutered as a digital threat when its online presence becomes barely noticeable. The group would find it either too risky or tactically impossible to commandeer control of social media platforms and public chat rooms, and its digital content would be hard to discover. Incapable of growing its online ranks, it would see its ratio of digital fighters to human fighters fall to one to one. It would be forced to operate primarily on the so-called dark Web, the part of the Internet not indexed by mainstream search engines and accessible to only the most knowledgeable users.
Compelling terrorist organizations to operate in secret does make plots more difficult to intercept, but in the case of ISIS, that is a tradeoff worth making. Every day, the group’s message reaches millions of people, some of whom become proponents of ISIS or even fighters for its cause. Preventing it from dominating digital territory would help stanch the replenishment of its physical ranks, reduce its impact on the public psyche, and destroy its most fundamental means of communication.
It will take a broad coalition to marginalize ISIS online: from governments and companies to nonprofits and international organizations. First, they should separate the human-run accounts on social networks from the automated ones. Next, they should zero in on ISIS’ digital central command, identifying and suspending the specific accounts responsible for setting strategy and giving orders to the rest of its online army. When that is done, digital society at large should push the remaining rank and file into the digital equivalent of a remote cave.
The suspension of accounts needs to be targeted—more like kill-or-capture raids than strategic bombing campaigns. Blanket suspensions covering any accounts that violate terms of service could not guarantee that the leadership will be affected. In fact, as Berger and Morgan’s research highlighted, ISIS has learned to protect its digital leadership from suspension by keeping its activities hidden behind strict privacy settings.
This is not to downplay the importance of banning users who break the rules and distribute terrorist content. Technology companies have become skilled at doing just that. In 2014, the British Counter Terrorism Internet Referral Unit, a service run by London’s Metropolitan Police, worked closely with such companies as Google, Facebook, and Twitter to flag more than 46,000 pieces of violent or hateful content for removal. That same year, YouTube took down approximately 14 million videos. In April 2015, Twitter announced that it had suspended 10,000 accounts linked to ISIS on a single day. Such efforts are valuable in that they provide a cleaner digital environment for millions of users. But they would be doubly so if the leadership that orders terrorist content to be distributed were also eliminated.
That, in turn, will require mapping ISIS’ network of accounts. One way law enforcement could make inroads into this digital network is by covertly infiltrating ISIS’ real-world network. This technique has already achieved some success. In April, the FBI arrested two young women accused of plotting attacks in New York City after a two-year investigation that had relied extensively on their social media activity for evidence. Law enforcement should scale such efforts to focus on the digital domain and target ISIS’ digital leadership, suspending the accounts of its members and arresting them in certain cases.
A computer screenshot shows the U.S. Central Command Twitter feed after it was apparently hacked by people claiming to be Islamic State sympathizers, January 12, 2015. STAFF / REUTERS
Once ISIS’ online leadership has been separated from the rank and file, the rank and file will become significantly less coordinated and therefore less effective. The next step would be to reduce the group’s level of online activity overall, so that it is forced into the margins of digital society. During this phase, the danger is that online, ISIS might splinter into less coordinated but more aggressive rogue groups. With a higher tolerance for risk, these groups might “dox” opponents of ISIS, publishing a target’s private information, such as a home address or social security number, or launch distributed denial-of-service attacks, which can take down an entire website.
To mitigate this threat, the digital fighters’ activities need to be diverted away from extremism altogether. This is where counternarratives against violent extremism can come in. Over the last two years, several notable efforts have been launched, including video series produced by the Arab Center for Scientific Research and Humane Studies and the Institute for Strategic Dialogue. To be effective, these campaigns need to reflect the diversity of the group’s ranks: professional jihadist fighters, former Iraqi soldiers, deeply religious Islamic scholars, young men in search of adventure, local residents joining out of fear or ambition. Moderate religious messages may work for the pious recruit, but not for the lonely British teenager who was promised multiple wives and a sense of belonging in Syria. He might be better served by something more similar to suicide-prevention and anti-bullying campaigns.
For maximum effect, these campaigns should be carefully targeted. An antiextremist video viewed by 50,000 of the right kinds of people will have a greater impact than one seen by 50 million random viewers. Consider Abdullah-X, a cartoon series marketed through a YouTube campaign funded by the European Union. Its pilot episode was promoted using targeted advertising oriented toward those interested in extremist Islam. Eighty percent of the YouTube users who watched it found it through targeted ads rather than through unrelated searches.
Given the diversity of ISIS’ digital rank and file, however, betting on counternarratives alone would be too risky. To combat extremists who have already made up their minds, the coalition should target their willingness to operate in the open. Al Qaeda has taken pains to keep its digital operations secret and works under the cover of passwords, encryption, and rigid privacy settings. These tactics have made the group notoriously difficult to track, but they have also kept its digital footprint minuscule. Likewise, ISIS’ rank and file should be forced to adopt similar behavior.
Achieving this will require creativity. For example, governments should consider working with the news media to aggressively publicize arrests that result from covert infiltration of ISIS’ online network. If any new account with which a digital soldier interacts carries the risk of being that of an undercover agent, it becomes exponentially more hazardous to recruit new members. Law enforcement could also create visual presentations showing how police investigations of digital extremists’ accounts can lead to arrests, thereby telling the cautionary tale that a single mistake can cause the downfall of a digital soldier and his entire social network.
Within the next few years, new high-tech tools may become available to help governments marginalize digital rank-and-file terrorists. One is machine learning. Just as online advertisers can target ads to users with a particular set of interests, law enforcement could use algorithmic analysis to identify, map, and deactivate the accounts of terrorist supporters. Assisted by machine learning, such campaigns could battle ISIS online with newfound precision and reach a scale that would not be possible with a manual approach.
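The algorithmic analysis described above would, in its simplest form, amount to text classification over account activity. As a purely illustrative sketch (the training phrases, labels, and function names below are invented for illustration, and any real system would use far richer features and models), a minimal naive Bayes bag-of-words classifier looks like this:

```python
# Illustrative toy only: a minimal naive Bayes text classifier of the kind
# the article alludes to. All example phrases and labels are invented.
import math
from collections import Counter

def train(labeled_docs):
    """Count word frequencies per label (a minimal naive Bayes model)."""
    counts = {"extremist": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in labeled_docs:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(model, text, label):
    """Log-likelihood of `text` under `label`, with add-one smoothing."""
    counts, totals = model
    vocab = len(set(counts["extremist"]) | set(counts["benign"]))
    s = 0.0
    for word in text.lower().split():
        s += math.log((counts[label][word] + 1) / (totals[label] + vocab))
    return s

def classify(model, text):
    return max(("extremist", "benign"), key=lambda lbl: score(model, text, lbl))

model = train([
    ("join the caliphate fight", "extremist"),
    ("martyrdom awaits the brave fighter", "extremist"),
    ("great recipe for lentil soup", "benign"),
    ("football scores from the weekend", "benign"),
])
print(classify(model, "fight for the caliphate"))  # → extremist
```

At scale, the same idea would be applied over millions of accounts with learned features rather than hand-counted words, which is what gives the approach the precision and reach the article describes.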
It is worth noting that just like a physical counterinsurgency, a digital counterinsurgency is more likely to succeed when bolstered by the participation of local communities. All the online platforms ISIS uses have forum moderators, the equivalent of tribal leaders and sheiks. The technology companies that own these platforms have no interest in seeing their environments flooded with fake accounts and violent messages. They should therefore give these moderators the tools and training to keep their communities safe from extremist messaging. Here again, machine learning could someday help, by automatically identifying terrorist messages and either highlighting them for moderators or blocking them on their behalf.
At first glance, ISIS can look hopelessly dominant online, with its persistent army of propaganda peddlers and automated trolls. In fact, however, the group is at a distinct disadvantage when it comes to resources and numbers. The vast majority of Internet users disagree with its message, and the platforms that its fighters use belong to companies that oppose its ideology.
There is no doubt that undertaking a digital counterinsurgency campaign represents uncharted territory. But the costs of failure are low, for unlike in a real-world counterinsurgency, those who fight digitally face no risk of injury or death. That is yet another factor making ISIS particularly vulnerable online, since it means that the group’s opponents can apply and discard new ways of fighting terrorism quickly to hone their strategy.
The benefits of digitally marginalizing ISIS, meanwhile, are manifold. Not only would neutering the group online improve the lives of millions of users who would no longer be as likely to encounter the group’s propaganda; it would also make the group’s real-world defeat more imminent. As ISIS’ digital platforms, communication methods, and soldiers became less accessible, the group would find it harder to coordinate its physical attacks and replenish its ranks. And those fighting it online would gain valuable experience for when the time came to fight the next global terrorist group trying to win the Internet.
By Jared Cohen
November/December 2015 Issue
Find this story at December 2015
©2017 Council on Foreign Relations, Inc.
Dubious research by VU and NSCR into cybercrime
August 14, 2015
Employees of the Vrije Universiteit and the NSCR, in cooperation with the Public Prosecution Service (Openbaar Ministerie), surveyed more than 2,000 people for a study of cybercrime. The respondents were not told that they had been approached because they are on record as convicts or suspects.
In June of this year, a variety of people were approached for a study by the Vrije Universiteit (VU) in Amsterdam and the Netherlands Institute for the Study of Crime and Law Enforcement (NSCR). These are people whom the Ministry of Security and Justice suspects of having committed some form of cybercrime in the past, or who have been convicted of such an offence. This, however, is not mentioned in the letter, which is dated 15 June 2015 and signed by prof. dr. Wim Bernasco (VU Amsterdam/NSCR) and Marleen Weulen Kranenbarg, MSc (NSCR).
The NSCR is part of NWO, the Netherlands Organisation for Scientific Research. On its website, the NSCR states that it “is dedicated to fundamental scientific research into crime and law enforcement.” The institute was set up in 1992 by the then Ministry of Justice and Leiden University. At present the NSCR is co-funded by the Ministry of Security and Justice and the Vrije Universiteit.
The subject of the invitation letter is “Onderzoek NL-ONLINE-OFFLINE” (the NL-ONLINE-OFFLINE study), in which respondents can take part by means of an online survey. The associated website refers to a study of the same name. It all looks perfectly innocent, since it speaks of “knowledge of computers and the internet and your experiences with online and offline safety.” Respondents are promised a gift voucher from Bol.com, VVV or Zalando worth 50 euros (in bold type!). It is also explicitly stated that the respondents’ opinions are of great importance to the researchers.
In reality the study is not called NL-ONLINE-OFFLINE but “Cyber Crime Offender Profiling (CyberCOP): the human factor examined.” In other words, it is research into the profile of the cybercriminal: the human factor is meant to produce a picture, or profile, of the person of the “criminal” for the benefit of investigations by the Ministry of Security and Justice and its agencies. The study is thus aimed at suspects and perpetrators of cybercrime, not simply at knowledge of and experience with computers, the internet and security. An employee of the Public Prosecution Service (Openbaar Ministerie, OM) states that the invitation letter was drafted in such a way that “it cannot be inferred that the respondent has himself been a suspect in an offence.”
The decision of the Agency for Personal Records and Travel Documents (Agentschap BPR), the agency from which the researchers obtained the respondents’ addresses, in no way shows that the study exclusively concerns suspects or convicts. “In the request of 10 June 2014, 2014-0000658731, the Nederlands Studiecentrum voor Criminaliteit en Rechtshandhaving requested authorisation for the systematic provision of data from the personal records database in connection with carrying out the study CyberCrime Offender Profiling (CyberCOP): the human factor examined.”
Article 2, paragraph 1 of the decision reads: “At its request, the research institution shall be provided with a piece of data recorded on the personal record of a registered person, if it concerns data included in Annex II to this decision.” The BPR granted permission for the provision of data on 26 February 2015. Annex 1 of the decision makes clear that it concerns offenders. Whether it also concerns suspects, the decision does not make clear. “The Nederlands Studiecentrum Criminaliteit en Rechtshandhaving has recently begun a multi-year research project into the perpetrators of cybercrime.”
On inquiry, it turns out that the study is indeed also aimed at offenders: “For this study, both people who have at some point been suspected of committing an online offence and people who have at some point been suspected of committing an offline offence were approached.”
When one of the respondents contacted the NSCR researchers, it was claimed that victims had also been approached. The BPR decision, however, makes no mention of victims at all. No permission was therefore granted for obtaining data on victims of cybercrime, so victims were not approached. Annex 1 states that the “central aim/research question of the study” is “gaining insight into (different types of) cybercriminals, identifying high-risk groups, and developing more effective and better targeted interventions and sanctions.” In an email answering questions about the study, the researchers write that they are “not only interested in offline and online offending, but also in the question of whether these people are ever victimised themselves. Hence the choice of this more general name and description.”
Offenders and suspects
The study is clearly aimed at offenders and suspects of cybercrime. The NSCR staff state that the study carried out on the basis of the data provided from the BRP focuses on “known perpetrators of cybercrime (on the basis of suspect registrations provided to us by the Public Prosecution Service).”
A selection was made of 1,050 suspects of cybercrime offences and 1,050 suspects of other offences, 2,100 people in total. “They are questioned about their backgrounds, motives, personality and social networks.” According to the OM employee, the sample was drawn by means of an “abstraction of registers.” Two selection criteria were used. One of them is that the offender must have committed the offence in the period between 2000 and 2014. It therefore appears to concern people who have been convicted, but a staff member also indicated that it concerns suspects as well.
The questionnaire for the respondents is remarkable because it raises the question of what the answers provided will be used for. The respondents are challenged to show off their skills, and technical questions are asked. One of the respondents has the impression that the researchers are trying to encourage respondents to prove themselves in order to fit one of the stereotypes of the cybercriminal.
The survey asks for very specific offender knowledge, but also for knowledge that can be of great value to investigative services, such as the nicknames of acquaintances, offences committed recently, and the respondent’s contacts. It becomes clear that the questions are aimed at what the study says it sets out to examine: “the typical personality traits of cybercriminals, their social (criminal) networks, their primary motivations and their criminal careers.” In the annex to the BPR decision, the researchers state that the study focuses on the similarity between “cybercriminals” and “criminals from the offline world,” their personality, and whether “they are part of criminal organisations.”
Scientific research is necessary, but why are the respondents not told that in this case the study is not about computers, the internet and security? The study appears independent, but there is cooperation with the Public Prosecution Service on all sorts of fronts, and the respondents are not informed of it. It is clear that the researchers fear the addressees would not take part if they knew. The respondents are thus treated as a kind of blindfolded laboratory animal for a study that is not the real study being carried out. Why are the addressees not given the real name of the study? Why is the annex to the decision of Agentschap BPR not sent to those selected? Why does the application to the BPR not mention the name of the NL-ONLINE-OFFLINE study?
In response to questions about the study, no answer is given to the question of why the real name of the study was not disclosed to the subjects. The researchers do write: “In the letter we explicitly do not mention that people who have at some point been suspected of a criminal offence have been written to for this study. The reason for this is safeguarding privacy, because it is always possible that someone other than the addressee opens the letter.”
The researchers state: “For all our studies we apply an extensive and careful review procedure before they are carried out. The design of the study and the manner of approaching respondents is reviewed by the Board of Procurators General of the Public Prosecution Service and also submitted to the Ethics Committee for Legal and Criminological Research of the Law Faculty of the Vrije Universiteit Amsterdam. In addition, a notification is made to the Dutch Data Protection Authority (College Bescherming Persoonsgegevens). In this procedure the most careful method of approach is determined. This was also done for the NL-ONLINE-OFFLINE study.”
Documents about the NL-ONLINE-OFFLINE study were requested from the Ministry of Security and Justice and the Board of Procurators General under the Freedom of Information Act (Wet Openbaarheid van Bestuur). Both administrative bodies state that they have no documents relating to this study. In a presentation on the study “Cyber Crime Offender Profiling (CyberCOP): the human factor examined,” the following partners of the VU and NSCR are named: “Team High Tech Crime, Nationale Politie Landelijk Parket, Openbaar Ministerie en Reclassering Nederland.”
NL-ONLINE-OFFLINE is a government profiling study of cybercriminals. It is not called NL-ONLINE-OFFLINE but “Cyber Crime Offender Profiling (CyberCOP): the human factor examined,” and it is primarily concerned with possible interventions and sanctions. At the end of the survey, respondents are once again reminded how important it is that they have filled in the questions seriously. To this end they are presented with a question to check whether they really have read and answered the questions properly. If not, they miss out on the 50-euro reward.
After these admonishing words, the respondent is presented with the following: “For scientific research it is relevant to link the data you have just provided to data about you held in local or national registers (such as the Social Statistical Database and the Judicial Documentation System). This will only happen if you give permission for it. Naturally, these data too will be used exclusively for scientific purposes and treated in strict confidence, processed anonymously and stored securely.”
In Annex 1 of the decision of Agentschap BPR, the researchers indicated that “the data from the BRP are used to send a questionnaire to the person.” It is also stated that “at no point during the study will a person be reported in an identifiable way.” This presumably relates to the passing on of possible criminal offences that respondents have entered in the survey.
The annex, however, does not mention that the data provided by the interviewee will be linked to the data held by the justice authorities: “link to data about you held in local or national registers.” In principle, of course, this is not required, because those data are not in the possession of Agentschap BPR, but it does raise questions about the degree of “honesty” of the researchers and the Public Prosecution Service.
According to the researchers, “none of the collected data is shared with third parties, not even with the OM. Linking with OM data will never take place without the respondent’s explicit consent. The OM merely helped us send out the letters.” Yet the OM and the BPR do not only help with the mailing to the subjects, but also with the linking to “data about you held in local or national registers (such as the Social Statistical Database and the Judicial Documentation System).” This is supposed to be anonymous, but that is impossible when linking to those databases; in addition, every subject receives a personal participation code that can be used only once.
The VU/NSCR study is aimed at cyber profiling. For that purpose, people are approached who do not themselves know that they have been approached as a convict or suspect. They are presented with a questionnaire that lures them into presenting offender knowledge. And not only that, but also information about possible co-offenders and other details of criminal offences for which they have been convicted, and/or possible criminal offences they may recently have committed and for which they have not been convicted. This last element, about the respondent’s connections, is of course aimed at mapping criminal networks.
Suspects are also approached in this way, so it could just as well be a sting operation. The question is whether the respondents were informed of their rights with regard to the data they may be giving away. Why do the researchers not lay their cards on the table about the real purpose of the study? Why all this circuitousness and vagueness? Is this scientific research in which survey participants are informed of the true objective, or is this perhaps a phishing expedition by an institute that is funded by the Ministry of Security and Justice and works directly with the OM? Is the Public Prosecution Service trying to undermine or circumvent the right to remain silent by means of a study?
Respondents have already been prosecuted, or are no longer suspects because no evidence of their guilt was found. Yet they are enticed to indicate with whom, where, when and why they may have committed criminal offences. Quite apart from the profiling and phishing aspects of the study, the approach raises questions about its scientific justification. What serious data does it yield to lure tough-talking convicted and/or suspected cybercriminals into showing off their skills and revealing whom they are in contact with? The only thing the study seems aimed at is confirming existing stereotypes about cybercriminals. That is of little value either to science or to criminal investigation.
Buro Jansen & Janssen, August 2015
Letter to respondents
Decision of Agentschap BPR
Welcome text for the NL-Online-Offline study
Presentation on the study Cyber Crime Offender Profiling (CyberCOP): the human factor examined
The Mystery of Duqu 2.0: a sophisticated cyberespionage actor returns
New zero-day used for effective kernel memory injection and stealth
July 27, 2015
Earlier this year, during a security sweep, Kaspersky Lab detected a cyber-intrusion affecting several of our internal systems.
Following this finding, we launched a large scale investigation, which led to the discovery of a new malware platform from one of the most skilled, mysterious and powerful groups in the APT world – Duqu. The Duqu threat actor went dark in 2012 and was believed to have stopped working on this project – until now. Our technical analysis indicates the new round of attacks include an updated version of the infamous 2011 Duqu malware, sometimes referred to as the stepbrother of Stuxnet. We named this new malware and its associated platform “Duqu 2.0”.
Some of the new 2014-2015 Duqu infections are linked to the P5+1 events and venues related to the negotiations with Iran about a nuclear deal. The threat actor behind Duqu appears to have launched attacks at the venues for some of these high level talks. In addition to the P5+1 events, the Duqu 2.0 group has launched a similar attack in relation to the 70th anniversary event of the liberation of Auschwitz-Birkenau.
In the case of Kaspersky Lab, the attack took advantage of a zero-day in the Windows kernel, and possibly up to two other vulnerabilities, since patched, which were zero-days at the time. The analysis of the attack revealed that the main goal of the attackers was to spy on Kaspersky Lab technologies, ongoing research and internal processes. No interference with processes or systems was detected. More details can be found in our technical paper.
From a threat actor’s point of view, the decision to target a world-class security company must be quite difficult. It almost surely means the attack will be exposed – it is very unlikely to go unnoticed. So the targeting of security companies indicates that either the attackers are very confident they won’t get caught, or they don’t care much if they are discovered and exposed. By targeting Kaspersky Lab, the Duqu attackers probably took a huge bet hoping they’d remain undiscovered; and lost.
At Kaspersky Lab, we strongly believe in transparency, which is why we are going public with this information. Kaspersky Lab is confident that its clients and partners are safe and that there is no impact on the company’s products, technologies and services.
By GReAT on June 10, 2015. 12:00 pm
Find this story at 10 June 2015
© 2015 AO Kaspersky Lab.
Obama orders US to draw up overseas target list for cyber-attacks
June 20, 2013
Exclusive: Top-secret directive steps up offensive cyber capabilities to ‘advance US objectives around the world’
Barack Obama has ordered his senior national security and intelligence officials to draw up a list of potential overseas targets for US cyber-attacks, a top secret presidential directive obtained by the Guardian reveals.
The 18-page Presidential Policy Directive 20, issued in October last year but never published, states that what it calls Offensive Cyber Effects Operations (OCEO) “can offer unique and unconventional capabilities to advance US national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging”.
It says the government will “identify potential targets of national importance where OCEO can offer a favorable balance of effectiveness and risk as compared with other instruments of national power”.
The directive also contemplates the possible use of cyber actions inside the US, though it specifies that no such domestic operations can be conducted without the prior order of the president, except in cases of emergency.
The aim of the document was “to put in place tools and a framework to enable government to make decisions” on cyber actions, a senior administration official told the Guardian.
The administration published some declassified talking points from the directive in January 2013, but those did not mention the stepping up of America’s offensive capability and the drawing up of a target list.
Obama’s move to establish a potentially aggressive cyber warfare doctrine will heighten fears over the increasing militarization of the internet.
The directive’s publication comes as the president plans to confront his Chinese counterpart Xi Jinping at a summit in California on Friday over alleged Chinese attacks on western targets.
Even before the publication of the directive, Beijing had hit back against US criticism, with a senior official claiming to have “mountains of data” on American cyber-attacks he claimed were every bit as serious as those China was accused of having carried out against the US.
Presidential Policy Directive 20 defines OCEO as “operations and related programs or activities … conducted by or on behalf of the United States Government, in or through cyberspace, that are intended to enable or produce cyber effects outside United States government networks.”
Asked about the stepping up of US offensive capabilities outlined in the directive, a senior administration official said: “Once humans develop the capacity to build boats, we build navies. Once you build airplanes, we build air forces.”
The official added: “As a citizen, you expect your government to plan for scenarios. We’re very interested in having a discussion with our international partners about what the appropriate boundaries are.”
The document includes caveats and precautions stating that all US cyber operations should conform to US and international law, and that any operations “reasonably likely to result in significant consequences require specific presidential approval”.
The document says that agencies should consider the consequences of any cyber-action. They include the impact on intelligence-gathering; the risk of retaliation; the impact on the stability and security of the internet itself; the balance of political risks versus gains; and the establishment of unwelcome norms of international behaviour.
Among the possible “significant consequences” are loss of life; responsive actions against the US; damage to property; serious adverse foreign policy or economic impacts.
The US is understood to have already participated in at least one major cyber attack, the use of the Stuxnet computer worm targeted on Iranian uranium enrichment centrifuges, the legality of which has been the subject of controversy. US reports citing high-level sources within the intelligence services said the US and Israel were responsible for the worm.
In the presidential directive, the criteria for offensive cyber operations are not limited to retaliatory action but are vaguely framed as advancing “US national objectives around the world”.
The revelation that the US is preparing a specific target list for offensive cyber-action is likely to reignite previously raised concerns of security researchers and academics, several of whom have warned that large-scale cyber operations could easily escalate into full-scale military conflict.
Sean Lawson, assistant professor in the department of communication at the University of Utah, argues: “When militarist cyber rhetoric results in use of offensive cyber attack it is likely that those attacks will escalate into physical, kinetic uses of force.”
An intelligence source with extensive knowledge of the National Security Agency’s systems told the Guardian the US complaints against China were hypocritical, because America had participated in offensive cyber operations and widespread hacking – breaking into foreign computer systems to mine information.
Provided anonymity to speak critically about classified practices, the source said: “We hack everyone everywhere. We like to make a distinction between us and the others. But we are in almost every country in the world.”
The US likes to haul China before the international court of public opinion for “doing what we do every day”, the source added.
One of the unclassified points released by the administration in January stated: “It is our policy that we shall undertake the least action necessary to mitigate threats and that we will prioritize network defense and law enforcement as preferred courses of action.”
The full classified directive repeatedly emphasizes that all cyber-operations must be conducted in accordance with US law and only as a complement to diplomatic and military options. But it also makes clear how both offensive and defensive cyber operations are central to US strategy.
Under the heading “Policy Reviews and Preparation”, a section marked “TS/NF” – top secret/no foreign – states: “The secretary of defense, the DNI [Director of National Intelligence], and the director of the CIA … shall prepare for approval by the president through the National Security Advisor a plan that identifies potential systems, processes and infrastructure against which the United States should establish and maintain OCEO capabilities…” The deadline for the plan is six months after the approval of the directive.
The directive provides that any cyber-operations “intended or likely to produce cyber effects within the United States” require the approval of the president, except in the case of an “emergency cyber action”. When such an emergency arises, several departments, including the department of defense, are authorized to conduct such domestic operations without presidential approval.
Obama further authorized the use of offensive cyber attacks in foreign nations without their government’s consent whenever “US national interests and equities” require such nonconsensual attacks. It expressly reserves the right to use cyber tactics as part of what it calls “anticipatory action taken against imminent threats”.
The directive makes multiple references to the use of offensive cyber attacks by the US military. It states several times that cyber operations are to be used only in conjunction with other national tools and within the confines of law.
When the directive was first reported, lawyers with the Electronic Privacy Information Center filed a Freedom of Information Act request for it to be made public. The NSA, in a statement, refused to disclose the directive on the ground that it was classified.
In January, the Pentagon announced a major expansion of its Cyber Command Unit, under the command of General Keith Alexander, who is also the director of the NSA. That unit is responsible for executing both offensive and defensive cyber operations.
Earlier this year, the Pentagon publicly accused China for the first time of being behind attacks on the US. The Washington Post reported last month that Chinese hackers had gained access to the Pentagon’s most advanced military programs.
The director of national intelligence, James Clapper, identified cyber threats in general as the top national security threat.
Obama officials have repeatedly cited the threat of cyber-attacks to advocate new legislation that would vest the US government with greater powers to monitor and control the internet as a means of guarding against such threats.
One such bill currently pending in Congress, the Cyber Intelligence Sharing and Protection Act (Cispa), has prompted serious concerns from privacy groups, who say that it would further erode online privacy while doing little to enhance cyber security.
In a statement, Caitlin Hayden, national security council spokeswoman, said: “We have not seen the document the Guardian has obtained, as they did not share it with us. However, as we have already publicly acknowledged, last year the president signed a classified presidential directive relating to cyber operations, updating a similar directive dating back to 2004. This step is part of the administration’s focus on cybersecurity as a top priority. The cyber threat has evolved, and we have new experiences to take into account.
“This directive establishes principles and processes for the use of cyber operations so that cyber tools are integrated with the full array of national security tools we have at our disposal. It provides a whole-of-government approach consistent with the values that we promote domestically and internationally as we have previously articulated in the International Strategy for Cyberspace.
“This directive will establish principles and processes that can enable more effective planning, development, and use of our capabilities. It enables us to be flexible, while also exercising restraint in dealing with the threats we face. It continues to be our policy that we shall undertake the least action necessary to mitigate threats and that we will prioritize network defense and law enforcement as the preferred courses of action. The procedures outlined in this directive are consistent with the US Constitution, including the president’s role as commander in chief, and other applicable law and policies.”
Glenn Greenwald and Ewen MacAskill
guardian.co.uk, Friday 7 June 2013 20.06 BST
Find this story at 7 June 2013
© 2013 Guardian News and Media Limited or its affiliated companies. All rights reserved.
SPYING ON AMERICANS: Obama’s Backdoor “Cybersecurity” Wiretap Bill Threatens Political and Private Rights; Spying on Social Media
May 24, 2013
Under the guise of “cybersecurity,” the new all-purpose bogeyman to increase the secret state’s already-formidable reach, the Obama administration and their congressional allies are crafting legislation that will open new backdoors for even more intrusive government surveillance: portals into our lives that will never be shut.
As Antifascist Calling has frequently warned, with the endless “War on Terror” as a backdrop the federal government, most notably the 16 agencies that comprise the so-called “Intelligence Community” (IC), has been constructing vast centralized databases that scoop up and store all things digital–from financial and medical records to the totality of our electronic communications online–and does so without benefit of a warrant or probable cause.
The shredding of constitutional protections afforded by the Fourth Amendment, granted to the Executive Branch by congressional passage of the Authorization for Use of Military Force (AUMF) after the 9/11 attacks, followed shortly thereafter by the oxymoronic USA Patriot Act set the stage for today’s depredations.
Under provisions of multiple bills under consideration by the House and Senate, federal officials will be given broad authority over private networks that will almost certainly hand security officials wide latitude over what is euphemistically called “information-sharing” amongst corporate and government securocrats.
As The Washington Post reported in February, the National Security Agency “has pushed repeatedly over the past year to expand its role in protecting private-sector computer networks from cyberattacks” but has allegedly “been rebuffed by the White House, largely because of privacy concerns.”
“The most contentious issue,” Post reporter Ellen Nakashima wrote, “was a legislative proposal last year that would have required hundreds of companies that provide such critical services as electricity generation to allow their Internet traffic to be continuously scanned using computer threat data provided by the spy agency. The companies would have been expected to turn over evidence of potential cyberattacks to the government.”
Both the White House and Justice Department have argued, according to the Post, that the “proposal would permit unprecedented government monitoring of routine civilian Internet activity.”
National Security Agency chief General Keith Alexander, the dual-hatted commander of NSA and U.S. Cyber Command (USCYBERCOM), the Pentagon satrapy that wages offensive cyberwar, was warned to “restrain his public comments after speeches in which he argued that more expansive legal authority was necessary to defend the nation against cyberattacks.”
While we can take White House “objections” with a proverbial grain of salt, they do reveal, however, that the NSA, the largest and best-funded of the secret state’s intel shops, will use its formidable surveillance assets to increase its power while undermining civilian control over the military, in cahoots with shadowy security corporations who do its bidding. (Readers are well advised to peruse The Surveillance Catalog, posted by The Wall Street Journal as part of its excellent What They Know series, for insight into the burgeoning Surveillance-Industrial Complex.)
As investigative journalist James Bamford pointed out recently in Wired Magazine, “the exponential growth in the amount of intelligence data being produced every day by the eavesdropping sensors of the NSA and other intelligence agencies” is “truly staggering.”
In a follow-up piece for Wired, Bamford informed us that when questioned by Congress, Alexander stonewalled a congressional subcommittee when asked whether NSA “has the capability of monitoring the communications of Americans, he never denies it–he simply says, time and again, that NSA can’t do it ‘in the United States.’ In other words it can monitor those communications from satellites in space, undersea cables, or from one of its partner countries, such as Canada or Britain, all of which it has done in the past.”
Call it Echelon on steroids, the massive, secret surveillance program first exposed by journalists Duncan Campbell and Nicky Hager.
And with the eavesdropping agency angling for increased authority to monitor the electronic communications of Americans, the latest front in the secret state’s ongoing war against privacy is “cybersecurity” and “infrastructure protection.”
‘Information Sharing’ or Blanket Surveillance?
Among the four bills currently competing for attention, the most egregious threat to civil liberties is the Cyber Intelligence Sharing and Protection Act of 2011 (CISPA, H.R. 3523).
Introduced by Mike Rogers (R-MI) and Dutch Ruppersberger (D-MD), the bill amends the National Security Act of 1947, adding language concerning so-called “cyber threat intelligence and information sharing.”
“Cyber threat intelligence” is described as “information in the possession of an element of the intelligence community directly pertaining to a vulnerability of, or threat to, a system or network of a government or private entity, including information pertaining to the protection of a system or network from: (1) efforts to degrade, disrupt, or destroy such system or network; or (2) theft or misappropriation of private or government information, intellectual property, or personally identifiable information.”
In keeping with other “openness” mandates of our Transparency Administration™, the Rogers bill will require the Director of National Intelligence (DNI) to establish procedures that (1) permit IC elements to “share cyber threat intelligence with private-sector entities,” and (2) “encourage the sharing of such intelligence.”
These measures however, will not protect the public at large from attacks by groups of organized cyber criminals since such intelligence is only “shared with certified entities or a person with an appropriate security clearance,” gatekeepers empowered by the state who ensure that access to information is “consistent with the need to protect U.S. national security, and used in a manner that protects such intelligence from unauthorized disclosure.”
In other words, should “cleared” cyber spooks be directed by their corporate or government masters to install state-approved malware on private networks, as we discovered last year as a result of the HBGary hack by Anonymous, disclosing that official lawbreaking would itself be a crime punishable by years in a federal gulag.
The bill authorizes “a cybersecurity provider (a non-governmental entity that provides goods or services intended to be used for cybersecurity purposes),” i.e., an outsourced contractor from any one of thousands of spooky “cybersecurity” firms, to use “cybersecurity systems to identify and obtain cyber threat information in order to protect the rights and property of the protected entity; and share cyber threat information with any other entity designated by the protected entity, including the federal government.”
Furthermore, the legislation aims to regulate “the use and protection of shared information, including prohibiting the use of such information to gain a competitive advantage and, if shared with the federal government, exempts such information from public disclosure.”
And should the public object to the government or private entities trolling through their personal data in the interest of “keeping us safe” well, there’s an app for that too! The bill “prohibits a civil or criminal cause of action against a protected entity, a self-protected entity (an entity that provides goods or services for cybersecurity purposes to itself), or a cybersecurity provider acting in good faith under the above circumstances.”
One no longer need wait until constitutional violations are uncovered, the Rogers bill comes with a get-out-of-jail-free card already in place for state-approved scofflaws.
Additionally, the bill “preempts any state statute that restricts or otherwise regulates an activity authorized by the Act.” In other words, in states like California, where residents have “an inalienable right to privacy” under Article 1, Section 1 of the State Constitution, the Rogers bill would abolish that right and effectively “legalize” unaccountable snooping by the federal government or other “self-protected,” i.e., private, entities deputized to do so by the secret state.
Social Media Spying
How would this play out in the real world? As Government Computer News reported, hyped-up threats of an impending “cyber-armageddon” have spawned a host of new actors constellating America’s Surveillance-Industrial Complex: the social media analyst.
“Companies and government agencies alike are using tools to sweep the Internet–blogs, websites, and social media such as Facebook and Twitter feeds–to find out what people are saying about, well, just about anything.”
Indeed, as researchers Jerry Brito and Tate Watkins pointed out last year in Loving the Cyber Bomb?, “An industrial complex reminiscent of the Cold War’s may be emerging in cybersecurity today.”
Brito and Watkins averred that “the military-industrial complex was born out of exaggerated Soviet threats, a defense industry closely allied with the military and Department of Defense, and politicians striving to bring pork and jobs home to constituents. A similar cyber-industrial complex may be emerging today, and its players call for government involvement that may be superfluous and definitely allows for rent seeking and pork barreling.”
Enter social media analysis and the private firms out to make a buck–at our expense.
“Not surprisingly,” GCN’s Patrick Marshall wrote, “intelligence agencies have already been looking at social media as a source of information. The Homeland Security Department has been analyzing traffic on social networks for at least the past three years.”
While DHS claims it does not routinely monitor Facebook or Twitter, and only responds when it receives a “tip,” such assertions are demonstrably false.
Ginger McCall, the director of the Electronic Privacy Information Center’s Open Government Program, told GCN that the department is “explicitly monitoring for criticism of the government, for reports that reflect adversely on the agency, for public reaction to policy proposals.”
But DHS isn’t the only agency monitoring social media sites such as Facebook and Google+.
As Antifascist Calling reported back in 2009, according to New Scientist the National Security Agency “is funding research into the mass harvesting of the information that people post about themselves on social networks.”
Not to be outdone, the CIA’s venture capital investment arm, In-Q-Tel, has poured millions of dollars into Visible Technologies, a Bellevue, Washington-based firm specializing in “integrated marketing, social servicing, digital experience management, and consumer intelligence.”
According to In-Q-Tel “Visible Technologies has developed TruCast®, which takes an innovative and holistic approach to social media management. TruCast has been architected as an enterprise-level solution that provides the ability to track, analyze, and respond to social media from a single, Web-based platform.”
Along similar lines, the CIA has heavily invested in Recorded Future, a firm which “extracts time and event information from the web. The company offers users new ways to analyze the past, present, and the predicted future.”
The firm’s defense and intelligence analytics division promises to “help analysts understand trends in big data, and foresee what may happen in the future. Groundbreaking algorithms extract temporal and predictive signals from unstructured text. Recorded Future organizes this information, delineates results over interactive timelines, visualizes past trends, and maps future events–all while providing traceability back to sources. From OSINT to classified data, Recorded Future offers innovative, massively scalable solutions.”
As Government Computer News pointed out, in January the FBI “put out a request for vendors to provide information about available technologies for monitoring and analyzing social media.” Accordingly, the Bureau is seeking the ability to:
• Detect specific, credible threats or monitor adversarial situations.
• Geospatially locate bad actors or groups and analyze their movements, vulnerabilities, limitations, and possible adverse actions.
• Predict likely developments in the situation or future actions taken by bad actors (by conducting trend, pattern, association, and timeline analysis).
• Detect instances of deception in intent or action by bad actors for the explicit purpose of misleading law enforcement.
• Develop domain assessments for the area of interest (more so for routine scenarios and special events).
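The FBI solicitation quoted above names no concrete tools or search terms. But the first two capabilities in the list, detecting posts that match a watch list and geospatially locating their authors, amount to little more than a keyword filter over a geotagged feed. A minimal sketch, with an entirely hypothetical watch list and feed:

```python
import re
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    text: str
    lat: float
    lon: float

# Hypothetical watch terms -- the Bureau's request discloses none.
WATCH_TERMS = re.compile(r"\b(protest|march|blockade)\b", re.IGNORECASE)

def flag_posts(posts):
    """Return (user, lat, lon) for each post matching a watch term:
    the 'detect' and 'geospatially locate' steps from the list above."""
    return [(p.user, p.lat, p.lon) for p in posts if WATCH_TERMS.search(p.text)]

feed = [
    Post("alice", "March on city hall at noon", 51.5, -0.12),
    Post("bob", "Great weather today", 40.7, -74.0),
]
flagged = flag_posts(feed)  # only alice's post matches
```

The trend, pattern, and timeline analysis the solicitation goes on to request would layer on top of output like this, which is precisely why civil-liberties groups object: the filter runs over everyone's posts, not just a suspect's.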
So much for privacy in our Orwellian New World Order!
Backdoor Official Secrets Act
Social media “harvesting” by private firms hot-wired into the state’s Surveillance-Industrial Complex will be protected from challenges under provisions of CISPA.
As the Electronic Frontier Foundation (EFF) pointed out, “a company that protects itself or other companies against ‘cybersecurity threats’ can ‘use cybersecurity systems to identify and obtain cyber threat information to protect the rights and property’ of the company under threat. But because ‘us[ing] cybersecurity systems’ is incredibly vague, it could be interpreted to mean monitoring email, filtering content, or even blocking access to sites. A company acting on a ‘cybersecurity threat’ would be able to bypass all existing laws, including laws prohibiting telcos from routinely monitoring communications, so long as it acted in ‘good faith’.”
And as EFF’s Rainey Reitman and Lee Tien aver, the “broad language” concerning what constitutes a cybersecurity “threat,” is an invitation for the secret state and their private “partners” to include “theft or misappropriation of private or government information, intellectual property, or personally identifiable information.”
“Yes,” Reitman and Tien wrote, “intellectual property. It’s a little piece of SOPA wrapped up in a bill that’s supposedly designed to facilitate detection of and defense against cybersecurity threats. The language is so vague that an ISP could use it to monitor communications of subscribers for potential infringement of intellectual property. An ISP could even interpret this bill as allowing them to block accounts believed to be infringing, block access to websites like The Pirate Bay believed to carry infringing content, or take other measures provided they claimed it was motivated by cybersecurity concerns.”
More troubling, “the government and Internet companies could use this language to block sites like WikiLeaks and NewYorkTimes.com, both of which have published classified information.”
Should CISPA pass muster it could serve as the basis for establishing an American “Official Secrets Act.” In the United Kingdom, the Act has been used against whistleblowers to prohibit disclosure of government crimes. But it does more than that. The state can also issue restrictive “D-Notices” that “advise” editors not to publish material on subjects deemed sensitive to the “national security.”
EFF warns that “online publishers like WikiLeaks are currently afforded protection under the First Amendment; receiving and publishing classified documents from a whistleblower is a common journalistic practice. While there’s uncertainty about whether the Espionage Act could be brought to bear against WikiLeaks, it is difficult to imagine a situation where the Espionage Act would apply to WikiLeaks without equally applying to the New York Times, the Washington Post, and in fact everyone who reads about the cablegate releases.”
And with the Obama regime’s crusade to prosecute and punish whistleblowers, exemplified by the recent indictment of former CIA officer John Kiriakou under the Espionage Act and the Intelligence Identities Protection Act for disclosing information on the CIA’s torture programs, we have yet another sterling example of administration “transparency”! While Kiriakou faces 30 years in prison, the former head of the CIA’s Directorate of Operations, Jose A. Rodriguez Jr., who was responsible for the destruction of 92 torture videotapes held by the Agency, was not charged by the government and was given a free pass by the Justice Department.
As the World Socialist Web Site points out: “More fundamentally, the prosecution of Kiriakou is part of a policy of state secrecy and repression that pervades the US government under Obama, who came into office promising ‘the most transparent administration in history.’”
Critic Bill Van Auken observed that Kiriakou’s prosecution “marks the sixth government whistleblower to be charged by the Obama administration under the Espionage Act, twice as many such prosecutions as have been brought by all preceding administrations combined. Prominent among them is Private Bradley Manning, who is alleged to have leaked documents exposing US war crimes to WikiLeaks. He has been held under conditions tantamount to torture and faces a possible death penalty.”
“In all of these cases,” the World Socialist Web Site noted, “the World War I-era Espionage Act is being used to punish not spying on behalf of a foreign government, but exposing the US government’s own crimes to the American people. The utter lawlessness of US foreign policy goes hand in hand with the collapse of democracy at home.”
The current crop of “cybersecurity” bills are sure to hasten that collapse.
Under Rogers’ legislation, “the government would have new, powerful tools to go after WikiLeaks,” or anyone else who challenges the lies of the U.S. government by publishing classified information that contradicts the dominant narrative.
By Tom Burghardt
Global Research, April 10, 2012
Find this story at 10 April 2012
Copyright © 2005-2013 GlobalResearch.ca
The school that trains cyber spies: U.S. university training students in online espionage for jobs in the NSA and CIA
November 30, 2012
University of Tulsa’s Cyber Corps programme is training students to write viruses, hack networks, crack passwords and mine data
The little known course has been named as one of four ‘centres of excellence’ and places 85 per cent of graduates with the NSA or CIA
Not your average student: The University of Tulsa is training students in the fundamentals of cyber-espionage, with many taking jobs in the CIA
A university is offering a two-year course in cyber-espionage, with recruits going on to jobs with the CIA, the National Security Agency and the Secret Service.
Students at the University of Tulsa, Oklahoma, are learning how to write computer viruses, hack networks, crack passwords and mine data from a range of digital devices.
The little-known Cyber Corps programme already places 85 per cent of its graduates with the NSA – known to students as ‘the fraternity’ – or the CIA – which they call ‘the sorority’.
Sujeet Shenoi, an Indian immigrant to the U.S., founded the programme at Tulsa’s Institute for Information Security in 1998 and continues to lead the teaching, the LA Times reported.
Students are taught with a mixture of classroom theory and practical field work, he said, with each assigned to a police crime lab on campus to apply their skills to help recover evidence from digital devices.
‘I throw them into the deep end,’ Mr Shenoi told the LA Times. ‘And they become fearless.’
Much of their work involves gathering evidence against paedophiles, with several students having posed as children on the internet to lure predators into stings.
But his students also helped solve a triple murder case in 2003 by cracking an email account that linked the killer with his victims and, working alongside the Secret Service, they have developed new techniques for extracting data from damaged smartphones, GPS devices and other digital devices.
The NSA in May named Tulsa as one of four centres of academic excellence in cyber operations, alongside Northeastern University in Boston, the Naval Postgraduate School in Monterey, California, and Dakota State University in Madison, South Dakota.
Neal Ziring, a senior NSA official who visited the school recently, told the LA Times: ‘Tulsa students show up to NSA with a lot of highly relevant hands-on experience.
‘There are very few schools that are like Tulsa in terms of having participation with law enforcement, with industry, with government.’
Centre of excellence: Tulsa was in May named by the NSA alongside three other schools as an important centre for training cyber-security operatives
WIRETAPPING THE INTERNET
New eavesdropping technology could allow government agencies to ‘silently record’ conversations on internet chat services like Skype in real time.
Until now, so called voice over internet protocol (VoIP) services have been difficult for police to tap into, because of the way they send information over the web.
The services convert analogue audio signals into digital data packets, which are then sent in a way that is costly and complex for third parties to intercept.
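The article does not name a protocol, but the packetization it describes is typically done with RTP (RFC 3550) over UDP: each few milliseconds of encoded audio travels in its own datagram, so a tap must capture and reassemble thousands of small packets per call, often routed peer-to-peer rather than through a central server. A minimal sketch of that framing (field values chosen for illustration):

```python
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0) -> bytes:
    """Wrap one chunk of encoded audio in a minimal 12-byte RTP header.

    Byte 0: version=2 in the top two bits (0x80), no padding/extension/CSRC.
    Byte 1: marker=0 plus payload type (0 = G.711 u-law).
    Then: 16-bit sequence number, 32-bit timestamp, 32-bit source ID (SSRC).
    """
    header = struct.pack("!BBHII", 0x80, payload_type & 0x7F,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload

# A 20 ms frame of 8 kHz u-law audio is 160 bytes, so a one-second call
# leg scatters across roughly 50 separate UDP datagrams.
frame = bytes(160)
pkt = rtp_packet(frame, seq=1, timestamp=160, ssrc=0x1234)
```

This is why interception is “costly and complex”: unlike a circuit-switched phone line, there is no single point where the conversation exists as one continuous stream to record.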
But now a California businessman has obtained a patent for a ‘legal intercept’ technology he says ‘would allow governments to “silently record” VoIP communications’.
Dennis Chang, president of VoIP-PAL, a chat service similar to Skype, claims his system would allow authorities to identify and monitor suspects merely by accessing their username and subscriber data.
Applicants to Tulsa’s programme, who have ranged in age from 17 to 63, must be U.S. citizens eligible for security clearance of ‘top secret’ or higher.
Many are military veterans or others looking to start second careers, usually people who are working towards degrees in computer science, engineering, law or business.
By Damien Gayle
PUBLISHED: 09:41 GMT, 26 November 2012 | UPDATED: 14:15 GMT, 26 November 2012
Find this story at 26 November 2012
Published by Associated Newspapers Ltd
Part of the Daily Mail, The Mail on Sunday & Metro Media Group
© Associated Newspapers Ltd
Researcher: CIA, NSA may have infiltrated Microsoft to write malware
June 25, 2012
Did spies posing as Microsofties write malware in Redmond? How do you spell ‘phooey’ in C#?
June 18, 2012, 2:46 PM — A leading security researcher has suggested Microsoft’s core Windows and application development programming teams have been infiltrated by covert programmer/operatives from U.S. intelligence agencies.
If it were true it would be another exciting twist to the stories of international espionage, sabotage and murder that surround Stuxnet, Duqu and Flame, the most successful cyberwar weapons deployed so far, with the possible exception of Windows itself.
Nevertheless, according to Mikko Hypponen, chief research officer of antivirus and security software vendor F-Secure, the simplest scenario under which programmers employed by U.S. intelligence agencies could have created the Stuxnet, Duqu and Flame viruses, and compromised Microsoft protocols thoroughly enough to disguise Flame downloads as patches delivered through Windows Update, is that Microsoft has been infiltrated by members of the U.S. intelligence community.
Having programmers, spies and spy-supervisors from the NSA, CIA or other secret government agencies infiltrate Microsoft in order to turn its technology to their own evil uses (rather than Microsoft’s) is the kind of premise that would get any writer thrown out of a movie producer’s office for pitching an idea that would put the audience to sleep halfway through the first act.
Not only is it unlikely, the “action” most likely to take place on the Microsoft campus would be the kind with lots of tense, acronymically dense debates in beige conference rooms and bland corporate offices.
The three remarkable bits of malware that attacked Iranian nuclear-fuel development facilities and stole data from its top-secret computer systems – Flame, Duqu and Stuxnet – show clear signs of having been built by the same teams of developers over a long period of time, Hypponen told PC Pro in the U.K.
Flame used a counterfeit Microsoft security certificate to verify its trustworthiness to Iranian users, primarily because Microsoft is among the most widely recognized and trusted computer companies in the world, Hypponen said.
Faking credentials from Microsoft would give the malware far more credibility than using certificates from other vendors, as would hiding updates in Windows Update, Hypponen said.
The damage to Microsoft’s reputation and suspicion from international customers that it is a puppet of the CIA would be enough to keep Microsoft itself from participating in the operation, even if it were asked.
That doesn’t mean it didn’t happen.
“It’s plausible that if there is an operation under way and being run by a US intelligence agency it would make perfect sense for them to plant moles inside Microsoft to assist in pulling it off, just as they would in any other undercover operation,” Hypponen told PC Pro. “It’s not certain, but it would be common sense to expect they would do that.”
The suggestion piqued the imaginations of conspiracy theorists, but doesn’t have a shred of evidence to support it.
It does have a common-sense appeal, however. Planting operatives inside Microsoft would probably be illegal, would certainly be unethical and could have a long-range disadvantage by making Microsofties look like tools of the CIA rather than simply tools.
“No-one has broken into Microsoft, but by repurposing the certificate and modifying it with unknown hash collision technologies, and with the power of a supercomputer, they were able to start signing any program they wanted as if it was from Microsoft,” Hypponen said. “If you combine that with the mechanism they were using to spoof MS Update server they had the crown jewels.”
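The mechanism Hypponen describes works because a certificate authority signs only a *digest* of a certificate, not the certificate itself, so any two inputs with the same digest share a valid signature. The toy model below illustrates that property only; the function names and the “signature” scheme are invented for illustration, and Flame’s actual chosen-prefix MD5 collision is far more involved:

```python
import hashlib

def sign(cert: bytes, ca_secret: bytes) -> bytes:
    # Stand-in for the CA's signature: only the digest of the
    # certificate is signed. MD5 here is the weak link.
    digest = hashlib.md5(cert).digest()
    return hashlib.sha256(ca_secret + digest).digest()  # toy "signature"

def verify(cert: bytes, signature: bytes, ca_secret: bytes) -> bool:
    return sign(cert, ca_secret) == signature

# If an attacker can craft a rogue certificate whose MD5 digest
# collides with one the CA actually signed, the CA's signature
# "transfers" to the rogue certificate unchanged -- no break-in at
# the CA (or at Microsoft) is required.
benign = b"legitimate certificate bytes"
sig = sign(benign, ca_secret=b"ca-private-key")
```

Combined with spoofing the Windows Update server, as Hypponen notes, that transferred signature is what let Flame’s modules masquerade as Microsoft-signed patches.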
Hypponen is one of a number of security experts who have said Stuxnet and Duqu have the hallmarks of software written by traditionally minded software engineers accustomed to working in large, well-coordinated teams.
After studying the code for Duqu, security researchers at Kaspersky Labs said the malware was most similar to the kind of work done by old-school programmers able to write code for more than one platform at a time, with quality control good enough that the modules could install themselves and update in real time, and with command-and-control components that had been re-used from previous editions.
“All the conclusions indicate a rather professional team of developers, which appear to be reusing older code written by top ‘old school’ developers,” according to Kaspersky’s analysis. “Such techniques are normally seen in professional software and almost never in today’s malware. Once again, these indicate that Duqu, just like Stuxnet, is a ‘one of a kind’ piece of malware which stands out like a gem from the large mass of ‘dumb’ malicious programs we normally see.”
Earlier this month the NYT ran a story detailing two years worth of investigations during which a range of U.S. officials, including, eventually, President Obama, confirmed the U.S. had been involved in writing the Stuxnet and Flame malware and siccing them on Iran.
That’s far from conclusive proof that the NSA has moved its nonexistent offices to Redmond, Wash. It doesn’t rule it out either, however.
Very few malware writers are able to write such clean code that can install on a variety of hardware systems, assess their new environments and download the modules they need to successfully compromise a new network, Kaspersky researchers said.
Stuxnet and Flame are able to do all these things and to get their own updates through Windows Update using a faked Windows Update security certificate.
No other malware writer, hacker or end user has been able to do that before. Knowing it happened this time makes it more apparent that the malware writers know what they are doing and know Microsoft code inside and out.
That’s still no evidence that Microsoft could be or has been infiltrated by spies from the U.S. or from other countries.
It does make sense, but so do a lot of conspiracy theories.
Until there’s some solid indication Flame came from inside Microsoft, not outside, it’s probably safer to write off this string of associative evidence.
Even in his own blog, Hypponen makes fun of those who make fun of Flame as ineffective and unremarkable, but doesn’t actually suggest moles at Microsoft are to blame.
Find this story at 18 June 2012
By Kevin Fogarty
© 1994 – 2012 ITworld. All rights reserved.
Stuxnet was work of U.S. and Israeli experts, officials say
June 4, 2012
A damaging cyberattack against Iran’s nuclear program was the work of U.S. and Israeli experts and proceeded under the secret orders of President Obama, who was eager to slow that nation’s apparent progress toward building an atomic bomb without launching a traditional military attack, say current and former U.S. officials.
The origins of the cyberweapon, which outside analysts dubbed Stuxnet after it was inadvertently discovered in 2010, have long been debated, with most experts concluding that the United States and Israel probably collaborated on the effort. The current and former U.S. officials confirmed that long-standing suspicion Friday, after a New York Times report on the program.
The officials, speaking on the condition of anonymity to describe the classified effort code-named Olympic Games, said it was first developed during the George W. Bush administration and was geared toward damaging Iran’s nuclear capability gradually while sowing confusion among Iranian scientists about the cause of mishaps at a nuclear plant.
The use of the cyberweapon — malware designed to infiltrate and damage systems run by computers — was supposed to make the Iranians think that their engineers were incapable of running an enrichment facility.
“The idea was to string it out as long as possible,” said one participant in the operation. “If you had wholesale destruction right away, then they generally can figure out what happened, and it doesn’t look like incompetence.”
Even after software security companies discovered Stuxnet loose on the Internet in 2010, causing concern among U.S. officials, Obama secretly ordered the operation continued and authorized the use of several variations of the computer virus.
Overall, the attack destroyed nearly 1,000 of Iran’s 6,000 centrifuges — fast-spinning machines that enrich uranium, an essential step toward building an atomic bomb. The National Security Agency developed the cyberweapon with the help of Israel.
Several senior Iranian officials on Friday referred obliquely to the cyberattack in reaffirming Iran’s intention to expand its nuclear program.
“Despite all plots and mischievous behavior of the Western countries . . . Iran did not withdraw one iota from its rights,” Kazem Seddiqi, a senior Iranian cleric, said during services at a Tehran University mosque, according to news reports from Iran.
Iran previously has blamed U.S. and Israeli officials and has said its nuclear program is solely for peaceful purposes, such as generating electricity.
White House officials declined to comment on the new details about Stuxnet, and an administration spokesman denied that the material had been leaked for political advantage.
“It’s our view, as it is the view of everybody who handles classified information, that information is classified for a reason: that it is kept secret,” deputy press secretary Josh Earnest told reporters. “It is intended not to be publicized because publicizing it would pose a threat to our national security.”
The revelations come at a particularly sensitive time, as the United States and five other world powers are engaged in talks with Iran on proposed cuts to its nuclear program. Iran has refused to agree to concessions on what it says is its rightful pursuit of peaceful nuclear energy. The next round of negotiations is scheduled for this month in Moscow.
“Effectively the United States has gone to war with Iran and has chosen to do so in this manner because the effects can justify this means,” said Rafal Rohozinski, a cyber-expert and principal of the SecDev Group, referring to the slowing of Iran’s nuclear program.
“This officially signals the beginning of the cyber arms race in practice and not in theory,” Rohozinski said.
In 2006, senior Bush administration officials developed the idea of using a computer worm, with Israeli assistance, to damage Iranian centrifuges at its uranium enrichment plant in Natanz. The concept originated with Gen. James E. Cartwright, who was then head of U.S. Strategic Command, which handles nuclear deterrence, and had a reputation as a cyber-strategist.
“Cartwright’s role was describing the art of the possible, having a view or vision,” said a former senior official familiar with the program. But “the heavy lifting” was done by NSA Director Keith Alexander, who had “the technical know-how and carried out the actual activity,” said the former official.
Olympic Games became a collaborative effort among NSA, the CIA and Israel, current and former officials said. The CIA, under then-Director Michael V. Hayden, lent its covert operation authority to the program.
The CIA and Israelis oversaw the development of plans to gain physical access to the plant. Installing the worm in plant equipment not connected to the Internet depended on spies and unwitting accomplices — engineers, plant technicians — who might connect an infected device to one of the systems, officials said.
The cyberweapon took months of testing and development. It began to show effects in 2008, when centrifuges began spinning at faster-than-normal speeds until sensitive components began to warp and break, participants said.
By Ellen Nakashima and Joby Warrick, Published: June 1 | Updated: Saturday, June 2, 12:03 PM
© The Washington Post Company
Find this story at 1 June 2012