Demand evidence: a digital security takeaway from ShmooCon

Gus Andrews
Jan 22, 2017 · 5 min read
How do we know where Fancy Bear came from? (This logo, at least, “was derived from fancybear.net, a website created by ‘Fancy Bears’ hack team,’” per Wikipedia. Used here for purposes of illustration.)

On the human-factors side of the infosec community, we are at the mercy of those more technical than we are. Those of us who are better at writing grants, improving interfaces, or training journalists and activists must work hard to understand the complicated technical strengths and weaknesses of the tools we fund, recommend, and make easier to use. We have to trust security analysts to explain how attacks work, and where they are coming from.

From the beginning of my time in this field, this has troubled me. What if members of the tiny, elite group of technologists we trust were wrong, or, worse yet, exaggerating or lying to us? How would we know?

ShmooCon in DC last weekend served up reminders of how perilously delicate this trust is. But a few talks also suggested checks and balances to help ensure the information we’re getting from infosec professionals is reliable. I caught the tail end of the conference: one talk on ad network fraud, and two on the alleged Russian hacking of the US election, from Mark Kuhr and Toni Gidwani.

ShmooCon lesson one: Disinformation can be deployed to confuse those who are analyzing a hack. Not just journalists; not just people reading the news. Aspects of an attack itself can be manipulated to intentionally mislead those to whom we look for clarity on security breaches. Kuhr talked about misleading security analysts as counterintelligence: attackers, he said, would be thinking about which feeds they need to taint. Misleading analysts is more effective than trying to mislead the public; it makes a weapon out of our trust in them.

Mark Kuhr’s talk on disinformation campaigns and attribution claims laid out a sobering scenario in which an attack could be made to appear as if it were coming from China, when it actually came from elsewhere. If an analyst finds that malicious code originates on a machine using a Chinese character set or other Chinese OS settings, should the analyst conclude that the attacker was Chinese? These aspects can be pretty easily spoofed, Kuhr noted. So what counts as enough evidence to attribute an attack to a particular country or actor?
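
To make concrete how cheap these indicators are to fake, here is a toy Python sketch. The header format, payload, and variable names are all invented for illustration; only the two LANGID constants are real Windows language identifiers. The point is that the “language” an analyst later recovers from a file is just a value its builder chose to write.

```python
import struct

# Toy sketch: the header format, payload, and names below are invented;
# only the two LANGID values are real Windows language identifiers.
LANGID_ZH_CN = 0x0804  # Simplified Chinese
LANGID_RU_RU = 0x0419  # Russian

def fake_resource_header(langid: int, payload: bytes) -> bytes:
    """Pack a made-up header: 2-byte LANGID, 2-byte length, then data."""
    return struct.pack("<HH", langid, len(payload)) + payload

build_machine_locale = LANGID_RU_RU  # where the code was actually written
planted_locale = LANGID_ZH_CN        # what the attacker stamps into the file

artifact = fake_resource_header(planted_locale, b"payload stub")

# An analyst parsing the artifact recovers only the planted value...
(recovered,) = struct.unpack_from("<H", artifact)
print(f"artifact claims LANGID 0x{recovered:04x}")  # 0x0804, not 0x0419
print(f"real build locale 0x{build_machine_locale:04x} never touches the file")
# ...which says nothing reliable about who actually wrote the code.
```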

Digital attackers have a range of techniques to foil forensic analysis — many of which are familiar to those of us who are trying to help free-speech activists. Attackers can encrypt their payloads so they can’t be analyzed. They can route attacks through multiple geographic locations. They can make use of code which is known to be associated with other attackers (in the example here, Russian attackers borrowing malicious Chinese code). Gidwani also noted that the attackers on state boards of elections made use of open-source tools, which made attribution harder. The tools which help our activists avoid detection can also help attackers misrepresent where they are coming from.
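
As a toy illustration of the first item on that list, here is a deliberately weak Python sketch of payload encryption (the payload string and the XOR scheme are invented stand-ins, not anyone’s real tradecraft). It shows why an encrypted payload defeats static signature matching: the malicious bytes never appear on disk, only in memory after decryption.

```python
import os

# Deliberately weak demo of payload encryption as an anti-forensic trick.
# The "payload" and the XOR scheme are invented stand-ins for illustration;
# real attackers use proper ciphers, but the effect on analysis is similar.

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key (encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"GET /exfil?host="   # stand-in for malicious logic
key = os.urandom(16)            # fresh key per sample, so no two look alike

blob = xor(payload, key)        # what lands on disk
print("on disk:  ", blob.hex()) # static signatures have nothing to match

# Only at runtime does the payload reappear in analyzable form:
print("in memory:", xor(blob, key))
```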

Faced with potential disinformation, analysts must be careful in their conclusions. Gidwani, in her talk on apparent Russian attacks on the US election, described the competing hypotheses her firm analyzed. Disproving a hypothesis, she said, is more valuable than claiming one is true. She noted that her company’s PR team pushed their analysts to confirm the hack’s link to Russia, but the analysts held back because they did not yet feel they had a conclusive link. (It is worth noting that here, yet again, the profit motives that drive PR and journalism can make their reporting a weak link in determining and communicating the truth.)
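
Gidwani’s discipline here echoes the classic intelligence technique of Analysis of Competing Hypotheses, in which evidence that contradicts a hypothesis counts for more than evidence that merely fits it, because consistent-looking evidence is exactly what an attacker can plant. A minimal Python sketch of the idea, with hypotheses and scores invented for illustration (not her firm’s actual analysis):

```python
# Hypotheses and evidence scores here are invented placeholders, not any
# firm's actual data. Scores: -1 = evidence contradicts the hypothesis,
# 0 = neutral, +1 = consistent with it.
evidence = {
    "tooling reused from prior campaigns":     {"state actor": +1, "hacktivist":  0, "false flag": +1},
    "work hours don't match claimed persona":  {"state actor": +1, "hacktivist": -1, "false flag":  0},
    "infrastructure overlaps known operator":  {"state actor": +1, "hacktivist": -1, "false flag":  0},
}

# The key move: rank hypotheses by how much evidence CONTRADICTS them.
# Consistent evidence is cheap (an attacker can plant it); contradictions
# are what actually eliminate explanations.
for hypothesis in ["state actor", "hacktivist", "false flag"]:
    hits = sum(1 for scores in evidence.values() if scores[hypothesis] < 0)
    print(f"{hypothesis}: {hits} item(s) of contradicting evidence")
```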

If even security analysts struggle to confirm the source of attacks, understanding digital security seems all the more daunting to the rest of us. As less-technical people, what can we do? These talks suggested a few ways forward:

— We must demand evidence that supports analysts’ claims. We need it to recommend tools and to confirm attacks. What should we ask for as evidence? Gidwani and Kuhr’s talks suggest a few criteria:
• Do the analysts present the competing hypotheses they considered, and the ones they disproved?
• What do they present as evidence of the origin of attacks?
• Have we heard from other analysts about their analysis — what do they think?
• How are the analysts speaking about their confidence in the source of the attack? Generally, like the intelligence and military communities, we should look for them to be very specific and careful in their language — and we should avoid passing along reports which seem over-confident without presenting evidence.

— We must be careful about the information we pass along. Social media are absolutely our Achilles’ heel here. All of us who recommend tools are ultimately part of the feeds which attackers seek to taint. And retweeting a serious-looking news item — like the recent Guardian report on the “backdoor” in WhatsApp (which isn’t a backdoor) — is painfully easy to do without even reading the linked article for evidence. (I’m as guilty of this as anyone else. I trust the Guardian to do smart work on infosec, so I retweeted that article before reading it. Signing Zeynep’s open letter will have to count as my mea culpa, but letters to the editor are too little, too late.)

— We should support information-sharing about attacks — something already being done effectively by groups like Citizen Lab, the EFF, and Ranking Digital Rights. Interestingly, this is a theme I’ve heard arising in places as diverse as West Point, Johnson & Johnson, and Internews over the past year. Kuhr and Gidwani also emphasized the importance of information-sharing between analysts. Gidwani gave credit to Vice for their solid linguistic analysis of Guccifer 2.0’s writing, and noted a time when a journalist pushed her team to keep looking at a lead they had given up on. The more perspectives considered on a case — whether they come from journalists or from corporate, military, academic, or NGO security analysts — the more effective the analysis will be. “The adversary counts on us not talking amongst ourselves,” Gidwani said.

In sum: be the reporters and fact-checkers you would like to see in the world.

ShmooCon talks were live-streamed; I’m not sure whether they will be put online at some point. If they are, I highly recommend the ad networks talk, as well as Matt Blaze’s talk on metadata, which I missed but hear was excellent, as usual.
