What’s hard about sharing threat intelligence?

Gus Andrews
2 min read · Apr 24, 2024

Every industry finds sharing cyber threat intelligence challenging. But the obstacles social media platforms and civil society organizations face in sharing data about social trust and safety threats are uniquely thorny.

A vignette from my “stakeholder journey map” of the pain points in communication between social media companies’ trust and safety departments, human rights organizations, and other members of civil society when social and digital attacks are reported.

View the complete stakeholder journey map.

View the PDF of my report on the challenges of threat intelligence sharing in civil society.

As HUMAN Security’s Dan Kaminsky Fellow for 2023, I supported the international community of digital civil society and human rights organizations in their attempts to more effectively gather, share, analyze, and act on digital security threats. These organizations often run helplines that journalists and human rights defenders may contact when they are facing some sort of attack online — everything from phishing and account takeover, to targeted spyware, to malicious harassment and disinformation campaigns.

I was particularly interested in exploring connections between online social attacks (harassment, doxxing, physical threats, election manipulation, disinformation, impersonation, and so on) and what is more traditionally thought of as “cybersecurity.” There are plenty of cases in which researchers have found connections between social attacks and malicious software and infrastructure; for example, Citizen Lab’s Endless Mayfly campaign analysis. Researchers have even observed a disinformation campaign about the origins of malware. But structurally, there are still big disconnects when it comes to identifying patterns of activity that may cross the borders between “trust and safety violations” and “cyber threats.”

As I talked to more NGOs, trust and safety personnel, threat analysts, and emergency helpline staff, it became clear that cases were falling through the cracks in part because of systemic disconnects: between the privacy protections of NGOs and those of platforms, which shaped what data was gathered; between the working incentives of basic moderation teams and those of threat intelligence teams; and between the human scale of NGOs and the global market scale of platforms.

So I drew on my Thoughtworks product management experience and made an “information radiator” — a visualization of the journey of one user’s abuse report, from the moment they’re harassed, through their seeking help from an NGO, through platform moderators and security staff, and even through to the lawyers who might eventually help them pursue a legal case. I’ve also written up a report of the findings of my “listening tour,” along with an analysis of statistics on attack sharing within a formal network (a CERT) of organizations that run helplines and offer other security services.

My hope is that social media platforms, NGOs, and NGO funders find this useful in addressing the places where cases are falling through the cracks. It may also be a useful discussion starter for other CERTs, ISACs, and ISAOs beyond the NGO sphere, as some of the obstacles to sharing cybersecurity information may be common elsewhere as well.



Gus Andrews

Researcher, educator, and speaker on human factors in tech. My policy work has been relied on by the EFF and US State Department. Author of keepcalmlogon.com