This study examined the decision-making processes surrounding social media platforms’ content policies on extremism and terrorism, with particular focus on measures targeting far-right content and users. By attending to the stages of content policy development, implementation, and enforcement, and to content that poses specific challenges with regard to legality and user circumvention, the project explored the difficulties of platform governance of far-right content. To assess the dynamics of digital regulation, the project conducted qualitative interviews and fieldwork with tech company employees who oversee content policies on extremism and terrorism in Europe and North America. The project found that the governance of online extremism and terrorism must be situated within a broader ecosystem in which tech companies engage with stakeholders across government, security services, industry, and civil society.