January 27th, 2021
Protecting people from harmful content and making our products safer for everyone is core to the work of many different teams across Google and YouTube. When it comes to the content on our platforms, we have a responsibility to safeguard the people and businesses using our products, and to do so with clear, transparent policies and processes.
Today, we’re announcing our second Google Safety Engineering Center (GSEC)—and our first focused on content responsibility—to be located in our European headquarters in Dublin, Ireland. Our first GSEC opened in Munich in 2019 and builds tools to protect users’ privacy and security. The new Dublin center will be a regional hub for Google experts working to tackle the spread of illegal and harmful content and a place where we can share this work with policymakers, researchers, and regulators.
Dublin is already a hub for our Trust and Safety teams in the region, comprising many different policy experts, specialists, and engineers who use the latest technology and artificial intelligence to keep people safe online. Europe has been leading many different safety efforts globally and is home to teams working on everything from ad transparency and child safety to botnet research and violent extremism.
The new GSEC will provide additional transparency into this work and make it easier for regulators, policymakers, and researchers to gain a hands-on understanding of how we deal with content safety. It will also help everyone understand how we develop and enforce policies, how our anti-abuse technologies and early threat detection systems work, how we work with trusted flaggers, and how our incident management processes and content moderation practices operate. Specifically:
- Regulators: Through GSEC, regulators will be able to access more information about how our content moderation systems and other technologies work in practice, in a secure location that safeguards the confidentiality of user information. When fully operational, this will enable regulators and policymakers (under existing or upcoming legal frameworks like the Digital Services Act) to conduct inquiries, evaluate processes and engage in official fact-finding.
- Academics: GSEC Dublin will work with the wider academic community and civil society groups promoting safety online. This new center will enable us to engage more closely with researchers, NGOs, and other external stakeholders about emerging trends and risks with the aim of improving safety for people online. This work will be done with the appropriate safeguards on confidentiality, user privacy, and security. With the announcement of GSEC Dublin, we’re also announcing our first partnership with the Irish Research Council funding academic scholarship and research into online safety.
- Civil society: GSEC will build on existing initiatives within Google that bring to life our responsible-by-design approach. We will share this knowledge more widely through the publication of reports and insights on content responsibility. This work is already underway, and we recently published our white paper on content moderation and information quality.
Our Trust and Safety teams sit around the globe, and we have more than 20,000 people working in a variety of roles to help enforce our policies and moderate content. We also continue to invest in initiatives like transparency reports and proactive disclosures around coordinated influence operations and disinformation, and we keep sharing information with researchers and supporting collaborations like Project Lumen to help users, academics, and policymakers better understand how we manage content at scale.
We have a responsibility to keep people safe online and to protect our platforms and products from abuse. As we continue to invest and scale these efforts, we are committed to providing additional transparency into our processes and policies. The work of the GSEC for Content Responsibility will begin virtually, and we plan to open the physical center in Dublin as soon as it is safe to do so and COVID-19 restrictions allow.