Daniel Castro
Daniel Castro is vice president at the Information Technology and Innovation Foundation (ITIF) and director of ITIF's Center for Data Innovation.
Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government, and accessibility for people with disabilities. His work has been quoted and cited in numerous media outlets, including The Washington Post, The Wall Street Journal, NPR, USA Today, Bloomberg News, and Bloomberg Businessweek. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.” In 2015, U.S. Secretary of Commerce Penny Pritzker appointed Castro to the Commerce Data Advisory Council.
Castro previously worked as an IT analyst at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He contributed to GAO reports on the state of information security at a variety of federal agencies, including the Securities and Exchange Commission and the Federal Deposit Insurance Corporation. In addition, Castro was a visiting scientist at the Software Engineering Institute in Pittsburgh, PA, where he developed virtual training simulations to provide clients with hands-on training of the latest information security tools.
He has a B.S. in foreign service from Georgetown University and an M.S. in information security technology and management from Carnegie Mellon University.
Research Areas
Recent Publications
Latest FTC Warning About Algorithmic Pricing Runs Counter to Facts
Federal Trade Commission (FTC) Chair Lina Khan is once again stoking unsubstantiated fears about algorithmic pricing—the practice of using algorithms to offer customers different prices based on dynamic market conditions—but this time, her examples are even more outlandish.
Breaking Up Google? So Much for a Whole-of-Government Approach to US AI Leadership
While the Biden administration champions the need for private sector innovation to drive U.S. leadership in artificial intelligence, its Justice Department wants to put one of America’s top innovators—Google—on the chopping block.
National Security Reminds Policymakers What Is at Stake for the United States in the Global AI Race
On October 24, 2024, President Biden signed a National Security Memorandum governing the use of AI for national security. Overall, it demonstrates that the Biden administration takes seriously the threat of the United States losing the global AI race and recognizes the grave repercussions of falling behind for national security. It serves as a reminder to policymakers about what is at stake.
Comments to the FCC on AI-Generated Content in Political Ads
The Center for Data Innovation submitted comments to the Federal Communications Commission (FCC) regarding the disclosure and transparency of AI-generated content in political advertisements.
Opportunities for APEC To Build Trust in the Digital Economy
Global trade relies heavily on trust, and the Internet amplifies trust challenges due to distance, anonymity, and the vast scale of interactions. To address these challenges, APEC economies should focus on developing socio-technical solutions, like digital IDs and content provenance tools, to improve trust in the digital environment and ensure the safety and security of the digital economy.
Comments to the Ministry of Information and Communications Regarding Vietnam’s Draft Law on Digital Technology Industry
The draft DTI Law presents a promising framework for advancing Vietnam's digital technology sector. While it offers valuable steps forward in areas such as data accessibility, AI regulation, and industry support, there are several aspects that could benefit from refinement.
How Experts in China and the United Kingdom View AI Risks and Collaboration
As AI continues to advance, the technology has created many opportunities and risks. Despite significant geopolitical differences, a series of interviews with AI experts in China and the United Kingdom reveals common AI safety priorities, shared understanding of the benefits and risks of open source AI, and agreement on the merits of closer collaboration—but also obstacles to closer partnerships.
The AI Act’s AI Watermarking Requirement Is a Misstep in the Quest for Transparency
The AI Act requires providers of AI systems to mark their output as AI-generated content. This labeling requirement is meant to allow users to detect when they are interacting with content generated by AI systems, addressing concerns like deepfakes and misinformation. Unfortunately, implementing one of the AI Act’s suggested methods for meeting this requirement, watermarking, may not be feasible or effective for some types of media.
Podcast: Information Technology Is Increasingly Critical and Increasingly Demonized, With Daniel Castro
Over the last several years, public opinion on technology and the use of data has shifted from excitement to skepticism to fear.
The Surgeon General’s Misleading Claims About Social Media’s Risk To Children Should Come With Its Own Warning Label
The U.S. Surgeon General Dr. Vivek Murthy published a New York Times op-ed arguing that social media poses such a threat to children’s mental health that these platforms should come with a warning label like cigarettes and alcohol. However, his argument is flawed on multiple levels: There is no scientific consensus that social media is causing mental health issues among youth; social media use among youth does not present a similar level of risk as tobacco and alcohol use; and the alleged risks of social media come from interaction with specific types of content, not from the platforms themselves.
A Techno-Economic Agenda for the Next Administration
The next administration needs to place innovation, productivity, and competitiveness at the core of its economic policy. To that end, this report offers a comprehensive techno-economic agenda with 82 actionable policy recommendations.
State Department Risks Overlooking Potential of AI For Human Rights
President Biden’s 2023 executive order on artificial intelligence (AI) directed the State Department to work with other agencies and stakeholders to develop guidance for identifying and managing human rights risks associated with AI. As the State Department prepares this guidance, it should emphasize that in many cases, the risk of inaction—the missed opportunities to use AI to improve human rights—presents the most significant threat, and it should prioritize deploying AI to support and enhance human rights.
Recent Events and Presentations
From Data Policy to Practice: Bridging the Gap
Daniel Castro speaks at a webinar on the national and international policies shaping data management and use hosted by EMPOWER, a research programme sponsored by Science Foundation Ireland.
The Impact of AI on Cybersecurity
Daniel Castro presents on the impact of AI on cybersecurity at an event hosted by Dar America in Casablanca.
Safe and Responsible Use of AI: Ethical Guidelines and Guardrails
Daniel Castro presents about the safe and responsible use of AI in education at the conference "Bridging the Skills Gap: AI Solutions for Zimbabwe's Education to Workforce Pipeline."
Utilizing Technological Innovation to Enhance Intel IT
Daniel Castro moderates a panel about IT modernization in the defense and intelligence communities at the 8th Annual Intel IT Modernization Summit hosted by the Defense Strategies Institute.
Capital Goods: Artificial Intelligence, Data-centres, Electrification and Automation
Daniel Castro speaks about AI's impact on electricity consumption at the Capital Goods Virtual Conference 2024.
AR/VR Policy Conference 2024
Watch now the fourth annual AR/VR Policy Conference presented by the Information Technology and Innovation Foundation (ITIF) and the XR Association.
Past and Future, Threats and Opportunities of AI
Daniel Castro speaks about how policymakers can maximize the opportunities associated with AI at the 9th Congresso Empresarial Colombiano in Medellin, Colombia.
Responsible Practices and Use of AI
Daniel Castro speaks about AI governance and ethics at the IndabaX 2024 Zimbabwe AI Symposium hosted by the Harare Institute of Technology and the U.S. Embassy in Zimbabwe.
How to Improve Trust in the Digital Economy
Daniel Castro speaks on a panel at the Digital Trade Policy Dialogue. This panel was part of the Asia-Pacific Economic Cooperation’s (APEC's) Third Senior Officials’ Meetings (SOM 3) in Lima, Peru.
The AI Regulatory Landscape
Daniel Castro moderates a panel about the evolving regulatory terrain surrounding AI at the Ai4 conference in Las Vegas.
Policy, Governance, and Ethical Considerations In AI
Daniel Castro speaks about policy, governance, and ethical considerations in AI at a conference hosted by the Johns Hopkins University Whiting School of Engineering and the National Academy of Engineering Forum on Complex Unifiable Systems.
Insights on US Public Opinion on AI
Watch now a Capitol Hill event covering an in-depth survey by ITIF’s Center for Data Innovation and Public First about what Americans think about AI, how these views have shifted over the past year, and the implications of these beliefs for businesses, policymakers, and society at large.