TECNA Comments on National Priorities for Artificial Intelligence
The Office of Science and Technology Policy
Eisenhower Executive Office Building
1650 Pennsylvania Avenue, NW
Washington, D.C. 20504
Submitted electronically via www.regulations.gov
Re: Request for Information: National Priorities for Artificial Intelligence
To Whom It May Concern:
The Technology Councils of North America (TECNA) represents more than 60 regional technology councils serving technology businesses. We empower regional technology organizations and serve as their collective voice in growing the North American technology economy. Our members represent 22,000 small- to medium-sized, innovation-intensive technology companies across North America. These companies are often startups and depend heavily on a thriving ecosystem of investment capital and acquisitions.
We thank OSTP for affording us and other stakeholders an opportunity to comment on the future of national priorities for artificial intelligence (AI). AI can be applied in ways that help society tackle some of its biggest problems, such as making driving safer, delivering more accurate medical diagnoses, fighting human trafficking, countering cyberattacks, unleashing scientific discovery, enabling farmers to increase crop yields, helping investors maximize returns and helping athletes prevent injury. Moreover, AI can augment human abilities in ways that increase productivity and improve outcomes, which will foster widescale economic progress. At the same time, AI presents new ethical challenges surrounding privacy, potential underlying bias, liability and decision-making processes. To ensure we harness all of AI’s benefits while minimizing negative impacts, governments must pursue policies that enable the continued development of AI technologies, protect individual rights and freedoms, and mitigate impacts from increased automation. It is with this in mind that we respectfully offer for your consideration the following comments:
Federal clarity regarding definitions of AI and key terms is necessary to create and advance a unified strategy and informed baseline. The scope of AI is broad and captures many different types of systems and processes. Even under the current AI framework, however, the Administration defines AI differently via the White House’s Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework. This disparity may create confusion among businesses and form barriers for further adoption of AI. Furthermore, terms such as “bias,” “discrimination” and “fairness” are highly context specific. Because they cross multiple disciplines (e.g., mathematics, computer science, law, etc.), such terms often have differing definitions. Consistent guidance from the federal government regarding the specific meanings of these terms in an AI context would inform better policy and business decision making.
Alignment With Current Regulation
Policymakers should avoid overly prescriptive approaches as the rate of change in AI technology and its applications is moving much faster than new legislation can be passed or new regulations promulgated. Instead, policymakers can align AI policy with existing legal protections. Policymakers should identify and clarify how existing laws apply to AI or AI-based applications rather than imposing broad, new regulatory schemes, lest we create unnecessary barriers to innovation or potential conflicts of law.
Policymakers must balance context-specific risk mitigation against the danger of unduly restricting innovators from developing and delivering the benefits of responsible AI. Companies and public institutions in the U.S. are already in competition with other nations and companies around the world seeking to apply AI as a means to gain economic or military advantage. The technology and its application will proceed with or without U.S. regulations. Therefore, public policy should encourage investment in AI research and development (R&D) and the open sharing of its results. Policymakers should promote investment, make funds available for R&D, and ensure that no barriers exist for AI development and knowledge sharing.
Because government is a primary source of funding to address long-term global societal challenges, we encourage government funding of AI-powered flagship initiatives to find solutions to the world's greatest challenges, such as curing cancer, ensuring food security and mitigating climate change. Policymakers should lead the way in demonstrating the applications of AI by sufficiently investing in the infrastructure to support and deliver AI-based services. Moreover, they should foster the conditions necessary for controlled testing and experimentation of AI in the real world, such as designating self-driving test sites in cities and enabling pilot programs in live environments with relaxed regulatory burdens. Partnering with industry, academia and other stakeholders for the common understanding, promotion and demonstration of AI applications will maximize its benefits for the economy.
Implementing a Risk-Based, Context-Specific Approach
The key to striking a balance between regulation and innovation is to provide means for businesses to obtain guidance based on their risk profile. There are several ways to achieve this in the areas of cybersecurity, privacy and transparency without affecting the intellectual property of organizations. Policymakers should take a risk-based approach that is balanced, flexible and context-specific. Regulatory frameworks for AI should be specific to the AI system and proportionate to the level of risk. For example, the draft EU AI Act establishes a risk-based model outlining four levels of risk: minimal risk, limited risk, high risk, and unacceptable risk. A lighter regulatory framework applies to AI applications with lower risk, while a more stringent framework is set in place for higher-risk categories. A risk-based approach creates a layered regulatory mechanism that provides more clarity when addressing AI concerns and risk. Simultaneously, it allows for innovation and agility in AI development and delivery.
Developing a Pipeline of AI Tech Talent
Demand for a highly specialized tech workforce to develop and implement emerging AI is growing exponentially and driving competition for the limited talent pool across industries. Software developers, coders, web designers and other tech workers are already in demand in every sector, from financial institutions to retailers to universities to health care systems. Organizations seeking to develop or apply AI are now aggressively recruiting talent across the globe.
The United States is not a clear global leader in AI. To remain competitive and to capture our full economic and strategic potential, policymakers must collaborate with industry to develop and attract a highly skilled workforce. AI policy should provide incentives for students to pursue courses of study that create the next generation of AI, and for education institutions to deliver such programs. Policies should also support upskilling and reskilling of an AI-enabled workforce focused on using AI to improve efficiencies and outcomes. Perhaps most importantly, policymakers should create an AI exemption for H-1B visa applications to allow the U.S. to compete for global talent.
Developing a Federal Privacy Law
The safeguarding of consumer data is paramount to ensure confidence in the application of all technologies, and particularly AI. Equally important, however, is a fair and transparent playing field for organizations developing the technology of the future – especially the small- to mid-size companies driving some of the most useful applications of AI. For innovation to continue, industry must have clear and consistent privacy regulations across the states.
Current privacy laws regarding the collection, storage and sharing of personal data are not standardized across states. The United States lacks a comprehensive statutory or regulatory framework to guide and govern data usage and compliance. Many smaller companies working on the most advanced new AI technologies and applications have limited resources and simply cannot navigate the growing patchwork of state privacy laws.
As the recent TECNA study “Tech Workforce Trends: The Migration of Tech Jobs Since the Pandemic” shows, most companies are becoming tech companies, many with the potential to create new technologies and use technology in new ways. These businesses, which cut across multiple industry and service sectors, need consistent, clear guidance on AI development and utilization. It is increasingly cumbersome to keep pace with new, and sometimes conflicting, state-based regulatory requirements, which impose barriers to innovation.
An alignment of privacy laws in the form of a streamlined federal privacy standard will allow organizations to simplify the compliance process, decrease legal and compliance costs, and better prepare for development and deployment of new, responsible AI. A comprehensive federal privacy law would also make it easier for consumers to understand and safeguard their privacy rights. Consumers and AI developers alike deserve clear and consistent regulations to compete effectively on a global stage.
As global leaders seek to develop and deploy AI, U.S. policymakers should strive to collaborate with other countries to ensure aligned approaches. The Organisation for Economic Co-operation and Development (OECD) has established important AI principles, and its privacy guidelines underpin the Fair Information Practice Principles (FIPPs). But with AI innovation, we must rethink how we apply these models to new technology. Many countries and U.S. states have their own data mining and usage regulations. Diversity of regulatory constraints across geographic borders remains problematic, and alignment should be pursued.
Additionally, removing barriers and opening access to large repositories of collected data, where appropriate, will help to achieve more enhanced AI model development. The United States should collaborate with other countries and international bodies to develop responsible and clearly delimited data use policies that acknowledge the complexities of multinational organizations’ work while respecting the various data laws in place globally.
We thank the Agency for affording us and other stakeholders the opportunity to comment on the many significant opportunities for the future evolution of AI. We appreciate the Agency’s consideration of these comments and look forward to continued dialogue and collaboration in the future.
Jennifer G. Young
Chief Executive Officer