Getting Started With AI Governance & Why It Matters
As AI systems and tools become increasingly prevalent, it is essential to think about how to incorporate them into our businesses and organizations as ethically as possible. If you are a technologist – a product manager, data platform manager or director, information architect, data scientist, data engineer, researcher, or technology manager – where do you begin to make sure your systems are not harmful? To use AI responsibly, we need to look at the most important foundation of AI: data. How the data was collected, cleaned, and secured is critical to a system's performance and ethical use. If you work with or manage a large corpus of data, the data infrastructure must be trustworthy, accurate, and up to date. Following an AI governance framework or guideline will help teams build the trustworthy, accurate AI systems that their stakeholders, such as their customers, deserve and expect.
AI governance refers to the processes, standards, and guardrails that help ensure AI systems and tools are safe, fair, and respectful of human rights.
Overview of AI Governance
In this section, we will look at AI governance frameworks and how they are applied in the United States and the European Union.
United States
AI is evolving quickly, and so are AI governance frameworks and guidelines. As of this writing in December 2024, the incoming US administration has promised to repeal existing AI guidance such as the AI Bill of Rights, a non-binding framework of five principles, introduced under the Biden administration, meant to guide the design and development of AI systems. The AI Bill of Rights aims to safeguard the American public's civil rights from the potential harms of automated systems.
As of now, the five principles are the following:
- Safe and effective systems: Communities that could be affected by the technology, subject matter experts, and stakeholders should be involved in the process of identifying any potential negative impact of autonomous/AI systems.
- Algorithmic discrimination protections: Designers and developers of the system should protect individuals and communities from unjustified biases.
- Data privacy: Individuals should be protected from abusive data privacy practices and have a say on how their data is handled.
- Notice and explanation: Individuals should be informed when they are using an automated system rather than interacting with a real human, and the system's designers and developers should set clear expectations about the outcomes of using such systems.
- Human alternatives, consideration, and fallback: Individuals should be able to opt out, where appropriate, and have access to a person who can quickly consider and resolve problems they encounter while using the system.
It is hard to predict what, if anything, the incoming administration will replace the AI Bill of Rights with. Currently, no single entity is tasked with ensuring that these guidelines are followed; companies choose how and where to apply them.
European Union
As for Europe, in 2024 the European Union (EU) adopted the EU AI Act, a legally enforceable, tiered, risk-based framework that determines the level of oversight an AI system requires. The aim of the EU AI Act is to “foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models”.
Here is the EU AI Act tiered risk-based system:
- The first tier is unacceptable risk systems that are prohibited. These include systems deploying subliminal, manipulative, or deceptive techniques to manipulate behavior and impair informed decision-making, causing significant harm. This tier also covers biometric categorization systems, social scoring, compiling facial recognition databases, and inferring emotions in workplaces or educational institutions.
- The second tier is high-risk systems, which are systems that must be registered and bear the burden of proving that they do not pose a significant threat to health, safety, and fundamental rights. This tier includes technology used in critical infrastructures, educational and vocational training, product safety, border control management, law enforcement, essential services, administration of justice, and employment.
- The third tier is limited- and minimal-risk systems. This tier is subject to lighter transparency requirements: to foster trust, humans must be informed whenever necessary that they are interacting with AI, and AI providers (developers) must ensure that AI-generated content, including text, voice, video, and deepfake content, is identifiable.
However, there are exceptions. Three categories of systems are exempt from the EU AI Act:
- Any system developed exclusively for the military, defense, or national security.
- AI developed exclusively for scientific research.
- Free and open-source AI, in which the code is in the public domain and available for anyone to use, modify, and distribute.
The European AI Office oversees the enforcement of the EU AI Act across member states. Companies and organizations operating in the EU that fail to comply with the EU AI Act face fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher.
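To make the fee structure concrete, here is a minimal sketch of the penalty cap calculation; the company revenue figure is hypothetical.

```python
# Sketch: the EU AI Act's maximum fine is EUR 35 million or 7% of worldwide
# annual turnover, whichever is higher. The revenue figure is hypothetical.

def max_eu_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a violation."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: a hypothetical company with EUR 2 billion in worldwide turnover.
print(f"Maximum fine: EUR {max_eu_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```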
Discussion of AI Governance
As you can see, these AI governance frameworks differ from one another. They describe what should not be done, but companies and organizations are left to figure out how to apply these frameworks to ensure responsible use of AI, build trust with their customers and users, and avoid legal liability. The future of existing AI governance is unknown, given the political landscape and the sheer number of players in the field (different regulatory jurisdictions, providers of large language models (LLMs), regional regulations, industry standards, and so on). It is challenging for companies and organizations to chart a path that works in every environment. However, it is necessary to understand the beliefs and values of your company or organization as they pertain to AI governance. These well-defined beliefs and missions will be the foundation for enacting sensible policies and principles that protect your customers and users, your reputation, and their trust.
Proposal Based on Weizenbaum’s Questions
If you are a technologist who works with or manages data or data teams, where do you begin to make sure your systems are not harmful? As mentioned above, AI governance and AI assessment are fragmented and can seem like patchwork.
If you care about responsible AI, I propose another way to create your company's or organization's AI framework. I hope that looking at AI governance through these lenses will help teams create a durable AI governance strategy that can survive changes in political administration and the ever-changing AI governance landscape. It is important to think about the short-term and long-term impact of technology on the social, technological, ethical, and environmental spheres.
My proposal is to honestly and systematically answer Joseph Weizenbaum's warning. In 1978, the computer scientist and AI pioneer Joseph Weizenbaum, in his article ‘Once more—a computer revolution’, asked important questions that remain relevant today, if we, as a society, care about a fair and equitable world. He argued that some questions are almost never asked about new technologies:
“Who is the beneficiary of our much-advertised technological progress and who are its victims?
"'Who taught the multi-unit 3000 to lie?'"
What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?
What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc., but on the self-image of human beings and on human dignity?
What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?
Will our children be able to live with the world we are here and now constructing?
Much depends on answers to these questions.”
Folks, Weizenbaum's warning still holds, one could argue even more so today than in 1978, and we need to act. Technologists, government services, organizations, and businesses have the incentive to think about these questions, both to comply with existing regulations and to build trust with their users, customers, and constituents. And truly, the future depends on how we answer these questions as a collective.
How does Weizenbaum's warning apply to the current AI era? New AI techniques, such as deep learning, have produced powerful models, including large language models (LLMs), capable of helping discover new proteins and accelerate drug discovery. However, they can also allow bad actors to build lethal autonomous weapons capable of mass killing. As the saying goes, with great power comes great responsibility. To take that responsibility, we need to translate Weizenbaum's warning into today's context.
Let us ask these same questions in today’s AI context:
Looking at first principles: given the current environment, how have things changed, or not changed, regarding AI since Weizenbaum's warning in 1978? Here is how we can reconsider his questions:
- Who is benefitting from the AI system and who might this system harm? Is this the outcome you intended?
- Is what we are building sustainable (you will have to define this yourself, depending on the sector or space you occupy) for the future? Or are we racing to follow everybody else without considering the long-term repercussions of our actions?
- How do we explain the output of these systems? Are there ways to align them with our values so they stop "hallucinating"?
- What are the guardrails and limits of using AI? What are the AI governance and accountability frameworks that should guide the design and development of this AI tool?
- Have we understood the genuine cost of these systems? Are they as universally applicable as we make them out to be (i.e., should AI systems solve every problem and use case? Are there exceptions?), or useful in all cases? Is this truly the best use of our finite resources (however you quantify them)?
- Will our children be able to live with the world we are here and now constructing? (This one needs no updating; it still applies.)
I will not pretend to know the answers to these questions. However, I believe it is the best use of your time and of your organization’s time to critically think about these questions and honestly answer them if you hope to achieve your mission.
The toolkit
Here are some of the tools that can help you on the journey to clarity: the journey of answering these questions. These tools, like any others, are not perfect, but they are a starting point to help you answer the above:
1. To ensure the AI systems serve all:
- Data and content evaluation: To limit algorithmic bias as much as possible, understand what is in your data: comb through the AI training data and make sure it accurately reflects your current view (see the sketch following this list).
- Be purposeful: We know that most of the internet data used to train LLMs is Western-centric, and that the data we have may carry built-in bias. Knowing what is lacking in the data, and who might not be represented in it, will help you assess how contextually broad or narrow your output will be.
- Engage stakeholders: Include a diverse group of the system's users to understand their unique experiences during the system's design, development, and deployment. The earlier they are part of the process, the easier it is to catch harmful, potentially discriminatory, or biased data.
- Continue to question your assumptions: Consider whether you are doing the right thing for your mission and the people or space you are meant to serve.
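As a starting point for combing through training data, here is a minimal sketch that surfaces how groups are represented in a dataset. The column name ("region") and the 5% threshold are hypothetical; adapt them to your own corpus.

```python
# Sketch: flag under-represented groups in training data.
# The attribute name and the 5% threshold are hypothetical choices.
from collections import Counter

def representation_report(records, attribute, min_share=0.05):
    """Print each group's share of the data and flag groups below min_share."""
    counts = Counter(rec[attribute] for rec in records if attribute in rec)
    total = sum(counts.values())
    for group, count in counts.most_common():
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{attribute}={group}: {share:.1%}{flag}")

# Hypothetical toy data; in practice, iterate over your real training corpus.
data = [{"region": "North America"}] * 80 + [{"region": "Europe"}] * 17 + [{"region": "Africa"}] * 3
representation_report(data, "region")
```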
2. To ensure the system complies with existing regulations:
Depending on the regions your company or organization operates in (or has customers in), different regulations and standards may apply.
- Keep an eye on relevant regional and product-scope regulations:
- Currently, many states in the United States have enacted AI governance regulations or guidelines to protect their constituents. It is essential for companies to have an ongoing review process to make sure they comply with this ever-growing and changing landscape.
- For example, the EU has enacted the General Data Protection Regulation (GDPR), an EU law that protects individuals' privacy and security when their personal data is processed. Companies and organizations serving customers and users in the EU must adhere to it.
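To make this concrete, here is a toy sketch of what one GDPR-style obligation, the "right to erasure," can look like in the data layer. The record structure and field names are hypothetical, and a real deletion must also propagate to backups, logs, and downstream systems.

```python
# Toy sketch of a GDPR-style erasure request handler.
# Record layout and field names are hypothetical.

def erase_user(records: list[dict], user_id: str) -> list[dict]:
    """Remove all records belonging to user_id and report what was erased."""
    kept = [rec for rec in records if rec.get("user_id") != user_id]
    erased = len(records) - len(kept)
    # In a real system, also purge backups, caches, and downstream copies,
    # and record the erasure event in an audit log.
    print(f"Erased {erased} record(s) for user {user_id}")
    return kept

store = [{"user_id": "u1", "email": "a@example.com"},
         {"user_id": "u2", "email": "b@example.com"}]
store = erase_user(store, "u1")
```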
3. To ensure accuracy:
- Review the underlying data: Designers and developers should be aware of structural injustice and the kinds of biases that may lurk in training data. That means testing the data up front for a list of undesirable attributes and fixing issues as they go through the AI product development lifecycle.
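Here is a minimal sketch of what such up-front tests can look like; the specific checks and field names are hypothetical placeholders for the "undesirable attributes" your team defines.

```python
# Sketch: up-front checks for undesirable attributes in training data.
# The specific checks and field names are hypothetical.

def audit_record(rec: dict) -> list[str]:
    """Return a list of problems found in a single training record."""
    problems = []
    if not rec.get("text"):
        problems.append("missing text")
    if rec.get("label") is None:
        problems.append("missing label")
    if "ssn" in rec:  # example of an attribute that should never be present
        problems.append("contains PII field 'ssn'")
    return problems

dataset = [{"text": "ok", "label": 1},
           {"text": "", "label": None, "ssn": "123-45-6789"}]
for i, rec in enumerate(dataset):
    for problem in audit_record(rec):
        print(f"record {i}: {problem}")
```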
4. To ensure data is unbiased:
- Bring in the affected communities for their input: Many ethical issues may be beyond the competence of technology professionals to handle on their own. This process can help in discovering problems and in finding lasting, fitting solutions.
- Consider potential harm: Get well acquainted with your data. If you are using third-party data, make sure to acquire high-quality data (accurate, complete, up to date, and relevant).
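One way to quantify potential harm is a simple group-level check on outcomes, such as the disparate impact ratio sketched below. The data is hypothetical, and the 0.8 threshold is a common rule of thumb rather than a legal standard.

```python
# Sketch: disparate impact ratio across a sensitive attribute.
# Data is hypothetical; the 0.8 threshold is a rule of thumb, not law.

def positive_rate(outcomes, group, attr="group", label="approved"):
    """Fraction of positive outcomes for one group."""
    subset = [o for o in outcomes if o[attr] == group]
    return sum(o[label] for o in subset) / len(subset)

outcomes = (
    [{"group": "A", "approved": 1}] * 60 + [{"group": "A", "approved": 0}] * 40
    + [{"group": "B", "approved": 1}] * 30 + [{"group": "B", "approved": 0}] * 70
)
ratio = positive_rate(outcomes, "B") / positive_rate(outcomes, "A")
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")  # 0.50, well below 0.8
```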
5. To ensure sustainability:
- Determine usefulness: AI's utility should be justifiable to all relevant parties. AI, like any tool that has ever existed, is not necessarily a cure-all; it should be used only when it is the right tool for the job. As with any investment, other solutions must be evaluated to see whether something else would be optimal for the problem at hand.
- Consider the true cost: With limited resources, ecological and financial, we ought to think about how we use them and account for the real cost of using AI tools compared with other solutions, technological or otherwise. Considering the hidden costs and other externalities of using the tool will help us make clear-eyed decisions.
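A back-of-envelope comparison can make the true-cost question concrete. Every number below is hypothetical and should be replaced with your own estimates, including hidden costs such as energy and human review labor.

```python
# Sketch: back-of-envelope cost comparison. All numbers are hypothetical.
requests_per_month = 1_000_000

llm_cost_per_request = 0.002         # e.g., hosted model API fee
llm_review_cost_per_request = 0.001  # human spot-checking of outputs
rules_cost_per_request = 0.0001      # compute cost of a rule-based system
rules_build_cost_per_month = 5_000   # amortized engineering cost

llm_total = requests_per_month * (llm_cost_per_request + llm_review_cost_per_request)
rules_total = requests_per_month * rules_cost_per_request + rules_build_cost_per_month

print(f"LLM option:        ${llm_total:,.0f}/month")    # $3,000/month
print(f"Rule-based option: ${rules_total:,.0f}/month")  # $5,100/month
```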
6. To ensure trustworthy AI system output:
- Ensure explainability and traceability: The key to trusting the output of your AI system is understanding how it arrived at a conclusion. In addition, early investigation of the data makes it easier to trace problems back to their provenance once you know where to look.
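Traceability can start with something as simple as logging a provenance record for every output: which model version, which data snapshot, which input. A minimal sketch, with hypothetical field names:

```python
# Sketch: attach a provenance record to each AI output for traceability.
# Field names and values are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(model_version: str, data_snapshot: str,
                      prompt: str, output: str) -> dict:
    """Build a record linking an output to the model and data that produced it."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "data_snapshot": data_snapshot,  # e.g., a dataset version tag
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

record = provenance_record("model-v1.2", "corpus-2024-12",
                           "What is AI governance?", "...")
print(json.dumps(record, indent=2))  # store in an append-only audit log
```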
7. To ensure you follow best practices, regulations, and risk management, consider checklists:
- Create robust documentation: The main tool for regulatory compliance is robust documentation of new laws, including whether they are being addressed in the data infrastructure and throughout the AI lifecycle.
It might include:
- Algorithmic impact assessment: A tool that anticipates potential unintended or harmful consequences and offers mitigation strategies. It can be as complicated as new software or as simple as a spreadsheet, and it includes risk identification, mitigation strategies, stakeholder involvement, transparency, and continuous monitoring and evaluation.
- AI nutrition labels: A tool to transparently present information about an AI model's development, including training data, biases, accuracy, and limitations. Create a document with the model description, privacy level, optional features, model type, and base model. Include a "trust ingredients" section detailing customer data usage, data logging/sharing, anonymization, data deletion, human oversight, data retention, logging/auditing, guardrails, and input/output consistency (a minimal sketch follows this list).
- Regulation tracker: A tool used to monitor and visualize changes in regulatory policies over time. Create a tracker to document the laws and regulations your company or organization needs to comply with; it lets you assess the quantity and impact of regulations on your business or organization (a tracker sketch also follows).
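Here is a minimal sketch of an AI nutrition label as a structured, versionable record; the fields mirror the list above, and all values are hypothetical.

```python
# Sketch: an AI nutrition label as a structured record. Values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AINutritionLabel:
    model_description: str
    model_type: str
    base_model: str
    privacy_level: str
    trust_ingredients: dict = field(default_factory=dict)

label = AINutritionLabel(
    model_description="Support-ticket summarizer",
    model_type="LLM, fine-tuned",
    base_model="(your base model here)",
    privacy_level="No customer data used for training",
    trust_ingredients={
        "data_logging": "30-day retention, then deleted",
        "human_oversight": "Sampled review of 1% of outputs",
        "guardrails": "Output filtered for PII before display",
    },
)
print(label)
```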
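The regulation tracker can likewise start as a small structured list before it becomes dedicated software; the entries below are illustrative placeholders, not legal advice.

```python
# Sketch: a minimal regulation tracker. Entries are illustrative placeholders.
tracker = [
    {"regulation": "EU AI Act", "region": "EU", "status": "in force",
     "applies": True, "owner": "governance-lead", "next_review": "2025-03-01"},
    {"regulation": "GDPR", "region": "EU", "status": "in force",
     "applies": True, "owner": "privacy-lead", "next_review": "2025-01-15"},
]

# Surface what applies to you and who owns the follow-up.
for entry in (e for e in tracker if e["applies"]):
    print(f"{entry['regulation']} ({entry['region']}): "
          f"owner={entry['owner']}, review by {entry['next_review']}")
```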
8. To ensure your company's AI governance succeeds:
- Consider AI governance ownership: As one can see, AI governance is complex, and there is plenty of regulatory and guideline ambiguity. A company or organization needs to designate an owner: someone accountable for keeping up with the many pieces required for a trustworthy AI tool, from data auditing, to including diverse stakeholders, to reviewing the system's output, to regulatory compliance.
- Consider adding your AI framework to the ‘definition of done’ (DoD): As previously mentioned, the consequences of negligently developed AI products are severe. Therefore, ensure the product meets the criteria of your AI framework before deploying it to users (a release-gate sketch follows).
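One lightweight way to enforce this is a release gate that refuses to mark a product "done" until the framework's checks pass. The checklist items below are hypothetical examples; substitute your own framework's criteria.

```python
# Sketch: a "definition of done" gate for AI releases.
# Checklist items are hypothetical; use your own framework's criteria.
DOD_CHECKLIST = {
    "training_data_audited": True,
    "impact_assessment_completed": True,
    "nutrition_label_published": False,
    "regulation_tracker_reviewed": True,
}

def ready_to_ship(checklist: dict) -> bool:
    """Return True only if every item in the checklist passed."""
    failures = [item for item, passed in checklist.items() if not passed]
    for item in failures:
        print(f"BLOCKED: {item} not satisfied")
    return not failures

if not ready_to_ship(DOD_CHECKLIST):
    raise SystemExit("Release blocked by AI governance definition of done.")
```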
Final thoughts
AI is changing our world, for better or for worse, and the stakes are high. It is a powerful tool with the capacity to positively supercharge many aspects of our lives, such as health care and education. On the other hand, it could enable mass killing. Given that power, it is important to exercise responsibility, and Weizenbaum's still-relevant questions provide valuable insight into today's AI challenges. I hope this toolkit serves as initial food for thought that helps you create responsible AI tools.