Policy Development & Governance
Mentions of "artificial intelligence" seem inescapable in the 21st century. While the field of AI has existed since the 1950s, the past few years have been dominated by this technology. Its rapid development has been lauded by researchers, developers, and policymakers alike, and applications of AI in cancer detection, forecasting, and automating busywork are revolutionizing their respective fields. However, the unbridled excitement around AI technologies, especially large language models, must be tempered with an understanding of their risks in order to create responsible policy.
The risks posed by AI vary by audience. For students and educators, prolific use of generative AI can encourage plagiarism. AI tools such as chatbots can also expose children and adolescents to harmful and explicit content. Adults are not immune: they, too, can encounter harmful AI-generated content and mis/disinformation. On the security front, AI advancements enable more convincing scams, leaving older generations especially vulnerable. Concerningly, the proliferation of deepfakes specifically targets Tribal leaders. Understanding the risks different AI technologies pose to specific audiences can inform robust policies that protect all members of the community.
Some risks affect all members of the community. Globally, these technologies require immense energy to run, as well as land for data centers, raising environmental concerns. On the users' end, there are numerous cases where large language models have produced unsafe and dangerous content. Even output that seems relatively safe can perpetuate harmful biases against Indigenous people and Tribal Nations. For example, large language models are more likely to produce output riddled with bias and mis/disinformation when answering in a low-resource language. For context, a low-resource language is one with little presence online, which results in less training data in that language.
While many of the concerns around AI technologies seem herculean to solve, there are solutions policymakers can implement to mitigate the risks. Identifying different AI uses within Tribal Nations can contextualize broad issues like AI safety to address community needs. For example, schools can create policies around AI usage and citation practices. This is also an opportune time to let community values inform best practices for AI prompts, empowering users rather than harming them.
This age of AI can be exceptionally difficult to navigate; however, through active discussion and community-centered decision making, there is hope of creating policy that allows communities to successfully coexist with this technology.
SUMMARY
AI has many benefits, but also many risks that can harm community members
Some of these risks include unsafe content, scams, deepfakes, and perpetuation of biases
Addressing the problems locally first (e.g. examining how schools can manage the use of AI) directly serves community members
Embedding community values in best practices for the use of AI can address the needs of all audiences