A guardrail in an AI tool “is a safeguard that is put in place to prevent [AI] from causing harm…they are both created to keep people safe and guide positive outcomes.” [1]
- Guardrails can be built as controls within the AI tool itself (as sketched below), or imposed externally through policies, laws, and regulations.
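To make the first kind of guardrail concrete, the sketch below wraps a stubbed model call with input and output checks. It is a minimal illustration of the pattern, not any particular vendor's implementation: `BLOCKED_TOPICS`, `call_model`, and the keyword check are all hypothetical placeholders, and real guardrails typically rely on trained classifiers rather than keyword lists.

```python
# Minimal sketch: a guardrail as a control inside an AI tool. Both the user
# prompt and the model output are screened before anything is returned.
# All names here are hypothetical placeholders, not a real library API.

BLOCKED_TOPICS = {"weapons", "self-harm"}  # illustrative policy, not exhaustive


def call_model(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned reply."""
    return f"Model response to: {prompt}"


def violates_policy(text: str) -> bool:
    """Naive keyword check; production guardrails use trained classifiers."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def guarded_call(prompt: str) -> str:
    # Input guardrail: refuse before the model ever sees the prompt.
    if violates_policy(prompt):
        return "Request declined: it appears to involve a restricted topic."
    reply = call_model(prompt)
    # Output guardrail: screen the response before returning it.
    if violates_policy(reply):
        return "Response withheld: it triggered an output safety check."
    return reply


if __name__ == "__main__":
    print(guarded_call("How do I bake bread?"))
    print(guarded_call("Tell me about weapons."))
```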
Neither Canada nor the US currently has laws or regulations specifically governing the development and use of AI tools. While many developers have implemented guardrails, they have done so voluntarily, driven largely by reputational and liability concerns.
Innovation, Science and Economic Development Canada (ISED) has created the Canadian Guardrails for Generative AI – Code of Practice, which recommends six elements that should be incorporated into the development of every generative AI model or tool:
- Safety – identify and mitigate negative impacts and misuse.
- Fairness & Equity – use training data that is appropriate and representative, and ensure relevant, accurate, and unbiased outputs.
- Transparency – ensure outputs from AI models are always identified as such and provide provenance for the training data (see the sketch after this list).
- Human Oversight & Monitoring – always include human oversight in the development, deployment, and operation of the AI, and implement a way for negative impacts to be identified and reported.
- Validity & Robustness – rigorously test AI models to measure performance and identify vulnerabilities and unintended consequences, and employ cybersecurity measures to prevent and identify attacks on the AI.
- Accountability – implement a comprehensive and multifaceted risk management process around the AI tools and develop policies, procedures, and training to clearly identify roles and responsibilities.
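To ground two of these elements, the following hypothetical sketch shows what Transparency and Human Oversight & Monitoring can look like inside a tool: every output is labeled as AI-generated with a provenance note, and a reporting hook queues flagged outputs for human review. All of the names (`LabeledOutput`, `report_negative_impact`, `example-model-v1`) and the metadata fields are invented for illustration; they are not drawn from the ISED code of practice itself.

```python
# Hypothetical sketch of the Transparency and Human Oversight & Monitoring
# elements: outputs are labeled as AI-generated, carry provenance metadata,
# and can be flagged for human review. All names are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LabeledOutput:
    text: str
    generated_by: str        # which model produced this output
    training_data_note: str  # provenance statement for the training data
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


REPORTS: list[dict] = []  # stand-in for a real incident-tracking system


def label_output(text: str) -> LabeledOutput:
    """Attach the disclosure and provenance metadata Transparency calls for."""
    return LabeledOutput(
        text=text,
        generated_by="example-model-v1",
        training_data_note="Trained on licensed and public-domain text (illustrative).",
    )


def report_negative_impact(output: LabeledOutput, description: str) -> None:
    """Oversight hook: queue a flagged output for human review."""
    REPORTS.append({"output": output, "issue": description})


if __name__ == "__main__":
    out = label_output("Here is a summary of the report...")
    print(f"[AI-generated by {out.generated_by} at {out.created_at}]")
    print(out.text)
    report_negative_impact(out, "Summary omitted a key caveat.")
    print(f"Outputs awaiting human review: {len(REPORTS)}")
```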
[1] Techopedia, “AI Guardrail,” https://www.techopedia.com/definition/ai-guardrail