Artificial intelligence (AI) is a technology with the potential to contribute to incredible gains in a variety of fields, such as medicine, education, and environmental health. But it also carries the potential for many types of misuse, raising concerns about discrimination, bias, the changing role of human responsibility, and other ethical issues. That’s why many experts are calling for the development of responsible AI rules and laws.
Defining Responsible AI
Responsible AI means different things to different people. Some interpretations highlight transparency and accountability; others emphasize compliance with laws, regulations, and customer and organizational values.
Another take is avoiding the use of biased data or algorithms and ensuring that automated decisions are explainable. The concept of explainability is especially important. According to IBM, explainable artificial intelligence (XAI) is “a set of processes and methods that allow human users to comprehend and trust the results and output created by machine learning algorithms.”
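To make that abstract definition concrete, the sketch below shows one widely used XAI technique, permutation importance, which estimates how much each input feature drives a model’s predictions so a human reviewer can see what the model actually relies on. It is a minimal illustration only, assuming scikit-learn and a synthetic dataset rather than any particular IBM tool or method.

```python
# Minimal sketch of one explainability technique: permutation importance.
# The dataset and model here are illustrative stand-ins, not a real system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive dataset (e.g., health or credit records).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy; a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

In practice, a suspiciously high importance on a sensitive attribute such as age or postal code would be a signal to audit the model, and its training data, before deployment.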
The EU Leads
In 2018, the European Union (EU) began enforcing the General Data Protection Regulation (GDPR), the best known of its measures guaranteeing online service users some control over their own personal data. The EU is again leading the way in ensuring the ethical use of AI, whose algorithms can handle very personal information, such as health or financial status.
What About Compliance?
Another consideration in this discussion is, “Even if governments and other entities create ethical AI rules and laws, will companies cooperate?” According to a recent Reworked article, “10 years from today, it is unlikely that ethical AI design will be widely adopted.” The article goes on to explain that business and policy leaders, researchers, and activists are worried that the evolution of AI instead “will continue to be primarily focused on optimizing profits and social control.”