Ethical Principles Must Undergird AI
November 1st, 2023
By David Cass
Artificial intelligence needs to be deployed in a way that benefits humanity. That requires looking beyond short-term business models to long-term use and AI’s wide-scale impact on broader society.
As the use of artificial intelligence and machine learning grows, so, too, will the deployment of automated decision-making systems that could greatly impact well-being, privacy, and livelihood. Organizations must, therefore, develop ethical principles to guide the design, development, and deployment of AI and ML systems to ensure that the power of these technologies is used responsibly.
This is a two-stage process. Stage one is developing the principles; stage two is defining the core AI ethics principles that will guide the organization.
When developing the principles, the first step is to gather multidisciplinary input from a mixed community of ethicists, technologists, legal experts, and sociologists. Representatives of affected sectors, such as health care or finance, must also be involved to ensure a comprehensive understanding of the potential implications of the technology's use.
The second step is a broader public consultation when the AI or ML model affects society at large. Public consultations, such as town halls, can surface insights from ordinary citizens who might be affected while helping to foster trust in the use of AI and ML.
Because AI is evolving so quickly, ethical principles must be reviewed regularly to ensure they remain relevant.
It’s also important to put a feedback mechanism in place so that AI developers, users, and affected individuals can offer observations and critiques of the systems and their implications once they’re deployed. Only then can an organization know whether a system is working as expected.
To read the full article, go to Security Current