Tough EU and US laws for algorithmic accountability and fairness will shape AI usage in the future
New laws will soon shape how companies use artificial intelligence (AI). The prospect has sent companies scrambling to make sense of the flurry of regulations coming from the US and the European Union (EU) and to prepare to comply with rules that will carry harsh penalties. Governments are compelling companies to build algorithms that are unbiased and fair, do not discriminate on the basis of gender, ethnicity, or race, and respect and protect privacy. The intention is to use AI for good and to curb every possibility of its harmful use.
New Regulations Unveiled
In April this year, the EU unveiled a set of regulations governing the use of AI across the bloc’s 27 member states. Violations will attract fines of up to 6% of a company’s annual revenues, higher than the penalties of up to 4% of global turnover that can be levied under the General Data Protection Regulation (GDPR) adopted in 2016. A few weeks before the EU regulations, the US Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on “truth, fairness, and equity” in AI, defining unfairness, and therefore the illegal use of AI, broadly as any act that “causes more harm than good.”
The first-of-its-kind EU proposal will take years to implement, but its consequences are far-reaching. It bans, with some exceptions, the use of biometric identification systems in public, including facial recognition. Other forbidden applications of AI include social credit scoring, the infliction of harm, and subliminal behavior manipulation.
Re-evaluating Risks
The EU has classified certain AI applications as posing ‘unacceptable risk’, which would be banned outright, while others in the ‘high risk’ category would be strictly regulated. The unacceptable-risk category includes AI systems or applications that ‘manipulate human behavior to circumvent users’ free will… and systems that allow “social scoring” by governments.’
High-risk applications include AI used in critical infrastructure, such as transport; safety components of products; law enforcement, such as evaluating the reliability of evidence; and border control, such as verifying documents. In all those scenarios, and others, the proposal requires the systems to go through a set of checks before they can be released onto the market, including risk assessment, the use of high-quality datasets, and some level of human oversight, among other obligations.
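To make those obligations concrete, here is a minimal sketch of how a team might track the pre-market checks named above (risk assessment, dataset quality, human oversight) for each AI system. The field names, labels, and readiness rule are illustrative assumptions, not drawn from the regulation text.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemComplianceRecord:
    """Hypothetical record for tracking the pre-market checks mentioned above."""
    system_name: str
    risk_category: str                      # e.g. "high" for critical infrastructure
    risk_assessment_done: bool = False      # documented risk assessment
    dataset_quality_reviewed: bool = False  # high-quality training/validation/test data
    human_oversight_defined: bool = False   # named reviewers and escalation path
    notes: list = field(default_factory=list)

    def ready_for_market(self) -> bool:
        """In this sketch, a high-risk system must clear every check before release."""
        if self.risk_category != "high":
            return True
        return all([
            self.risk_assessment_done,
            self.dataset_quality_reviewed,
            self.human_oversight_defined,
        ])

# Example usage with a hypothetical border-control system.
record = AISystemComplianceRecord("border-document-verifier", "high")
record.risk_assessment_done = True
print(record.ready_for_market())  # False: dataset review and oversight still pending
```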
All remote biometric identification, including facial recognition, would fall into the high-risk category. However, it might be harder to impose the EU’s will on countries outside its legal boundaries, given the overwhelming presence of the US and China in AI. Even the UK, no longer in the EU, has much higher levels of investment in AI companies than any EU country. But technology will become a regulated industry, and this is one step in that direction.
New Checkpoints
Per the new EU rule, training, validation, and testing data sets should be subject to appropriate data governance and management practices that take into consideration the following (a brief illustrative sketch of such checks appears after the list):
- Relevant design choices
- Data gathering
- Relevant data preparation processing operations
- Formulation of relevant assumptions, especially with respect to the information that the data are supposed to measure and represent
- Prior assessment of the availability, quantity, and suitability of the required data sets
- Examination in view of possible biases
- Identification of any possible data gaps or shortcomings, and potential remediations.
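These considerations translate naturally into routine checks a data science team might run before training. The following is a minimal sketch, assuming a pandas DataFrame with an illustrative sensitive attribute and target column; the column names and the specific checks are assumptions chosen for the example, not requirements from the regulation.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, sensitive_attribute: str, target: str) -> dict:
    """Run a few of the governance checks listed above on a training set."""
    report = {}

    # Identification of possible data gaps or shortcomings: missing values per column.
    report["missing_fraction"] = df.isna().mean().to_dict()

    # Examination in view of possible biases: outcome rate per sensitive group.
    report["positive_rate_by_group"] = (
        df.groupby(sensitive_attribute)[target].mean().to_dict()
    )

    # Prior assessment of quantity: group sizes, to flag under-represented groups.
    report["group_counts"] = df[sensitive_attribute].value_counts().to_dict()

    return report

# Example usage with a toy dataset (column names are illustrative).
data = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "F"],
    "approved": [1, 1, 0, 0, 1, 1],
    "income": [42_000, 55_000, None, 39_000, 61_000, 47_000],
})
print(audit_training_data(data, sensitive_attribute="gender", target="approved"))
```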
Furthermore, to prevent outcomes entailing prohibited discrimination, each training, validation, and testing data set should be relevant, representative, free of errors, and complete. The data sets should also have the appropriate statistical properties, including as regards the individuals or groups of individuals on which the AI system is intended to be used, especially to ensure that all relevant dimensions of gender, ethnicity, and other possible grounds of prohibited discrimination are appropriately reflected in those data sets.
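As one illustration of checking whether relevant groups are appropriately reflected, the sketch below compares group shares in a data set against reference population shares. The reference proportions and the tolerance threshold are assumptions chosen for the example, not values prescribed by the regulation.

```python
from collections import Counter

def representation_gaps(labels, reference_shares, tolerance=0.05):
    """Return groups whose share in the data deviates from the reference share
    by more than the chosen tolerance (an illustrative threshold)."""
    counts = Counter(labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: group labels in a training set vs assumed population shares.
labels = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
reference = {"A": 0.55, "B": 0.30, "C": 0.15}
print(representation_gaps(labels, reference))
# {'A': {'observed': 0.7, 'expected': 0.55}, 'B': {'observed': 0.2, 'expected': 0.3}}
```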
The regulations are emblematic of an increased desire on the part of consumers for privacy-preserving, responsible implementations of AI and machine learning. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. And 87% of executives told Juniper in a recent survey that they believe organizations have a responsibility to adopt policies that minimize the negative impacts of AI.
The MIT Initiative
Enmeshed with the EU proposal is a data privacy-focused initiative to bring computer science research together with public policy engagement, which was also announced almost simultaneously. The MIT Future of Data, Trust, and Privacy (FOD) initiative, which will involve collaboration between experts in specific technical areas, gets at the heart of what the EU AI regulations hope to accomplish.
The initiative is the brainchild of MIT Computer Science and Artificial Intelligence Laboratory managing director Lori Glover and Danny Weitzner, who runs the Internet Policy Research Initiative at MIT. The goal is to integrate research on privacy policy with privacy-preserving technologies to create a virtuous cycle of R&D and regulation. In many fields, such as medical diagnostics and finance, sharing of data can produce significantly better outcomes and predictions, but sharing is disallowed by laws such as HIPAA. There are private collaborative analytics techniques that can help with this problem, but it is not always clear whether those techniques or approaches satisfy regulations, because the regulations are often vague. The initiative aims to address this issue.
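As a toy illustration of the kind of privacy-preserving analytics referred to above (not a technique attributed to MIT, and not presented as HIPAA-compliant), the sketch below releases a differentially private mean by adding calibrated Laplace noise. The epsilon value and value bounds are assumptions made for the example.

```python
import random

def dp_mean(values, lower, upper, epsilon=1.0):
    """Differentially private mean via the Laplace mechanism (illustrative only).

    Each value is clipped to [lower, upper]; the sensitivity of the mean is then
    (upper - lower) / n, and Laplace noise scaled to sensitivity / epsilon is added.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # The difference of two exponential draws with the same rate is Laplace noise.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_mean + noise

# Example: hospitals could share a noisy average instead of raw patient records.
readings = [120, 135, 128, 142, 131]
print(dp_mean(readings, lower=90, upper=180, epsilon=0.5))
```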
Learn–Unlearn–Relearn
The EU regulations impose requirements on “high-risk” applications of AI, including medical devices and equipment. Companies developing them will have to use “high-quality” training data to avoid bias, agree to “human oversight,” and create detailed documentation that explains how the software works to both regulators and users. Moreover, to provide transparency about what technologies are in use, all high-risk AI systems will be indexed in an EU-wide database.
Data scientists working on AI projects will now need to be aware of the myriad regulations being introduced to ensure fairness in algorithms and to assign accountability for their actions. Perhaps they will even have to come up with another machine learning algorithm, one that automates compliance and can be trained to learn new regulations as they are introduced.