Can machines think?
Scientists, mathematicians, and philosophers began debating this question at the start of the 20th century, setting the stage for Alan Turing’s pathbreaking 1950 paper on intelligent machines, “Computing Machinery and Intelligence”. Today, we have astoundingly powerful AI systems, built to make logical decisions with speed, accuracy and efficiency. And yet, the models underlying today’s AI do not equip it to resolve ethical dilemmas.
AI systems, and the machine learning models they run on, are only as good as the examples and limits set for them. Deviations in the real world can cause grave errors in judgement.
This brings us to the real question: Sure, machines can think and act intelligently, but are they ready to handle the ethical and moral dilemmas that our world faces today?
Today, AI is everywhere. It responds when people speak to their digital assistants. It answers queries and provides information. It talks to customers when they have a service-related issue. It drives people from one place to another. It selects the right résumés for the next step of the recruitment process. It curates newsfeeds and entertainment options. It recognises people’s faces and voices, completes sentences, paints, writes, and composes music.
The AI market is expected to reach USD 500 billion in 2023. And by 2030, AI is predicted to contribute USD 15.7 trillion to the global economy. But while companies are leveraging AI to develop competitive products, enhance internal and external efficiencies, and effectively supercharge their business, questions about the ethical and responsible use of AI linger.
Initially, discussions on the importance of responsible AI were limited to academia and NGOs. Now, corporations are also throwing their weight behind responsible AI, partly because ethical lapses in AI products can lead to reputational damage and legal risk.
Timely human intervention is key to the success of AI systems that have yet to reach maturity in decision-making. By involving people to double-check results, flag poor recommendations, and actively watch for unintended repercussions, it is possible to reduce bias.
For instance, there have been multiple cases where AI bots have assessed a situation wrongly, needing urgent human intervention for course correction. At the peak of the Covid-19 pandemic, FICO’s credit assessment AI tool flagged several online shoppers because its model interpreted the sudden surge in online transactions as a sign that fraudsters were active in the system and needed to be blocked. Many genuine customers of the member financial institutions were denied legitimate purchases, affecting their credit limits and their ability to buy basic things like toilet paper. Could such a situation have been avoided if organisations launching AI-based programmes had also had closer human monitoring during abnormal events like the pandemic?
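One practical safeguard is a human-in-the-loop checkpoint that routes low-confidence decisions to a person instead of acting on them automatically. The sketch below is illustrative only; the threshold, labels, and review queue are assumptions, not any vendor’s actual system:

```python
# Illustrative human-in-the-loop gate: the model acts on its own only
# when it is confident; everything else is queued for human review.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumption: tuned per use case and risk


@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool


def route_prediction(label: str, confidence: float) -> Decision:
    """Auto-action only high-confidence predictions; flag the rest."""
    return Decision(label, confidence, confidence < CONFIDENCE_THRESHOLD)


for label, conf in [("fraud", 0.97), ("fraud", 0.62)]:
    decision = route_prediction(label, conf)
    queue = "human review queue" if decision.needs_human_review else "auto-action"
    print(f"{decision.label} ({decision.confidence:.2f}) -> {queue}")
```

During abnormal events like a pandemic, the threshold could be lowered so that more decisions pass through human eyes before customers are affected.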
Nevertheless, what we really need is a balance between human oversight and the autonomy of AI if we are to maintain the efficacy and purpose of AI implementations.
A common hurdle that organisations face is a lack of knowledge about how to define and measure the responsible use of AI. Factors like algorithmic fairness cannot be measured using traditional metrics such as accuracy. Plus, the definitions of such terms change from industry to industry and business to business. To create a fair model, organisations need to build systems and platforms that are trustworthy, unbiased, and explainable by design.
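To make that concrete, here is a minimal sketch, on fabricated data, of one such fairness measure: the demographic parity difference, which compares a model’s approval rates across groups, a gap that an aggregate metric like accuracy would not surface:

```python
# Illustrative only: demographic parity difference on fabricated data.
from collections import defaultdict

# (group, model_decision) pairs; 1 = approved, 0 = denied
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
parity_gap = abs(rates["group_a"] - rates["group_b"])

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.2f}")
print(f"demographic parity difference: {parity_gap:.2f}")  # 0.50 here
```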
Explainability can be equally concrete. For example, a chatbot for a hospital could be programmed to automatically switch to voice-enabled support when attending calls from senior citizens or visually impaired patients. This granular explainability, along with adherence to the organisation’s core values, must be baked into the system at the implementation stage itself.
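As a minimal sketch of how such a rule might carry its own explanation (the patient attributes and channel names here are hypothetical, not a real hospital API), the routing function can return a human-readable reason with every decision:

```python
# Hypothetical routing rule that returns a reason with every decision,
# so the behaviour remains explainable and auditable.
def choose_support_channel(patient: dict) -> tuple[str, str]:
    """Return (channel, reason) for an incoming support request."""
    if patient.get("visually_impaired") or patient.get("age", 0) >= 65:
        return "voice", "accessibility rule: senior or visually impaired caller"
    return "chat", "default text chatbot channel"


channel, reason = choose_support_channel({"age": 72})
print(f"routed to {channel} because: {reason}")
```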
While deploying AI, organisations should evaluate risks and biases regularly throughout the project life cycle, from prototype to deployment and use at scale. If there are concerns, AI practitioners should raise them with the relevant managers, subject matter experts, or a governance committee. Such periodic reviews should not focus only on algorithms but should cover the entire system, from data gathering to users acting on the system’s recommendations.
Businesses must also be prepared for errors and have a response plan that can reduce the impact on customers and the business. This plan should include steps to stop further damage, fix technical problems, trace the root cause, and carry out remedial and preventive actions, along with regular reporting on all of these to customers and staff.
Much like humans, AI is bound to make mistakes and will perhaps never be perfect. But that doesn’t mean we do away with AI entirely. Rather, we need to understand that developing responsible AI is not a one-time project but a continuous effort to integrate ethics into every step of the AI journey.
At Salesforce, we believe that ethical and inclusive technology design, development, and use are essential to developing trusted solutions. Following our guiding principle of Ethics by Design, Salesforce creates technology solutions that use Trusted AI and inclusive product language, and that follow ethical and humane use guiding principles around human rights, privacy, safety, honesty and inclusion.
Salesforce promotes a culture of cross-examination of its products and their impact on stakeholders, which is complemented by academic research on subjects like bias in AI.
We also collaborate with a number of civil society groups to encourage the development of inclusive and ethical technology. This includes partnering with the World Economic Forum's Responsible Use of Technology Steering Committee to develop the field of ethical technology, collaborating in multi-stakeholder projects to develop standards for privacy and the advancement of underrepresented groups in the AI industry, and more.
Want to know more about how to empower customers and users to use data and AI ethically? Add these to your reading list:
What Organisations Should Know About Building Ethical AI
Ethical AI Can't Wait: 4 Ways To Drive Greater Equality in Your AI
Responsible and Humane Technology: Why the Ethical Use of AI is Important Today