Can machines think?

Scientists, mathematicians, and philosophers began debating this question at the start of the 20th century, setting the stage for Alan Turing’s groundbreaking 1950 paper on intelligent machines. Today, we have astoundingly powerful AI systems, built to make logical decisions with speed, accuracy, and efficiency. And yet, AI is not designed to reason through ethical dilemmas.

AI and the machine learning models it runs on are only as good as the examples and limits set for them. Deviations in the real world can cause grave errors in judgement.

This brings us to the real question: sure, machines can think and act intelligently, but are they ready to handle the ethical and moral dilemmas our world faces today?

AI: impacting every aspect of modern life

Today, AI is everywhere. It answers people through chatbots and digital assistants, responds to queries, and provides information. It talks to customers when they have a service-related issue. It drives people from one place to another. It shortlists resumés for the next step of the recruitment process. It curates newsfeeds and entertainment options. It recognises people’s faces and voices, completes sentences, paints, writes, and composes music.

The AI market is expected to reach USD 500 billion in 2023. And by 2030, AI is predicted to contribute USD 15.7 trillion to the global economy. But while companies are leveraging AI to develop competitive products, enhance internal and external efficiencies, and effectively supercharge their business, questions on the ethical and responsible use of AI linger on. 

Initially, discussions on the importance of responsible AI were limited to academia and NGOs. Now, corporations are also throwing their weight behind responsible AI, partly because ethical lapses in AI products can lead to reputational damage and legal risk.

Responsible AI is a necessity

Responsible AI, often referred to as ethical AI or trustworthy AI, is a set of guiding principles that describe how AI systems should be created, used, and managed to adhere to ethics and laws. It involves the creation and deployment of AI that is fair, inclusive, empowering, transparent, accountable, and responsible. Responsible AI fosters trust between corporations, their customers, and their internal stakeholders.

5 pillars of the corporate framework needed for responsible AI

AI-powered programmes could lead to multiple instances of discrimination, bias, and privacy violations. The vast amount of data used to train AI is also under scrutiny, having sometimes been obtained dubiously. Responsible AI needs an ethical framework that includes rules to govern AI, and this is best driven by a top-down approach: C-suite executives must commit fully to developing ethical AI within the organisation.
Organisations must:

1. Define their ethical principles

Everyone in the organisation must know, understand, and follow the ethical stance of the company, communicated in the form of principles. Although principles alone won't lead to responsible AI, they are critical since they form the framework for the larger programme. It is then necessary to translate principles into precise, actionable guidelines for teams. These help organisations put in place policies, which can be used to develop procedures, methods, tools and controls related to responsible AI.

2. Understand and eliminate bias

Bias can creep into AI through multiple sources – from the way data was collected to the inherent biases of the designer and the interpretation of AI output. To minimise bias, the organisation must ensure that the training data is representative of the target audience and purpose. Teams need to ask whether enough information has been gathered to make an informed decision, whether any group has been excluded from the data collection and why, and so on.
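
To make this concrete, here is a minimal Python sketch of such a representativeness check. It is illustrative only: it assumes training data in a pandas DataFrame with a hypothetical `gender` column and a known reference population, and the tolerance is an arbitrary choice, not a standard.

```python
import pandas as pd

def check_representation(df: pd.DataFrame, group_col: str,
                         reference: dict, tolerance: float = 0.05) -> list:
    """Flag groups whose share of the training data deviates from their
    share of the reference population by more than `tolerance`."""
    observed = df[group_col].value_counts(normalize=True)
    flagged = []
    for group, expected_share in reference.items():
        actual_share = float(observed.get(group, 0.0))
        if abs(actual_share - expected_share) > tolerance:
            flagged.append(f"{group}: {actual_share:.0%} in data vs "
                           f"{expected_share:.0%} expected")
    return flagged

# Hypothetical training data for a resume-screening model
train = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "F", "M", "M"]})
for warning in check_representation(train, "gender", {"F": 0.5, "M": 0.5}):
    print("Representation check:", warning)
```

A flagged group is a prompt for the questions above – was this group excluded from data collection, and why – not an automatic verdict.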

3. Balance autonomy of AI with human insight

Timely human intervention is key to the success of AI systems that have yet to reach maturity in decision-making. By involving people to double-check results, look for poor recommendations, and actively watch for unintended repercussions, it is possible to reduce bias.
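
One common pattern for this kind of oversight is confidence gating: the model handles cases it is sure about and escalates uncertain ones to a person. The Python sketch below is a hypothetical illustration – `StubModel` and its `predict_with_confidence` method are stand-ins for a real classifier, not a library API.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a person makes the call

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

class StubModel:
    """Stand-in for a real classifier; returns a label and a confidence."""
    def predict_with_confidence(self, features: dict):
        score = features.get("risk_score", 0.5)
        return ("flag" if score > 0.5 else "allow"), abs(score - 0.5) * 2

def decide(features: dict, model, review_queue: list) -> Decision:
    label, confidence = model.predict_with_confidence(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Low confidence: queue the case for a person instead of acting
    # on a potentially poor recommendation.
    review_queue.append(features)
    return Decision("pending_review", confidence, decided_by="human")

queue: list = []
print(decide({"risk_score": 0.98}, StubModel(), queue))  # confident -> model decides
print(decide({"risk_score": 0.55}, StubModel(), queue))  # uncertain -> human review
```

The threshold itself is a policy decision: lowering it sends more cases to people, trading throughput for safety.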

For instance, there have been multiple cases where AI bots assessed a situation wrongly and needed urgent human intervention for course correction. At the peak of the Covid-19 pandemic, FICO’s credit assessment AI tool flagged several online shoppers when a sudden surge in activity led its code to conclude that fraudsters were active in the system and needed to be blocked. Many genuine customers of the member financial institutions were denied legitimate purchases, affecting their credit limits and their ability to buy basic things like toilet paper. Could such a situation have been avoided if organisations launching AI-based programmes had closer human monitoring during abnormal events, like the pandemic?

Ultimately, what we need is a balance between human oversight and the autonomy of AI if we are to maintain the efficacy and purpose of AI implementations.

4. Build ethics into core systems

A common hurdle organisations face is a lack of knowledge about how to define and measure the responsible use of AI. Factors like algorithmic fairness cannot be measured using traditional metrics, and the definitions of such terms change across industries and businesses. To create a fair model, organisations need to build systems and platforms that are trustworthy, unbiased, and explainable by design.

For example, a chatbot for a hospital could be programmed to automatically switch to voice-enabled support when attending calls from senior citizens or visually impaired patients. This granular explainability, along with adherence to the organisation’s core values, must be baked into the system at the implementation stage itself.
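
To illustrate, here is a minimal, hypothetical Python sketch of that routing rule, written so that every decision carries a human-readable reason that can be logged and audited later:

```python
from dataclasses import dataclass

@dataclass
class Caller:
    age: int
    visually_impaired: bool

def route_support(caller: Caller):
    """Pick a support channel and return a human-readable reason,
    so every routing decision is explainable after the fact."""
    if caller.visually_impaired:
        return "voice", "caller profile indicates visual impairment"
    if caller.age >= 65:
        return "voice", "caller is a senior citizen"
    return "chat", "default text chat channel"

channel, reason = route_support(Caller(age=72, visually_impaired=False))
print(f"Routed to {channel}-enabled support because {reason}")
```

Recording the reason alongside the decision is what makes the behaviour explainable by design, rather than reconstructed after the fact.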

5. Have review and remediation strategies in place

While deploying AI, organisations should evaluate risks and biases regularly throughout the project life cycle, from prototype to deployment and use at scale. If there are concerns, AI practitioners should raise them with the relevant managers, subject matter experts, or a governance committee. Such periodic reviews should not focus only on algorithms but cover the entire system, from data gathering to users acting on the system’s recommendations.
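
As one illustration of what such a review could measure, the sketch below computes a simple demographic-parity gap over a log of production decisions and escalates when it crosses a threshold. The column names, the data, and the 10-point threshold are assumptions for the example, not a prescribed standard.

```python
import pandas as pd

MAX_PARITY_GAP = 0.10  # alert if positive-outcome rates differ by over 10 points

def parity_audit(decisions: pd.DataFrame, group_col: str,
                 outcome_col: str) -> float:
    """Demographic-parity gap: the spread between the highest and
    lowest positive-outcome rates across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Illustrative production log of loan decisions
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})
gap = parity_audit(log, "group", "approved")
if gap > MAX_PARITY_GAP:
    print(f"Escalate to governance committee: parity gap {gap:.0%}")
```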

Businesses must also be prepared for errors and have a response plan that can reduce the impact on customers and the business. This action plan should include steps to stop further damage and fix technical problems, a provision to trace the root cause, remedial and preventive actions, and regular reporting of progress to customers and staff.

Responsible AI is the way forward

Much like humans, AI is bound to make mistakes and will perhaps never be perfect. But that doesn’t mean we do away with AI entirely. Rather, we need to understand that developing responsible AI is not a one-time project but a continuous effort to integrate ethics into every step of the AI journey.

At Salesforce, we believe that ethical and inclusive technology design, development, and use are essential to building trusted solutions. Following our guiding principle of Ethics by Design, Salesforce creates technology solutions that use Trusted AI and inclusive product language, and follow ethical and humane-use guiding principles around human rights, privacy, safety, honesty, and inclusion.

Salesforce promotes a culture of cross-examination of its products and their impact on stakeholders, which is complemented by academic research on subjects like bias in AI. 

We also collaborate with a number of civil society groups to encourage the development of inclusive and ethical technology. This includes partnering with the World Economic Forum’s Responsible Use of Technology Steering Committee to develop the field of ethical technology, collaborating in multi-stakeholder projects to develop standards for privacy and the advancement of underrepresented groups in the AI industry, and more.


Want to foster an ethics-focused AI culture in your organisation?

Read how Salesforce does it here

Want to know more about how to empower customers and users to utilise data and AI ethically? These could be on your reading list:

What Organisations Should Know About Building Ethical AI

Ethical AI Can't Wait: 4 Ways To Drive Greater Equality in Your AI

Responsible and Humane Technology: Why the Ethical Use of AI is Important Today