Telecommunications companies have plenty to gain from advances in AI—like the ability to deliver truly individualized customer experiences. But these gains should never come at the expense of customer privacy and trust, which is why a responsible approach to AI is so critical. Today, we'll unpack the trust challenge, explore how to overcome it, and look at how TELUS, one of Canada's top telecommunications carriers, applies responsible AI.
The trust challenge
PwC predicts that AI could deliver up to $15.7 trillion of business value by 2030, but there's a lot of work to be done first. Adopting AI in a way that preserves customer privacy is critical if this new technology is to deliver on its promise.
That's why trust is so important. A recent Forrester survey found that 45% of executives have issues trusting AI systems. And customer trust is at an all-time low: according to Salesforce's Trends in Customer Trust, 59% of customers believe their personal data is vulnerable. To seize the benefits of AI and rebuild consumer trust, it's critical to close this trust gap. That means building AI that, by design, has strong privacy, ethics, security, fairness, and explainability.
In other words, responsible AI.
What’s responsible AI?
In our responsible AI white paper, we list the five key principles of responsible AI:
Aligned: Models should be aligned with the populations they are intended to benefit. Affected populations must be consulted before an AI system is designed, in order to root out the model creator's biases.
Relevant: An AI system must use up-to-date data that accurately represents the population it serves, and it must incorporate feedback loops. It must also be trained on a statistically significant amount of data.
Reliable: Models should be statistically reliable.
Explainable: Algorithms should be explainable, at least in the context for which they were designed.
Accountable: When machines make a mistake—and they will—the people affected must have access to a human for recourse.
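The alignment and reliability principles above can be checked in code. As a minimal, hypothetical sketch (the group labels and predictions below are invented for illustration, not TELUS data), a per-group audit compares a model's outcomes across the populations it affects:

```python
from collections import defaultdict

def per_group_rate(predictions, groups):
    """Compute the positive-prediction rate for each population group.

    A large gap between groups is a signal that the model may not be
    aligned with everyone it is intended to benefit.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs (1 = customer receives an offer) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = per_group_rate(preds, groups)
# rates == {"a": 0.75, "b": 0.25}: a 50-point gap worth investigating
```

Running an audit like this before and after deployment is one concrete way to keep a feedback loop between a model and the population it serves.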
Responsible AI enables businesses and consumers to experience the full benefits of AI without compromising privacy or trust. For businesses, it means confidence that you're building stronger, more valuable relationships with your customers at every touchpoint across your channels. For customers, it means better service and a continued sense of trust.
TELUS: a use case
Key players in the world of telecom are already exploring AI to enhance customer experience. Among them, TELUS has taken a leadership role, evolving its existing practices to ensure that AI is used responsibly: benefiting customers while protecting their privacy.
There is great promise in using data for decision-making, yet TELUS recognizes that data must be used responsibly and ethically. This all starts with trust. One of the most important initiatives TELUS took on was building a trust model that governs the use of data. To earn and maintain trust with customers, regulators, and shareholders, TELUS understands that it has to generate value, promote respect, and deliver security. If any of these three core tenets is not met, trust ultimately disappears.
This was a core reason why TELUS decided to work with integrate.ai. Our platform is focused on privacy, security, and trust, and uses privacy-preserving AI techniques to keep consumer data safe and secure. And we achieve this while helping companies like TELUS deliver more meaningful and relevant customer interactions. AI has helped TELUS become even more customer-centric, and they've done it without relying on personally identifiable information (PII). They pull meaningful insights from de-identified data sets and use them to enhance their customer experiences.
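De-identification can take many forms, and the sketch below is purely illustrative (it is not integrate.ai's or TELUS's actual pipeline; the field names are invented). One common pattern is to drop direct identifiers and replace stable keys with salted hashes before any analysis runs:

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # direct identifiers to drop

def de_identify(record, salt):
    """Drop direct PII and replace the account id with a salted hash,
    keeping only the fields needed for aggregate analysis."""
    out = {k: v for k, v in record.items() if k not in PII_FIELDS}
    digest = hashlib.sha256((salt + record["account_id"]).encode()).hexdigest()
    out["account_id"] = digest[:16]  # pseudonymous key, not the raw id
    return out

record = {
    "account_id": "A12345",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "phone": "555-0100",
    "plan": "unlimited",
    "monthly_usage_gb": 42,
}
clean = de_identify(record, salt="rotate-this-salt")
# clean keeps "plan" and "monthly_usage_gb" but contains no name, email, or phone
```

The salted hash lets analysts join records belonging to the same customer without ever seeing who that customer is; rotating the salt severs even that pseudonymous link.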
The big takeaway
AI presents a huge opportunity, but only if it's applied responsibly, with a customer-first mindset. That's what business leaders, customers, and society expect. If you're interested in learning more about our partnership with TELUS, feel free to check out our webinar on customer privacy and how to avoid the creep factor.
And if you want to dig deeper into the responsible AI fundamentals, check out our white paper on the subject.