Pega launches Ethical Bias Check

New feature helps businesses practice responsible AI in their engagement strategies.

Pegasystems has introduced Ethical Bias Check, a new capability of Pega Customer Decision Hub that helps eliminate biases hidden in the artificial intelligence (AI) driving customer engagements. The feature flags possible discriminatory offers and messages generated by AI across all channels before they reach the customer.


Done right, AI has helped businesses add more value to every customer interaction, leading to deeper brand loyalty and higher profits. But in some cases, AI models can unintentionally ‘learn’ biases over time related to factors such as age, ethnicity, or gender. Left undetected, this can lead to harmful discriminatory practices such as offering fewer loans, insurance policies, or product discounts to underserved populations. In today’s unpredictable economy, businesses can’t risk losing the trust of their customers, who are already more vulnerable due to the pandemic.


Ethical Bias Check is Pega’s latest capability for helping businesses practice responsible AI in customer interactions. It detects unwanted discrimination by using predictive analytics to simulate the likely outcomes of a given engagement strategy. After setting their testing thresholds, clients receive alerts when the bias risk reaches unacceptable levels – such as when the audience for a particular offer skews toward or away from specific demographics. Operations teams can then pinpoint the offending algorithm and adjust the strategy to help ensure a fairer, more balanced outcome for everyone.
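
To make the threshold mechanism concrete, the snippet below is a minimal, hypothetical sketch rather than Pega’s actual implementation: it compares the demographic mix of a simulated offer audience against the expected mix of the eligible population and raises an alert when the deviation exceeds a configured threshold. All names, groups, and figures are illustrative.

```python
# Hypothetical sketch (not Pega's implementation): flag an offer whose simulated
# audience skews toward or away from a demographic group beyond a set threshold.
from collections import Counter

def bias_alerts(simulated_audience, expected_shares, max_deviation=0.10):
    """simulated_audience: list of group labels for customers the strategy selects.
    expected_shares: dict mapping group -> expected share of the eligible population.
    Returns (group, observed, expected) tuples wherever the observed share deviates
    from the expected share by more than max_deviation (the testing threshold)."""
    counts = Counter(simulated_audience)
    total = sum(counts.values())
    alerts = []
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > max_deviation:
            alerts.append((group, observed, expected))
    return alerts

# Example: a simulated loan offer reaches 70% of one age band even though that band
# makes up only 50% of the eligible base, so both deviations are flagged.
audience = ["18-34"] * 70 + ["35-64"] * 30
print(bias_alerts(audience, {"18-34": 0.50, "35-64": 0.50}, max_deviation=0.10))
# -> [('18-34', 0.7, 0.5), ('35-64', 0.3, 0.5)]
```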


With the advanced AI of Pega Customer Decision Hub sitting at the center of all customer interactions, Ethical Bias Check differentiates itself from other bias prevention methods with capabilities such as:


  • Making bias detection simple and easy: Most other AI bias tools require separate bias tests for each individual offer – which can be time-consuming and error-prone. Pega streamlines bias testing by simulating an entire engagement strategy at once across all connected channels. With Pega’s central AI “brain,” all AI decisions can be screened for bias – including which marketing offers to display on the web, which promotions to deliver in an email, or which service suggestions to make for each customer. Pega’s detailed bias detection reports help clients understand why and where issues might arise so they can correct them before they become problems.


  • Offering more flexibility in controlling bias: Companies can set acceptable thresholds for each attribute that could introduce bias, such as age, gender, or ethnicity. Business users can adjust these thresholds for scenarios where slanted outcomes may be justified – such as when a healthcare company reaches out mostly to senior citizens with information about relevant Medicare services. Clients have the flexibility to widen or narrow the thresholds to account for the desired outcomes of their engagement strategies, as in the configuration sketch after this list.


  • Providing continual bias protection: Clients can now include bias testing as a standard course of action when simulating strategy results. Even as those strategies are adjusted and new offers or actions are added, clients have peace of mind that those customer engagement programs are being screened for bias by the software.
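
As a hedged illustration of the flexibility described above, the configuration sketch below uses hypothetical names rather than Pega’s API: each attribute gets a default threshold, and a campaign-level override lets business users widen it where a skew is justified. The resulting thresholds would feed a check like the one sketched earlier.

```python
# Hypothetical configuration sketch (not Pega's API): per-attribute bias thresholds
# with campaign-level overrides for cases where a skew is justified.
DEFAULT_THRESHOLDS = {"age": 0.10, "gender": 0.05, "ethnicity": 0.05}

def thresholds_for(campaign, overrides=None):
    """Return the bias thresholds for a campaign: the defaults merged with any
    business-approved overrides, keyed by the attribute being widened or narrowed."""
    merged = dict(DEFAULT_THRESHOLDS)
    merged.update(overrides or {})
    return merged

# A Medicare outreach campaign legitimately targets senior citizens, so its age
# threshold is widened while gender and ethnicity keep the stricter defaults.
print(thresholds_for("medicare_outreach", overrides={"age": 0.60}))
# -> {'age': 0.6, 'gender': 0.05, 'ethnicity': 0.05}
```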


In addition to Ethical Bias Check, Pega has previously introduced other features and functions in Pega Customer Decision Hub aimed at helping clients act responsibly with AI. These include the T-Switch, which gives organizations control over the transparency of their AI customer engagement models.