Genesys Cloud CX helps employees improve quality, speed and accuracy

Genesys has introduced expanded generative AI capabilities for experience orchestration, helping organisations unlock deeper customer and operational insights using the power of Large Language Models (LLMs) as a force multiplier for employees.


Now with auto-summarisation for Agent Assist, the Genesys Cloud CX platform helps organisations drive increased quality, speed and accuracy by enabling employees to efficiently capture conversational intelligence from digital and voice interactions.

The latest generative AI addition to the platform deepens Genesys AI’s expansive predictive, conversational language processing and analytics capabilities. This provides a powerful foundation for organisations to continuously improve customer and employee experiences through smarter automation, personalisation and optimisation.

“We’ve long used large language models within Genesys AI to help organisations proactively orchestrate experiences that lead to stronger customer and employee outcomes,” said Olivier Jouve, chief product officer at Genesys. “Through responsible development that responds to our customers' needs, we’re accelerating our pace of innovation with the latest generations of generative AI to help organisations gain greater value from their data, rapidly create new content and break language barriers. We’re also considering the roles and expertise we may need to fuel our R&D strategy for the future, like prompt engineering and curation.”

The Genesys AI platform for customer and employee experience gives organisations a reliable foundation for innovation, applying capabilities such as Natural Language Processing (NLP) to understand sentiment, intent, empathy and effort across interactions. Earlier this year, the public's attention was catapulted toward the potential of generative AI by publicly available technologies like ChatGPT, which are often trained on unvetted, untrustworthy public data sources. Genesys trains its embedded models with curated, trusted data across multiple industries, languages, use cases, dimensions and more. The company has adopted stringent AI ethics guidelines and is committed to creating customer value through the best technology.

As LLMs reached enterprise readiness, Genesys introduced multiple capabilities starting with entity recognition (2020) and followed with sentiment extraction, conversational models, intent mining, topic mining and semantic search. Agent Assist auto-summarisation benefits from years of increasingly sophisticated application of LLMs and uses the latest models, trained with proprietary, curated data to help organisations improve reliability and accuracy.

Auto-summarisation for Agent Assist is the latest of several new generative AI-based offerings expected for Genesys Cloud CX. In addition, the open APIs of the Genesys Cloud CX platform allow organisations to innovate using other generative AI solutions available on the market to support their businesses' unique needs. Genesys also leverages generative AI internally, giving its sales teams a tool to auto-generate email content for lead development, pipeline nurture and more.

New Auto-summarisation for Genesys Agent Assist Benefits Employees and Organisations

According to a recent survey of Genesys customers, organisations are excited by the possible applications of generative AI, with the top anticipated benefits of the technology being improved quality (77%), speed (73%) and consistency (67%). Survey respondents also revealed that most contact centre agents spend as many as three minutes summarising, typing and correcting notes from each customer conversation, with no consistent format. Now, summarisation can be virtually instantaneous using Genesys Cloud CX auto-summarisation capabilities for Genesys Agent Assist.

Genesys expanded knowledge and automation capabilities of Agent Assist with auto-summarisation to help employees rapidly facilitate customer engagement and post follow-up actions, while saving valuable time to assist other customers or decompress between tasks. To maintain quality, employees can review and approve the content before making the summary a part of the customer interaction record.
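The review-and-approve workflow described above, fed by an LLM through a platform's open APIs, could be sketched as follows. This is a minimal illustrative sketch, not the actual Genesys Cloud CX implementation: the transcript shape, the prompt wording, and the `llm_call` client are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of an auto-summarisation flow with agent review.
# The data shapes and function names are illustrative assumptions,
# not the real Genesys Cloud CX API.

def build_summary_prompt(transcript):
    """Flatten a transcript (list of {speaker, text} dicts) into one LLM prompt."""
    lines = [f"{turn['speaker']}: {turn['text']}" for turn in transcript]
    return ("Summarise the customer conversation below in two or three "
            "sentences, noting the issue, resolution and follow-up actions.\n\n"
            + "\n".join(lines))

def draft_summary(transcript, llm_call):
    """Return a draft summary for agent review; llm_call is any text->text client."""
    draft = llm_call(build_summary_prompt(transcript))
    # The draft is deliberately NOT written to the interaction record here:
    # per the workflow above, the agent reviews and approves it first.
    return {"summary": draft, "status": "pending_agent_review"}

# Usage with a stand-in LLM client (a real deployment would call a model API):
transcript = [
    {"speaker": "customer", "text": "My router keeps dropping the connection."},
    {"speaker": "agent", "text": "I have pushed a firmware update; please reboot."},
]
result = draft_summary(
    transcript,
    llm_call=lambda prompt: "Customer reported connection drops; firmware "
                            "update applied; follow up after reboot.",
)
```

Keeping the approval step as an explicit status field, rather than writing the summary straight to the record, mirrors the quality-control point in the paragraph above.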

More consistent and contextual conversation reporting allows organisations to better capture and preserve valuable post-interaction data, which they can use to mine for insights through Genesys Speech and Text Analytics. Organisations can benefit with improved service experiences, history tracking, operational efficiency and compliance adherence.
