Paola Zeni is the Chief Privacy Officer at RingCentral. She is an international privacy attorney with more than 20 years of privacy experience and a veteran of the cybersecurity industry, having worked at Symantec and at Palo Alto Networks, where she built the privacy program from the ground up.
What inspired you to pursue a career in data privacy?
In the late 1990s, when EU Member States were implementing the 1995 EU Data Protection Directive, data privacy started to emerge in Europe as an important issue. As a technology attorney working with technology companies such as HP and Agilent Technologies, I considered this a relevant topic and started paying close attention and growing my understanding of privacy requirements. I quickly knew that this was an area I wanted to be involved in, not only because I found it legally interesting and challenging, but also because it’s an issue that touches many teams and many processes across the entire organization. Being involved in data privacy means working with different groups and individuals and learning about multiple aspects of the business. Being able to influence and drive change on an important issue across many functions in the organization, while following a burgeoning legal area, has been extremely rewarding. Working in data privacy today is more exciting than ever, considering the technological developments and the increased legal complexities at the global level.
When you first joined RingCentral, you created a Trust Center. What is this, specifically?
At RingCentral we believe that providing our customers and partners with information about the privacy and the security of their data is essential to building and maintaining trust in our services. For this reason, we continue to create collateral and resources, such as product privacy datasheets for our core offerings, whitepapers, and compliance guides, and make them available to customers and partners on our public-facing Trust Center. Most recently, we added our AI Transparency Whitepaper. The Trust Center is a critical component of our commitment to transparency with key stakeholders.
How does RingCentral ensure that privacy principles are integrated into all AI-driven products and services?
Artificial intelligence can empower businesses to unlock new potential and quickly extract meaningful information and insights from their data – but with those benefits comes responsibility. At RingCentral, we remain relentlessly focused on protecting customers and their data. We accomplish this through the privacy pillars that guide our product development practices:
Privacy by Design: We leverage our privacy by design approach by working closely with product counsel, product managers, and product engineers to embed privacy principles and requirements across the aspects of our products and services that implement AI. Privacy assessments are integrated into the product development lifecycle, from ideation to deployment, and we build on them to conduct AI reviews and provide guidance.
Transparency: We offer collateral and resources to customers, partners, and users about how their data is collected and used, as part of our commitment to transparency and building trust in our services.
Customer control: We provide options that empower customers to maintain control in deciding how they want our AI to interact with their data.
Can you provide examples of specific privacy measures embedded within RingCentral’s AI-first communication solutions?
First of all, we have added information to our product documentation detailing how we collect and process data (who stores it, what third parties have access to it, etc.) in our privacy data sheets, which are posted on our Trust Center. We specifically call out which data serves as input for AI and which data is generated as output from AI. Also, as part of our product reviews in collaboration with product counsel, we implement disclosures to meet our commitment to transparency, and we provide our customers’ administrators with options to control the sharing of data with AI.
Why is it crucial for organizations to maintain complete transparency about data collection and usage in the age of AI?
To foster adoption of trustworthy AI, it’s imperative for organizations to establish trust in how AI processes data and in the accuracy of the output. This extends to the data AI is trained on, the logic applied by the algorithm, and the nature of the output.
We believe that when providers are transparent and share information about their AI, how it works, and what it’s used for, customers can make informed decisions and are empowered to provide more specific disclosures to their users, thus improving trust in and adoption of AI. When developing and providing AI, we think of all stakeholders: our customers, but also their employees, partners, and customers.
What steps can organizations take to ensure that their vendors adhere to stringent AI usage policies?
At RingCentral, we believe deploying AI requires trust between us and our vendors. Vendors must commit to embedding privacy and data security into the architecture of their products. For this reason, we have built on our existing vendor due diligence process by adding a specific AI review, and we have implemented a standard for the use of third-party AI, with specific requirements for the protection of RingCentral and our customers.
What strategies does RingCentral employ to ensure the data fed into AI systems is accurate and unbiased?
With fairness as a guiding principle, we are constantly considering the impact of our AI, and remain committed to maintaining an awareness of potential biases and risks, with mechanisms in place to identify and mitigate any unintended consequences.
- We have adopted a specific framework for the identification and prevention of biases as part of our Ethical AI Development Framework, which we apply to all our product reviews.
- Our use cases for AI involve a human-in-the-loop to evaluate the outputs of our AI systems. For example, in our Smart Notes, even without monitoring the content of the notes produced, we can infer from users’ actions whether the notes were accurate or not. If a user constantly edits the notes, that sends a signal to RingCentral to tweak the prompts (see the first sketch after this list).
- As another example of human-in-the-loop, our retrieval augmented generation process allows the output to be strictly focused on specific knowledge databases and provides references to the sources of the generated outputs. This allows the human to verify the response and to dig deeper into the references themselves (see the second sketch after this list).
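To make the Smart Notes example concrete, here is a minimal sketch of how an edit-based quality signal could be derived from user actions alone, without inspecting note content. The `NoteSession` structure, its field names, and the 0.3 threshold are illustrative assumptions, not RingCentral’s actual telemetry or logic.

```python
# Hypothetical sketch: estimating note accuracy from user edit behavior
# without reading the note content itself. All names and thresholds are
# illustrative assumptions, not RingCentral's implementation.
from dataclasses import dataclass


@dataclass
class NoteSession:
    note_id: str
    generated_chars: int  # length of the AI-generated notes
    edited_chars: int     # characters the user changed afterwards


def edit_rate(session: NoteSession) -> float:
    """Fraction of the generated notes the user rewrote (0.0 = untouched)."""
    if session.generated_chars == 0:
        return 0.0
    return min(session.edited_chars / session.generated_chars, 1.0)


def needs_prompt_review(sessions: list[NoteSession], threshold: float = 0.3) -> bool:
    """Flag the prompt for review if users rewrite a large share of the notes on average."""
    if not sessions:
        return False
    average = sum(edit_rate(s) for s in sessions) / len(sessions)
    return average > threshold


# Example: two sessions with heavy edits trigger a prompt review.
sessions = [NoteSession("a", 1200, 500), NoteSession("b", 900, 400)]
print(needs_prompt_review(sessions))  # True
```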
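Similarly, to illustrate the retrieval-augmented generation point, below is a minimal sketch of grounding answers in a fixed knowledge base and returning the source references so a human can verify them. The knowledge-base entries, the naive keyword retriever, and the `generate` callback are placeholders, not RingCentral’s pipeline.

```python
# Minimal RAG sketch: the answer is grounded in a fixed knowledge base, and
# every passage used is returned as a reference so a human can verify it.
# The knowledge base, retriever, and model callback are placeholders.
from typing import Callable

KNOWLEDGE_BASE = {
    "kb-101": "Admins can disable AI features per user group in the console.",
    "kb-102": "Recordings are retained for 90 days unless configured otherwise.",
}


def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword retrieval restricted to the curated knowledge base."""
    matches = [
        (doc_id, text)
        for doc_id, text in KNOWLEDGE_BASE.items()
        if any(word in text.lower() for word in question.lower().split())
    ]
    return matches[:top_k]


def answer_with_references(question: str, generate: Callable[[str], str]) -> dict:
    """Build a context-restricted prompt and return the answer plus its sources."""
    passages = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    prompt = (
        "Answer ONLY from the context below and cite the bracketed ids.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return {"answer": generate(prompt), "references": [doc_id for doc_id, _ in passages]}


# Example with a stub model in place of a real LLM call.
result = answer_with_references(
    "How long are recordings retained?",
    lambda prompt: f"(model answer grounded in {len(prompt)} chars of context)",
)
print(result["references"])  # ['kb-102']
```

Because the prompt is restricted to the retrieved passages and their ids are returned alongside the answer, a reviewer can check each cited snippet directly rather than taking the model’s output on faith.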
By ensuring our AI is accurate, we stand by our promise to provide explainable and transparent AI.
What privacy challenges arise with AI in large-scale enterprise deployments, and how are they addressed?
First of all, it is important to remember that existing privacy laws contain provisions that are applicable to artificial intelligence. When laws are technology-neutral, their legal frameworks and ethical guideposts apply to new technologies. Therefore, organizations need to ensure their use of AI complies with existing privacy laws, such as the GDPR and the CPRA.
Second, the responsibility of privacy professionals is to monitor nascent and emerging AI laws, which vary from state to state and country to country. AI laws address numerous aspects of AI, but one of the top priorities for new AI regulation is the protection of fundamental human rights, including privacy.
The critical success factors in addressing privacy issues are transparency towards users, especially where AI performs profiling or makes automated decisions impacting individuals, and enabling choices so users can opt out of AI usage they are not comfortable with.
What future trends do you see in AI and data privacy, and how is RingCentral preparing to stay ahead?
The major trends are new laws that will continue to come into force, users’ increasing demands for transparency and control, the ever-growing need to manage AI-related risk, including third-party risks, and the rise of cyber risks in AI.
Companies need to put robust governance in place, and teams must collaborate across functions to ensure internal alignment, minimize risks, and grow users’ trust. At RingCentral, our ongoing commitment to privacy, security, and transparency remains unmatched. We take these things seriously. Through our AI governance and our AI privacy pillars, RingCentral is committed to ethical AI.
Thank you for the great interview. Readers who wish to learn more should visit RingCentral.