
Welcome to the Open Voice Trustmark Initiative website

Here to Help Master Trustworthy AI Conversations for the Common Good

As a developer friend in the voice tech community would say: “You belong here.” 

An independently run project of the Linux Foundation, Open Voice Trustmark is dedicated to tracking the key principles needed to make conversational AI worthy of trust and translating them into action. Whether you’re here to learn for yourself, implement best practices for your team, or help shape the future, we’re here for you with resources and advisors that can help.

The societal benefits of humans speaking with machines include greater accessibility for more people. Trust enables us to speak or text naturally when engaging with health and caretaking tools, conducting commerce, supporting customer service professionals, serving the public through government and other organizations, and pursuing research and discovery. Of course, conversational AI is also popular in the entertainment realm and has long been an integral part of games played by people of all ages.

Earning the Open Voice Trustmark Symbol is a way to demonstrate that you, your company, or your organization:

  • understand the implications and obligations of engaging people in conversation (there’s a ton of meaningful data in the human voice and background environment)
  • respect personal privacy and protect other people’s data
  • are clear (“transparent”) about whether you are collecting data and what you will do with it
  • ensure, and keep checking, that technology hasn’t outpaced its guardrails (“accountability”), doesn’t harm people through bias, inaccuracies, or unauthorized voiceprints, and continues to work well for most people (“inclusiveness”).

Yes, there will be a test – but we’ve got you covered with resources and courses from understanding the basics of conversational AI to advanced considerations for enterprises and organizations.

Why focus on the human voice in AI risk mitigation and share our research widely?

  • Bias and Ethical Concerns
  • Disinformation and Misinformation
  • Privacy and Data Security
  • Lack of Explainability and Transparency
  • Intellectual Property (IP) Issues
  • Damage to Brand/Reputation

Those are just a few of the reasons we’re advocating for awareness and action that put people first.

Resources

Review the basic considerations behind human-to-machine interactions in our white papers.  

Learn more about our organization and how you can contribute.  Contact us.  

Endorse the Trustmark Initiative!

Ready to learn more? Take a self-administered course on ethical AI considerations.