WorkBoard's Generative AI Toolkit for IT

On this page, you’ll find technical details on how WorkBoard uses generative AI, along with related security and efficacy information.

Capability Summary

WorkBoard’s Co-Author transforms your colleagues’ experience of setting OKRs from frustrating to fun. That delight comes from the magic of doing incredibly fast what humans do frustratingly slowly.

WorkBoard’s Co-Author solves these problems and more!

  • Eliminates the need to know the OKR structure to align quickly and effectively 
  • Reduces the time to find the right inspiration for objectives to just seconds 
  • Converts tasks and questions into good outcome measures and leading indicators in seconds 
  • Identifies related results and work automatically for each key result 

Teams align on outcomes functionally and cross-functionally faster, find the discussions more fruitful and fun, and spend less time setting OKRs.

The solution is designed to protect and secure your data and your users: 

  • We use Microsoft Azure OpenAI, and our data centers are already on Microsoft Azure (an existing subprocessor)
  • Your data is not shared with OpenAI, with Microsoft, or with any other WorkBoard customer
  • Your data is not used to train WorkBoard data sets or any other data set
  • We are a part of Workday’s Trustworthy AI marketplace
  • There is a human in the loop in all our GenAI interactions

Our Commitment to Responsible AI

While WorkBoard and many other organizations have used AI, machine learning, and natural language processing to improve stakeholder experiences, the advent of generative AI presents tremendous new opportunities and creates new concerns. While generative AI is still in the early stages of adoption, it is already making a real difference in our ability to gather, synthesize, and author information more quickly – activities at the center of knowledge work.

To drive the most beneficial outcomes for our customers, our use of AI must be trustworthy and ethical, and as regulations emerge, compliant.

Our Pillars of Responsible AI

We accelerate teams, not replace them

WorkBoard’s use of AI generates drafts of OKRs, action plans, Scorecards, and other strategy execution artifacts that enable users to make their own decisions faster. The Co-Author and AI are collaborators with the user – visibly and transparently – rather than overriding the user’s ideas, input, or insight. Users can choose to accept, modify, or reject the drafts or briefs that our Co-Author generates. This “human in the loop” use of AI honors the agency of our users while providing benefits that neither AI nor a human could achieve alone.

We are transparent

We make clear documentation on our architecture and software available to our customers, including how we use AI. This includes explaining how our learning models are built, trained, and tested. We value inclusion and fairness, and our governance process monitors how we use AI for unintended consequences. As we continue to both innovate and learn, we will maintain our deep commitment and controls for explainability.

Your data is private by design

Data privacy and confidentiality are the foundations of trust for any platform, and our privacy and information security policies apply to our use of AI technologies. We grant customers control over their data's usage in our AI solutions. Privacy-by-design is a first principle of all of our development practices, and our use of AI and learning models is no exception. WorkBoard enables companies to harness the intelligence of their own strategy execution data managed in our platform while benefiting from the power of a domain-specific large language model to generate strong suggestions faster – without worrying that their data ever lands in the public domain, on ChatGPT, or in a competitor’s hands.

Generative AI Data Security FAQ

How does the OKR functionality in WorkBoard utilize generative AI?

WorkBoard leverages AI services (Azure OpenAI and Anthropic Claude) to provide intelligent suggestions and prompts for generating OKRs based on the data available in the platform.

What steps are taken to ensure the security and privacy of our OKR data when using this functionality?

At WorkBoard, we take the security and privacy of your data seriously. We follow industry best practices and employ robust security measures to protect your OKR data. All communication between WorkBoard and the AI services is encrypted, and access to your data is strictly controlled and limited to authorized personnel.

How is the OKR data transmitted to the AI services, and what measures are in place to secure this data during transmission?

The OKR data is securely transmitted to the AI services using encrypted channels and industry-standard security protocols. We adhere to secure data transmission practices to ensure the confidentiality and integrity of your OKR data.

Are there any measures in place to prevent unauthorized access or misuse of our OKR data within the AI services?

Yes, the AI services maintain robust security infrastructure to prevent unauthorized access or misuse of customer data. Their providers have implemented strict access controls, monitoring systems, and auditing mechanisms to ensure the protection of your OKR data.

What level of control do we have over our OKR data?

You retain full ownership and control over your OKR data. The AI services do not use or retain your data beyond the scope of the OKR generation process. WorkBoard does not share your data with any third parties without your explicit consent.

Are there any data anonymization techniques used when transmitting our OKR data to the AI services?

Your data is encrypted in transit, and any sensitive information is anonymized or tokenized before being sent to the AI services, ensuring the privacy and confidentiality of your data.
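WorkBoard’s internal pipeline is not public, but the general tokenization pattern described above can be sketched: sensitive values are swapped for opaque placeholders before a prompt leaves the platform, and restored only in the returned response. Everything below (function names, the email pattern, the stand-in response) is a hypothetical illustration, not WorkBoard code.

```python
import re

def tokenize(text, patterns):
    """Replace sensitive values with opaque tokens; return (text, mapping)."""
    mapping = {}
    counter = 0

    def repl(match):
        nonlocal counter
        token = f"<PII_{counter}>"
        counter += 1
        mapping[token] = match.group(0)
        return token

    for pattern in patterns:
        text = re.sub(pattern, repl, text)
    return text, mapping

def detokenize(text, mapping):
    """Restore original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

# Example: mask email addresses before the prompt is sent to a model.
EMAIL = r"[\w.+-]+@[\w-]+\.[\w.]+"
prompt = "Draft an OKR for jane.doe@example.com's revenue team."
safe_prompt, mapping = tokenize(prompt, [EMAIL])
# Only safe_prompt would be transmitted; mapping stays on the platform.

# Stand-in for the LLM call: the model only ever sees the token.
response = f"Objective for {list(mapping)[0]}: grow pipeline 20%."
restored = detokenize(response, mapping)
```

The key property is that the mapping from token back to the real value never leaves the platform, so the model provider sees only placeholders.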

Can we trust the suggestions and prompts generated to maintain the confidentiality of our OKR data?

Yes, the WorkBoard Co-Author is designed to provide high-quality suggestions while maintaining the confidentiality of your OKR data. The model focuses on generating relevant and useful prompts without retaining or disclosing any sensitive information.

Do Azure OpenAI and Anthropic Claude have any data retention policies regarding our OKR data?

Azure OpenAI and Anthropic Claude do not retain your OKR data beyond the scope of the OKR generation process. Once the suggestions are generated, the data is discarded, ensuring that your data remains secure and private.

Are there any additional security certifications or compliance standards that Azure OpenAI and Anthropic Claude adhere to?

Both companies maintain various security certifications and compliance standards, including ISO 27001, SOC 2 Type II, HIPAA, and GDPR. These certifications ensure that their infrastructure and practices meet the highest security and privacy standards.

Additional Security Information

For more information on our SOC 2, ISO 27001, and other security certifications, visit our Info Sec page.

For more information on Azure OpenAI, visit the Azure OpenAI FAQ.

For more information on Anthropic Claude, visit Anthropic's Trust Page.

Download our full technical and architecture information here.

