18555117. FRAMEWORK FOR TRUSTWORTHINESS simplified abstract (Nokia Technologies Oy)

FRAMEWORK FOR TRUSTWORTHINESS

Organization Name

Nokia Technologies Oy

Inventor(s)

Tejas Subramanya of Munich (DE)

Janne Ali-tolppa of Espoo (FI)

Henning Sanneck of Munich (DE)

Laurent Ciavaglia of Massy (FR)

FRAMEWORK FOR TRUSTWORTHINESS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18555117, titled 'FRAMEWORK FOR TRUSTWORTHINESS'.

The patent application describes a method that receives a trust level requirement for a service, translates it into a requirement for the fairness, explainability, or robustness of a calculation performed by an artificial intelligence (AI) pipeline related to the service, and provides this requirement to a trust manager of the AI pipeline; a minimal code sketch appears after the list below.

  • The method involves translating trust level requirements into requirements for fairness, explainability, or robustness of AI calculations.
  • The trust manager of the AI pipeline is responsible for ensuring that the AI calculations meet these requirements.
  • This innovation aims to enhance trust in AI systems by addressing issues related to fairness, explainability, and robustness.
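
To make the flow concrete, the following is a minimal Python sketch of the three claimed steps: receiving a trust level requirement, translating it into fairness/explainability/robustness targets, and providing the result to a trust manager. The TrustLevel values, the numeric thresholds, and the TrustManager interface are all invented for this illustration and are not taken from the patent.

  from dataclasses import dataclass
  from enum import Enum


  class TrustLevel(Enum):
      """Hypothetical discrete trust levels that a service may request."""
      LOW = 1
      MEDIUM = 2
      HIGH = 3


  @dataclass
  class TrustworthinessRequirement:
      """Per-dimension requirement for the AI pipeline's calculation."""
      fairness: float        # e.g. minimum group-fairness score
      explainability: float  # e.g. minimum explanation-fidelity score
      robustness: float      # e.g. minimum accuracy under input perturbation


  # Illustrative translation from a service's trust level to concrete
  # fairness/explainability/robustness targets; the thresholds are invented.
  TRANSLATION_TABLE = {
      TrustLevel.LOW: TrustworthinessRequirement(0.60, 0.50, 0.50),
      TrustLevel.MEDIUM: TrustworthinessRequirement(0.80, 0.70, 0.70),
      TrustLevel.HIGH: TrustworthinessRequirement(0.95, 0.90, 0.90),
  }


  class TrustManager:
      """Hypothetical trust manager of the AI pipeline: it receives the
      translated requirement and enforces it on the pipeline's calculation."""

      def apply(self, requirement: TrustworthinessRequirement) -> None:
          print(f"Trust manager configured with {requirement}")


  def handle_trust_level_requirement(level: TrustLevel, manager: TrustManager) -> None:
      """Receive a trust level requirement for a service, translate it, and
      provide the result to the trust manager of the AI pipeline."""
      requirement = TRANSLATION_TABLE[level]
      manager.apply(requirement)


  if __name__ == "__main__":
      handle_trust_level_requirement(TrustLevel.HIGH, TrustManager())

In a deployed system the translation table would presumably be configurable policy rather than hard-coded, and the trust manager would use the targets to configure and monitor the AI pipeline instead of merely printing them.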

Potential Applications

  • This technology can be applied in various industries where trust in AI systems is crucial, such as healthcare, finance, and autonomous vehicles.
  • It can also be used in customer service chatbots to ensure fair and transparent interactions with users.

Problems Solved

  • Addresses concerns about the lack of transparency and fairness in AI decision-making processes.
  • Helps improve trust and acceptance of AI technologies by providing explanations and ensuring fairness in their calculations.

Benefits

  • Enhances trust in AI systems by ensuring fairness, explainability, and robustness in their calculations.
  • Helps organizations comply with regulations related to AI transparency and accountability.

Commercial Applications

  • This technology can be utilized by AI companies, regulatory bodies, and organizations using AI systems to improve trust and transparency in their operations.

Prior Art

  • Researchers have explored various methods to improve the fairness and explainability of AI systems, but this specific approach may offer a unique solution.

Frequently Updated Research

  • Stay updated on the latest research in AI ethics, fairness, and explainability to further enhance the trustworthiness of AI systems.

Questions about the technology

  1. How does this method contribute to improving trust in AI systems?
  2. What are the potential implications of implementing this technology in various industries?


Original Abstract Submitted

Method comprising: receiving a trust level requirement for a service; translating the trust level requirement into a requirement for at least one of a fairness, an explainability, and a robustness of a calculation performed by an artificial intelligence pipeline related to the service; providing the requirement for the at least one of the fairness, the explainability, and the robustness to a trust manager of the artificial intelligence pipeline.