ISO/IEC JTC 1 SC 42 Artificial Intelligence - Working Group 4
Use Cases & Applications
   04/26/2024

Editor's comments and enhancements are shown in green. [Reviewed]

The quality of use case submissions will be evaluated for inclusion in the Working Group's Technical Report based on the application area, relevant AI technologies, credible reference sources (see References section), and the following characteristics:

  • [1] Data Focus & Learning: Use cases for AI systems that utilize Machine Learning, and those that use a fixed a priori knowledge base.
  • [2] Level of Autonomy: Use cases demonstrating several degrees (dependent, autonomous, human/critic in the loop, etc.) of AI system autonomy.
  • [3] Verifiability & Transparency: Use cases demonstrating several types and levels of verifiability and transparency, including approaches for explainable AI, accountability, etc.
  • [4] Impact: Use cases demonstrating the impact of AI systems to society, environment, etc.
  • [5] Architecture: Use cases demonstrating several architectural paradigms for AI systems (e.g., cloud, distributed AI, crowdsourcing, swarm intelligence, etc.)
  • [6] Functional aspects, trustworthiness, and societal concerns
  • [7] AI life cycle components include acquire/process/apply.
These characteristics are identified in red in the use case.

No.: 13  ID:  Use Case Name: AI solution to automatically identify false positives from a specific check for "untranslated target segments" from an automated quality assurance tool
Application Domain: Other
Deployment Model: Cloud services
Status: PoC
Scope: The scope of this use case is limited to automated linguistic quality assurance tools, but its outcome could be applicable to other areas, such as Machine Translation, automated post-editing, Computer Aided Translation analysis, and pre-translation. This use case is relevant for content across any domain.
Objective(s): To reduce the number of false positives produced by the check for untranslated target segments on bilingual content in an in-house automated quality assurance tool.
Short Description (up to 150 words):
We aim to build an AI solution that automatically identifies likely false positives among the results of the "check for untranslated target segments", using machine learning trained on false positives already identified by our users. The expected outcome is to increase end users' productivity when reviewing automated quality assurance findings and to change user behaviour so that more attention is paid to this type of issue, by reducing the number of false positives by 80%. In addition, we would like to reduce the time we spend each year refining this check manually based on users' feedback.
Complete Description:
Untranslated target segments contain characters, symbols, and words that remain the same in the source and target language. These segments can contain numbers, alphanumeric content, code, e-mail addresses, prices, proper nouns, etc., or any combination of those. On a yearly basis, this check produces over 1 million potential issues across more than 50 languages. Refining this check manually, based on annotated false-positive data for each specific customer, product, and language pair, is very costly, and the coverage is never sufficient, as new content is constantly produced and there are always new opportunities to refine the check via code. In addition, because of the high proportion of false positives (over 95.5%), our translators tend to ignore the output of this valuable check, and in many cases we suspect that valid issues flagging genuinely forgotten translations are missed. There are typically three types of false positives for this check:
1) Language-specific false positives, e.g., situations where the source and target segment need to be the same because the words in these segments are "cognates" with the same meaning. For example:
Fig. 1: example of language-specific false positives (cognates).
2) Customer-profile-specific false positives, i.e., situations where certain segments are to be left untranslated based on specific guidelines from the customer, e.g., segments that consist only of company names, product names, or specific words and segments that the customer has determined should not be translated:
Fig. 2: example of customer-profile-specific false positives.
3) Segments that remain the same in source and target because they act as a special type of entity with a particular meaning: alphanumeric segments such as part numbers, placeholders, and code.
Fig. 3: example of entity-like segments (part numbers, placeholders, code).
The idea is to create an AI solution that can automatically identify results from the "check for untranslated target segments" that are likely to be false positives. With this solution, we expect to reduce the number of potential issues presented by this check to our end users by 80%. This way, end users can focus their efforts on the potential issues that are more likely to be valid corrections because a translation may genuinely have been forgotten. In addition, we will increase the productivity of our end users when reviewing automated quality assurance findings from their bilingual content evaluation, and we will save costs internally, as we won't have to implement code changes in this check based on manual analysis of user-annotated data.
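As a minimal sketch of this approach (Python with scikit-learn; the field names, feature set, and model choice are illustrative assumptions, not the actual RWS Moravia implementation), a binary classifier could be trained on user-annotated check results, with features reflecting the three false-positive types above:

    import re
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def segment_features(row):
        # Features mirroring the three false-positive types; `row` is one
        # annotated check result (assumed dict-like, see Data Characteristics).
        src = row["source_segment"]
        return [
            int(src == row["target_segment"]),            # the check fires on identical segments
            int(bool(re.fullmatch(r"[\w\-./@]+", src))),  # type 3: alphanumeric/code-like entity
            int(src in row["do_not_translate_terms"]),    # type 2: customer DNT list (assumed field)
            len(src.split()),                             # short segments skew to names and codes
            # In practice, one-hot encode (source_language, target_language) and the
            # customer/product profile so type-1 cognate patterns can be learned per pair.
        ]

    def train_fp_classifier(annotated_rows):
        X = [segment_features(r) for r in annotated_rows]
        y = [r["false_positive"] for r in annotated_rows]  # user annotation as the label
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        return clf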
Stakeholders: Customers, translation partners, end users of the translated content.
Stakeholders' Assets, Values:
Systems' Threats & Vulnerabilities: Bias from changes in requirements on the customer's end, or inappropriate training data.
Performance Indicators (KPIs):
Seq. No. | Name | Description | Reference to use case objectives
1 | Coverage | Ratio of potential issues which are "of interest" for human evaluation; the target is to reduce the current volume by 80%. | Improve accuracy
2 | Split | Proportion of the potential issues which are "more likely to be a valid issue" for our end users. | Improve efficiency
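A sketch of how these two KPIs could be computed from model output (the names are illustrative assumptions: is_fp_pred is the model's "likely false positive" verdict, valid_correction the ground-truth user annotation):

    def compute_kpis(flagged_issues):
        # Issues still surfaced to users after likely false positives are suppressed.
        shown = [f for f in flagged_issues if not f["is_fp_pred"]]
        coverage = 1 - len(shown) / len(flagged_issues)  # volume reduction; target >= 0.80
        split = (sum(f["valid_correction"] for f in shown) / len(shown)
                 if shown else 0.0)                      # share of surfaced issues that are valid
        return coverage, split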
AI Features:
Task(s): Recognition
Method(s): Machine Learning
Hardware:
Topology:
Terms & Concepts Used: Machine Learning
Standardization Opportunities / Requirements:
Challenges & Issues:
Challenges: eventually achieve 80% of the accuracy of human linguists in identifying false positives for untranslated target segments, while preventing false negatives as far as possible. Issues: segmenting the false-positive data by customer and product profile could be challenging.
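One way to reconcile the 80% volume-reduction target with the false-negative concern (a sketch, assuming the classifier outputs a false-positive probability per issue; the function name and recall floor are illustrative) is to pick the most aggressive suppression cutoff that still surfaces nearly all annotated valid issues:

    import numpy as np

    def pick_threshold(fp_scores, is_valid, min_valid_recall=0.98):
        # Suppress an issue when its false-positive score >= cutoff. Return the
        # smallest cutoff (i.e., the most suppression) that still surfaces at
        # least `min_valid_recall` of the annotated valid issues.
        fp_scores = np.asarray(fp_scores)
        is_valid = np.asarray(is_valid, dtype=bool)
        for t in np.linspace(0.0, 1.0, 101):
            surfaced = fp_scores < t
            recall = surfaced[is_valid].mean() if is_valid.any() else 1.0
            if recall >= min_valid_recall:
                return t
        return 1.0  # no feasible cutoff: suppress (almost) nothing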
Societal Concerns:
Description: Not applicable
SDGs to be achieved:
Data Characteristics:
Description: Data from end users' identification of false positives and valid corrections among the results of the "untranslated target segment" check of Moravia QA Tools.
Source: RWS Moravia Analytics Portal (https://analytics.moravia.com/Dashboard/459)
Type: Structured content in a table with additional metadata fields (source segment, target segment, source language, target language, valid correction, false positive, customer and product profile, frequency); see the sketch below.
Volume (size): Data for the last 18 months
Velocity: Updated every hour
Variety: Data types will be the same, but different variables have to be considered (source language, target language, customer and product profile)
Variability (rate of change): No changes
Quality: End-user dependent
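A sketch of one annotated record matching the metadata fields listed under Type (the field names mirror that list; the actual Analytics Portal schema is not public, so the types are assumptions):

    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        # One annotated result of the "untranslated target segment" check.
        source_segment: str
        target_segment: str
        source_language: str    # e.g. "en"
        target_language: str    # e.g. "de"
        valid_correction: bool  # user confirmed a genuinely forgotten translation
        false_positive: bool    # user dismissed the flag as a false positive
        customer_profile: str
        product_profile: str
        frequency: int          # how often this segment pair occurs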
Scenario Conditions:
No. | Scenario Name | Scenario Description | Triggering Event | Pre-condition | Post-Condition
Scenario Name: Training
Step No. | Event | Name of Process/Activity | Primary Actor | Description of Process/Activity | Requirement
Specification of training data:
Scenario Name: Evaluation
Step No. | Event | Name of Process/Activity | Primary Actor | Description of Process/Activity | Requirement
Input of Evaluation:
Output of Evaluation:
Scenario Name: Execution
Step No. | Event | Name of Process/Activity | Primary Actor | Description of Process/Activity | Requirement
Input of Execution:
Output of Execution:
Scenario Name: Retraining
Step No. | Event | Name of Process/Activity | Primary Actor | Description of Process/Activity | Requirement
Specification of retraining data:
References:
No. | Type | Reference | Status | Impact of use case | Originator Organization | Link
  • Peer-reviewed scientific/technical publications on AI applications (e.g. [1]).
  • Patent documents describing AI solutions (e.g. [2], [3]).
  • Technical reports or presentations by renowned AI experts (e.g. [4]).
  • High-quality company whitepapers and presentations.
  • Publicly accessible sources with sufficient detail.

    This list is not exhaustive. Other credible sources may be acceptable as well.

    Examples of credible sources:

    [1] B. Du Boulay. "Artificial Intelligence as an Effective Classroom Assistant". IEEE Intelligent Systems, vol. 31, pp. 76-81, 2016.

    [2] S. Hong. "Artificial intelligence audio apparatus and operation method thereof". US Patent 9,948,764. Available at: https://patents.google.com/patent/US20150120618A1/en. 2018.

    [3] M.R. Sumner, B.J. Newendorp and R.M. Orr. "Structured dictation using intelligent automated assistants". US Patent 9,865,280, 2018.

    [4] J. Hendler, S. Ellis, K. McGuire, N. Negedley, A. Weinstock, M. Klawonn and D. Burns. "WATSON@RPI, Technical Project Review". URL: https://www.slideshare.net/jahendler/watson-summer-review82013final. 2013.