ETSI TC INT has launched a new standardization request on “AI in Test Systems and Testing AI Models”
Rationale
Technical Committee (TC) INT Working Group (WG) AFI delivered:
- TS 103 195-2 - “Autonomic Network Engineering for the Self-Managing Future Internet (AFI); Generic Autonomic Network Architecture (GANA)”
- TS 103 194 - “Scenarios, Use Cases and Requirements for Autonomic/Self-Managing Future Internet”
- ETSI White Paper No.16 on “ETSI GANA Model and GANA Implementation Guide”, complemented by
- ETSI 5G PoC White Paper No.4
AFI WG is also running a series of 5G PoC Demos aimed at operationalizing this “GANA Multi-layer AI Framework for AMC” (Autonomic Management & Control).
AFI WG is also driving a PoC Program on various other topics pertaining to the introduction of Autonomics in various network architectures and their associated management and control architectures.
TC INT AFI WG performed instantiations of the GANA Framework Standard (TS 103 195-2) onto various Architectures, introducing the functional blocks and reference points needed to implement autonomics (AMC) in the target architectures and their associated Management and Control architectures:
- GANA instantiation onto BBF Architecture Scenarios: TR 103 473
- GANA instantiation onto 3GPP Backhaul and EPC Core Network: TR 103 404
- GANA instantiation onto Wireless/Ad Hoc/Mesh: TR 103 495
- GANA Principles in NGMN E2E 5G Architecture
- GANA Principles in TMForum ODA (Open Digital Architecture / Intelligence Management Functional Block)
- GANA Principles in TMForum Customer Experience 2025 Guidebook
- ETSI GANA adoption in ITU-T SG13 Recommendation Y.3324, and how to apply Recommendation Y.3172 in designing Cognitive GANA Decision-making Elements (DEs) as AI Models
The gap we identified is the need for a “Test & Certification Framework for AI Models for AMC (including GANA Cognitive DEs)” to support the Industry in implementing the ETSI GANA Multi-layer AI Framework, both through our 5G PoC Demo series and in the SDOs/Fora that are now leveraging the “ETSI GANA Framework”.
This is why we launched the new standardization request on “AI in Test Systems and Testing AI Models”, bridging this gap and answering this urgent need with a full set of Deliverables on:
- Artificial Intelligence (AI) in Test Systems
- Testing AI Models in General and Standardized Metrics for Measurements and Assessments in Testing and Certification of AI Models of Autonomic Components/Systems
- Testing ETSI GANA Model's Cognitive Decision Elements (DEs)
- Generic Test and Certification Framework for Testing ETSI GANA Multi-Layer Autonomics Components & their AI Algorithms for Closed-Loop Network Automation.
The diagram below depicts the structure of this new TC INT standardization request:
Structure of this TC INT standardization request on “Testing AI Models and AI in Test Systems”
An ETSI Technical Report (TR) will be produced in 2020/2021 to extend the early Draft Generic Test Framework in ETSI EG 203 341 V1.1.1.
We shared this view with the ETSI Centre for Testing and Interoperability (CTI) during TC INT#43 in September, and with TC MTS during the 7th UCAAT meeting (Bordeaux, France), where the new TC INT standardization request was part of a presentation on “AI in Test Systems and Testing AI Models” that identified areas for joint collaboration with TC MTS.
Structure of the standardization request
This new TC INT standardization request was formed to overcome the Technical, Operational and Regulatory challenges linked to the development and deployment of AI-exhibiting Systems such as GANA Components and Knowledge Plane (KP) Platforms for Autonomic and Cognitive Management and Control of Networks and Services (AMC), as Testing of Cognitive GANA DEs (as AI/ML Models) is becoming crucial. Various challenges need to be addressed in the lifecycle of such AI-exhibiting systems with respect to the following systems engineering aspects: “Development – Training – Testing – Certification – Deployment – Execution”.
Thousands of AI Model instances may be executed simultaneously and concurrently in operations, each enforcing a set of Business and Operational rules injected at the Governance interface as “Policy, Goals and Data Configuration” within a Blueprint / Design Template. The template includes a mapping table that respects the concepts of “Managed Entities (MEs) Parameters ownership by specific DEs in a One ME-Parameter-to-One DE Ownership relationship” and “Coordination among DEs to prevent conflicts that may happen”, as described in TS 103 195-2.
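As an illustration, the “One ME-Parameter-to-One DE Ownership” rule above can be sketched as a small mapping-table check. This is a minimal sketch only; all class, entity and DE names below are hypothetical and are not drawn from TS 103 195-2.

```python
# Hypothetical sketch: a Blueprint mapping table that enforces the
# "one ME-Parameter-to-one-DE ownership" rule and rejects a second
# DE claiming a parameter already owned by another DE.

class OwnershipConflict(Exception):
    """Raised when two DEs claim the same Managed Entity parameter."""

class BlueprintMappingTable:
    def __init__(self):
        # (managed_entity, parameter) -> owning Decision Element (DE)
        self._owner = {}

    def assign(self, managed_entity, parameter, de):
        key = (managed_entity, parameter)
        current = self._owner.get(key)
        if current is not None and current != de:
            # Conflict: the parameter is already owned by another DE,
            # so coordination among DEs (not double ownership) is required.
            raise OwnershipConflict(
                f"{managed_entity}.{parameter} already owned by {current}")
        self._owner[key] = de

    def owner_of(self, managed_entity, parameter):
        return self._owner.get((managed_entity, parameter))

table = BlueprintMappingTable()
table.assign("RadioCell-1", "tx_power", "DE-RadioResourceMgmt")
print(table.owner_of("RadioCell-1", "tx_power"))  # DE-RadioResourceMgmt
try:
    table.assign("RadioCell-1", "tx_power", "DE-EnergyMgmt")
except OwnershipConflict as exc:
    print("conflict detected:", exc)
```

In a real GANA implementation this check would be part of validating the Blueprint / Design Template before DEs are instantiated, so that conflicting ownership is caught at design time rather than in operations.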
The Objectives of the WI include, but are not limited to, the following:
- to assess the benefits AI/ML brings to Test Systems (e.g. in reduction of Test Suites execution time in Performance Testing of complex systems)
- to define a Methodology and Metrics for testing AI Models using “Qualified Automated Test Component(s) or System(s)” that exhibit AI/ML capabilities of proven quality
- to define a framework for testing AI Component(s)
- to consider AI Test Methodology and AI Test Framework within both Off-line Training and On-line Learning modes
- ways to determine the time it may take for an AI Model for autonomic management and control to become meaningfully applicable and able to keep pace with the dynamics of the network
- ways to determine the time it may take for an AI Model embedded in a Test Component/System to become meaningfully applicable and able to keep pace with the dynamics of the network
- verdict passing in Testing AI Models, and how Suppliers of AI Models (e.g. Cognitive GANA DEs) to be Tested and Certified can produce “Claims/Assertions Specifications of Measurable Metrics/KPIs and certain observable and verifiable outputs” describing what the AI Model can achieve under certain conditions during its operation
- the idea of using a “Qualified Automated Test Component(s) or System” that exhibits best-quality AI capabilities to test comparable capabilities of the AI Component(s)/System Under Test
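To illustrate the “Claims/Assertions Specifications of Measurable Metrics/KPIs” idea from the objectives above, a minimal sketch of a machine-readable claim checked against observed measurements might look as follows. All metric names, thresholds, conditions and values are hypothetical and for illustration only; the actual specification format is the subject of the planned deliverables.

```python
# Hypothetical sketch: a supplier's "Claims/Assertions Specification"
# as a list of measurable claims (metric, comparator, threshold, and the
# operating condition under which the claim holds), checked against
# observed measurements to derive a pass/fail verdict.

from dataclasses import dataclass
import operator

COMPARATORS = {">=": operator.ge, "<=": operator.le}

@dataclass
class Claim:
    metric: str        # illustrative KPI name
    comparator: str    # ">=" or "<="
    threshold: float
    condition: str     # operating condition under which the claim holds

def verdict(claims, measurements):
    """Return 'pass' if every claim is met by its measured value, else 'fail'."""
    for c in claims:
        measured = measurements[(c.metric, c.condition)]
        if not COMPARATORS[c.comparator](measured, c.threshold):
            return "fail"
    return "pass"

claims = [
    Claim("anomaly_detection_accuracy", ">=", 0.95, "nominal_load"),
    Claim("decision_latency_ms", "<=", 50.0, "nominal_load"),
]
measurements = {
    ("anomaly_detection_accuracy", "nominal_load"): 0.97,
    ("decision_latency_ms", "nominal_load"): 42.0,
}
print(verdict(claims, measurements))  # pass
```

Binding each claim to an explicit operating condition is the key design point: it makes the supplier's assertion observable and verifiable by a test system only under the conditions for which it was made.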