Acumen IT Training, Inc.

CERTIFIED TESTER AI TESTING (CT-AI)

COURSE DESCRIPTION

The ISTQB® AI Testing (CT-AI) certification extends your understanding of artificial intelligence and deep (machine) learning, focusing specifically on testing AI-based systems and on using AI in testing.

The Certified Tester AI Testing certification is aimed at anyone involved in testing AI-based systems and/or AI for testing. This includes people in roles such as testers, test analysts, data analysts, test engineers, test consultants, test managers, user acceptance testers, and software developers. This certification is also appropriate for anyone who wants a basic understanding of testing AI-based systems and/or AI for testing, such as project managers, quality managers, software development managers, business analysts, operations team members, IT directors, and management consultants.

COURSE OUTLINE

1. Introduction to AI

    1.1. Definition of AI and AI Effect

    1.2. Narrow, General and Super AI

    1.3. AI-Based and Conventional Systems

    1.4. AI Technologies

    1.5. AI Development Frameworks

    1.6. Hardware for AI-Based Systems

    1.7. AI as a Service (AIaaS)

                1.7.1. Contracts for AI as a Service

                1.7.2. AIaaS Examples

    1.8. Pre-Trained Models

                1.8.1. Introduction to Pre-Trained Models

                1.8.2. Transfer Learning

                1.8.3. Risks of using Pre-Trained Models and Transfer Learning

    1.9. Standards, Regulations and AI

2. Quality Characteristics for AI-Based Systems

    2.1. Flexibility and Adaptability

    2.2. Autonomy

    2.3. Evolution

    2.4. Bias

    2.5. Ethics

    2.6. Side Effects and Reward Hacking

    2.7. Transparency, Interpretability and Explainability

    2.8. Safety and AI

3. Machine Learning (ML) – Overview

    3.1. Forms of ML

                3.1.1. Supervised Learning

                3.1.2. Unsupervised Learning

                3.1.3. Reinforcement Learning

    3.2. Workflow

    3.3. Selecting Form of ML

    3.4. Factors Involved in ML Algorithm Selection

    3.5. Overfitting and Underfitting

                3.5.1. Overfitting

                3.5.2. Underfitting

                3.5.3. Hands-On Exercise: Demonstrate Overfitting and Underfitting
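As a flavour of what the hands-on exercise in 3.5.3 covers (this is an illustrative sketch, not official course material), a k-nearest-neighbour regressor in plain Python shows both effects on a toy dataset: with k = 1 the model memorizes the training noise (overfitting), while with k equal to the whole training set it predicts one global mean everywhere (underfitting).

```python
import random

random.seed(0)

# Toy 1-D regression data: y = x^2 plus noise (the noise is what k = 1 memorizes).
train = [(x / 10, (x / 10) ** 2 + random.uniform(-0.2, 0.2)) for x in range(20)]
test = [(x / 10 + 0.05, (x / 10 + 0.05) ** 2) for x in range(20)]

def knn_predict(train_data, x, k):
    """Predict y at x as the mean label of the k nearest training points."""
    nearest = sorted(train_data, key=lambda p: abs(p[0] - x))[:k]
    return sum(p[1] for p in nearest) / k

def mse(dataset, train_data, k):
    """Mean squared error of the k-NN model over a dataset."""
    return sum((knn_predict(train_data, x, k) - y) ** 2 for x, y in dataset) / len(dataset)

# k = 1: training error is exactly zero, yet test error is not -- overfitting.
# k = 20 (all points): one global mean, high error on both sets -- underfitting.
overfit_train, overfit_test = mse(train, train, 1), mse(test, train, 1)
underfit_train, underfit_test = mse(train, train, 20), mse(test, train, 20)
```

The gap between training and test error is the practical symptom testers look for: an overfitted model looks excellent on the data it was trained on and degrades on anything new.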

4. ML – Data

    4.1. Data Preparation as Part of the ML Workflow

                4.1.1. Challenges in Data Preparation

                4.1.2. Hands-On Exercise: Data Preparation for ML

    4.2. Training, Validation and Test Datasets in the ML Workflow

                4.2.1. Hands-On Exercise: Identify Training and Test Data and Create an ML Model

    4.3. Dataset Quality Issues

    4.4. Data Quality and its Effect on the ML Model

    4.5. Data Labelling for Supervised Learning

                4.5.1. Approaches to Data Labelling

                4.5.2. Mislabelled Data in Datasets
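The data-preparation and dataset-splitting topics in sections 4.1 and 4.2 can be previewed with a minimal sketch (illustrative only; the dataset and the 60/20/20 split ratio are invented for the example): cleaning out missing values, min-max scaling the feature, and partitioning into training, validation, and test sets.

```python
import random

random.seed(42)

# Hypothetical raw dataset: (feature, label) pairs, including one missing value.
raw = [(5.0, 0), (None, 1), (3.0, 0), (8.0, 1), (6.0, 1), (2.0, 0), (7.0, 1), (4.0, 0)]

# 1. Cleaning: drop records with missing features.
clean = [(x, y) for x, y in raw if x is not None]

# 2. Normalization: min-max scale the feature into [0, 1].
lo, hi = min(x for x, _ in clean), max(x for x, _ in clean)
scaled = [((x - lo) / (hi - lo), y) for x, y in clean]

# 3. Splitting: shuffle, then partition roughly 60/20/20 into
#    training, validation, and test sets.
random.shuffle(scaled)
n = len(scaled)
train = scaled[: int(n * 0.6)]
val = scaled[int(n * 0.6) : int(n * 0.8)]
test = scaled[int(n * 0.8) :]
```

Keeping the three sets disjoint is the key discipline: the test set must play no part in training or model selection, or the measured performance is biased.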

5. ML Functional Performance Metrics

    5.1. Confusion Matrix

    5.2. Additional ML Functional Performance Metrics for Classification, Regression and Clustering

    5.3. Limitations of ML Functional Performance Metrics

    5.4. Selecting ML Functional Performance Metrics

                5.4.1. Hands-On Exercise: Evaluate the Created ML Model

    5.5. Benchmark Suites for ML
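The confusion matrix of section 5.1 and the derived metrics of section 5.2 can be computed by hand in a few lines. The sketch below uses invented classification results for illustration; 1 marks the positive class.

```python
# Invented binary classification results for illustration (1 = positive class).
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Confusion matrix cells: true/false positives and negatives.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

# Functional performance metrics derived from the matrix.
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)  # also called sensitivity
f1 = 2 * precision * recall / (precision + recall)
```

Which of these metrics matters depends on the cost of each error type, which is why section 5.4 treats metric selection as a decision in its own right.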

Please contact us for the full course outline, schedules and for booking a private class.