



Purpose

cQube solution architecture document: cQube | Design Document (Nov 2022)

This document focuses on the evolving Design Specs, which can be used by the development team.

Audience

Solution development team

How to use this document

This document is organized by topics. The various illustrations/models used here are available in: cQube | Design Specs | Models - v24Nov2022

Topics

Data processing pipeline alternatives

Two possible solutions were considered:

  1. Using Postgres 

  2. Using Nifi

Alt: Postgres model:

  • v4.0 - as-is

    • ingestion: Python API, storage: S3, processing: NiFi, aggregate data storage: Postgres, exhaust: S3 (JSON), viz engine: Node.js, viz: Angular, spec: Python-based generator (fragments of SQL)

  • v5: ingestion: Node.js API, store (till processed): Postgres DB

  • POST /spec/* → when a new spec is added (spec=event + dataset + pipe)

    • /event - create table <input> ⇒ e

      • Acts as a queue

      • Helps type validation

    • /dataset - create table <input> ⇒ c

    • /pipe - 

      • create a new pipe in table <pipeline>

      • pipe connects event to cube via transformer ⇒ event → transformer → dataset

      • transformer ⇒ create trigger <transformer> on event

  • POST /event → when a new event arrives

    • j ⇒ select e.c1, c.c2+1, c.c3+e.c3 from cqb1, evt1 inner join cqb1.c1=evt.c1 and cqb1

    • update cqb1 set c1=j.c1, c2=j.c2, c3=j.c3 from j

    • Transformer is a PG function connected via trigger on event table

    • Transformer updates dataset
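A minimal sketch of the trigger-based transformer idea described above, assuming a node-postgres (pg) client; the table and column names (evt1, cqb1, c1..c3) mirror the fragment above and are illustrative only, not a finalized schema.

```typescript
import { Client } from "pg";

// Hypothetical names: evt1 (event table), cqb1 (dataset/cube table).
// Illustrates only the "transformer as trigger" alternative sketched above.
async function createTransformerTrigger(client: Client): Promise<void> {
  // Transformer: a PG function that updates the dataset row for the incoming event.
  await client.query(`
    CREATE OR REPLACE FUNCTION transformer_evt1_to_cqb1() RETURNS trigger AS $$
    BEGIN
      UPDATE cqb1
         SET c2 = cqb1.c2 + 1,
             c3 = cqb1.c3 + NEW.c3
       WHERE cqb1.c1 = NEW.c1;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
  `);

  // Wire the transformer to the event table so every new event updates the dataset.
  await client.query(`
    CREATE TRIGGER trg_evt1_transformer
    AFTER INSERT ON evt1
    FOR EACH ROW EXECUTE FUNCTION transformer_evt1_to_cqb1();
  `);
}
```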

Alt: Nifi model:

  1. Events are passed to API (ajv based validation)

  2. API writes to event_batch_file

  3. Every 5 min, trigger intermediate_cube_processor

  4. API will write to a flowfile & ideally that should be handled by a NiFi process group

  5. Process group - transformer - event_data, event_specs, cube_schema, cube_data - Py/Js

    1. Receives Event data

    2. Pull all required data (from store)

    3. Perform transformation

    4. Store in an intermediate cube.csv

    5. Upsert the cube, once in every 5 min
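A small sketch of step 5's periodic upsert, assuming the intermediate cube rows have already been parsed from cube.csv and that the dataset table has a unique key on (school_id, date); the table and column names are illustrative.

```typescript
import { Client } from "pg";

interface CubeRow {
  school_id: string;
  date: string;
  total_students: number;
  attendance_marked: number;
}

// Upsert intermediate cube rows into an (illustrative) dataset table every cycle.
async function upsertCube(client: Client, rows: CubeRow[]): Promise<void> {
  for (const r of rows) {
    await client.query(
      `INSERT INTO dataset_attendance (school_id, date, total_students, attendance_marked)
       VALUES ($1, $2, $3, $4)
       ON CONFLICT (school_id, date)
       DO UPDATE SET total_students    = dataset_attendance.total_students + EXCLUDED.total_students,
                     attendance_marked = dataset_attendance.attendance_marked + EXCLUDED.attendance_marked`,
      [r.school_id, r.date, r.total_students, r.attendance_marked]
    );
  }
}
```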

Based on 

  • Traceability (where in the flow is my data)

  • Horizontal scalability

  • Flexibility - ability to add additional steps or pipelines or create side-effects

Nifi based data processing model is chosen for cQube.

Physical Architecture

   

...

  1. Kong

    1. Used as an API gateway

    2. Used for AuthN & AuthZ <<TBD>>

  2. Auth: handles AuthN and AuthZ <<TBD - Alternatives>>

    1. AuthN (identity and role assignment): 

      1. Implementation choices

        1. Location

          1. Internal

          2. External

        2. Solution

          1. Keycloak

          2. Using a 3rd party - Auth0

      2. Design Factors/Requirements

        1. A state may have its own authentication system - risk of duplication

        2. Management Overhead

        3. Users

          1. Adapter

          2. Dashboard Users

            1. Admin

            2. Public?

          3. Internal Service Communication?

        4. Keeping services stateless assuming access is authenticated already - JWT being the easiest to manage.

        5. Should support any OIDC based authentication (allow for external SSO integration)

    2. AuthZ: access to resources based on role

      1. Implementation Choices

        1. OPA (Open Policy Agent) - leaning towards this as it fulfills all requirements

      2. Requirements

        1. Need it to be internal to cQube

        2. Role based authorization

        3. Works on

          1. UI

          2. SQL - an SQL-based filtering layer which dynamically filters rows and columns based on role and configuration (a sketch follows this list)

          3. APIs

          4. Infrastructure

  3. Flow Engine: manages various data ingestion pipelines

  4. User MS: To manage users and roles

  5. Spec MS: Exposes APIs to process specifications - events, dimensions, datasets, transformers, schedules and pipelines

  6. Ingest MS: Exposes APIs to ingest event, dimension and dataset data

  7. Spec db: stores various specification details

  8. Config db: stores cQube specific data such as usage, roles, access control etc

  9. Ingest db: stores ingested data created by collecting events, dimensions and datasets before updating dataset (KPI) db

  10. Dataset db: stores KPI data. It is updated from ingested data <<TBD - should we consider Timescale>>

  11. SQL Access: <<TBD - potentially Hasura>>
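A hypothetical sketch of the role-based SQL filtering idea from the AuthZ requirements above (not the OPA alternative and not a committed design): a role-to-filter configuration narrows the columns and rows of a query before it reaches the dataset DB. All names and the configuration shape are assumptions for illustration.

```typescript
// Hypothetical role configuration: which columns a role may see and
// which row-level predicate must be appended to every query.
interface RoleFilter {
  allowedColumns: string[];
  rowPredicate?: string; // e.g. restrict a district admin to their own district
}

const roleConfig: Record<string, RoleFilter> = {
  state_admin: { allowedColumns: ["district_id", "school_id", "attendance_pct"] },
  district_admin: {
    allowedColumns: ["school_id", "attendance_pct"],
    rowPredicate: "district_id = $1", // parameter bound to the user's district
  },
};

// Build a filtered query for a role; rows and columns are narrowed per configuration.
function buildFilteredQuery(role: string, table: string): string {
  const filter = roleConfig[role];
  if (!filter) throw new Error(`Unknown role: ${role}`);
  const columns = filter.allowedColumns.join(", ");
  const where = filter.rowPredicate ? ` WHERE ${filter.rowPredicate}` : "";
  return `SELECT ${columns} FROM ${table}${where}`;
}

// Example: buildFilteredQuery("district_admin", "dataset_attendance")
// => "SELECT school_id, attendance_pct FROM dataset_attendance WHERE district_id = $1"
```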

Deployment Architecture

AWS - Network Architecture

The following steps define the cQube setup and workflow completion processes in AWS. cQube mainly comprises the areas mentioned below:

  1. EC2 Server

  2. IAM user and Role creation for S3 connectivity.

The cQube network setup process is described in the block diagram below:

[Image: cQube network setup block diagram]

Microservices Details

Following are the details of the microservices which get installed on the cQube server.

  • Ingestion-ms: The ingestion-ms is used to upload the data of the events, datasets, dimensions, transformers and pipeline. All these APIs are used to ingest the data into cQube.

  • Spec-ms: The spec-ms is used to import the schema of the events, datasets, dimensions, transformers and pipeline. All these specs will be defined by the cQube platform prior to ingesting the data into cQube. These specifications are derived by considering the KPIs as the Indicator.

  • Generator-ms: The generator-ms is used to create the specs & transformers for the derived datasets. It performs the aggregation logic, updates data into datasets based on the transformation, and updates the status of file processing.

  • Nifi-ms: Apache NiFi is used as a real-time integrated data logistics and simple event processing platform

  • Postgres-ms: The Postgres microservice contains the schema and tables

  • Nginx-ms: It is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to the upstream servers

  • Kong-ms: It is a lightweight API Gateway that secures, manages, and extends APIs and microservices.

cQube Deployment Procedure      

The install.sh file is a shell script which runs the following shell scripts and Ansible playbooks to set up cQube.

Basic_requirements.sh:

This script updates and upgrades the software packages on the server and installs basic software such as:

  • Python3

  • Pip3

  • Ansible

  • Docker

  • Docker compose

Config_file_generator.sh:

This script is used to generate a configuration file which contains some constant values and a few required variables to be entered by the user. Following are the variables which get added to the config file.

Note: Users should follow the hints provided in the description and enter the variables accordingly. If an entered value is wrong then an error message is displayed and the user should correct the variable value.

Constant Variables: These variables are auto-generated

  • System_user_name

  • base_dir

  • Private_ip

  • aws_default_region

User Input Variables - These are variables which need to be entered by the user by following the hint provided:

  • state_name ( Enter the required state code by referring to the state list provided )

  • api_end_point ( Enter the URL on which cQube is to be configured )

  • s3_access_key

  • s3_secret_key

  • s3 archived bucket name

  • s3 error bucket name         

Optional_variables - Database credentials contain default values. If users wish to enter their own credentials they should opt for yes when the question pops up; otherwise they can opt for no.

  • db_user_name ( Enter the postgres database username ) 

  • db_name ( Enter the postgres database name )

  • db_password ( Enter the postgres password )

Once the config file is generated, a preview of it is displayed, followed by a question where the user gets an option to re-enter the configuration values by choosing yes. If no is selected then install.sh moves to the next section.

Repository_clone.sh:

This script clones the following repositories into the microservices directory and checks out the required release branch.

[Image: repositories cloned by Repository_clone.sh]

    Note: If the repository is already cloned then the script will pull the updated code.

Ansible-playbook:

Install.yml

The install.yml Ansible playbook is triggered, and it runs the required roles to build the following microservice images:

  • Ingestion-ms

  • Spec-ms

  • Generator-ms

  • Postgres-ms

  • Nifi-ms

  • Kong-ms

  • Nginx-ms


compose.yml:

A Docker Compose Ansible task is triggered, which brings all the containers up into the running state.

Note: The following commands can be used from the Ansible directory to bring the containers down and to start them, respectively.

  • docker-compose -f docker-compose.yml down

  • docker-compose -f docker-compose.yml up -d

Once the installation is completed, you will be shown the following messages and the required reference URLs.

cQube Installed Successfully


The running status of the containers can be checked using the following command:

  • sudo docker ps


Spec pipeline

Defines how specs are processed into cQube.

[Diagram: Spec pipeline]

  1. Specs can be added to cQube via Spec microservice

  2. Spec MS interacts with the data flow engine (NiFi) and the spec db (PostgreSQL)

  3. Spec microservice namespace: /spec/*

  4. Key APIs & processing steps

    1. POST /spec/dimension

      1. Compile and verify validity

      2. Store in spec.dimension table

      3. Create the actual dimension table

        • The data type given in the spec will be mapped to the equivalent data type in Postgres; for example, if the datatype in the spec is “string” then the column datatype in the database will be “VARCHAR”

        • Similarly, all the columns and their respective data types will be collected to generate a dynamic query and create the dimension table (a sketch follows this list)

    2. POST /spec/event

      1. Compile and verify validity

      2. Store in spec.event table

    3. POST /spec/dataset

      1. Compile and verify validity

      2. Store in spec.dataset table

      3. Create the actual dataset table

        • The data type given in the spec will be mapped to the equivalent data type in Postgres; for example, if the datatype in the spec is “number” then the column datatype in the database will be “NUMERIC”

        • Similarly, all the columns and their respective data types will be collected to generate a dynamic query and create the dataset table

    4. POST /spec/transformer

      1. This is accessible only to superadmins

      2. Compile and verify validity - aware of the required ingestion type, key_file, program name and operation

      3. Where applicable, internally, a code generator will be used to create a transformer

      4. It is assumed that each transformer is tested and comes from a trusted source

      5. Store in spec.transformer table

    5. POST /spec/pipeline

      1. Compile and verify validity

      2. Store in spec.pipeline table

      3. NiFi compilation - abstracted by a wrapper layer which hides the NiFi calls

        1. Create a new NiFi processor_group

        2. Add processors to the newly created processor group

        3. Connect the transformer to the relevant processor

    6. POST /spec/schedule

      1. Compile and verify validity

      2. Create/Update the schedule for a pipeline
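A minimal sketch of the dynamic table creation described above, assuming the spec carries JSON-schema-style property types; the type map, table naming and the assumption that identifiers come from an already-validated spec are illustrative, not the finalized generator.

```typescript
import { Client } from "pg";

// Assumed JSON-schema-style property types in a dimension/dataset spec.
type SpecType = "string" | "number" | "integer" | "boolean";

// Illustrative mapping from spec types to Postgres column types.
const pgType: Record<SpecType, string> = {
  string: "VARCHAR",
  number: "NUMERIC",
  integer: "INTEGER",
  boolean: "BOOLEAN",
};

// Generate and run a CREATE TABLE statement from the spec's properties.
// Column and table names are assumed to be validated upstream by the spec check.
async function createTableFromSpec(
  client: Client,
  tableName: string,
  properties: Record<string, { type: SpecType }>
): Promise<void> {
  const columns = Object.entries(properties)
    .map(([name, prop]) => `${name} ${pgType[prop.type]}`)
    .join(", ");
  await client.query(`CREATE TABLE IF NOT EXISTS ${tableName} (${columns})`);
}

// Example: createTableFromSpec(client, "dimension_school",
//   { school_id: { type: "string" }, school_name: { type: "string" } });
```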

                     

  

Spec Flow

Yaml Link -> https://github.com/Sunbird-cQube/spec-ms/blob/dev/spec.yaml

The specifications flow diagram represents the schema of the events, datasets, dimensions, transformers and pipeline. All these specs will be defined by the cQube platform prior to ingesting the data into the cQube. These specifications are derived by considering the KPIs as the Indicator. 

Dimension Specification

...

  1. The dimension spec will be validated with the predefined specification using AJV framework.

  2. When the validation is successful then the spec will be checked for duplication.

  3. If the spec already exists with the same data then an error response will be sent.

  4. Else stored in the spec.dimension table.

  5. Add a record into spec.pipeline table

  6. Once the spec is stored, a dimension table will be created.
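A small sketch of the AJV validation step used across the spec APIs, assuming a simplified stand-in schema for a dimension spec; the real schema is the one defined in the spec.yaml linked above.

```typescript
import Ajv from "ajv";

// Simplified stand-in for the dimension spec schema defined in spec.yaml.
const dimensionSpecSchema = {
  type: "object",
  properties: {
    dimension_name: { type: "string" },
    schema: { type: "object" },
  },
  required: ["dimension_name", "schema"],
  additionalProperties: false,
};

const ajv = new Ajv();
const validateDimensionSpec = ajv.compile(dimensionSpecSchema);

// Returns a list of validation errors; an empty list means the spec is valid
// and can proceed to the duplicate check and the spec.dimension insert.
function checkDimensionSpec(payload: unknown): string[] {
  if (validateDimensionSpec(payload)) return [];
  return (validateDimensionSpec.errors ?? []).map(
    (e) => `${e.instancePath} ${e.message}`
  );
}
```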

Event Specification

...

  1. The event spec will be validated with the predefined specification using AJV framework.

  2. When the validation is successful then the spec will be checked for duplicacy.

  3. If the spec already exists with the same data then an error response will be sent.

  4. Else stored in the event spec table.

  5. Add a record into spec.pipeline table

Dataset Specification

  1. The Dataset spec will be validated with the predefined specification using AJV framework.

  2. When the validation is successful then the spec will be checked for duplicacy.

  3. If the spec already exists with the same data then an error response will be sent.

  4. Else stored in the dataset spec table.

  5. Add a record into spec.pipeline table

  6. Once the spec is stored, dataset table will be created

...

Transformer Specification

  1. The Transformer spec will be validated with the predefined specification using AJV framework.

  2. When the validation is successful, a Python API is internally called to generate the transformer files and store them for later use for the given program, by passing ingestion_name, key_file, program and operation.

  3. If the transformer file is successfully created then a record is added into spec.transformer table with the file name

...

Pipeline Specification

  1. Event processing

    1. Event collection pipeline - /ingestion/event

    2. Event collection to aggregation pipeline - /ingestion/pipeline <event name> 

    3. Aggregation to dataset upsert pipeline - /ingestion/pipeline <dataset name> 

  2. Dimension processing

    1. Dimension collection pipeline - /ingestion/dimension

    2. Dimension collection to dimension upsert pipeline - /ingestion/pipeline <dimension name> 

  3. Dataset processing 

    1. Dataset collection pipeline - /ingestion/dataset

    2. Dataset collection to Dataset upsert pipeline - /ingestion/pipeline <dataset name> 

  1. The Pipeline spec will be validated with the predefined specification using AJV framework.

  2. When the validation is successful then the spec will be checked for duplicacy.

  3. If the spec already exists with the same data then an error response will be sent.

  4. Else stored in the pipeline spec table.

  5. Pipeline types are listed below

  • Dimension to Collection

  • Dimension collection to db

  •  Event to Collection  

  • Dataset to collection

  • Event collection to Aggregate

  • Aggregate to DB

If the pipeline type is Aggregate to DB, then a processor group will be created and processors will be added inside it by calling NiFi APIs (a sketch of such a call follows below).

  1. The processor group will be mapped to the transformer.

  2. A connection will be established between the processors.
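A sketch of the wrapper-layer call that creates a processor group, assuming NiFi's REST API is reachable at a nifi-api base URL and that the parent process-group id is known from configuration; the endpoint shape follows the usual NiFi REST pattern but should be treated as an assumption here.

```typescript
// Create a child processor group under a parent group via NiFi's REST API.
// nifiBase and parentGroupId are assumed to come from configuration.
async function createProcessorGroup(
  nifiBase: string,          // e.g. "http://nifi:8080/nifi-api" (assumed)
  parentGroupId: string,
  name: string
): Promise<string> {
  const res = await fetch(`${nifiBase}/process-groups/${parentGroupId}/process-groups`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      revision: { version: 0 },
      component: { name, position: { x: 0, y: 0 } },
    }),
  });
  if (!res.ok) throw new Error(`NiFi returned ${res.status}`);
  const body = await res.json();
  return body.id as string; // id of the newly created processor group
}
```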

...

Spec DB Schema

Below are the tables included in spec schema

  1. spec.dimension

  2. spec.event

  3. spec.dataset

  4. spec.transformer

  5. spec.pipeline

  6. spec.schedule

...

Ingestion pipeline

...

  1. Data is ingested into cQube via Ingest MS

  2. Ingest MS stores the data into the Ingest Store

  3. Based on schedule, the Ingest Flow engine interacts with the Ingest Store and updates the Dataset DB

  4. Ingestion microservice namespace: /ingestion/*

  5. Key APIs & processing steps

    1. POST /ingestion/event (an example request follows this list)

      1. Validate event data

      2. Append to the corresponding event file (csv)

    2. POST /ingestion/dimension

      1. Validate dimension data

      2. Append to the corresponding dimension file (csv)

    3. POST /ingestion/dataset

      1. Validate dataset data

      2. Append to the corresponding aggregate file (csv)

    4. POST /ingestion/pipeline

      1. Trigger a pipeline by event, dimension, aggregate or dataset from the ingestion store, then transform and upsert into the dataset store

    5. POST /ingestion/csv

      1. Upload a CSV file with event, dimension or dataset data

      2. Validates the data

      3. Internally makes /ingestion/event, /ingestion/dimension or /ingestion/dataset API calls

    6. GET /ingestion/file-status

      1. Gives the status of the file which was uploaded using the /ingestion/csv API

    7. PUT /ingestion/file-status

      1. This is an internal API called from transformer files when processing starts and when it completes

  6. Flow engine is configured to trigger pipelines based on schedule
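An illustrative client call for POST /ingestion/event, assuming the payload carries the event name plus the columns defined in its event spec (the field names echo the attendance example later in this document); the exact request contract is defined by the ingestion service's own spec, so treat this shape as an assumption.

```typescript
// Post one attendance event to the ingestion microservice (illustrative payload).
async function sendEvent(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/ingestion/event`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      event_name: "student_attendance",   // assumed to match an event spec
      event: {
        school_id: "SCH001",
        class: 5,
        date: "2022-11-24",
        total_students: 40,
        attendance_marked: 36,
      },
    }),
  });
  if (!res.ok) throw new Error(`Ingestion API returned ${res.status}`);
}
```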

Ingestion Flow

Dimension Data
  1. The dimension data will be added using POST /ingestion/dimension API.

  2. Data will be validated with the dimension spec using AJV.

  3. When the validation is successful then the dimension data will be stored in the CSV file.

  4. Based on the schedule time processor group will be triggered and data will be added into DB.

...

Event Data 
  1. The Event data will be added using POST /ingestion/event API.

  2. Data will be validated with the event spec using AJV.

  3. When the validation is successful then the event data will be stored in the CSV file.

  4. Based on the schedule time processor group will be triggered and data will be ingested into DB.

...

Dataset

...

  1. The Dataset data will be added using POST /ingestion/dataset API.

  2. Data will be validated with the dataset spec using AJV.

  3. When the validation is successful then the dataset data will be stored in the CSV file.

  4. Based on the schedule time processor group will be triggered and data will be ingested into DB.

Upload CSV Flow
  1. Data can be ingested in two ways:

    1. Through ingestion APIs like /ingestion/dimension, /ingestion/event and /ingestion/dataset

    2. Using the /ingestion/csv API

  2. The CSV file will be imported using the POST /ingestion/csv API

  3. When the validation is successful, a record will be added to the ingestion.file_tracker table with file details like file_name, file_size and uploaded time

  4. An asynchronous call will be made and a response will be sent

  5. In the asynchronous call, if the ingestion_type is event then records will be inserted into the ingestion.file_pipeline_tracker table for all the datasets which will be generated using that event data

  6. The spec schema will be fetched from the DB for the given ingestion type and the CSV data will be validated against the datatypes

  7. If there is any error then ingestion.file_tracker will be updated with the error status

  8. If there is no error, then data will be read in batches with a batch limit of 1 lakh (100,000) records and passed as input to the /ingestion/event, /ingestion/dimension or /ingestion/dataset API based on the ingestion type (a sketch follows this list)

  9. Once all the data is processed, the Uploaded status will be updated in the ingestion.file_tracker table.
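A rough sketch of the batched CSV read-and-forward step, using Node's readline module; the 100,000-row batch size mirrors the limit stated above, while the postBatch callback stands in for the internal call to the ingestion APIs and the naive comma split is an assumption (a real parser would handle quoting).

```typescript
import * as fs from "fs";
import * as readline from "readline";

const BATCH_LIMIT = 100_000; // 1 lakh rows per batch, as described above

// Read a CSV file line by line and forward rows to the ingestion API in batches.
// postBatch is a stand-in for the internal /ingestion/event|dimension|dataset call.
async function ingestCsv(
  filePath: string,
  postBatch: (rows: string[][]) => Promise<void>
): Promise<void> {
  const rl = readline.createInterface({ input: fs.createReadStream(filePath) });
  let header: string[] | null = null;
  let batch: string[][] = [];

  for await (const line of rl) {
    const cells = line.split(","); // naive split; assumes no quoted commas
    if (!header) { header = cells; continue; }
    batch.push(cells);
    if (batch.length >= BATCH_LIMIT) {
      await postBatch(batch);
      batch = [];
    }
  }
  if (batch.length > 0) await postBatch(batch); // flush the final partial batch
}
```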

...

Pipeline Flow
  1. The POST /ingestion/scheduler API will add/update the schedule for the NiFi Scheduler.

  2. The NiFi Scheduler will call the POST /ingestion/pipeline API

  3. The Pipeline API will have the following functionalities:

    i. Read the data from the dimension collection and write it to the Dimension DB (S3)

...

    ii. Get the data from the Ingest DB and store it into the Dataset DB

...

File Status Flow

  1. File status will be checked using GET /ingestion/file-status API

  2. To update the file status, the PUT /ingestion/file-status API is used

  3. If the file is of ingestion type event then update the file status in ingestion.file_tracker table and update the status in ingestion.file_pipeline_tracker table for the dataset for which the event is processed.

  4. If all the datasets have completed processing then the status in the ingestion.file_tracker table is changed to “Ready_to_archive”.

  5. If the ingestion_type is dimension or dataset then the file status in the ingestion.file_tracker table is updated to “Ready_to_archive”.

Ingestion schema

In the illustration below, two dimensions have been used as examples.

  1. ingestion.dimension_state

  2. ingestion.dimension_district

  3. ingestion.sac_stds_atd_cmp_by_school -> one example dataset table used to illustrate. Dataset tables will have dynamic columns and static columns. Dynamic columns will be added when the dataset_spec is added. Static columns are sum, count, min, max, avg.

...

[Image: ingestion schema illustration]

KPIs Flow

KPI (Key Performance Indicator)

  • KPIs define what has to be derived from the input data or events

  • Each item in the Viz can be called a KPI.

Example: School_attendance

KPI Category: Student attendance Compliance

KPI VisualisationType: Table

KPI Indicator: School-wise average attendance compliance %

KPI Levels: School

If other Indicators need to be shown on the dashboard, for example:

Cluster-wise Average attendance compliance %

Block-wise Average attendance compliance %

District-wise Average attendance compliance %

State-wise Average attendance compliance %

Events
  • A data structure that records an occurrence at a particular time for an entity (eg: school, etc). It is a combination of simple data types (eg: integer, varchar, etc.).  An event should always contain a column/set of columns that helps you calculate the Indicator. A table with a timestamp doesn’t necessarily mean that it is an event; it should contribute to either aggregation or filtering of the dataset.

  • An Event should be in such a way that it should be able to update the dataset in some form

  • Transformation of Events to data that updates datasets - Transformation happens through a transformer: f(eventDetails, eventSchema, datasetSchema, dimensionConfig) = [array of columns]

  • Updating datasets

Example:

school_id | class | date | total_students | attendance_marked

<Unique id> | <Integer Value> | <dd/mm/yy> | <Integer Value> | <Integer Value>

Dimensions
  • Dimensions describe events.

  • Dimensions are static records.

  • In the final aggregation we will be mapping Dimensions with Datasets.

Example:

Dimensions for the above mentioned KPI are:

School: School_id, School_name, Cluster_Id, Block_id, District_id, State_Id

Cluster: Cluster_id, Cluster_name

Block: Block_id, Block_name

District: District_id, District_name

State: State_Id, State_name

Datasets 
  • High-level data which is computed by aggregating events. It is a data representation of the indicator. Datasets are persistent within cQube Ed. A dataset is created for at least one indicator. 

  • A dataset can be derived from one or more specified events, independently.

  • It may additionally contain dimensions and derived values from other datasets during the mapping process.

Example: School_attendance

Dataset for the above mentioned KPI is defined below:

date | school_id | sum(total_students) | sum(attendance_marked) | Average Attendance %

<yy-mm-dd> | <unique id> | <sum of parameters> | <sum of parameters> | <sum of parameter / sum of parameter>

Datasets for the above mentioned Indicators:

Cluster-attendance

Block_attendance

District_attendance

State_attendance

Note: 

  • There is a list of transformers. In this dataset we are using Sum

  • There is a list of dimensions. In the above mentioned dataset, School is the Dimension.
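A small worked sketch of how the dataset row above could be derived from events using the Sum transformer mentioned in the note; the event shape follows the example event table, and all type and function names are illustrative.

```typescript
// Shape of one attendance event, as in the example event table above.
interface AttendanceEvent {
  school_id: string;
  date: string;
  total_students: number;
  attendance_marked: number;
}

interface AttendanceDatasetRow {
  date: string;
  school_id: string;
  sum_total_students: number;
  sum_attendance_marked: number;
  average_attendance_pct: number;
}

// Aggregate the events of one school and one date into a dataset row (Sum transformer).
function toDatasetRow(events: AttendanceEvent[]): AttendanceDatasetRow {
  const sumTotal = events.reduce((s, e) => s + e.total_students, 0);
  const sumMarked = events.reduce((s, e) => s + e.attendance_marked, 0);
  return {
    date: events[0].date,
    school_id: events[0].school_id,
    sum_total_students: sumTotal,
    sum_attendance_marked: sumMarked,
    // Average Attendance % = sum(attendance_marked) / sum(total_students) * 100
    average_attendance_pct: (sumMarked / sumTotal) * 100,
  };
}
```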

These are some sample KPIs derived as a set of events, dimensions, and datasets from the PAT & SAT datasource, which were used in the earlier cQube version v4.0. Link

Developer & QC WorkFlow

...

The Developer & QC workflow activity described in the above diagram covers how a task is identified & assigned until it is merged to the respective branch in the developer workflow. Once the code is merged, QC will test their scripts, execute the test cases and produce the test results.

DevOps

Installation Process     

Following technologies will be used to implement cQube as a one step installation.

  1. Shell Scripting - Shell scripts are used to install the basic software required to run Ansible and Docker, to generate a configuration file by prompting the user with questions, and to run the Ansible playbooks.

  2. Docker - for containerising the microservices

  3. Docker Swarm - a swarm consists of multiple Docker hosts, one of which acts as a manager handling multiple nodes. This ensures the system continues to run even when one node fails.

  4. Ansible - to build the code easily and to reuse it.

  5. Github Actions - for Continuous Integration and Continuous Deployment

Steps: Users will clone the code from GitHub, check out the latest release branch, give the install.sh shell script executable permission, and run it.

Install.sh File:

  • It checks if the user is running the script using sudo and throws an error accordingly.

  • Installs all the basic software like Ansible, Docker and Python dependencies

  • Triggers a configuration.sh file which is used to generate a config.yml file by prompting the user with the respective questions.

  • The value entered by the user will be validated, and the same variable will be looped until a correct value is entered.

  • Once the config file is created, it will be previewed to the user to re-check the entered values.

  • Once the user confirms the configuration values, the Ansible scripts will get triggered.

  • Once all Ansible scripts have run, the shell script shows the message “cqube installed successfully”

Ansible Script:

  • The Ansible script is triggered and helps to build the Docker images with the respective Dockerfiles provided.

  • Ansible triggers a docker-compose file to start all the Docker containers within the server.

  • An Ansible script triggers the Nginx setup and configuration on the remote machine (Nginx server).

Docker Containers:

  1. Spec-ms

  2. Ingestion-ms

  3. Generator-ms

  4. Nifi-ms

  5. Postgres-ms

  6. Kong-ms

  7. Nginx-ms

Flow Diagram of One step Installation

...

Github Actions for CI/CD

GitHub Actions gives developers the ability to automate their workflows across issues, pull requests and more, plus native CI/CD functionality.

All GitHub Actions automations are handled via workflows, which are YAML files placed under the .github/workflows directory in a repository that define automated processes.

Every workflow consists of several different core concepts. These include:

  • Events: events are defined triggers that kick off a workflow.

  • Jobs: jobs are a set of steps that execute on the same runner.

  • Steps: steps are individual tasks that run commands in a job.

  • Actions: an action is a command that is executed on a runner, and the core element of GitHub Actions.

  • Runners: a runner is a GitHub Actions server. It listens for available jobs, runs each in parallel, and reports back progress, logs and results.

The diagram below shows the flow of a GitHub Actions workflow.

[Diagram: GitHub Actions workflow]

Build and Push Docker Image to AWS EC2 Using GitHub Actions:

The diagram below shows how GitHub Actions is implemented for CI/CD. Whenever a user commits code to the repository, the deployment is triggered based on the event defined in the workflow file.

GitHub Actions builds the code and performs unit testing for code coverage in its own virtual runners.

If the code builds successfully in the runners without any errors, then the Docker image is built using the Dockerfile provided.

GitHub Actions then logs in to the Docker Hub account using the credentials provided as secrets.

Once the image is built, it is pushed to Docker Hub with the tags specified in the workflow.

Once the image is pushed to Docker Hub, GitHub Actions triggers the deployment to the EC2 server. A predefined SSH action is used to log in to the EC2 server, pull the latest code from the GitHub repository and deploy it on the server.

[Diagram: build and push Docker image to AWS EC2 using GitHub Actions]

Deployment strategy in Dev, QA, Demo Environments

The below process is followed in each environment using its respective github repository branch. Two workflow files will be created in the github repository.

  • On a pull request event: a workflow action is triggered which checks the image build and code coverage in the GitHub Actions runners.

  • On a merge event: a workflow action is triggered which builds the Docker image from the Dockerfile >> logs in to Docker Hub >> pushes the latest image to the Docker Hub repository >> connects to the EC2 server >> pulls the latest images from Docker Hub >> deploys them on the server.


  • Developers raise a pull request

  • On a pull request event, the GitHub Actions workflow triggers Continuous Integration, which builds and tests the code.

  • Once the build is successful, the code reviewer checks the code and merges it to the repository

  • On a merge event, GitHub Actions triggers Continuous Deployment, which deploys the code to the server.

  • Once the code is merged to dev, it is then merged to QA, which follows the same process again, followed by the demo environment.

Visualization Design

A common generic function reads the metadata from the configuration file and generates a visualization.

This configuration file consists of various queries which will be applied on cubes on load of the report and also on various levels mentioned in the filters section of the configuration file.

A complete configuration file can be found here : student attendance module configuration file

There are generic functions developed to construct filters, tooltips and also few functions to build queries provided with the dynamic values from filters.

A Time Series Component was developed which takes min and max dates of the data available and lets the user choose the range of dates he/she needs to filter within the available dates.

For all the above functionalities to work there will be different queries which need to be configured in the file in a certain way.

An example of the queries needed for the table and big number visualizations of the student attendance compliance report, on selection of a district-level filter:

[Image: example query configuration for the student attendance compliance report]

Here on selection of the district level, there will be 2 collections of queries that need to be executed. One is to get the filter options for the next level of filter.

And another is a set of queries which will be selected depending on whether the time series filter is already selected or not.

The timeSeriesQueries are executed whenever the time series is selected; otherwise the queries in the actions section are executed.

Here we can provide multiple queries depending on the data needed for the current visualization. This visualization needs 3 queries: 1 for the table and 2 for the big number representation.

Also, for a specific visualization we need to provide the metadata needed to convert the data from the queries into the standard format that the specific report type component expects.

For Table: 

Ex: 

...

Here we need to add metadata for every column in the table and some options for sorting data on load of the report.

For Big Number:

For big numbers, a value suffix can be provided as an option for the indicator.

For Bar Chart: 

In the Bar Chart we can configure the label, title and values for both axes, and specifically for the xAxis we can have more than one label-value pair as metrics, as shown below.

...

For Map Report:

In the Map report there is a separate functionality called levels which needs to be passed in the config along with other options. (A level defines which level of data should be shown for a selected filter.)

Each level contains a name, the value of the button, a hierarchyLevel (which defines the order in which the filters are constructed and which order that specific level belongs to) and actions (which provide any metadata needed by a specific functionality to construct the query).

[Image: example levels configuration]

In options we can provide a metric filter flag; if it is true then more than one selectable metric can be provided for the same map.

If true, pass the label-value pairs as an array of objects to metrics, as shown below.

If a single metric is to be represented in the map then just pass the metric value to the indicator key in the options.

[Image: example metrics configuration]

The tooltip in the map report is also configurable; one needs to pass the required tooltipMetric as shown below (value is the metric to be shown in the tooltip, with valueSuffix and valuePrefix if any are needed for that metric).

...
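A consolidated sketch, expressed as a TypeScript object, of the configuration ideas described in this section (levels, options with metrics, and tooltipMetric). The key names follow the terms used above, but the exact shapes and query strings are assumptions for illustration and not the actual student attendance module configuration file.

```typescript
// Illustrative map-report configuration for an attendance dashboard.
const mapReportConfig = {
  levels: [
    {
      name: "District",
      value: "district",
      hierarchyLevel: 1, // order in which the filters are constructed
      actions: { queries: { map: "SELECT ... FROM dataset_attendance ..." } },
    },
    {
      name: "Block",
      value: "block",
      hierarchyLevel: 2,
      actions: { queries: { map: "SELECT ... FROM dataset_attendance ..." } },
    },
  ],
  options: {
    metricFilter: true, // allow more than one selectable metric on the same map
    metrics: [
      { label: "Average Attendance %", value: "average_attendance_pct" },
      { label: "Schools Reporting %", value: "schools_reporting_pct" },
    ],
    tooltipMetric: [
      { value: "average_attendance_pct", valueSuffix: "%", valuePrefix: "" },
    ],
  },
  timeSeriesQueries: { map: "SELECT ... WHERE date BETWEEN $1 AND $2" },
};
```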

The Code for the above POC can be found here : PR for UI POC.

The cQube UI enables non-technical users to set up visualization of metrics in a simple & easy manner.

...

Proposed Configurable Parts of the UI

  1. App

    1. Logo

    2. Profile

    3. Header

    4. Menu

    5. Page(s)

    6. Footer

  2. An App is a collection of pages

  3. Each Page contains

    1. Filters

    2. Header: Metrics in text format only

    3. Overview: KPI cards - number

    4. Footer: Metrics in text format only, Date

    5. Actions - Download, Share

    6. Report(s)

  4. Each Report contains

    1. Filters

    2. Metrics/charts

    3. Actions

...

<<TBD - will UI configurability be dependent on dashlet spec?>>

SRE & Telemetry design

<<SRE requirements need to be added here>>

Needs: 

  1. Distributed Traceability - 

    1. Need to define what all we want to be able to trace

  2. Distributed Debugging - Ability to debug remote services with minimal dev machine setup

<<Todo>>Decide based on alternatives and value assignment

Alt: Sunbird Telemetry

<<Todo>>

Alt: Custom Design

This shows how the custom approach can work, based on the UI (a typical client) interactions.

...

  1. The Listener UI service listens to keyboard and mouse movements

  2. Depending on the whitelist of events, specific events are sent to the Recorder microservice

  3. Events are recorded in storage

  4. A metric calculator generates various metrics from events data

  5. Relay microservice serves requests for telemetry data
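A tiny sketch of the listener-to-recorder step in the custom alternative, assuming a whitelist of UI event names and a hypothetical Recorder endpoint; this only makes the flow concrete and is not a committed design.

```typescript
// Whitelisted UI events that the listener is allowed to forward (illustrative).
const WHITELISTED_EVENTS = new Set(["page_view", "filter_change", "report_download"]);

interface UiEvent {
  name: string;
  timestamp: string;
  payload?: Record<string, unknown>;
}

// Forward a UI event to the Recorder microservice if it is whitelisted.
async function recordUiEvent(recorderUrl: string, event: UiEvent): Promise<void> {
  if (!WHITELISTED_EVENTS.has(event.name)) return; // drop non-whitelisted events
  await fetch(`${recorderUrl}/telemetry/events`, {  // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```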

Authentication & Authorization

<<Todo>> List alternatives and discuss

Testing

The testing can be started when the yaml file provided contains the list of APIs to be tested. Test Cases & Scripts for the specified yaml file will be created and Test Execution Results will be added whenever multiple iterations of the same feature are presented for testing.

When the features are being integrated, especially with external API ingestion, all required testing such as integration, regression, etc. will have to be done.

Features | Test Type | Tools Using | Where it runs?

One step Installation | Smoke Testing and Functional Testing | Manual | Dev, QA

On code commit | Unit Level | JEST | Github transient containers

API - cQube specification (Education_domain api, Dimensions api, Transformer api, Indicator api, Event api, Dataset api, Pipe - spec api) | Smoke Testing, Functional Testing and Non Functional Testing (Load & Volume) | Karate Framework, WRK | Dev, QA

cQube Dashboards | Regression Testing | Selenium | Transient UAT

KPI Testing | Data Validation Testing | <tool to be used will be identified> | Transient UAT

Manual testing | For manually verifying end to end flow | Manual | Transient UAT

Release testing | Before any release | Automated - combination of tools | UAT/DEMO

Dashboard/UI testing

The UI Testing of an cQube application having several map, chart and table reports and testing of these can be accomplished by following steps.

Example: 

School Infrastructure Map Report

The points below are to be validated manually (an automated check is sketched after the list):

  • Validate whether the map markers show tooltip information

  • Verify whether all the dropdown options are selectable

  • Verify that the district-level data is loaded on the map

  • Verify that the block-level data is loaded on the map

  • Verify that the cluster-level data is loaded on the map

  • Check whether the download button works at all levels

  • Verify that the legend card buttons work and that, based on the filtration, data is displayed on the map markers

  • Check whether, on selection of an infrastructure metric, the metric is highlighted in the marker tooltip, etc.
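When these checks are automated later (the tool table further below lists Selenium with the pytest framework for cQube dashboards), a single check could look roughly like the sketch below. The dashboard URL and CSS selectors are placeholders, not real cQube locators.

```python
# Hedged sketch of one automated dashboard check using Selenium with pytest.
# DASHBOARD_URL and the selectors are placeholders for illustration only.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

DASHBOARD_URL = "http://localhost:4200/school-infrastructure"  # assumed local build


@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()


def test_district_dropdown_is_selectable(driver):
    driver.get(DASHBOARD_URL)
    dropdown = driver.find_element(By.CSS_SELECTOR, "select#district")  # placeholder selector
    options = dropdown.find_elements(By.TAG_NAME, "option")
    assert len(options) > 1, "district dropdown should list at least one district"
```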

Testing Plans

Smoke Testing: Verifying whether the build works as expected. Once the build is released by the development team, the QA team starts smoke testing to verify that the basic features of the build work as expected; if any blocker or critical defects are found, the build is rejected. After completion of smoke testing, if all the features work properly, further testing activities continue.

...

The test plan usually includes the following information:

  1. The overall objective of the testing effort.

  2. A detailed outline of how testing will be conducted (the test approach).

  3. The features, applications, or components to be tested.

  4. Detailed scheduling and resource allocation plans for testers and developers throughout all stages of testing.

...


Test Deliverables

  • Test Summary and Test case execution Report

  • Defect Status - can be updated on Jira Tool

  • Karate API automation TestCase document and Execution Reports

  • Jmeter - Scenario document and Test Reports

Test Plan - Flow Diagram

...

Features to be Tested

| Test Type | Features to be Tested / Activity | Test Schedule |
|---|---|---|
| Installation Testing | One step installation testing; config files validation testing; verifying the application services (Postgres, Nifi, Kong, Node JS…) | Build is released to QA; in this phase both positive and negative scenarios are verified. |
| Smoke Test (Automation) | Spec, ingestion process; API testing using the Karate framework; baseline test - jmeter/WRK tool | Build is released to QA; in this phase the positive scenarios of the application are verified end to end. |
| Functional Test (Automation) | Spec, ingestion process; API testing using the Karate framework; performance test | After completion of the smoke test, if there are no blocker/critical defects, functional testing is initiated. |
| KPI Test (Automation) | Validating the KPI of all the reports | KPI testing starts once the cubes have been generated. |
| Release Readiness Test (Automation) | Spec, ingestion process | Once all defects from functional and regression testing are resolved; this is the final round of testing before release to production. |
| Release Activities - Smoke Test (Automation) | Verify the APIs | Code is deployed to the release environment by the devops team, developers do the file processing, and the QC team verifies the APIs. |

Test Approach

  • Every feature will be tested as per the project timelines once it is made available for testing.

  • We will be ready with test cases for the build once the build is ready for QA, and the QA team will discuss their tasks, and we will provide the effort list for each task.

  • The QA team starts with smoke testing the below-mentioned components of the application.

    • One-Step Installation

    • Data - API specification, Ingestion

  • If any major/ blocker defects are discovered during smoke testing, the build should be rejected by QA immediately.

  • Creating a Jira ticket and assigning it to the respective developers

  • After defects are cleared from the smoke test, initiate functionality testing and complete the application with positive and negative scenarios.

  • Completion of both smoke and functional testing; updating the test execution result into each test case result; and preparing a summary report

  • Finally, once all the defects have been successfully resolved, we can start with system testing, which covers the end-to-end feasibility of the application.

  • Modifying the test cases and test result documents and uploading them to the repository

Test Pass / Fail Criteria

  • Code coverage of about 70-80% and a test-case pass percentage of 95%+

  • Critical bugs: 0

Test Suspension / Resumption Criteria

  • When any major functionality of the feature is not working, testing of the feature should be halted.

  • When the feature is fully functional, testing should be resumed.

Test Prerequisites

  • The test environment with the appropriate feature(s) to be tested should be ready prior to the test execution phase. All the dependencies for testing the feature(s) should also be made available with appropriate access.

Test Deliverables

  • Test Summary and Test case execution Report - TC and TR

  • Defect Status - can be updated on Jira

  • Karate API automation TestCase document and Execution Reports

  • Jmeter - Scenario document and Test Reports

Roles And Responsibilities

| Role | Responsibility |
|---|---|
| Test Lead | Test plan |
| QA Team | Test Cases |
| QA Team | Test Execution Report |

Test code - repo management

  • After completion of all the testing activities, the following QC documents will be pushed to the test folder, which is created under a certain repository.

    • API Automation Scripts

    • WRK and Jmeter automation scripts

    • Test Case and Test Result documents

    • KPI Test scripts and Execution Reports

    • Test Execution Reports

QA Criteria Details

Entry Criteria for QA

  • Requirements are defined and approved.

  • Availability of sufficient and desired test data.

  • Test cases have been developed and are ready.

  • The test environment has been set up, and all other necessary resources, such as tools and devices, are available.

  • The development phase provides useful information about the software's design, functionality, structure, and other relevant features, which helps in deciding accurate entry criteria such as functional and technical requirements, system design, etc.

  • The following inputs are taken into account during the testing phase:

    • Test Plan.

    • Test Strategy.

    • Test data and testing tools

    • Test Environment.

Exit Criteria for QA

  • Deadlines are met.

  • Execution of all test cases.

  • Desired and sufficient coverage of the requirements and functionalities under test.

  • All the identified defects are corrected and closed.

  • No high-priority, severe, or critical bug has been left out.

API Automation Approach

  • Using the Karate framework, we will be able to do the API automation for the cQube specification and ingestion.

Required set-up configurations:

Follow these steps to set up the Karate framework:

  • Java (8+), an IDE, and Maven dependencies in pom.xml

  • Open Eclipse

  • File > New > Maven project > Click on create a simple project > Next > Group Id: com.karate > Artifact Id: karate > Finish

  • Click on pom.xml

  • Add dependencies within the project

  • Add Maven dependencies in pom.xml (https://mvnrepository.com/artifact/com.intuit.karate )

  • Karate core - compiling the feature files

  • Karate Apache - interface between the API and the server

  • Karate JUnit4 - logs and reports generation

Process of API Automation:

  • Once the framework is created, we have to create .feature (Cucumber) files in the src/test/java package; the API scripts are written in these .feature files.

  • In the created .feature file, we follow Gherkin syntax to get the API response and add the validation assertions.

Gherkin syntax:

Feature: Keyword explains the name of the feature we are testing

Background: Prerequisite section - define url, json paths

Scenario: 

Given URL

When method <http method>

...

Example: API Name - cQube Spec - /spec/event

Feature: Event Spec creation

Background: Define the url

Given url 'application url'

* def requestbody = read('json file location')

Scenario: Validate the Spec/event creation

Given path '/spec/event'

And request requestbody

...

Referring to the above example, we will follow the same approach for all the APIs listed below:

  • spec/event

  • spec/dimension

  • spec/dataset

  • spec/transformer

  • spec/pipeline

  • spec/schedule

  • ingestion/event

  • ingestion/dimension

  • ingestion/dataset

  • ingestion/pipeline

Listed possible API Response validations:

  • Schema validation - body and response

  • Response code and status name validation

  • Data type validation of each response

  • Null validation

  • Format validation

Listed Negative validations:

  • Response code

  • Invalid body syntax validation

  • Data type validation - providing other data types

  • Not null validations

  • Format validation - json, xml

  • Error message validation - re-triggering the same API

Execution Approach:  

  • Some scenarios are marked as @smoke or @functional, which come under the test strategy.

  • Create the Java runner file, add the Karate options, and add the tag names.

  • Running the specified test types based on parameters given in the command

 

Commands: 

mvn test -Dkarate.options="--tags @smoke"

mvn test -Dkarate.options="--tags @functional"

  • Once execution completes, the test reports are stored in the location /target/karate-reports/karate-summary.html

KPI Test Approach

What is KPI?

  • Key performance indicators (KPIs) in software testing are the calculated data that help measure the performance and effectiveness of testing. It gives an idea of whether the software testing is progressing in the right direction and whether it will be done on time.

  • KPIs are the key targets you should track to make the most impact on your strategic business outcomes. KPIs support your strategy and help your teams focus on what's important. An example of a key performance indicator is "targeted new customers per month".

  • Metrics measure the success of everyday business activities that support your KPIs. While they impact your outcomes, they are not the most critical measures. Some examples include "student attendance" and "teacher attendance".

What is the Difference Between Software Testing Metrics and KPIs?

Software testing metrics are the data used to track and monitor the various operations performed by the testers. Whereas organizations and testers use key performance indicators (KPIs) to determine testing effectiveness and the time and cost required to complete the testing.

Prerequisites needed for KPI Testing

  • Data Source Aggregation data file - input file

  • Business logic documents

  • Output result (API Response or DB result)

Technologies approach
  • Framework - pytest/Unittest

  • Python, pytest library

  • Pandas, data frame, NumPy

Difference Between KPI and Metrics:

...

When are software testing KPIs not useful?

Although measuring the effectiveness of a process is essential to know if you are doing it right, measuring the testing process via quality KPIs will not make sense in a scenario where:

  1. If your product has just started with Testing: If you are going to launch your product for the first time and testing has just started, there won’t be much to measure. This time will be crucial to put a testing process in place rather than measuring the effectiveness of the testing process.

  2. If you are not planning to have a long testing cycle: If you are making a product that would not be changing for a long time after the launch and testing will be a one-time process, measurement of the effectiveness of the process would not be beneficial as you won’t have any new testing cycles to improve upon.

  3. If you are on a restricted budget: Just like doing any activity, measuring testing KPIs also takes time and effort and, consequently, costs. So rather than measuring the KPIs, applying a cost-effective testing process should be the primary focus when the budget for testing is restricted.

KPI Testing Approach

...

How to perform Metrics Test: 

By applying the same business logic used in cQube (Postgres and Nifi), implemented independently in Python.

Steps Involved in Performing the KPI Test (a comparison sketch follows these steps):

  • Install cQube in the demo test server.

  • Create the pytest framework and configure the dependencies

  • Connect to Docker where the CSV file is stored.

  • Reading the CSV files from Docker by using Python functions.

  • Performing the same business logic on the input data using Python, as performed in cQube development.

  • Storing the generated output into a data frame.

  • Reading the output files from the database table (as per the current POC) by using Python functions.

  • Comparing the cQube output with the output generated using Python and creating a result that says 'TRUE' if the two outputs match and 'FALSE' if they do not.

  • Performing the Unit Test on each data source so that once the unittest is executed, all the respective comparison files get generated.
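A minimal sketch of the comparison step described above, using pandas, is shown below. The file paths, column names, and the groupby/aggregation logic are assumptions for illustration; the real scripts would apply the same business logic as the cQube dataset being verified.

```python
# Hedged sketch: recompute an aggregation from the raw input CSV and compare it
# with the cQube-generated dataset. Paths, column names and the aggregation are
# illustrative assumptions only.
import pandas as pd

raw = pd.read_csv("input/student_attendance_events.csv")          # raw event data
cqube_out = pd.read_csv("output/student_attendance_dataset.csv")  # cQube dataset export

# Apply the assumed business logic: daily totals per school
expected = (
    raw.groupby(["date", "school_id"], as_index=False)
       .agg(total_present=("present", "sum"),
            total_students=("student_id", "count"))
)

merged = expected.merge(cqube_out, on=["date", "school_id"], suffixes=("_expected", "_cqube"))
merged["result"] = (
    (merged["total_present_expected"] == merged["total_present_cqube"])
    & (merged["total_students_expected"] == merged["total_students_cqube"])
).map({True: "TRUE", False: "FALSE"})

merged.to_csv("results/student_attendance_comparison.csv", index=False)
```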

Different types of files used in the KPI Test

  1. Config.ini: Stores the credentials: server IP and API endpoint.

  2. Data_Sources: The folder contains each data source script.

  3. TestSuite: Consolidation of all the test files.

  4. UnitTest.py: The test runner that generates the HTML reports.

Execution Flow

  • The UnitTest.py file is executed; it contains unit tests on the business-logic functions and invokes the Python file in which Business_logic is written. All the comparison results are stored as CSV files in the mentioned destination path/location.

Input file result (applied business logic) in JSON format == API Response

  • The Business_logic python file calls the functions written in Functions_Files.py to read the details present in the Config.ini file (a minimal sketch follows).
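For illustration, reading Config.ini with Python's standard configparser might look like the sketch below; the section and key names are assumptions, since only the server IP and API endpoint are named above.

```python
# Hedged sketch of how the helper functions might read Config.ini.
# Section and key names are assumptions for illustration.
import configparser

config = configparser.ConfigParser()
config.read("Config.ini")

server_ip = config["server"]["ip"]        # e.g. 10.0.0.5 (assumed section/key)
api_endpoint = config["api"]["endpoint"]  # e.g. /spec/event (assumed section/key)

base_url = f"http://{server_ip}{api_endpoint}"
print(base_url)
```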

Non-Functional Test Approach:

Performance Test - Introduction:

  • As part of the delivery of the cQube, it is required that the solution meet the acceptance criteria, both in terms of functional and non-functional areas. The purpose of this document is to provide an outline for non-functional testing of the cQube solution.

This document covers the following:

...

There will be one or more dedicated "load injectors" set up to initiate the required load for performance testing. The load injector could be a VM or multiple VMs that have an instance of JMeter running, initiating the requests.

 

Test Tools

The test tool used for Volume and Performance testing will be JMeter, an open-source load testing tool predominantly used for volume and performance testing.

...

  • Start JMeter: open the JMeter window from the terminal by running cd Downloads/apache-jmeter-5.5/bin and then ./jmeter.sh

  • Rename the Test Plan. Change the name of the test plan node to Sample Test in the Name text box

  • Add Thread Group.

  • Add http request→url , path , port number, body

  • Add Sampler

  • Add Listener

  • Run the Test Plan.

  • View results in the Summary Report and View Results Tree listeners

  • View the output in a .csv file

Post-Test Analysis and Reporting

...

  • We are going to test each API with 100, 500, 1000, 3000, 5000, 10000, 20000, 30000, 40000 and 50000 users

  • With a ramp-up period of 1 sec, the average time should not be greater than 3000 milliseconds (3 sec)

  • Final Report will be stored in csv file

  • jmeter load testing test cases

https://docs.google.com/spreadsheets/d/1od4uaW-65DTWSgbjrS1sW5Ll3i3EczVU4GKLAwW7C6g/edit?usp=sharing

JMeter scripts:

  • Start JMeter: open the JMeter window from the terminal by running cd Downloads/apache-jmeter-5.5/bin and then ./jmeter.sh

  • Rename the Test Plan. Change the name of the test plan node to Sample Test in the Name text box

  • Add Thread Group.

  • Add HTTP request→provide the API url , path, port number, and body

  • Add Listener - results tree, graph, view results in table report

  • Run the Test Plan.

  • View results in the Summary Report and View Results Tree listeners

  • Add assertions-->Response code assertion

  • View the Output in CSV

Benchmarking Tool:

We use WRK, a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU for Benchmarking.

...

  • Install wrk tool by using this command

  • git clone --depth=1 https://github.com/wg/wrk.git

  • cd wrk

  • make -j

  • The generated wrk executable is under this folder. This is how we use wrk for GET request benchmark: 

wrk -t 6 -c 200 -d 30s --latency https://google.com

Some of the command flags for wrk

  • -c: the number of connections to use

  • -t: the number of threads to use

  • -d: the test duration, e.g., 60s

  • -s: the lua script to use for load testing our service (will cover in later section)

  • --timeout: the number of seconds before a request times out

  • --latency: show the latency distribution for all the requests

Automation Tool Approach for cQube 5.X

| Feature | Automation Tools |
|---|---|
| Installation | Python |
| cQube spec and ingestion API | Karate |
| cQube Dashboard | Selenium with Pytest framework |
| Performance Test | Jmeter and wrk |

Transformers

List of transformers identified, with two different UPSERT operations:

Dataset without filter/Agg:

When we get already-aggregated data, this transformer updates it directly into a dataset; it is just like a copy-paste function.

Event to Dataset with Agg:

In this case, an event is taken as input, the required aggregation is performed, and the result is stored into a dataset table.

Dataset to Dataset with Agg:

In this case, a dataset is the input; the required aggregation is performed and the result is stored into a dataset table.

Event to Dataset with filter-based Agg:

In this case, an event is taken as input, aggregation is performed with filters, and the result is stored into a dataset table.

Dataset to Dataset with filter-based Agg:

In this case, a dataset is the input; aggregation is performed with filters and the result is stored into a dataset table.

NOTE: All 5 transformers will be written for two different UPSERT operations, as explained below, so we will have 10 transformers in total. (A sketch of one transformer shape follows.)
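As an illustration of one of the five shapes above ("Event to Dataset with filter-based Agg"), a transformer could be sketched with pandas as below. The column names, filter, and aggregation are assumptions for illustration; the actual transformers run as part of the NiFi-based pipeline.

```python
# Hedged sketch of an "Event to Dataset with filter-based Agg" transformer.
# Column names, the filter and the aggregation are illustrative assumptions.
import pandas as pd


def event_to_dataset_filter_agg(events: pd.DataFrame) -> pd.DataFrame:
    """Filter raw events, then aggregate them to the dataset grain (date, school_id)."""
    filtered = events[events["grade"] == "grade_1"]  # illustrative filter
    return (
        filtered.groupby(["date", "school_id"], as_index=False)
                .agg(total_present=("present", "sum"),
                     total_students=("student_id", "count"))
    )


# The resulting frame would be written to an intermediate cube.csv and then
# upserted into the dataset table using one of the two UPSERT operations below.
```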

Upsert By Replace Operation: 

Using an UPSERT query, data in the table is replaced.

EX: Consider student_attendance data.

In the table below, total_present is 5; if a value comes in later for the same school_id and date, we need to replace the value instead of incrementing it.

Before UPSERT

| Date | School Id | Total Present | Total Students |
|---|---|---|---|
| 13-11-2022 | 101 | 5 | 50 |

INSERT INTO dataset_attendance (date, school_id, total_present, total_students)
VALUES ('13-11-2022', 101, 20, 50)
ON CONFLICT (date, school_id) DO UPDATE
SET total_present = EXCLUDED.total_present,
    total_students = EXCLUDED.total_students;

  

After UPSERT

| Date | School Id | Total Present | Total Students |
|---|---|---|---|
| 13-11-2022 | 101 | 20 | 50 |

Upsert by Increment Operation: 

EX: If we consider programs like NISHTHA, DIKSHA, etc., in this case we should increment as data comes in for the same date, program_name, and course_id.

Before UPSERT

| Program_name | course_id | total_enrolment | Date |
|---|---|---|---|
| NISHTHA | course_01 | 50 | 13-11-2022 |

INSERT INTO dataset_attendance (program_name, course_id, total_enrolment, date)
VALUES ('NISHTHA', 'course_01', 10, '13-11-2022')
ON CONFLICT (date, program_name, course_id) DO UPDATE
SET total_enrolment = dataset_attendance.total_enrolment + EXCLUDED.total_enrolment;

  

After UPSERT

| Program_name | course_id | total_enrolment | Date |
|---|---|---|---|
| NISHTHA | course_01 | 60 | 13-11-2022 |