Purpose
cQube solution architecture document: cQube | Design Document (Nov 2022)
This document focuses on the evolving design specs, which the development team can use.
Audience
Solution development team
How to use this document
This document is organized by topics. The various illustrations/models used throughout are available here: cQube | Design Specs | Models - v24Nov2022
Topics
Data processing pipeline alternatives
Two possible solutions were considered:
Using Postgres
Using Nifi
Alt: Postgres model:
v4.0 - as-is
ingestion: Python API; storage: S3; processing: NiFi; aggregate data storage: Postgres; exhaust: S3 (JSON); viz engine: Node.js; viz: Angular; spec: Python-based generator (fragments of SQL)
v5: ingestion: Node.js API; store (until processed): Postgres DB
POST /spec/* → when a new spec is added (spec=event + dataset + pipe)
/event - create table <input> ⇒ e
Acts as a queue
Helps type validation
/dataset - create table <input> ⇒ c
/pipe -
create a new pipe in table <pipeline>
pipe connects event to cube via transformer ⇒ event → transformer → dataset
transformer ⇒ create trigger <transformer> on event
POST /event → when a new event arrives
j ⇒ SELECT e.c1, c.c2 + 1, c.c3 + e.c3 FROM cqb1 c INNER JOIN evt1 e ON c.c1 = e.c1
UPDATE cqb1 SET c1 = j.c1, c2 = j.c2, c3 = j.c3 FROM j
Transformer is a PG function connected via trigger on event table
Transformer updates dataset
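For illustration only, a minimal sketch of such a trigger-based transformer in PostgreSQL, reusing the hypothetical event table evt1 and dataset table cqb1 from the join above (the aggregation logic and the unique constraint on cqb1.c1 are assumptions):

CREATE OR REPLACE FUNCTION transform_evt1() RETURNS trigger AS $$
BEGIN
    -- Upsert the dataset row derived from the incoming event
    INSERT INTO cqb1 (c1, c2, c3)
    VALUES (NEW.c1, 1, NEW.c3)
    ON CONFLICT (c1) DO UPDATE
        SET c2 = cqb1.c2 + 1,           -- e.g. a running count
            c3 = cqb1.c3 + EXCLUDED.c3; -- e.g. a running sum
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- transformer ⇒ create trigger <transformer> on event
CREATE TRIGGER transformer_evt1
AFTER INSERT ON evt1
FOR EACH ROW EXECUTE FUNCTION transform_evt1();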
Alt: Nifi model:
Events are passed to the API (AJV-based validation)
The API writes to an event_batch_file
Every 5 minutes, the intermediate_cube_processor is triggered
The API writes to a FlowFile, which should ideally feed a NiFi process group
Process group - transformer - event_data, event_specs, cube_schema, cube_data - Py/Js
Receives Event data
Pull all required data (from store)
Perform transformation
Store in an intermediate cube.csv
Upsert the cube, once in every 5 min
Based on:
Traceability (where in the flow is my data)
Horizontal scalability
Flexibility - ability to add additional steps or pipelines or create side-effects
The NiFi-based data processing model was chosen for cQube.
Physical Architecture
Kong
Used as an API gateway
Used for AuthN & AuthZ <<TBD>>
Auth: handles AuthN and AuthZ <<TBD - Alternatives>>
AuthN (identity and role assignment):
Implementation choices
Location
Internal
External
Solution
Keycloak
Using a 3rd party - Auth0
Design Factors/Requirements
A state may have its own auth system - duplication risk
Management Overhead
Users
Adapter
Dashboard Users
Admin
Public?
Internal Service Communication?
Services are kept stateless, assuming access is already authenticated - JWT being the easiest to manage.
Should support any OIDC based authentication (allow for external SSO integration)
AuthZ: access to resources based on role
Implementation Choices
OPA - I am leaning towards this as it fulfills all requirements
…
Requirements
Need it to be internal to cQube
Role based authorization
Works on
UI
SQL - SQL based filtering layer which dynamically filters rows and columns, based on role and configuration
APIs
Infrastructure
Flow Engine: manages various data ingestion pipelines
User MS: To manage users and roles
Spec MS: Exposes APIs to process specifications - events, dimensions, datasets, transformers, schedules and pipelines
Ingest MS: Exposes APIs to ingest event, dimension and dataset data
Spec db: stores various specification details
Config db: stores cQube specific data such as usage, roles, access control etc
Ingest db: stores ingested data created by collecting events, dimensions and datasets before updating dataset (KPI) db
Dataset db: stores KPI data. It is updated from ingested data <<TBD - should we consider Timescale>>
SQL Access: <<TBD - potentially Hasura>>
Deployment Architecture
Spec pipeline
Defines how specs are processed into cQube.
Specs can be added to cQube via Spec microservice
Spec MS interacts with the data flow engine (NiFi) and the spec DB (PostgreSQL)
Spec microservice namespace: /spec/*
Key APIs & processing steps
POST /spec/event -
Compile and verify validity (see the sketch after this list)
Store in event_spec table
Update cache of Ingest MS with changes
POST /spec/dimension
Compile and verify validity
Store spec in dimension_spec table
Update cache of Ingest MS with changes
Create/update the actual dimension table
POST /spec/dataset
Compile and verify validity
Store spec in dataset_spec table
Update cache of Ingest MS with changes
Create/update the actual dataset table
POST /spec/transformer
This is accessible only to superadmins
Compile and verify validity - aware of the required events, dimensions and datasets
Store spec in transformer_spec table
It is assumed that each transformer is tested and comes from a trusted source
Where applicable, a code generator will be used internally to create a transformer
POST /spec/pipeline
Compile and verify validity
Store spec in pipeline_spec table
NiFi compilation - abstracted by a wrapper layer that hides NiFi calls
Create a new NiFi processor_group
Link the transformer into the relevant processor group
POST /spec/schedule
Create necessary schedule for a pipeline
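As a sketch of the shared "compile and verify validity" step referenced above, assuming the Node.js services use the AJV package; the names and storage details are illustrative, not the actual Spec MS code:

import Ajv, { ValidateFunction } from "ajv";

const ajv = new Ajv();
// Cache of compiled validators, standing in for the "cache of Ingest MS"
const validatorCache = new Map<string, ValidateFunction>();

// Illustrative core of POST /spec/event
function registerEventSpec(name: string, schema: object): void {
  // Compile and verify validity: ajv.compile throws if the JSON Schema itself is invalid
  const validate = ajv.compile(schema);
  // Store in the event_spec table (omitted) and update the Ingest MS cache
  validatorCache.set(name, validate);
}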
Spec Flow
Yaml Link -> https://github.com/Sunbird-cQube/spec-ms/blob/main/spec.yaml
The specification flow diagram represents the schemas of the events, datasets, dimensions, transformers and pipelines. All of these specs are defined in cQube before data is ingested. The specifications are derived by treating the KPIs as the indicators.
Dimension Specification
The dimension spec is validated against the predefined specification using the AJV framework.
When validation succeeds, the spec is checked for duplicates.
If a spec with the same data already exists, an error response is sent.
Otherwise it is stored in the dimension spec table.
Once the spec is stored, the dimension tables are created/updated.
Event Specification
The event spec is validated against the predefined specification using the AJV framework.
When validation succeeds, the spec is checked for duplicates.
If a spec with the same data already exists, an error response is sent.
Otherwise it is stored in the event spec table.
Dataset Specification
The dataset spec is validated against the predefined specification using the AJV framework.
When validation succeeds, the spec is checked for duplicates.
If a spec with the same data already exists, an error response is sent.
Otherwise it is stored in the dataset spec table.
Once the spec is stored, the dataset tables are created/updated.
Transformer Specification
The transformer spec is validated against the predefined specification using the AJV framework.
When validation succeeds, the spec is checked for duplicates.
If a spec with the same data already exists, an error response is sent.
Otherwise it is stored in the transformer spec table.
Once the spec is stored, a generator function is called to generate the transformer and store it in a Python file for the implementation.
Pipeline Specification
Event processing
Event collection pipeline - /ingestion/event
Event collection to aggregation pipeline - /ingestion/pipeline <event name>
Aggregation to dataset upsert pipeline - /ingestion/pipeline <dataset name>
Dimension processing
Dimension collection pipeline - /ingestion/dimension
Dimension collection to dimension upsert pipeline - /ingestion/pipeline <dimension name>
Dataset processing
Dataset collection pipeline - /ingestion/dataset
Dataset collection to Dataset upsert pipeline - /ingestion/pipeline <dataset name>
The pipeline spec is validated against the predefined specification using the AJV framework.
When validation succeeds, the spec is checked for duplicates.
If a spec with the same data already exists, an error response is sent.
Otherwise it is stored in the pipeline spec table.
Pipeline types are listed below
Dimension to Collection
Dimension collection to db
Event to Collection
Dataset to collection
Event collection to Aggregate
Aggregate to DB
If the pipeline type is Aggregate to DB, a processor group is created by calling the NiFi APIs.
The processor group is mapped to the transformer.
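A sketch of what the wrapper layer's processor-group call could look like against NiFi's REST API (the base URL, authentication, and canvas position are assumptions):

// Create a NiFi processor group under a parent group via the NiFi REST API
async function createProcessorGroup(nifiBase: string, parentGroupId: string, name: string): Promise<string> {
  const res = await fetch(`${nifiBase}/nifi-api/process-groups/${parentGroupId}/process-groups`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      revision: { version: 0 },
      component: { name, position: { x: 0, y: 0 } },
    }),
  });
  if (!res.ok) throw new Error(`NiFi returned ${res.status}`);
  const group = await res.json();
  return group.id; // used to link the transformer into this group
}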
Spec DB Schema
Below are the tables included in spec schema
spec.dimension
spec.dimension_track
spec.event
spec.event_track
spec.dataset
spec.dataset_track
spec.transformer
spec.transformer_track
spec.pipeline
spec.pipeline_track
spec.schedule
spec.schedule_track
Ingestion pipeline
Data is ingested into cQube via Ingest MS
Ingest MS stores the data into Ingest Store
Based on the schedule, the ingest flow engine reads from the ingest store and updates the Dataset DB
Ingestion microservice namespace: /ingestion/*
Key APIs & processing steps
POST /ingestion/event
Validate event data (see the sketch after this list)
Append to corresponding event file (csv)
POST /ingestion/dimension
Validate dimension data
Append to corresponding dimension file (csv)
POST /ingestion/dataset
Validate dataset data
Append to corresponding aggregate file (csv)
POST /ingestion/pipeline
Triggers a pipeline by event, dimension, aggregate or dataset name; reads from the ingestion store, transforms, and upserts into the dataset store
Flow engine is configured to trigger pipelines based on schedule
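A minimal sketch of the validate-and-append step behind POST /ingestion/event referenced above, assuming the compiled AJV validators from the Spec MS cache; the file layout is illustrative:

import { appendFileSync } from "fs";
import type { ValidateFunction } from "ajv";

// Illustrative core of POST /ingestion/event
function ingestEvent(
  eventName: string,
  data: Record<string, unknown>,
  validators: Map<string, ValidateFunction>
): void {
  const validate = validators.get(eventName);
  if (!validate) throw new Error(`Unknown event spec: ${eventName}`);
  // Validate event data against the compiled event spec
  if (!validate(data)) throw new Error(JSON.stringify(validate.errors));
  // Append to the corresponding event file (csv)
  appendFileSync(`./ingest/${eventName}.csv`, Object.values(data).join(",") + "\n");
}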
Ingestion Flow
Dimension Data
Dimension data is added using the POST /ingestion/dimension API.
The data is validated against the dimension spec using AJV.
When validation succeeds, the dimension data is stored in a CSV file.
The CSV file is then uploaded to an S3 bucket, which serves as the Dimension DB.
Event Data
Event data is added using the POST /ingestion/event API.
The data is validated against the event spec using AJV.
When validation succeeds, the event data is stored in a CSV file.
The CSV file is then uploaded to an S3 bucket, which serves as the Ingest DB.
Dataset
Dataset data is added using the POST /ingestion/dataset API.
The data is validated against the dataset spec using AJV.
When validation succeeds, the dataset data is stored in a CSV file.
The CSV file is then uploaded to an S3 bucket, which serves as the Aggregate DB.
Pipeline Flow
The POST /ingestion/scheduler API adds/updates the schedule for the NiFi scheduler.
The NiFi scheduler calls the POST /ingestion/pipeline API.
The pipeline API has the following functionalities:
Read the data from the dimension collection and write it to the Dimension DB (S3)
Read from the Ingest DB and store the data into the Dataset DB
Ingestion schema
The illustration below uses two dimensions as examples.
ingestion.state
ingestion.district
ingestion.student_count -> one example used for illustration. Dataset tables have dynamic columns and static columns. Dynamic columns are added when the dataset_spec is added; static columns are sum, count, min, max and avg.
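For example, the student_count dataset table could look like the sketch below (the column names are assumptions; the dimension/time columns are the dynamic part, the aggregates the static part):

CREATE TABLE ingestion.student_count (
    -- dynamic columns, added when the dataset_spec is added
    school_id VARCHAR NOT NULL,
    date      DATE    NOT NULL,
    -- static aggregate columns
    sum   NUMERIC,
    count NUMERIC,
    min   NUMERIC,
    max   NUMERIC,
    avg   NUMERIC,
    PRIMARY KEY (school_id, date)
);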
KPIs Flow
KPI (Key Performance Indicator)
KPIs define what has to be derived from the input data or events.
Each item in the viz can be called a KPI.
Example: School_attendance
KPI Category : Student attendance Compliance
KPI VisualisationType : Table
KPI Indicator : School-wise average attendance compliance %
KPI Levels : School
Other indicators that may need to be shown on the dashboard, for example:
Cluster-wise Average attendance compliance %
Block-wise Average attendance compliance %
District-wise Average attendance compliance %
State-wise Average attendance compliance %
Events
A data structure that records an occurrence at a particular time for an entity (eg: school, etc). It is a combination of simple data types (eg: integer, varchar, etc.). An event should always contain a column/set of columns that helps you calculate the Indicator. A table with a timestamp doesn’t necessarily mean that it is an event; it should contribute to either aggregation or filtering of the dataset.
An event should be structured so that it can update the dataset in some form.
Transformation of events into data that updates datasets happens through a transformer: f(eventDetails, eventSchema, datasetSchema, dimensionConfig) = [array of columns] (sketched after the example below)
Updating datasets
Example:
school_id | class | date | total_students | attendance_marked |
<Unique id> | <Integer Value> | <dd/mm/yy> | <integer Value> | <integer Value> |
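A TypeScript sketch of the transformer contract named above; only the signature comes from this document, the shapes are assumptions:

// f(eventDetails, eventSchema, datasetSchema, dimensionConfig) = [array of columns]
type Transformer = (
  eventDetails: Record<string, unknown>[], // incoming event rows
  eventSchema: object,                     // event spec (JSON Schema)
  datasetSchema: object,                   // dataset spec (JSON Schema)
  dimensionConfig: Record<string, unknown> // dimension mapping configuration
) => unknown[][];                          // rows of dataset columns to upsert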
Dimensions
Dimensions describe events.
Dimensions are static records.
In the final aggregation, dimensions are mapped to datasets.
Example:
Dimensions for the above mentioned KPI are:
School: School_id, School_name, Cluster_id, Block_id, District_id, State_id
Cluster: Cluster_id, Cluster_name
Block: Block_id, Block_name
District: District_id, District_name
State: State_id, State_name
Datasets
High-level data computed by aggregating events. A dataset is the data representation of an indicator. Datasets are persistent within cQube Ed. A dataset is created for at least one indicator.
A dataset can be derived, independently, from one or more specified events.
It may additionally contain dimensions and derived values from other datasets during the mapping process.
Example : School_attendance
The dataset for the above mentioned KPI is defined below (an equivalent SQL aggregation is sketched after the notes):
date | school_id | Sum(total_students) | sum(attendance_marked) | Average Attendance % |
<yy-mm-dd> | <unique id> | <sum of parameters> | <sum of parameters> | <Sum of parameter/ sum of parameter> |
Datasets for the above mentioned Indicators :
Cluster-attendance
Block_attendance
District_attendance
State_attendance
Note:
There is a list of transformers; in this dataset we are using Sum.
There is a list of dimensions; in the above mentioned dataset, School is the dimension.
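As sketched here, the dataset rows above could equivalently be produced from the event table with a hypothetical aggregation (the table and column names follow the event example, not an actual cQube schema):

SELECT
    date,
    school_id,
    SUM(total_students)    AS sum_total_students,
    SUM(attendance_marked) AS sum_attendance_marked,
    SUM(attendance_marked)::numeric / SUM(total_students) * 100 AS average_attendance_pct
FROM student_attendance_events
GROUP BY date, school_id;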
These are some sample KPIs, derived as a set of events, dimensions, and datasets from the PAT & SAT data source, which were used in the earlier cQube version v4.0. Link
Developer & QC WorkFlow
The diagram above describes the developer & QC workflow, from how a task is identified and assigned until the code is merged to the respective branch. Once the code is merged, QC tests their scripts, executes the test cases, and produces the test results.
DevOps
Installation Process
The following technologies are used to implement cQube as a one-step installation.
Shell scripting - used to install the basic software required to run Ansible and Docker, to generate a configuration file by prompting the user with questions, and to run the Ansible playbooks.
Docker - for containerising the microservices.
Docker Swarm - a swarm consists of multiple Docker hosts, one of which acts as a manager handling multiple nodes. This ensures the system continues to run even when one node fails.
Ansible - to build the code easily and enable reuse.
GitHub Actions - for continuous integration and continuous deployment.
Steps: users clone the code from GitHub, check out the latest branch, give the install.sh shell script executable permission, and run it.
Install.sh File:
It checks whether the user is running the script with sudo and throws an error accordingly.
Installs the basic software: Ansible, Docker, and Python dependencies.
Triggers a configuration.sh file, which generates a config.yml file by prompting the user with the respective questions.
Each value entered by the user is validated, and the script loops on the same variable until a correct value is entered.
Once the config file is created, it is previewed to the user to re-check the entered values.
Once the user confirms the configuration values, the Ansible scripts are triggered.
Once all Ansible scripts have run, the shell script shows the message “cQube installed successfully”.
Ansible Script:
The Ansible script is triggered and builds the Docker images from the respective Dockerfiles provided.
Ansible triggers a docker-compose file to start all the Docker containers within the server.
An Ansible script triggers the nginx setup and configuration on the remote machine (nginx server).
Docker Containers:
Spec-ms
Ingestion-ms
Generator-ms
Nifi-ms
Postgres-ms
Kong-ms
Nginx-ms
Flow Diagram of One step Installation
Github Actions for CI/CD
GitHub Actions gives developers the ability to automate their workflows across issues, pull requests, and more, plus native CI/CD functionality.
All GitHub Actions automations are handled via workflows: YAML files placed under the .github/workflows directory in a repository that define automated processes.
Every workflow consists of several core concepts. These include:
● Events: events are defined triggers that kick off a workflow.
● Jobs: jobs are a set of steps that execute on the same runner.
● Steps: steps are individual tasks that run commands in a job.
● Actions: an action is a command that is executed on a runner, and the core element of GitHub Actions.
● Runners: a runner is a GitHub Actions server. It listens for available jobs, runs each in parallel, and reports back progress, logs and results.
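To make these concepts concrete, a minimal hypothetical workflow file (.github/workflows/ci.yml); the job and commands are illustrative, not cQube's actual workflow:

name: CI
on: [pull_request]            # Event: trigger that kicks off the workflow
jobs:
  build:                      # Job: a set of steps on the same runner
    runs-on: ubuntu-latest    # Runner: the GitHub Actions server
    steps:
      - uses: actions/checkout@v3   # Action: a reusable command
      - run: npm ci && npm test     # Step: an individual task in the job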
The diagram below shows the flow of a GitHub Actions workflow.
Build and Push Docker Image to AWS EC2 Using GitHub Actions:
The diagram below shows how GitHub Actions is implemented for CI/CD. Whenever a user commits code to the repository, the deployment is triggered based on the event defined in the workflow file.
GitHub Actions builds the code and performs unit testing for code coverage in its own virtual runners.
If the code builds in the runners without errors, it builds the Docker image using the provided Dockerfile.
GitHub Actions then logs in to the Docker Hub account using the credentials provided as secrets.
Once the image is built, it pushes the image to Docker Hub with the tags specified in the workflow.
Once the image is pushed to Docker Hub, GitHub Actions triggers deployment of the code to the EC2 server. It uses a predefined SSH action that lets GitHub Actions log in to the EC2 server, pull the latest code from the GitHub repository, and deploy it to the server.
Deployment strategy in Dev, QA, Demo Environments
The process below is followed in each environment using its respective GitHub repository branch. Two workflow files are created in the GitHub repository.
On a pull request event:
A workflow action is triggered that checks the image build and code coverage in the GitHub Actions runners.
On a merge event:
A workflow action is triggered that builds the Docker image from the Dockerfile >> logs in to Docker Hub >> pushes the latest image to the Docker Hub repository >> connects to the EC2 server >> pulls the latest images from Docker Hub >> deploys them to the server.
Developers raise a pull request.
On the pull request event, the GitHub Actions workflow triggers continuous integration, which builds and tests the code.
Once the build is successful, the code reviewer checks the code and merges it to the repository.
On the merge event, GitHub Actions triggers continuous deployment, which deploys the code to the server.
Once the code is merged to dev, it is merged to QA following the same process above, followed by the demo environment.
Visualization Design
The cQube UI enables non-technical users to set up visualizations of metrics in a simple and easy manner.
Proposed Configurable Parts of the UI
App
Logo
Profile
Header
Menu
Page(s)
Footer
An app is a collection of pages
Each Page contains
Filters
Header: Metrics in text format only
Overview: KPI cards - number
Footer: Metrics in text format only, Date
Actions - Download, Share
Report(s)
Each Report contains
Filters
Metrics/charts
Actions
Note: The block diagram shown is representative only. Look & feel will be based on UX design. However, configurable parts will remain the same as proposed.
<<TBD - will UI configurability be dependent on dashlet spec?>>
SRE & Telemetry design
<<SRE requirements need to be added here>>
Needs:
Distributed Traceability -
Need to define what all we want to be able to trace
Distributed Debugging - Ability to debug remote services with minimal dev machine setup
<<Todo>>Decide based on alternatives and value assignment
Alt: Sunbird Telemetry
<<Todo>>
Alt: Custom Design
This shows how the custom approach can work, based on interactions with the UI (a typical client).
B - Broadcasts key clicks & mouse movements
L - Listens for key clicks & mouse movements
R - Receives telemetry data
C - Computes telemetry data
U - Displays telemetry data
Key Processing Steps
The listener UI service listens for any keyboard and mouse movements
Depending on the whitelisted list of events, specific events are sent to the Recorder microservice
Events are recorded in storage
A metric calculator generates various metrics from the event data
The Relay microservice serves requests for telemetry data
Authentication & Authorization
<<Todo>> List alternatives and discuss
Testing
Testing can start when the provided YAML file contains the list of APIs to be tested. Test cases & scripts for the specified YAML file are created, and test execution results are added whenever multiple iterations of the same feature are presented for testing.
When the features are being integrated, especially with external API ingestion, all required testing (integration, regression, etc.) will have to be done.
Features | Test Type | Tools Used | Where it runs? |
One step Installation | Smoke Testing and Functional Testing | Manual | Dev, QA |
On code commit | Unit Level | JEST | Github transient containers |
API Cqube specification (Education_domain api, Dimensions api, Transformer api, Indicator api, Event api, Dataset api, Pipe - spec api ) | Smoke Testing, Functional Testing and Non Functional Testing(Load & Volume) | Karate Framework , WRK | Dev, QA |
cQube Dashboards | Regression Testing | Selenium | Transient UAT |
KPI Testing | Data Validation Testing | <tool to be used will be identified> | Transient UAT |
Manual testing | For manually verifying end to end flow | Manual | Transient UAT |
Release testing | Before any release | Automated - combination of tools | UAT/DEMO |
Dashboard/UI testing
The cQube application has several map, chart, and table reports; their UI testing can be accomplished by the following steps.
Example:
School infrastructure Map Report
The points below are to be validated manually:
Validate that the map markers show tooltip information
Verify that all the dropdown options are selectable
Verify that district-level data is loaded on the map
Verify that block-level data is loaded on the map
Verify that cluster-level data is loaded on the map
Check that the download button works at all levels
Verify that the legend card buttons work and that, based on the filtration, data is displayed on the map markers
Check that, on selection of infrastructure metrics, the marker tooltip is highlighted... etc.
Testing Plans
Smoke Testing: verifies that the build works as expected. Once the build is released by the development team, the QA team starts smoke testing to verify that the basic features of the build work as expected; if any blocker or critical defects are found, the build is rejected. After smoke testing, if all the features work properly, further testing activities continue.
Functional Testing: both positive and negative scenarios are covered in functional testing, covering all validation of API responses.
End to End Testing: after functional testing of all the KPIs is complete, a final round of testing is performed. It involves testing all the KPIs with only positive scenarios, verifying that each API responds properly and validating the response code, API responses and schema.
Once all the KPIs meet QC expectations, release activities are cleared.
KPI Testing: defining sample input data, deriving the expected output from that input, and verifying whether the actual output matches.
Test Schedule
A test schedule includes the testing steps or tasks, the target start and end dates, and responsibilities. It should also describe how the test will be reviewed, tracked, and approved
Test Plan
The test plan usually includes the following information:
1. The overall objective of the testing effort.
2. A detailed outline of how testing will be conducted (the test approach).
3. The features, applications, or components to be tested.
4. Detailed scheduling and resource allocation plans for testers and developers throughout all stages of testing.
Test Plan - Flow Diagram
Features to be Tested
Test Type Activity | Features to be Tested | Test Schedule |
Installation Testing | One-step installation testing, config file validation testing, verifying the application services (Postgres, NiFi, Kong, Node.js…) | Build is released to QA - in this phase both positive & negative scenarios are verified. |
Smoke Test (Automation) | Spec & ingestion process APIs - testing using the Karate framework; baseline test - JMeter | WRK tool | Build is released to QA - in this phase the positive scenarios of the application are verified end to end |
Functional Test (Automation) | Spec & ingestion process APIs - testing using the Karate framework; performance test | After the smoke test completes, if there are no blockers/critical defects, functional testing can be initiated |
KPI Test (Automation) | Validating the KPIs of all the reports | KPI testing starts once the cubes have been generated |
Release Readiness Test (Automation) | Spec & ingestion process | Once all the defects from functional and regression testing are resolved; this phase is the final round of testing before release to production |
Release Activities - Smoke Test (Automation) | Verify the APIs | The DevOps team deploys the build to the release environment, developers do the file processing, and the QC team verifies the APIs |
Test Approach
Every feature will be tested as per the project timelines once it is made available for testing.
We will have test cases ready once the build is ready for QA; the QA team will discuss their tasks, and we will provide the effort list for each task.
The QA team starts with smoke testing the below-mentioned components of the application.
One-Step Installation
Data - API specification, Ingestion
If any major/ blocker defects are discovered during smoke testing, the build should be rejected by QA immediately.
Creating a Jira ticket and assigning it to the respective developers
After defects found in smoke testing are cleared, functional testing is initiated, covering the application with positive and negative scenarios.
After completion of both smoke and functional testing, the test execution results are updated in each test case result and a summary report is prepared.
Finally, once all the defects have been successfully resolved, we can start with system testing, which covers the end-to-end feasibility of the application.
Modifying the test cases and test result documents and uploading them to the repository
Test Pass / Fail Criteria
Code Coverage of about 70-80% & Pass percentage of test cases 95%+
Critical bugs: 0
Test Suspension / Resumption Criteria
When any of the major functionality of the feature is not functional or working, testing of the feature should be halted.
When the feature is fully functional, testing should be resumed.
Test Prerequisites
The test environment with the appropriate feature(s) to be tested should be ready prior to the test execution phase. All the dependencies for testing the feature(s) should also be made available with appropriate access.
Test Deliverables
Test Summary and Test case execution Report - TC and TR
Defect Status - can be updated on Jira
Karate API automation TestCase document and Execution Reports
Jmeter - Scenario document and Test Reports
Roles And Responsibilities
Responsibility | Role |
Test plan | Test Lead |
Test Cases | QA Team |
Test Execution Report | QA Team |
Test code - repo management
After completion of all the testing activities, the following QC documents will be pushed to the test folder, which is created under a certain repository.
API Automation Scripts
WRK and Jmeter automation scripts
Test Case and Test Result documents
KPI Test scripts and Execution Reports
Test Execution Reports
QA Criteria Details
Entry Criteria for QA
Requirements are defined and approved.
Availability of sufficient and desired test data.
Test cases have been developed and are ready.
The test environment has been set up, and all other necessary resources, such as tools and devices, are available.
The development phase and process provide useful information about the software: its design, functionality, structure, and other relevant features, which helps in deciding accurate entry criteria such as functional and technical requirements, system design, etc.
The following inputs are taken into account during the testing phase:
Test Plan.
Test Strategy.
Test data and testing tools
Test Environment.
Exit Criteria for QA
Deadlines met.
Execution of all test cases
Desired and sufficient coverage of the requirements and functionalities under the test.
All the identified defects are corrected and closed.
No high-priority, severe, or critical bug has been left out.
API Automation Approach
Using the Karate framework, we will be able to do the API automation for the cQube specification and ingestion.
Required SET-UP configurations:
Follow the steps below to set up the Karate framework:
Java(8+), IDE, Maven dependencies in pom.xml
Open Eclipse
File > New > Maven Project > Click on “Create a simple project” > Next > Group Id: com.karate.com > Artifact Id: karate > Finish
Click on pom.xml
Add dependencies within the project
Add Maven dependencies in pom.xml (https://mvnrepository.com/artifact/com.intuit.karate)
karate-core - compiles the feature files
karate-apache - the HTTP client interface between the API and the server
karate-junit4 - log and report generation
Process of API Automation:
Once the framework is created, we create .feature (Cucumber) files in the src/test/java package; the API scripts are written in these .feature files.
In the .feature file, we follow the Gherkin syntax to get the API response and add validation assertions.
Gherkin syntax:
Feature: names the feature we are testing
Background: prerequisite section - define the URL and JSON paths
Scenario:
Given url <url>
When method <http method>
Then status <status code>
And match <assertions based on response>
Example: API name - cQube Spec - /spec/event
Feature: Event spec creation
Background: Define the url
Given url 'application url'
* def requestbody = read('json file location')
Scenario: Validate the /spec/event creation
Given path '/spec/event'
And request requestbody
When method post
Then assert responseStatus == 200
And match response != {}
And match responseType == 'json'
Following the above example, we apply the same approach to all the APIs listed below
spec/event
spec/dimension
spec/dataset
spec/transformer
spec/pipeline
spec/schedule
ingestion/event
ingestion/dimension
ingestion/dataset
ingestion/pipeline
Possible API response validations:
Schema validation - body and response
Response code and status name validation
Data type validation of each response
Null validation
Format validation
Negative validations:
Response code
Invalid body syntax validation
Data type validation - providing other data type
Not null validations.
Format validation - json , xml
Error message validation - re-triggering the same API
Execution Approach:
Some scenarios are marked as @smoke or @functional, per the test strategy.
Create the Java runner file, add the Karate options, and add the tag name.
Run the specified test types based on the parameters given in the command.
Commands:
mvn test -Dkarate.options="--tags @smoke"
mvn test -Dkarate.options="--tags @functional"
Once execution completes - Test Reports are stored in the location
/target/karate-reports/karate-summary.html
KPI Test Approach
What is KPI?
Key performance indicators (KPIs) in software testing are calculated data that help measure the performance and effectiveness of testing. They give an idea of whether the software testing is progressing in the right direction and whether it will be done on time.
KPIs are the key targets you should track to make the most impact on your strategic business outcomes. KPIs support your strategy and help your teams focus on what’s important. An example of a key performance indicator is "targeted new customers per month".
Metrics measure the success of everyday business activities that support your KPIs. While they impact your outcomes, they’re not the most critical measures. Some examples include "student attendance" and "teacher attendance".
What is the Difference Between Software Testing Metrics and KPIs?
Software testing metrics are data used to track and monitor the various operations performed by the testers, whereas organizations and testers use key performance indicators (KPIs) to determine testing effectiveness and the time and cost required to complete the testing.
Prerequisites needed for KPI Testing
Data Source Aggregation data file - input file
Business logic documents
Output result (API Response or DB result)
Technologies approach
Framework - pytest/Unittest
Python, pytest library
Pandas, data frame, NumPy
Difference Between KPI and Metrics:
When are software testing KPIs not useful?
Although measuring the effectiveness of a process is essential to know if you are doing it right, measuring the testing process via quality KPIs will not make sense in a scenario where:
If your product has just started with Testing: If you are going to launch your product for the first time and testing has just started, there won’t be much to measure. This time will be crucial to put a testing process in place rather than measuring the effectiveness of the testing process.
If you are not planning to have a long testing cycle: If you are making a product that would not be changing for a long time after the launch and testing will be a one-time process, measurement of the effectiveness of the process would not be beneficial as you won’t have any new testing cycles to improve upon.
If you are on a restricted budget: Just like doing any activity, measuring testing KPIs also takes time and effort and, consequently, costs. So rather than measuring the KPIs, applying a cost-effective testing process should be the primary focus when the budget for testing is restricted.
KPI Testing Approach
How to perform Metrics Test:
By applying the same business logic used in cQube (Postgres and NiFi), implemented independently in Python.
Steps Involved in Performing KPI Test:
Install cQube in the demo test server.
Create the pytest framework and configure the dependencies.
Connect to Docker, where the CSV files are stored.
Read the CSV files from Docker using Python functions.
Perform the same business logic on the input data using Python, as performed in cQube development.
Store the generated output in a data frame.
Read the output files from the database table (as per the current POC) using Python functions.
Compare the cQube output with the output generated using Python, and record ‘TRUE’ if the two outputs match and ‘FALSE’ if they do not.
Perform the unit test on each data source so that, once the unittest is executed, all the respective comparison files are generated (a sketch follows this list).
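A minimal sketch of the comparison step, assuming pandas data frames for the Python-computed result and the cQube output; the file paths and column names are illustrative:

import pandas as pd

def compare_kpi(expected: pd.DataFrame, actual: pd.DataFrame, keys: list) -> pd.DataFrame:
    """Merge the Python-computed (expected) and cQube (actual) outputs and flag matches."""
    merged = expected.merge(actual, on=keys, suffixes=("_expected", "_actual"))
    merged["result"] = (merged["value_expected"] == merged["value_actual"]).map(
        {True: "TRUE", False: "FALSE"}
    )
    return merged

# Illustrative usage: one comparison CSV per data source
# result = compare_kpi(pd.read_csv("expected.csv"), pd.read_csv("actual.csv"), ["date", "school_id"])
# result.to_csv("comparison_student_attendance.csv", index=False)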
Different types of files used in the KPI Test
Config.ini: stores the credentials, server IP and API endpoint.
Data_Sources: the folder containing each data source script.
TestSuite: consolidation of all the test files.
UnitTest.py: the test runner; generates the HTML reports.
Execution Flow
The UnitTest.py file is executed; it contains unit tests on the business logic functions, which call the Python file where Business_logic is written, and all the comparison results are stored as CSV files at the configured destination path/location.
Input file result (applied business logic) in JSON format == API response.
The Business_logic Python file calls the functions written in Functions_Files.py to read the details present in the Config.ini file.
NON-FUNCTIONAL Test Approach:
Performance Test - Introduction:
As part of the delivery of cQube, the solution is required to meet the acceptance criteria, in both functional and non-functional areas. The purpose of this document is to provide an outline for non-functional testing of the cQube solution.
This document covers the following:
Entry and Exit Criteria
Environmental Requirements
Volume and Performance Testing Approach
Performance Testing Activities
Entry Criteria for Non-functional Testing:
The following work items should be completed or agreed upon beforehand in order to proceed with the actual performance testing activities:
Document containing non-functional test requirements, with quantified NFRs where possible.
The critical use-cases should be functionally tested and free of critical bugs.
Design architectural diagrams that are approved and available
Key use-cases have been identified and defined.
Performance test types agreed.
Load injectors and setup
Any data setup required, such as a sufficient number of users in the datastore.
Exit Criteria for Non-functional Testing:
The performance testing activity is complete when the NFR targets have been met and the performance test results have been presented to the team and approved.
Environmental Requirements:
The performance tests will be run against a stable version of the cQube solution, on a dedicated production-like environment (pre-production) assigned for performance testing, with no deployments to that environment during the course of the performance testing.
Load Injectors
There will be one or more dedicated "load injectors" set up to initiate the required load for performance testing. The load injector could be a VM or multiple VMs that have an instance of JMeter running, initiating the requests.
Test Tools
The test tool used for Volume and Performance testing will be JMeter, an open-source load testing tool predominantly used for volume and performance testing.
Performance Test Plan using Jmeter
Start JMeter: in a terminal, cd Downloads/apache-jmeter-5.5/bin and run ./jmeter.sh.
Rename the test plan: change the name of the test plan node to Sample Test in the Name text box.
Add Thread Group.
Add HTTP Request → URL, path, port number, body
Add Sampler
Add Listener
Run the Test Plan.
View results in the Summary Report and View Results Tree listeners
View the output in a .csv file
Post-Test Analysis and Reporting
Capture and back up all the relevant data reports and archives.
Determine the success or failure by comparing the test results to the performance targets. If the targets are not met, then the appropriate changes should be made, and then another test execution cycle will commence. It is unknown how many execution cycles will be needed in order to meet the agreed targets.
Document and present the test results to the team.
Performance testing implementation in cQube:
We are going to test each API with 100, 500, 1000, 3000, 5000, 10000, 20000, 30000, 40000 and 50000 users
With a ramp-up period of 1 sec, the average response time should not exceed 3000 milliseconds (3 sec)
The final report will be stored in a CSV file
JMeter load testing test cases:
https://docs.google.com/spreadsheets/d/1od4uaW-65DTWSgbjrS1sW5Ll3i3EczVU4GKLAwW7C6g/edit?usp=sharing
JMeter scripts:
Start JMeter: in a terminal, cd Downloads/apache-jmeter-5.5/bin and run ./jmeter.sh.
Rename the test plan: change the name of the test plan node to Sample Test in the Name text box.
Add Thread Group.
Add HTTP Request → provide the API URL, path, port number, and body.
Add Listener - Results Tree, graph, view table report.
Run the test plan.
View results in the Summary Report and View Results Tree listeners.
Add assertions --> response code assertion.
View the output in CSV.
Benchmarking Tool:
We use wrk, a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU, for benchmarking.
Set up:
Install the wrk tool using these commands:
git clone --depth=1 https://github.com/wg/wrk.git
cd wrk
make -j
The generated wrk executable is in this folder. This is how we use wrk for a GET request benchmark:
wrk -t 6 -c 200 -d 30s --latency https://google.com
Some of the command flags for wrk
-c: the number of connections to use
-t: the number of threads to use
-d: the test duration, e.g., 60s
-s: the Lua script to use for load testing our service (covered in a later section)
--timeout: how many seconds before a request times out
--latency: show the latency distribution for all the requests
Automation Tool Approach for cQube 5.X
Feature | Automation Tools |
Installation | Python |
cQube spec and ingestion API | Karate |
cQube Dashboard | Selenium with Pytest framework |
Performance Test | Jmeter and wrk |
Transformers
List of transformers identified, with two different upsert operations:
Dataset without filter/Agg:
When we receive already-aggregated data, this transformer updates the dataset directly; it is essentially a copy-paste function.
Event to Dataset with Agg:
Takes an event as input, performs the required aggregation, and stores the result in a dataset table.
Dataset to Dataset with Agg:
Takes a dataset as input, performs the required aggregation, and stores the result in a dataset table.
Event to Dataset with filter-based Agg:
Takes an event as input, performs aggregation with filters, and stores the result in a dataset table.
Dataset to Dataset with filter-based Agg:
Takes a dataset as input, performs aggregation with filters, and stores the result in a dataset table.
NOTE: all 5 transformers will be written for the two different upsert operations explained below, so there will be 10 transformers in total.
Upsert by Replace Operation:
The UPSERT query replaces the existing data in the table.
Example: consider student_attendance data.
In the table below, total_present is 5; if a value later arrives for the same school_id and date, we need to replace the value instead of incrementing it.
Before UPSERT
Date | School Id | Total Present | Total Students |
13-11-2022 | 101 | 5 | 50 |
INSERT INTO
dataset_attendance (date, school_id, total_present, total_students)
VALUES
('13-11-2022', 101, 20, 50) ON CONFLICT (date, school_id) DO
UPDATE
SET
total_present = EXCLUDED.total_present,
total_students = EXCLUDED.total_students;
After UPSERT
Date | School Id | Total Present | Total Students |
13-11-2022 | 101 | 20 | 50 |
Upsert by Increment Operation:
Example: for programs like NISHTHA, DIKSHA, etc., we should increment as data arrives for the same date and the same program_name and course_id.
Before UPSERT
Program_name | course_id | total_enrolment | Date |
NISHTHA | course_01 | 50 | 13-11-2022 |
INSERT INTO
dataset_attendance (program_name, course_id, total_enrolment, date)
VALUES
('NISHTHA', 'course_01', 10, '13-11-2022') ON CONFLICT (date, program_name, course_id) DO
UPDATE
SET
total_enrolment = dataset_attendance.total_enrolment + EXCLUDED.total_enrolment;
After UPSERT
Program_name | course_id | total_enrolment | Date |
NISHTHA | course_01 | 60 | 13-11-2022 |