VSK Starter Pack - FAQs
- Overview
- Q1: What is VSK Starter Pack?
- Q2: What are the indicators in VSK Starter Pack?
- Q3: How does the data flow in VSK Starter Pack work?
- Q4: How does the state deploy the starter pack on its state servers? What will be the requirements and procedure for the state to set up the VSK Starter Pack on their own?
- Q5: If a state doesn’t have the required server and infrastructure, can the state still use the Starter Pack? What will be the process to do this?
- Q6: What are the skillsets / HR requirements to set up and enable daily-level basic configurations in VSK Starter Pack?
- Q7: What is the technology stack that VSK Starter Pack is built on?
- Q8: What does the installation process for VSK Starter Pack look like?
- Q9: How can the state ingest data for existing programs in the starter pack?
- Q10: How can the state process data for existing programs in the starter pack?
- Q11: How can I rectify an error made in the format while ingesting data into the starter pack?
- Q12: How can the enhancements / configurations in the visualisation layer be done in the starter pack?
- Q13: How can the state create users on VSK Starter Pack?
- Q14: How can the state add a new program in the VSK Starter Pack?
- Q15: How can the state process the data for a new program added in the VSK Starter Pack?
- Q16: How can the state get data from the centrally-controlled applications (DIKSHA) for the state onto the starter pack?
Overview
Vidya Samiksha Kendra (VSK) Starter Pack, under the umbrella of NDEAR, is an initiative to help set up the VSK and enable States & UTs to see, act on, and solve problems based on data. By developing a VSK, a state will be able to track its programs and monitor data to course-correct and improve program outcomes. There are multiple ways a state can set up its own NDEAR-compliant VSK.
This document highlights the use of the NIC-enabled VSK Starter Pack, which enables a state to initiate its own VSK by tracking primary datasets in education.
Q1: What is VSK Starter Pack?
The VSK Starter Pack makes it quick and easy to initiate a VSK in any state. The VSK Starter Pack, when enabled for a State/UT, will have metrics visualised for the six key programs listed below:
NISHTHA: NISHTHA is a capacity building programme for "Improving Quality of School Education through Integrated Teacher Training". It aims to build competencies among all the teachers and school principals at the elementary stage. NISHTHA is the world's largest teachers' training program.
UDISE: The Unified District Information System for Education (UDISE) is an educational management information system, with data collected from school administrators on a yearly basis.
PGI: The Performance Grading Index (PGI) is a tool to provide insights on the status of school education in States and Union territories including key levers that drive their performance and critical areas for improvement.
DIKSHA: The Digital Infrastructure for Knowledge Sharing (DIKSHA) is an initiative of the National Council of Educational Research and Training (Ministry of Education, Govt. of India). This platform enables the implementation of two key programs: NISHTHA and Energized Textbooks.
National Achievement Survey (NAS): The National Achievement Survey (NAS) is a nationally representative large-scale survey of students' learning undertaken by the Ministry of Education, Government of India.
PM POSHAN: Pradhan Mantri Poshan Shakti Nirman (PM POSHAN), earlier known as the National Programme of Mid-Day Meal in Schools, is one of the foremost rights-based Centrally Sponsored Schemes under the National Food Security Act, 2013 (NFSA). The primary objective of the scheme is to improve the nutritional status of children studying in classes I-VIII in eligible schools.
Q2: What are the indicators in VSK Starter Pack?
The starter pack consists of 15 indicators across the six programs mentioned above. Following is the list of these indicators:
NISHTHA:
Implementation Status
Courses and Medium status
% against Potential Base
District wise Status
Course wise Status
DIKSHA:
ETB Coverage Status
Content Coverage on QR
Learning Sessions
Learning Sessions on Potential Users
PM POSHAN:
Progress Status
NAS:
District Wise Performance
Grade & Subject performance
UDISE:
District Wise Performance
Correlation
PGI:
District Wise Performance
Q3: How does the data flow in VSK Starter Pack work?
The data flow mechanism in the VSK Starter Pack has 3 main steps:
Data collection or data source access - The state should have data available in digital format for the programs it would like to see on the VSK Starter Pack. This data is then ingested periodically for visualisation on the VSK Starter Pack.
Data processing - This step processes the ingested program data to generate the aggregate metrics used to create visualisations on the VSK Starter Pack.
Data visualisation - This step presents the aggregate metrics, once the data is processed, in an easily consumable format to monitor the required programs. The following chart types are available for visualisation: Maps, Bar chart, Scatter plot, Tabular reports, Heat map, Trend chart and Big Number.
Q4: How does the state deploy the starter pack on its state servers? What will be the requirements and procedure for the state to set up the VSK Starter Pack on their own?
Two servers are required on the state’s end: one for VSK and one for Nginx. The other prerequisites for the state to install cQube are listed below:
Ubuntu 18.04 (supported)
32GB of System RAM (minimum requirement)
8 core CPU (minimum requirement)
Domain name (with SSL)
1 TB Storage
Here is the link to detailed software requirements to install cQube.
Q5: If a state doesn’t have the required server and infrastructure, can the state still use the Starter Pack? What will be the process to do this?
There are 3 options for the state to use the VSK Starter Pack in this case:
As a temporary solution, NCERT can provide server & required infrastructure to the state to deploy the starter pack for a period of 3 months, post which the state is responsible for shifting to a state server.
The state can reach out to EdCIL / NIC to provide server & required infrastructure.
The state can issue an RFP, open to all tech vendors, to procure the required server & infrastructure.
Q6: What are the skillsets / HR requirements to set up and enable daily-level basic configurations in VSK Starter Pack?
A data analyst and a DevOps engineer are required in the state with the following skills:
Type of Engineer | Role | Skills Required
--- | --- | ---
Data Analyst | Prepare the data to be ingested into NVSK / VSK Instances | Excel
DevOps | Deploy & Manage NVSK / VSK Instances | Basic understanding of cloud (Azure), Ansible, Shell scripting
Q7: What is the technology stack that VSK Starter Pack is built on?
For installation:
Ansible
Shell Script
For development purposes:
Node JS version 14 - APIs for the back-end logic implementation
Angular version 14 - For the UI development
Supported libraries with Angular:
tailwind css
leaflet
leaflet-responsive-popup
highcharts
color2k
xlsx
Q8: What does the installation process for VSK Starter Pack look like?
After the state has procured the hardware & software requirements mentioned in Question 4 of this document, the VSK Starter Pack can be installed by following the steps mentioned here. Details of the values to be filled in the configuration files during installation are mentioned below:
Details for config.yml file:
List of state codes: https://docs.google.com/spreadsheets/d/1GIXLSsjKSZxYMt76dWs3fVGhFB9QB4AQZeUhhCR2Yqw/edit?usp=sharing
If azure is selected as storage_type, copy azure_container_config.yml.template to azure_container_config.yml:
>cp azure_container_config.yml.template azure_container_config.yml
Edit using nano azure_container_config.yml. Fill in all the required fields and close the file.
Details for azure_container_config.yml file
If azure is selected as storage_type, fill the following details in azure_container_config.yml
azure_input_container: Provide the name of the input container [e.g. input_container_name]
azure_output_container: Provide the name of the output container [e.g. output_container_name]
azure_account_name: Provide the Azure account name for creation of the Azure container
azure_account_key: Provide the Azure account key for creation of the Azure container
If s3 is selected as storage_type, copy aws_s3_config.yml.template to aws_s3_config.yml
>cp aws_s3_config.yml.template aws_s3_config.yml
Edit using nano aws_s3_config.yml. Fill in all the required fields and close the file.
Details for aws_s3_config.yml file
If s3 is selected as storage_type, fill the following details in aws_s3_config.yml (a filled-in sketch follows this list):
s3_input_bucket: Provide the name of the input bucket [e.g. bucket_name]
s3_output_bucket: Provide the name of the output bucket [e.g. output_bucket_name]
s3_access_key: Provide the AWS access key for creation of the s3 bucket
s3_secret_key: Provide the AWS secret key for creation of the s3 bucket
aws_default_region: Provide the AWS default region for creation of the s3 bucket
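For illustration, a filled-in aws_s3_config.yml might look like the sketch below; every value is a placeholder rather than a real bucket name or credential, and azure_container_config.yml follows the same pattern with the Azure fields listed earlier.
# Sketch only: all values below are placeholders, not real credentials.
cat > aws_s3_config.yml <<'EOF'
s3_input_bucket: cqube-input-bucket
s3_output_bucket: cqube-output-bucket
s3_access_key: AKIAXXXXXXXXXXXXXXXX
s3_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_default_region: ap-south-1
EOF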
Q9: How can the state ingest data for existing programs in the starter pack?
The state needs to ingest data for the required programs in CSVs in specific formats. Additionally, all the CSV files need to be comma-separated and should be emitted individually in a ZIP format.
Following are the steps for the state to ingest data into the starter pack:
Step - 1: Create a new user from the Admin console with a username and password. Select the role of the user as ‘Emission’. Click on ‘Create user’ and the user will be successfully created. Post this, a JWT token will be generated and shown on the same page.
Step - 2: Pass the JWT token as a header when calling the ingestion API endpoint /data/upload-url to upload the file. This will return an S3 presigned URL with a 1-hour expiry.
Step - 3: The user / client / admin should use the presigned URL to upload the file directly into S3. The files should be comma-separated CSVs in the predefined formats, emitted individually in a ZIP format. A sketch of these two calls is given below.
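The following is a minimal sketch of steps 2 and 3 using curl. The endpoint path /data/upload-url comes from this FAQ; the host name, header name, file name, and response handling are assumptions and may differ in an actual deployment.
# Step 2: request a presigned upload URL (host and auth header are assumptions).
curl -s "https://<vsk-host>/data/upload-url" -H "Authorization: Bearer $JWT_TOKEN"
# Step 3: upload the ZIP of CSVs directly to S3 via the presigned URL returned
# above (a standard presigned-URL upload over HTTP PUT).
curl -X PUT --upload-file program_data.zip "$PRESIGNED_URL"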
Q10: How can the state process data for existing programs in the starter pack?
The aggregated data should be placed inside the input data container (Blob storage in Azure, S3 bucket in AWS).
Execute the API mentioned below, and the source files will be copied to the backup folder (for future reference):
“__basepath__/api/common/uploadSourceData”
The source data files will be deleted from the input folder once the data has been converted to the JSON output.
Since the source files are backed up after successful processing, the input container remains available to accept new input data files for the respective reports.
The API mentioned above needs to be run for each subsequent input data file placed; a curl sketch is given after the note below.
Note: The output file will be overwritten when the API processes a new input data file.
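For example, the call might look like the sketch below; the host and base path are deployment-specific placeholders.
# Triggers processing of the files placed in the input container; replace the
# placeholder host and base path for your deployment.
curl -X GET "https://<vsk-host>/__basepath__/api/common/uploadSourceData"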
Q11: How can I rectify an error made in the format while ingesting data into the starter pack?
Errors made during ingestion can be viewed through logs in the admin console. The admin will have an option to view the last 200 lines of the log file in the browser with the help of the user interface provided. A download option is provided to download the entire log file in .log format. The list of log files available to the admin is as follows:
Application Logs (Error and Info logs)
Admin Logs (Angular and Node logs)
System logs
NIFI logs
Data emission process logs
Database logs
In v5.0 and subsequent releases scheduled from March 2023 onward, the user will receive error logs while ingesting data through the API, along with a description of the error made.
Q12: How can the enhancements / configurations in the visualisation layer be done in the starter pack?
There are 3 types of enhancements / configurations that can be done in the visualisation layer of VSK Starter Pack:
1. Login Page - Updating the State Logo & State Name:
The login page of VSK Starter Pack is designed with fixed and configurable fields, as shown below.
a. Fixed Fields: These fields are fixed throughout all installations and have their own actions & representations. For example - User ID, Password & the submit button.
b. Configurable Fields: These fields are dynamic and can be customised per state. A state may configure its branding through these fields. For example - Logo, Header, State Name, Description at the bottom of the page, Image on the right side of the page.
The configurable fields for the login page are present in a property called loginObj in this file: cQube_Edu/deploy/ansible/roles/angular_config/templates/environment.prod.ts.j2
Structure of the loginObj property is as follows:
title: Header to be displayed in the login page
imageURL: Location of the image to be displayed on the right side of the login page. The image comprises 50% of the screen size and gets optimised accordingly on the login page.
tagLine: Tagline information (Description) to be displayed at the bottom of the login page
logoURL: Location of the logo image to be displayed at the top of the login page. The size of logo image is restricted to a width of 60 pixels.
The images should be uploaded to this location: cQube_Edu/development/user_interface/src/assets/images. A sketch of a filled-in loginObj is given below.
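The sketch below shows the shape of a hypothetical loginObj entry; all values are placeholders for illustration, and the actual template in environment.prod.ts.j2 may differ.
# Hypothetical loginObj values, printed via a heredoc for illustration only;
# in practice, edit the template file directly.
cat <<'EOF'
loginObj: {
  title: "Welcome to the <State Name> Vidya Samiksha Kendra",
  imageURL: "assets/images/state_banner.png",
  tagLine: "See, act and solve problems based on data",
  logoURL: "assets/images/state_logo.png"
}
EOF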
2. Home Page (Post Login) - Updating Header with State Name
Based on the state code mentioned in the configuration file https://docs.google.com/spreadsheets/d/1GIXLSsjKSZxYMt76dWs3fVGhFB9QB4AQZeUhhCR2Yqw/edit?usp=sharing, the state name will be displayed automatically on the home page post login.
3. Connecting cQube URL to state’s landing page:
Once the starter pack instance for the state is created, they can connect it to the state landing page using the URL for the VSK starter pack.
Q13: How can the state create users on VSK Starter Pack?
Two types of users can be created for cQube_Edu configuration of the VSK Starter Pack:
Admin Users: Able to access the admin console & view dashboards (they can create other users)
Viewer Users: Able to view dashboards
There are two ways in which users can be created:
Bulk user creation: Following are the steps to create users in bulk -
An Excel file named users_creation.xlsx should be placed in the input data container (Blob storage in Azure, S3 bucket in AWS); a sketch of this upload step is given after these steps. Below are the columns in the Excel sheet:
User_id
Email_id
State_id (optional)
user_role
The Node JS API fetches the users_creation.xlsx file and reads all the details from the file. It then inserts the user details into the ‘users’ table in PostgreSQL.
The default password Admin@123 is assigned to all the users.
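Placing the file in the input container might look like the sketch below; the bucket and container names are placeholders, and only the command matching the configured storage_type applies.
# Placeholder names throughout; use whichever command matches your cloud.
# AWS: copy the sheet into the S3 input bucket.
aws s3 cp users_creation.xlsx s3://<input-bucket-name>/
# Azure: upload the sheet into the blob input container (storage account
# credentials are assumed to be supplied via environment variables).
az storage blob upload --container-name <input-container-name> --file users_creation.xlsx --name users_creation.xlsx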
The ‘users’ table will have the below columns (a DDL sketch is given after the user-creation steps):
Column Name | Data Type | Column Description
--- | --- | ---
user_id | Varchar | Unique ID of the user
user_name | Varchar | Name of the user
email_id | Varchar | Email ID of the user
password | Varchar | Password of the user
is_first_login | Boolean | Flag indicating whether the user is logging in for the first time
state_id | BigInt | ID of the state
user_role | Varchar | Role of the user
created_date | Time Stamp | Date when the user was created
updated_date | Time Stamp | Date when the user was last updated
2. Single user creation: Following are the steps to create a single user -
a. A hyperlink ‘Add New User’ will be present on the login screen for ‘Admin’ users.
b. They can click on the link to open the user registration page. Admin users can fill in all the details and click on the ‘Register User’ button.
c. An entry for the new user will be added to the ‘users’ table.
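For reference, the sketch below reconstructs the ‘users’ table in PostgreSQL from the columns listed above; the actual schema created by cQube may differ in types, constraints, and defaults, and the connection string is a placeholder.
# Hypothetical reconstruction of the 'users' table; not the shipped schema.
psql "$DATABASE_URL" <<'EOF'
CREATE TABLE IF NOT EXISTS users (
    user_id        VARCHAR PRIMARY KEY,  -- unique ID of the user
    user_name      VARCHAR,              -- name of the user
    email_id       VARCHAR,              -- email ID of the user
    password       VARCHAR,              -- password of the user
    is_first_login BOOLEAN,              -- true until the first password change
    state_id       BIGINT,               -- ID of the state
    user_role      VARCHAR,              -- role of the user
    created_date   TIMESTAMP,            -- when the user was created
    updated_date   TIMESTAMP             -- when the user was last updated
);
EOF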
Process for user login:
Users should enter their user_id & password on the login page. For first-time users, the password will be “Admin@123”.
Once a first-time user clicks on the login button, they will be redirected to a page to change their password.
From the second login onwards, users will be directed to the dashboard page.
‘Change password’ functionality will be present on the dashboard page in the user profile.
Users need to enter the old password in order to set a new one.
Q14: How can the state add a new program in the VSK Starter Pack?
Refer to https://cqube.sunbird.org/use/use-case to understand the process of data ingestion & processing for new programs.
Following are the steps to configure the visualisations for the new programs:
The aggregated data should be ingested into the system.
Create a new module for the new program in the Angular project.
Note: Ignore the first and fourth steps if you want to create a new report in an existing program or module.
Steps to create a new module or component for the program:
Run the following command to create a new module
ng generate module __module_name__
For example,
ng generate module views/etb
Create a root component for the module:
ng generate component __component_name__
For example,
ng generate component views/etb
Register the new route in the app.routing.module.ts file and include the new module package
Update the main_metrics.csv file which contains the list of programs to be displayed in the application.
Include the new program icon and url slug to access the report along with the general program information.
Create a new configuration file if you are creating a new program; otherwise, include the new report information in the existing configuration file of that program.
The configuration file should be created at the following location:
For NVSK: cQube/apis/node-api/core/config/national
For VSK: cQube/apis/node-api/core/config/state
The following is the file structure for the data source info Object:
"__reportName__": {
  /*** Property name should be the Report Name */
  "__reportType__": {
    /*** Property name should be the Report Type */
    "pathToFile": string,
    /*** Path of the file */
    "columns": [
      /*** Information of the columns for data processing */
      {
        "name": string,
        /*** Name of the column */
        "property": string,
        /*** Property name of the column in the data source */
        "isLocationName"?: boolean,
        /*** Flag to denote the location name in the data set. This can be used for the map report type */
        "weightedAverageAgainst"?: string
        /*** Name of the other column in the actual data source to be considered while calculating the weighted average for the column */
      }
    ],
    "filters": [
      /*** List of columns for the filters to be displayed in the web application and for data processing */
      {
        "name": string,
        /*** Name to be displayed for this filter in the web application */
        "property": string
        /*** Property name of the column in the data source */
      }
    ]
  }
}
}
Note: ‘?’ after the property name means that the property is not mandatory.
For example,
const dataSourceInfo = {
studentPerformance: {
map: {
pathToFile: 'common/nas/nas_data',
columns: [
{
name: "State",
property: "State",
isLocationName: true,
isSSPColumn: false,
isMainFilterForSSP: true,
isGroupByColumn: true,
},
{
name: "District",
property: "District",
isLocationName: true,
isSSPColumn: true,
isGroupByColumn: true,
},
{
name: "Latitude",
property: "Latitude",
},
{
name: "Longitude",
property: "Longitude",
},
{
name: "Performance",
property: "Performance",
weightedAverageAgainst: "Students Surveyed"
}
],
filters: [
{
name: 'State',
property: 'State'
},
{
name: 'Grade',
property: 'Grade'
},
{
name: 'Subject',
property: 'Subject'
}
]
}
}
};
Q15: How can the state process the data for a new program added in the VSK Starter Pack?
The aggregated data should be placed inside the Input data container (Blob storage in Azure, S3 Bucket in AWS)
The below Node JS API will pick the xlsx/csv file from the cloud input data folder and convert it into a JSON file:
“__basepath__/api/common/uploadSourceData”
Run the above API using a curl command with the GET request type (see the sketch under Q10).
Create the configuration file for the new program and place it at the below path, replacing <program name> with the name of the program:
/development/ui/back-end/core/config_files/nvsk/<program name>_config.js
Fill in the below details in the newly created config file.
The const dataSourceInfo section will have the details of the:
program name
report name
report type
pathToFile
columns
filters
Each report will have its own input data location path, required columns, and filters to be displayed on the report page.
Q16: How can the state get data from the centrally-controlled applications (DIKSHA) for the state onto the starter pack?
Data from the centrally-controlled applications can be received via APIs. Specific steps for DIKSHA integration have been defined below:
Step - 1: As the DIKSHA progress exhaust and summary rollup datasets have been integrated with the starter pack, the user needs to configure the DIKSHA production base_url, token, and encryption key for these datasets during installation or upgradation of the starter pack. This is done by filling in the production parameters in the file ‘cQube/development/python/cQube-raw-data-fetch-parameters.txt’.
This enables the direct download of daily files for course and textbook usage from the DIKSHA summary rollup dataset into the cQube S3 emission bucket. Post download, the data gets processed to generate visualisations.
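A hypothetical layout of cQube-raw-data-fetch-parameters.txt is sketched below; the key names are assumptions based on the parameters named above, so consult the file shipped with cQube for the real ones.
# All keys and values are illustrative placeholders, not the real parameter
# names; check the template shipped with cQube before editing.
cat > cQube-raw-data-fetch-parameters.txt <<'EOF'
diksha_base_url=https://<diksha-production-host>
diksha_api_token=<production-api-token>
encryption_key=<dataset-encryption-key>
EOF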
Step - 2: In order to get course enrolment and completion data into cQube from the DIKSHA progress exhaust dataset, the user needs to upload the list of batch IDs in a CSV file through the emission API. cQube then downloads all the data related to the batch IDs into the emission bucket. The data is then decrypted using the configured encryption password and stored in the emission bucket, after which it is processed to generate visualisations.