

...

Table of Contents

  1. Objective 

  2. Hardware Requirements

  3. IAM user and Role creation for S3 connectivity

  4. IAM user and Role creation for AZURE Storage Blob Container connectivity

  5. Local SDC Installation

  6. cQube Deployment Process

  7. Appendix

  • AWS - Network Architecture

  • Microservice Details

  • cQube Deployment Procedure

Objective

This document describes the hardware requirements and AWS network architecture for installing cQube, and walks through the deployment process step by step.

AWS - Network Architecture:

The following steps define the cQube setup and workflow in AWS. cQube mainly comprises the areas mentioned below:

  1. EC2 Server

  2. IAM user and Role creation for S3 connectivity.

The cQube network setup process is described in the block diagram below:


EC2 Instance

...

Hardware Requirements

Create an AWS EC2 instance, an Azure VM, a local SDC machine, or an Oracle VM with the configuration below to install all the cQube microservices.

  • Ubuntu 22.04 (supported) 

  • 16 GB of System RAM (minimum requirement)

  • 4-core CPU for AWS, Azure, or local / 2 OCPU for Oracle (minimum requirement)

  • 250 GB HDD

...

  • Port 80 inbound from 0.0.0.0/0

  • Port 443 inbound from 0.0.0.0/0

  • Port 8000 inbound from the Nginx private IP ( to communicate with Kong )

  • Port 5432 inbound only from the specific IPs that require database access
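For illustration only, these inbound rules could be added with the AWS CLI as sketched below; the security group ID, Nginx private IP, and allowed client IP are placeholders and depend on your VPC setup.

# Illustrative sketch only -- replace the placeholder IDs/IPs with your own values
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 8000 --cidr <nginx_private_ip>/32
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 5432 --cidr <allowed_client_ip>/32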

...

  • Create a domain name

  • Configure a CNAME record pointing the domain name to the AWS EC2, Azure VM, local SDC, or Oracle VM instance

  • Create an SSL certificate for the domain name.
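One possible way (not mandated by cQube) to obtain the SSL certificate is Let's Encrypt via certbot, once the CNAME resolves to the instance; the domain below is a placeholder.

# Hypothetical example using Let's Encrypt; any valid SSL certificate for the domain works
sudo apt install -y certbot
sudo certbot certonly --standalone -d <your_domain_name>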

IAM user and Role creation for S3 connectivity (AWS)

An AWS Identity and Access Management (IAM) user is an entity that is created in AWS to represent the person or application that uses it to interact with AWS. A user in AWS contains a name and credentials. An IAM user with administrator permissions is different from the AWS account root user. One has to create an IAM user with a supported role to provide the connectivity between EC2 and S3. The role should have list, read and write permissions.

IAM user and Role creation for AZURE Storage Blob Container connectivity

An Azure Identity and Access Management (IAM) user is an entity created in Azure to represent the person or application that uses it to interact with Azure. A user in Azure contains a name and credentials. Create a user with a supported role to provide connectivity between the Azure VM and the Blob storage container. The role should have list, read, and write permissions.

IAM User:

  • Create an IAM user

  • Assign IAM policy to the user

  • Download the default connection string, the Azure account name, the account key, and the container name

IAM Policy:

  • Create an IAM policy in Azure IAM

  • Provide access to list, read, and write objects in the Azure Blob container
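A minimal sketch of granting such access with the Azure CLI, assuming the built-in Storage Blob Data Contributor role; the object ID, subscription, resource group, and storage account names are placeholders.

# Illustrative only -- assign blob list/read/write access to the user or service principal
az role assignment create --assignee <user_or_app_object_id> --role "Storage Blob Data Contributor" --scope "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Storage/storageAccounts/<account_name>"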

Local SDC Installation

If you opt for a local installation, enter storage_type as local. install.sh will then automatically install and configure the MinIO application on localhost, set a default username and password, and create a MinIO bucket as well.
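After install.sh finishes, a quick way to confirm MinIO came up locally is sketched below, assuming MinIO runs as a Docker container and its console listens on port 9001 as described later in this document.

# Check that a MinIO container is running and that its console port answers
sudo docker ps --filter "name=minio"
curl -I http://localhost:9001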

cQube Deployment Process for AWS

Step - 1: Connect to the cQube AWS EC2 instance

  • For Linux and macOS:

    • Download the .pem file generated while creating the EC2 instance

    • Open the terminal and navigate to the folder where the .pem file has been downloaded

    • Then give read permission to the .pem file using the following command

  sudo chmod 400 <aws.pem>

  • Use the following command to connect to the instance

ssh -i <path_to_the_pem_file>  <user_name>@<public_ip_of_the_instance>

cQube Deployment Process for Azure

Step - 1: Connect to the cQube Azure VM instance

  • For Linux and macOS:

    • Download the .pem file generated while creating the Azure VM instance

    • Open the terminal and navigate to the folder where the .pem file has been downloaded

    • Then give read permission to the .pem file using the following command

  sudo chmod 400 <azure.pem>

  • Use the following command to connect to the instance

ssh -i <path_to_the_pem_file>  <user_name>@<public_ip_of_the_instance>

cQube Deployment Process for Local SDC

Prerequisites to install cQube on a local machine

  • Ubuntu 22.04 (supported)

  • 16 GB of System RAM (minimum requirement)

  • 4 core CPU (minimum requirement)

  • Domain name (with SSL)

  • 250 GB Storage

Step - 2: Clone the cqube-devops repository using the following command

git clone https://github.com/Sunbird-cQube/cqube-devops.git      

...

Step - 3: Navigate to the directory where cqube-devops is cloned or downloaded and check out the desired release branch

cd cqube-devops/ && git checkout <branch_name>

...

Step - 4: Give the following permissions to the install.sh file

sudo chmod u+x install.sh

Step - 5: Install cQube as a non-root user with sudo privileges

sudo ./install.sh

...

install.sh is a shell script that runs further shell scripts and an Ansible playbook to set up cQube

Step - 6: User Input Variables - These are the variables that need to be entered by the user, following the hints provided

  • state_name ( Enter the required state code by referring to the state list provided )

  • api_end_point ( Enter the URL on which cQube is to be configured )

  • storage_type ( Enter the storage_type as aws, local, or azure. If you opt for aws, you will be prompted to enter the following AWS S3 credentials and the S3 bucket will be created; if the S3 bucket already exists, you will be prompted to enter a unique S3 bucket name. The values are written to config.yml. )

  • s3_access_key

  • s3_secret_key

  • s3 bucket name

...

  • storage_type ( Enter the storage_type as local; MinIO will be installed and configured with a MinIO username and password, a MinIO bucket will be created, and the values are written to config.yml. )

...

  • storage_type ( Enter the storage_type as azure. If you opt for azure, you will be prompted to enter the following Azure credentials and an Azure container will be created; if the container already exists, you will be prompted to enter a unique Azure container name. The values are written to config.yml. )

  • azure_connection_string

  • azure_account_name

  • azure_account_key

...

Step - 7: Optional_variables- Database credentials contain default values.

...

If the user wishes to enter their own credentials, opt for yes when the question pops up to enter them; otherwise opt for no.

  • db_user_name ( Enter the postgres database username ) 

  • db_name ( Enter the postgres database name )

  • db_password ( Enter the postgres password )

...

Step - 8: Optional_variables - Read-only database credentials contain default values. If the user wishes to enter their own credentials, opt for yes when the question pops up; otherwise opt for no.

  • read_only_db_user ( Enter the read only postgres database username ) 

  • read_only_db_password ( Enter the read only postgres password )


Step - 9: Keycloak_variables - Keycloak credentials: the Keycloak admin username and password.

  • keycloak_adm_name ( Enter the Keycloak admin name, e.g. admin )

  • keycloak_adm_password ( Enter the Keycloak admin password, e.g. Admin@123 )

...

Step - 10: Once the config file is generated, a preview of the config file is displayed, followed by a question where the user gets an option to re-enter the configuration values by choosing yes. If no is selected, install.sh moves to the next section.

...

Step - 11: A preview of the program_selector.yml file is displayed, followed by a question where the user gets an option to enable or disable programs by choosing yes. If no is selected, install.sh moves to the next section.

...

Step - 12: Once the installation is completed, you will be shown the following messages and the required reference URLs.

cQube Installed Successfully!!

...

The cQube ingestion API is accessible using <domain_name>

Ingestion Usage Documentation: https://project-sunbird.atlassian.net/l/cp/PPn7AfAW

Schema Documentation: https://project-sunbird.atlassian.net/l/cp/xpVi7HbS

UI Usage Documentation: https://project-sunbird.atlassian.net/l/cp/mv7JvLXd

Appendix

AWS - Network Architecture

The following steps define the cQube setup and workflow in AWS. cQube mainly comprises the areas mentioned below:

  1. EC2 Server

  2. IAM user and Role creation for S3 connectivity.

The cQube network setup process is described in the block diagram below:

...

Microservices Details

Following are the details of the microservices that get installed on the cQube server.

  • Ingestion-ms: The ingestion-ms is used to upload the data of the events, datasets, dimensions, transformers and pipeline. All these APIs are used to ingest the data into cQube.

  • Spec-ms: The spec-ms is used to import the schema of the events, datasets, dimensions, transformers and pipeline. All these specs are defined by the cQube platform prior to ingesting the data into cQube. These specifications are derived by considering the KPIs as indicators.

  • Generator-ms: The generator-ms is used to create the specs and transformers for the derived datasets, perform the aggregation logic, update data to datasets based on transformations, and update the status of file processing.

  • Nifi-ms: Apache NiFi is used as a real-time integrated data logistics and simple event processing platform

  • Postgres-ms: Postgres microservice contains the schema and tables

  • Nginx-ms: It is used as a reverse proxy and load balancer to manage incoming traffic and distribute it to the upstream servers

  • Kong-ms: It is a lightweight API Gateway that secures, manages, and extends APIs and microservices.

IAM user and Role creation for S3 connectivity

An AWS Identity and Access Management (IAM) user is an entity that is created in AWS to represent the person or application that uses it to interact with AWS. A user in AWS contains a name and credentials. An IAM user with administrator permissions is different from the AWS account root user. One has to create an IAM user with a supported role to provide the connectivity between EC2 and S3. The role should have list, read and write permissions

S3 Buckets:
Create the following S3 buckets

  • Archiving

  • Error logging
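For illustration, the two buckets could be created with the AWS CLI as shown below; the bucket names and region are placeholders, and bucket names must be globally unique.

# Hypothetical bucket names -- choose globally unique names for your deployment
aws s3 mb s3://<archiving_bucket_name> --region <aws_region>
aws s3 mb s3://<error_logging_bucket_name> --region <aws_region>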

IAM User:

  • Create an IAM user

  • Assign IAM policy to the user

  • Download the access key and secret key

IAM Policy:

  • Create an IAM policy in AWS IAM

  • Provide access to list, read, and write objects in the S3 buckets
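A minimal sketch of such a policy created with the AWS CLI is shown below; the policy name and bucket placeholder are illustrative, and the actual policy should cover both cQube buckets.

# Illustrative policy granting list/read/write on a cQube bucket
cat > cqube-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<bucket_name>"] },
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::<bucket_name>/*"] }
  ]
}
EOF
aws iam create-policy --policy-name cqube-s3-access --policy-document file://cqube-s3-policy.json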

cQube Deployment Process:

  • Connect to the cQube EC2 instance

  • Open terminal

  • Clone the cqube-devops repository using the following command

git clone https://github.com/Sunbird-cQube/cqube-devops.git

Navigate to the directory where cqube is cloned or downloaded

cd cqube-devops/

Check out the required branch

git checkout dev

Give the following permissions to the install.sh file

sudo chmod u+x install.sh

Install cQube as a non-root user with sudo privileges

sudo ./install.sh

...

...

  • Dashboard-ms: It consists of an Angular app used to visualize the datasets present in postgres-ms in the form of charts. At run time it requests spec-ms to fetch data from postgres-ms and load it into the client side (browser)

  • Query_builder-ms: It consists of a backend API comprising the JWT, METRICS, QUERY, and LASTMODIFIED APIs

JWT - generates a JWT token used to restrict access to the other APIs.

METRICS - provides the menus for the navigation bar and the dashboard cards.

QUERY - this API is used for executing the SQL queries integrated with Dashboard-ms.

LASTMODIFIED - this API returns the last-modified information for data in S3, Azure, and MinIO.

cQube Deployment Procedure      

install.sh is a shell script that runs the following shell scripts and Ansible playbook to set up cQube

...

This script updates and upgrades the software packages on the server and installs basic software such as:

  • Python3

  • Pip3

  • Ansible

  • Docker

  • Docker compose
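The exact commands are internal to the script, but a rough sketch of the equivalent steps on Ubuntu 22.04 might look like the following; this is illustrative only and the real script may differ.

# Illustrative only -- install.sh performs equivalent steps internally
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y python3 python3-pip
pip3 install ansible
sudo apt-get install -y docker.io docker-compose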

Config_file_generator.sh:

This script generates a configuration file that contains some constant values and a few required variables to be entered by the user. Following are the variables which get added to the config file.

Note: Users should follow the hints provided in the description and enter the variables accordingly. If an entered value is wrong, an error message is displayed and the user should correct the value.

Constant Variables: These variables are auto-generated

  • System_user_name

  • base_dir

  • Private_ip

  • aws_default_region

Optional_variables: Database credentials contain default values. If the user wishes to enter their own credentials, opt for yes when the question pops up; otherwise opt for no.

  • db_user_name

  • db_name

  • db_password

User Input Variables: These are variables that need to be entered by the user, following the hints provided

  • state_name ( Enter the required state code by referring to the state list provided )

  • api_end_point ( Enter the URL on which cQube is to be configured )

  • storage_type ( Enter the storage type as aws, azure, or local )

If storage_type is aws, enter the variables below

  • s3_access_key

  • s3_secret_key

  • s3 archived bucket name

  • s3 error bucket name


...

  • bucket name       

If storage_type is azure, enter the variables below

  • azure connection string

  • azure account name

  • azure account key

  • azure container  name
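For reference, an Azure Blob container matching these values could also be created manually with the Azure CLI as sketched below; the names are placeholders, and install.sh normally creates the container itself when it does not exist.

# Illustrative only -- install.sh normally creates the container itself
az storage container create --name <azure_container_name> --account-name <azure_account_name> --account-key <azure_account_key>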

If storage_type is local, the script will automatically create the MinIO username, MinIO password, and MinIO bucket.

  • minio username

  • minio password

  • minio bucket

The MinIO console can be accessed at the http://localhost:9001 endpoint, where you can see the MinIO bucket created by default; the default username is minio admin and the default password is minio admin.

Optional_variables - Database credentials contain default values. If the user wishes to enter their own credentials, opt for yes when the question pops up; otherwise opt for no.

  • db_user_name ( Enter the postgres database username ) 

  • db_name ( Enter the postgres database name )

  • db_password ( Enter the postgres password )

Optional_variables - Read-only database credentials contain default values. If the user wishes to enter their own credentials, opt for yes when the question pops up; otherwise opt for no.

  • read_only_db_user ( Enter the read only postgres database username ) 

  • read_only_db_password ( Enter the read only postgres password )

Once the config file is generated, a preview of the config file is displayed, followed by a question where the user gets an option to re-enter the configuration values by choosing yes. If no is selected, install.sh moves to the next section.

Repository_clone.sh:

This script clones the following repositories into the microservices directory and checks out the required release branch


...

...

    Note: If the repository is already cloned then the script will pull the updated code.

Ansible-playbook:

Install.yml

An install.yml Ansible playbook is triggered, which runs the required roles to build the following microservice images.

  • Ingestion-ms

  • Spec-ms

  • Generator-ms

  • Postgres-ms

  • Nifi-ms

  • Dashboard-ms

  • Query_builder-ms

  • Kong-ms

  • Nginx-ms
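Normally install.sh triggers this playbook automatically; if it ever needs to be re-run by hand, a sketch of the invocation (assuming the playbook lives in the Ansible directory of the cloned repository) would be:

# Hypothetical manual re-run; install.sh invokes this playbook automatically
cd <path_to_ansible_directory>
ansible-playbook install.yml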

...

compose.yml:

A Docker Compose Ansible script is triggered, which brings all the containers up to a running state.

Note: The following commands can be used from the Ansible directory to bring the containers down and to start them again, respectively.

  • docker-compose -f docker-compose.yml down

  • docker-compose -f docker-compose.yml up -d

Once the installation is completed, you will be shown the following messages and the required reference URLs.

cQube Installed Successfully!!

...

The running status of the containers can be checked using the following command

  • sudo docker ps
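To inspect an individual service, its container logs can be followed as well; the container name below is illustrative, so use an actual name shown by sudo docker ps.

# Follow the logs of a single container
sudo docker logs -f <container_name>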


---------------- end -----------------------


...