Table of Contents
Objective
Hardware Requirements
IAM user and Role creation for S3 connectivity
IAM user and Role creation for AZURE Storage Blob Container connectivity
Local SDC Installation
cQube Deployment Process
Appendix
AWS - Network Architecture
Microservice Details
cQube Deployment Procedure
Objective
This document focuses on the hardware requirements and AWS network architecture for installing cQube, and walks through the deployment process step by step.
Hardware Requirements
Create an AWS EC2 instance, an Azure VM, or a local SDC instance with the configurations below to install all the cQube microservices.
...
Create a domain name
Configure the CNAME of the AWS EC2 instance, Azure VM, or local SDC instance to the domain name
Create an SSL certificate for the domain name (see the sketch below)
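A minimal sketch for obtaining the SSL certificate with Let's Encrypt is shown below; certbot and its standalone mode are assumptions, not part of the official procedure, and <your_domain_name> is a placeholder. Any valid SSL certificate for the domain works.
sudo apt-get update && sudo apt-get install -y certbot
# issue a certificate for the domain (port 80 must be reachable for the challenge)
sudo certbot certonly --standalone -d <your_domain_name>
# the certificate and key are written under /etc/letsencrypt/live/<your_domain_name>/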
IAM user and Role creation for S3 connectivity (AWS)
An AWS Identity and Access Management (IAM) user is an entity created in AWS to represent the person or application that uses it to interact with AWS. A user in AWS consists of a name and credentials. An IAM user with administrator permissions is different from the AWS account root user. One has to create an IAM user with a suitable role to provide connectivity between EC2 and S3. The role should have list, read and write permissions.
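As a sketch of the above (assuming the AWS CLI is configured with administrator credentials; the user name cqube-s3-user is only an illustrative placeholder), the IAM user and its access keys can be created as follows. The restricted list/read/write policy is sketched in the IAM Policy section below.
aws iam create-user --user-name cqube-s3-user
# generate the access key and secret key used by cQube for S3 connectivity
aws iam create-access-key --user-name cqube-s3-user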
IAM user and Role creation for AZURE Storage Blob Container connectivity
An Azure Identity and Access Management (IAM) user is an entity created in Azure to represent the person or application that uses it to interact with Azure. A user in Azure consists of a name and credentials. An IAM user with administrator permissions is different from the Azure account administrator. One has to create an IAM user with a suitable role to provide connectivity between the Azure VM and the blob container. The role should have list, read and write permissions.
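A hedged sketch using the Azure CLI is given below; the values in angle brackets are placeholders, and the choice of the Storage Blob Data Contributor role is an assumption that satisfies the list, read and write requirement.
# assign the blob data role to the user or service principal on the storage account
az role assignment create \
  --assignee <user_or_app_object_id> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Storage/storageAccounts/<storage_account>"
# fetch the connection string and account details needed by cQube
az storage account show-connection-string --name <storage_account> --resource-group <resource_group>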
...
Create an IAM user
Assign an IAM policy to the user
Download the access key and secret key (AWS), or note the default connection string, account name, account key and container name (Azure)
IAM Policy:
Create an IAM policy from AWS IAM or Azure IAM
Provide access to list, read and write objects in the S3 bucket (AWS) or the Azure blob container (Azure); see the sketch below
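A minimal sketch of such a policy on the AWS side is given below; the bucket name, policy file name and policy name are placeholders, and the user name refers to the illustrative user from the earlier sketch. On the Azure side, the Storage Blob Data Contributor role assignment sketched earlier covers the same permissions.
cat > cqube-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<s3_bucket_name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<s3_bucket_name>/*"
    }
  ]
}
EOF
# attach the policy inline to the IAM user created earlier
aws iam put-user-policy --user-name cqube-s3-user --policy-name cqube-s3-list-read-write --policy-document file://cqube-s3-policy.json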
For Local SDC installation
If a user opts for local installation, enter storage_type as local. install.sh will then automatically install and configure the MinIO application on localhost, configure a default username and password, and create a MinIO bucket.
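Once install.sh finishes, the local MinIO setup can be verified with the MinIO client; the sketch below assumes mc is installed and that the MinIO API listens on port 9000, and the values in angle brackets are whatever install.sh configured.
mc alias set cqube-local http://localhost:9000 <minio_username> <minio_password>
# list the buckets; the bucket created by install.sh should appear here
mc ls cqube-local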
cQube Deployment Process for AWS
Step - 1: Connect to the cQube AWS EC2 instance
...
For Windows:
Download the .pem file which was generated while creating the EC2 instance
Use PuTTYgen to connect to the instance.
Refer to the following link to use PuTTYgen for connecting.
cQube Deployment Process for Azure
Step - 1: Connect to the cQube Azure VM instance
For Linux and macOS:
Download the .pem file which was generated while creating the Azure VM instance
Open the terminal and navigate to the folder where the .pem file has been downloaded
Then give read permission to the .pem file using the following command
sudo chmod 400 <path_to_the_pem_file>
Use the following command to connect to the instance
ssh -i <path_to_the_pem_file> <user_name>@<public_ip_of_the_instance>
For Windows:
Download the .pem file which was generated while creating the Azure VM instance
Use PuTTYgen to connect to the instance.
Refer to the following link to use PuTTYgen for connecting.
cQube Deployment Process for Local
Prerequisites to install cQube on a local machine (a quick verification sketch follows the list)
Ubuntu 22.04 (supported)
16 GB of System RAM (minimum requirement)
4 core CPU (minimum requirement)
Domain name (with SSL)
250 GB Storage
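A quick way to check these prerequisites on the target machine (a sketch; the expected values mirror the list above):
lsb_release -ds        # expect Ubuntu 22.04
free -g | grep Mem     # expect at least 16 GB of RAM
nproc                  # expect at least 4 CPU cores
df -h /                # expect roughly 250 GB of storage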
Step - 2: Clone the cqube-devops repository using the following command
...
state_name ( Enter the required state code by referring to the state list provided )
api_end_point ( Enter the URL on which cQube is to be configured )
storage_type ( Enter the storage_type as aws, azure or local. If the user opts for aws, you will be prompted to enter the following AWS S3 credentials and the S3 bucket will be created. If the S3 bucket already exists, you will be prompted to enter a unique S3 bucket name. These values are written to config.yml. )
s3_access_key
s3_secret_key
s3 bucket name
...
storage_type ( Enter the storage_type as local. MinIO will be installed and configured with a MinIO username and password, and a MinIO bucket will be created. These values are written to config.yml. )
...
read_only_db_user ( Enter the read-only postgres database username )
read_only_db_password ( Enter the read-only postgres password )
Step - 9: Once the config file is generated, a preview of the config file is displayed, followed by a question where the user gets an option to re-enter the configuration values by choosing yes. If no is selected, install.sh moves to the next section.
...
Step - 10: A preview of the program_selector.yml file is displayed, followed by a question where the user gets an option to enable or disable programs by choosing yes. If no is selected, install.sh moves to the next section.
...
Step - 11: Once the installation is completed, you will be prompted with the following messages and the required reference URLs.
...
The cQube network setup process is described in the block diagram below:
Microservices Details
...
Ingestion-ms: The ingestion-ms is used to upload the data of the events, datasets, dimensions, transformers and pipeline. All these APIs are used to ingest the data into cQube.
Spec-ms: The spec-ms is used to import the schema of the events, datasets, dimensions, transformers and pipeline. All these specs are defined by the cQube platform prior to ingesting the data into cQube. These specifications are derived by considering the KPIs as the indicator.
Generator-ms: The generator-ms is used to create the specs & transformers for the derived datasets. It performs the aggregation logic, updates data to the datasets based on the transformation, and updates the status of file processing.
Nifi-ms: Apache NiFi is used as a real-time integrated data logistics and simple event processing platform
Postgres-ms: Postgres microservice contains the schema and tables
Nginx-ms: It is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it across the upstream servers
Kong-ms: It is a lightweight API Gateway that secures, manages, and extends APIs and microservices.
Dashboard-ms: It consists of an Angular app and is used to visualize the datasets present in postgres-ms in the form of charts. At run time it requests spec-ms to fetch data from postgres-ms and loads it into the client side (browser)
Query_builder-ms: It consists of a backend API, which comprises the JWT, METRICS, QUERY and LASTMODIFIED APIs
JWT - it generates a JWT token to restrict access to the other APIs.
METRICS - it provides the menus for the navigation bar and the dashboard cards.
QUERY - this API is used for executing the SQL queries and is integrated with Dashboard-ms.
LASTMODIFIED - this API is used to fetch the last modified data from S3, Azure or MinIO (see the sketch below)
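For illustration only, the calling pattern looks roughly like the sketch below; the endpoint paths and request payload are hypothetical placeholders, not the documented query_builder-ms routes.
# obtain a JWT token (hypothetical route), then pass it to the restricted APIs
curl -X POST "https://<api_end_point>/<jwt_route>" -H "Content-Type: application/json" -d '{"username": "<user>", "password": "<password>"}'
curl "https://<api_end_point>/<metrics_route>" -H "Authorization: Bearer <jwt_token>"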
cQube Deployment Procedure
The install.sh file contains a shell script which runs the following shell scripts and ansible playbooks to set up cQube
...
state_name ( Enter the required state code by referring to the state list provided )
api_end_point ( Enter the URL on which cQube is to be configured )
storage_type ( Enter the storage type as aws, azure or local )
If storage_type is aws, enter the below variables:
s3_access_key
s3_secret_key
s3 bucket name
If storage_type is azure, enter the below variables:
azure connection string
azure account name
Azure account key
azure container name
If storage_type is local, the minio username, minio password and minio bucket will be created automatically:
minio username
minio password
minio bucket
MinIO can be accessed through its dashboard at the http://localhost:9001 endpoint; here you can see the minio bucket which is created by default, and the default username and password are both minioadmin
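Because MinIO is S3-compatible, the same bucket can also be listed from the command line; this is a sketch that assumes the AWS CLI is installed, the MinIO API is on port 9000, and the credentials are those configured by install.sh.
export AWS_ACCESS_KEY_ID=<minio_username>
export AWS_SECRET_ACCESS_KEY=<minio_password>
# list buckets on the local MinIO endpoint instead of AWS
aws --endpoint-url http://localhost:9000 s3 ls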
Optional_variables - The database credentials contain default values. If the user wishes to enter their own credentials, they should opt for yes when the question pops up; otherwise they can opt for no.
...
...
Note: If the repository is already cloned then the script will pull the updated code.
...
Ingestion-ms
Spec-ms
Generator-ms
Postgres-ms
Nifi-ms
Dashboard-ms
Query_builder-ms
Kong-ms
Nginx-ms
compose.yml:
An Ansible script triggers docker compose, which brings all the containers up to the running state.
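For reference, the effect of this step is roughly equivalent to running docker compose manually from the repository directory; this is a sketch of what the playbook automates, not a replacement for it.
sudo docker compose -f compose.yml up -d   # bring up all cQube containers in detached mode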
...
cQube Installed Successfully
We can check the running status of the containers by using the following command
sudo docker ps
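If any container is missing from the list or keeps restarting, its logs can be inspected as a first troubleshooting step:
sudo docker logs -f <container_name>   # follow the logs of a specific container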