Deployment Options
PrivateSaaS

Add A Remote Cluster

Bootstrap the remote cluster

OpsVerse supports deployments to remote clusters hosted and managed by the customer (also referred to as "Private SaaS" or "Remote"). This section has the steps to bootstrap a new remote cluster.

Once a remote cluster is bootstrapped, it can be used to deploy any of the OpsVerse apps.

Prerequisites

  • Check the kubeconfig context and make sure it is pointing to the right cluster (a quick set of check commands follows this list).
  • Ensure that the kubectl and helm binaries are installed.
  • (AWS only) Check that the private subnets associated with this EKS cluster have the tag kubernetes.io/role/internal-elb set to 1.
  • (AWS only) SSL cert: if the load balancer certificate is maintained in AWS ACM, send the associated ARN to the OpsVerse POC.
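A quick way to verify these prerequisites. The AWS subnet check assumes the subnets carry the standard kubernetes.io/cluster/<cluster_name> tag; the cluster name and region shown are the example values used later in this guide, so substitute your own.

Shell

# Confirm the kubeconfig context and that kubectl and helm are available
kubectl config current-context
kubectl version --client
helm version

# (AWS only) List the cluster's subnets along with their internal-elb role tag value
aws ec2 describe-subnets \
  --region us-west-2 \
  --filters "Name=tag:kubernetes.io/cluster/opsverse-eks-cluster,Values=shared,owned" \
  --query "Subnets[].{Subnet:SubnetId,InternalElb:Tags[?Key=='kubernetes.io/role/internal-elb']|[0].Value}"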

Please note that bootstrapping is a collaborative step and needs supervision from the OpsVerse team. Coordinate with them before proceeding further.

Install the bootstrap components

Argo CD is used as the remote agent to manage the cluster. Along with it, Bitnami Sealed Secrets is used to securely transfer secrets to the cluster. Run the following script to install these two components.

Curl


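The exact script URL and arguments are shared by the OpsVerse POC. The sketch below only illustrates the shape of the invocation; the URL and flag names are placeholders and assumptions, not the actual interface.

# Placeholder URL and assumed flag names -- use the exact command shared by your OpsVerse POC.
# The script installs Argo CD and the Bitnami Sealed Secrets controller into the cluster.
curl -sSL "<bootstrap-script-url>" | bash -s -- \
  --cluster-name "<cluster_name>" \
  --cluster-provider "<cluster_provider>" \
  --cluster-region "<cluster_region>" \
  --customer "<customer_name>" \
  --repo-username "<opsverse_repo_username>" \
  --repo-password "<opsverse_repo_password>" \
  --registry-username "<opsverse_registry_username>" \
  --registry-password "<opsverse_registry_password>"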
Values for the keys opsverse_repo_username, opsverse_repo_password, opsverse_registry_username, and opsverse_registry_password are specific to each customer. The credentials have a short-lived TTL (generally 7 days). Please reach out to the OpsVerse POC to get these values.

Substitute the placeholders (marked as <>) with the actual values.

The script is publicly accessible for review.

For instance, the following is the command for these example values:

  • cluster name: opsverse-eks-cluster
  • cluster_provider: aws
  • cluster_region: us-west-2
  • opsverse_repo_username: opsverse-user
  • opsverse_repo_password: !DontRememberPassword
  • opsverse_registry_username: opsverse-user
  • opsverse_registry_password: !DontRememberPassword
  • customer name: opsdemo

Shell


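Filled in with the example values above, and keeping the same assumed flag names from the earlier sketch, the command would look roughly like this (the passwords are single-quoted because they contain characters the shell treats specially):

curl -sSL "<bootstrap-script-url>" | bash -s -- \
  --cluster-name "opsverse-eks-cluster" \
  --cluster-provider "aws" \
  --cluster-region "us-west-2" \
  --customer "opsdemo" \
  --repo-username "opsverse-user" \
  --repo-password '!DontRememberPassword' \
  --registry-username "opsverse-user" \
  --registry-password '!DontRememberPassword'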
The output is expected to be something like this:

Shell


The above command generates a key pair in the remote cluster. Send the public key back to the OpsVerse POC.
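A minimal sketch for retrieving that public key, assuming the Sealed Secrets controller runs as sealed-secrets in the kube-system namespace (adjust the name and namespace to match your install):

Shell

# Fetch the Sealed Secrets public certificate and save it to a file to share with the OpsVerse POC
kubeseal --fetch-cert \
  --controller-name sealed-secrets \
  --controller-namespace kube-system > sealed-secrets-cert.pem

# Alternative: read the certificate straight from the controller's active key secret
kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key=active \
  -o jsonpath='{.items[0].data.tls\.crt}' | base64 -d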

Also, once Argo CD is fully up, it will automatically pull and deploy the following additional components (see the check after this list):

  • nginx-ingress controller
  • Jaeger, Prometheus and Victoria Metrics operators
  • OpsVerse agent
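One way to confirm these have been registered, assuming Argo CD tracks them as Application resources in the argocd namespace:

Shell

kubectl get applications -n argocd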

Check the status

The status of the bootstrap components can be checked with the following commands:

Text

Shell


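A minimal set of checks, assuming Argo CD was installed into the argocd namespace and the Sealed Secrets controller into kube-system (adjust the namespaces and labels to match your install):

# Argo CD components should all reach the Running state
kubectl get pods -n argocd

# Sealed Secrets controller
kubectl get pods -n kube-system -l app.kubernetes.io/name=sealed-secrets

# Helm releases created during bootstrap
helm list --all-namespaces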
Enable the Argo CD UI and check the apps

Running the following command will make the Argo CD UI accessible on https://localhost:8001

Shell


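A minimal sketch, assuming the Argo CD API server service is named argocd-server in the argocd namespace:

kubectl port-forward svc/argocd-server -n argocd 8001:443

Leave this running and open https://localhost:8001 in a browser.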
Use admin as the username, and reach out to the OpsVerse POC for the default password.

Deploy the observability stack

The deployment is performed by an OpsVerse admin using the inputs below, which are provided by the customer.

Input

The following details are required for deploying the observability stack:

  • DNS Names
  • Name of the object storage bucket to be used for log storage (e.g., S3 bucket, GCS bucket, or Azure storage container)
  • ARN of the role with access to this S3 bucket (or GCP IAM Service Account or Azure storage account key)

Deployment

This is done remotely by the OpsVerse admin, who pushes the deployment configs to the GitHub repo polled by the Argo CD agent.

DNS entries

Find out the host name of the nginx-ingress LoadBalancer using the following command:

Shell


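A sketch, assuming the controller was deployed into the ingress-nginx namespace; the EXTERNAL-IP column of the LoadBalancer service holds the host name:

kubectl get svc -n ingress-nginx

# If the namespace differs, list every LoadBalancer service instead
kubectl get svc --all-namespaces | grep LoadBalancer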
Set the above host name as the CNAME record target for all the DNS names identified earlier.
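Once the records propagate, each DNS name should resolve to the load balancer host name. For example (the DNS name below is hypothetical):

Shell

dig +short observe.opsdemo.example.com CNAME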

Access Grafana

Access the Grafana URL in a browser. SSO can be used to log in to Grafana. By default, SSO-based users are granted the Viewer permission in Grafana; this permission can be changed by logging in as the admin user. To find the admin user's password, run the following command:

Shell


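A sketch, assuming the Grafana admin password is stored in a Kubernetes secret named grafana under the key admin-password in the namespace where the stack was deployed (the secret name, key, and namespace placeholder below are assumptions; adjust to match your deployment):

kubectl get secret grafana -n <observability-namespace> \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo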
Collect telemetry and start observing

At this point, your observability backend is fully ready to receive telemetry data. Follow the steps under the collection section to collect telemetry from your infrastructure.