Deployment Options
Self Hosted
Cluster Requirements
To install OpsVerse instances onto your own cloud using the self-hosted model, a Kubernetes (K8s) cluster is required in your cloud account. Most customers run a managed Kubernetes service (e.g., EKS, AKS, or GKE on AWS, Azure, or GCP, respectively), and that is what the examples below show. However, any Kubernetes cluster will work as long as it meets the requirements below. This page lists the requirements and walks through examples; in this case, these are the validations run against ObserveNow.

Requirements

Cluster version and size:

- Kubernetes version: v1.25.x <= version <= v1.30.x
- Minimum of 3 worker nodes (with at least 2 vCPUs and 8 GB RAM each)
- VPC configured with a CIDR block of /21 or a smaller prefix (i.e., /21, /20, /19, ...) to ensure there are at least 2048 IPs available for the cluster (a /21 provides 2^(32-21) = 2048 addresses)

Cluster Creation

In general, the following are needed in a cluster (irrespective of the provider) that runs any of the apps offered by OpsVerse:

- Networking and security resources: the creation of network resources (VPC in AWS and GCP / VNet in Azure, and other appropriate resources like subnets, gateways, route tables, certificate manager, etc.) is crucial for creating a secure and well-connected infrastructure that runs all the apps smoothly.
- EKS cluster: specify the cluster and its configuration, such as name, K8s version, node group (name, OS, type of OS), network and security configs, etc.
- Object storage buckets: object storage buckets are a type of storage service provided by cloud providers (called S3 in AWS, Google Cloud Storage (GCS) in GCP, and Azure Blob Storage in Azure). They are designed to store and retrieve vast amounts of unstructured data in the form of objects. Object storage is a must for OpsVerse's ObserveNow to function properly.
- IAM resources: IAM (Identity and Access Management) resources are components used to manage secure access to the services and resources provided by the cloud providers. They allow admins to control who can access the cloud infrastructure and what actions a user or automated bot account can perform. This is needed because OpsVerse's ObserveNow frequently talks to object storage buckets.
- Access to object storage: IAM should be set up in such a way that the pods running in the cluster have access to the object storage. This is crucial for OpsVerse's ObserveNow, as the app relies on object storage for log storage/retrieval and backup operations.
  - AWS: create an IAM role and a policy
  - GCP: create an IAM service account that binds to Workload Identity (see the sketch after this list)
  - Azure: create a storage account key via the Azure portal to access the storage container
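The AWS role and policy are covered in detail in the Terraform walkthrough below. For GCP, the Workload Identity binding mentioned above generally looks like the following. This is only a minimal, illustrative sketch: the project, bucket, namespace, and service-account names are placeholders rather than OpsVerse defaults, and your OpsVerse customer success rep will confirm the exact setup for your account.

```hcl
# Illustrative sketch only: bind a Kubernetes service account to a GCP service
# account via Workload Identity, and grant that account access to a GCS bucket.
# All names below are placeholders/assumptions.
resource "google_service_account" "opsverse_observenow" {
  account_id   = "opsverse-observenow"
  display_name = "OpsVerse ObserveNow object storage access"
}

# Allow pods running as the named Kubernetes service account to impersonate the GCP service account
resource "google_service_account_iam_member" "workload_identity_binding" {
  service_account_id = google_service_account.opsverse_observenow.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:<your-gcp-project>.svc.id.goog[<namespace>/<k8s-service-account>]"
}

# Grant the GCP service account read/write access to the GCS bucket
resource "google_storage_bucket_iam_member" "bucket_access" {
  bucket = "<your-gcs-bucket>"
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.opsverse_observenow.email}"
}
```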
AWS

To create an Amazon EKS (Elastic Kubernetes Service) cluster using Terraform, the following steps need to be followed:

- Set up the provider: providers are a logical abstraction of an upstream API. They are responsible for understanding API interactions and exposing resources. Configure the appropriate provider.
- Define the network and security resources.
- Specify the EKS cluster and its configuration, such as name, version, networking, and object storage buckets.
- Network configs: define the configs to create the VPC, subnets, IGW, NAT, route tables, and other appropriate resources.
- Security configs: define the configs to create IAM and the certificate manager.

Here is a Terraform snippet that has all the configs to create the resources. The Terraform code snippets used in these examples can also be found at https://github.com/opsverseio/private-saas.

```hcl
// Creates one VPC in at least 2 availability zones, with multiple subnets per
// availability zone (at least 1 public subnet and 'n' private subnets)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.1"

  name = "<vpc-name>"
  cidr = "<vpc-cidr>"

  azs             = "<vpc-availability-zones>"
  private_subnets = "<cidr-private-subnet>"
  public_subnets  = "<cidr-public-subnet>"

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "terraform"   = "true"
    "environment" = "opsverse-cluster"
  }

  private_subnet_tags = {
    "terraform"   = "true"
    "environment" = "opsverse-cluster"
  }
}
```

Object Storage Bucket (S3)

This step creates an S3 bucket for ObserveNow to store the logs and the backups. Here is a Terraform snippet that has all the configs to create the resource:

```hcl
module "s3_bucket_opsverse" {
  source = "./modules/s3"

  bucket_name = "opsverse-bucket"
  bucket_tags = {
    name        = "opsverse-bucket"
    environment = "production"
  }
}

// filename: ./modules/s3/main.tf
resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name
  acl    = var.public_access ? "public-read" : "private"

  tags = merge(var.bucket_tags)
}

resource "aws_s3_bucket_public_access_block" "public_access_policy" {
  bucket = aws_s3_bucket.bucket.id

  block_public_acls       = var.public_access ? false : true
  block_public_policy     = var.public_access ? false : true
  ignore_public_acls      = var.public_access ? false : true
  restrict_public_buckets = var.public_access ? false : true
}

// filename: ./modules/s3/variables.tf
variable "bucket_name" {}
variable "bucket_tags" {
  type    = map(string)
  default = {}
}
variable "public_access" {
  default = false
}
```

Note: it is recommended to create the S3 bucket in the same region as the cluster.

IAM (Identity and Access Management) Role Creation

This step creates a role for the ObserveNow instance (specifically, the Loki pods) to access the S3 bucket to store and retrieve the logs. The required role (trust policy) is as follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<YOURACCOUNT>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<YOUREKSCLUSTERIDPROVIDER>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.<REGION>.amazonaws.com/id/<YOUREKSCLUSTERIDPROVIDER>:sub": "system:serviceaccount:*"
        }
      }
    }
  ]
}
```

Configure IAM Policy for the Role / Object Storage Access

This step defines an IAM policy such that the pods in the cluster can access the S3 bucket to store/retrieve the logs and backup files. Here is a sample policy to attach to the created role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<YOURBUCKETNAME>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<YOURBUCKETNAME>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```

Here is a Terraform snippet that has all the configs to create the resource:

```hcl
resource "aws_iam_role" "iam_for_loki_pods" {
  name = "eks-opsverse-s3-pod-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.opsverse_eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${replace(module.opsverse_eks_cluster.oidc_provider_arn, "${element(split("/", module.opsverse_eks_cluster.oidc_provider_arn), 0)}/", "")}:sub": "system:serviceaccount:*"
        }
      }
    }
  ]
}
EOF
}

resource "aws_iam_policy" "loki_pod_permissions" {
  name = "opsverse-eks-pod-permissions"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "loki_pod_permissions" {
  role       = aws_iam_role.iam_for_loki_pods.name
  policy_arn = aws_iam_policy.loki_pod_permissions.arn
}

output "loki_pod_role_arn" {
  value = aws_iam_role.iam_for_loki_pods.arn
}
```
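The `loki_pod_role_arn` output above is consumed on the Kubernetes side via IRSA (IAM Roles for Service Accounts): pods pick up the role through a service account annotated with `eks.amazonaws.com/role-arn`. The snippet below is only an illustrative sketch using the Terraform kubernetes provider; the namespace and service-account names are hypothetical placeholders, since the actual service accounts are typically created as part of the ObserveNow installation.

```hcl
# Illustrative only: how an IAM role ARN is attached to a Kubernetes service
# account for IRSA. Assumes a configured "kubernetes" provider pointing at the
# new cluster; namespace and service-account names are placeholders.
resource "kubernetes_service_account" "loki" {
  metadata {
    name      = "loki"        # hypothetical service-account name
    namespace = "observenow"  # hypothetical namespace

    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.iam_for_loki_pods.arn
    }
  }
}
```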
Note: when you create your EKS cluster, you can set `enable_irsa = "true"` in the Terraform to make sure you have an IAM OpenID Connect (OIDC) provider for your EKS cluster.

EKS Cluster Creation

This step creates a new EKS cluster that has 1 worker node pool. Cluster configs such as name, K8s version, networking/security, and object storage buckets can be defined, and the EC2 instances that will act as worker nodes in the cluster are specified here.

```hcl
provider "aws" {
  region = var.aws_region
}

module "opsverse_eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.28"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  enable_irsa = "true"

  eks_managed_node_group_defaults = {
    disk_size = 50
  }

  eks_managed_node_groups = {
    user_group_one = {
      name           = "node-group-1"
      instance_types = ["m5a.xlarge"]
      ami_type       = "AL2_x86_64"
      capacity_type  = "ON_DEMAND"

      # By default, the module creates a launch template to ensure tags are propagated to instances, etc.,
      # so we need to disable it to use the default template provided by the AWS EKS managed node group service
      # use_custom_launch_template = false

      min_size     = 2
      max_size     = 4
      desired_size = 3

      root_volume_type = "gp2"
      key_name         = var.keypair_name
      subnet_ids       = module.vpc.private_subnets
    }
  }
}
```

After the successful cluster creation, please send the following details to your OpsVerse PoC:

- S3 bucket name
- ARN details

This will help OpsVerse set up ObserveNow and offer you a smooth experience when creating the OpsVerse apps.
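If you want Terraform to surface those details so they can be copied straight from `terraform output`, a minimal sketch is shown below. It assumes you add an outputs file to the local S3 module (the module in this guide does not declare outputs), so the file and output names here are illustrative additions rather than part of the existing snippets.

```hcl
// filename: ./modules/s3/outputs.tf (hypothetical addition)
output "bucket_name" {
  value = aws_s3_bucket.bucket.id
}

output "bucket_arn" {
  value = aws_s3_bucket.bucket.arn
}

// Root module (e.g., outputs.tf): values to share with your OpsVerse PoC
output "opsverse_bucket_name" {
  value = module.s3_bucket_opsverse.bucket_name
}

output "opsverse_bucket_arn" {
  value = module.s3_bucket_opsverse.bucket_arn
}
```

Running `terraform output` after `terraform apply` then prints these values, along with `loki_pod_role_arn` from the IAM snippet above.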
There are 2 options when creating the cluster.

Option 1: Use an already existing VPC and subnets and proceed with the cluster creation

If a VPC and subnets already exist in AWS, the same VPC and subnets can be used to create a cluster. Follow the steps below (a data-source alternative to hard-coding the IDs is sketched after this snippet).

Example Snippet

Note: this is a generic working example snippet that creates an EKS cluster (assuming a VPC and subnets already exist) with the following resources:

- EKS cluster with 1 worker node group that will have 3 nodes (4 vCPUs and 16 GB memory each)
- S3 bucket for Loki to store the logs and for the backups of VictoriaMetrics, ClickHouse, etc.
- IAM role to access the created S3 bucket
- IAM policy that defines the scope of the IAM role

Please feel free to add more granular resources (IGW/NAT gateways, route tables, ACM, etc.) as per your organization's security and networking standards.

```hcl
// aws/private-saas/modules/s3/main.tf
resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name
  acl    = var.public_access ? "public-read" : "private"

  tags = merge(var.bucket_tags)
}

resource "aws_s3_bucket_public_access_block" "public_access_policy" {
  bucket = aws_s3_bucket.bucket.id

  block_public_acls       = var.public_access ? false : true
  block_public_policy     = var.public_access ? false : true
  ignore_public_acls      = var.public_access ? false : true
  restrict_public_buckets = var.public_access ? false : true
}

// aws/private-saas/modules/s3/variables.tf
variable "bucket_name" {}
variable "bucket_tags" {
  type    = map(string)
  default = {}
}
variable "public_access" {
  default = false
}

// aws/private-saas/opsverse-eks-iam/eks.tf
# Creates a 3-node EKS cluster. You may additionally want to
# add more subnets to span whichever networks you want.
# Add manage_aws_auth = "true" in case you do auth maps here too.
# Change cluster/module name to one that fits your org conventions.
provider "aws" {
  region = var.aws_region
}

module "opsverse_eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.28"

  // Need at least 2 AZs for EKS to create the cluster
  # Uncomment this if a customer already has a VPC and subnets
  subnet_ids = [
    "${var.subnet_ids[0]}",
    "${var.subnet_ids[1]}",
    "${var.subnet_ids[2]}",
  ]
  vpc_id = "${var.vpc_id}"

  enable_irsa = "true"

  eks_managed_node_group_defaults = {
    disk_size = 50
  }

  eks_managed_node_groups = {
    user_group_one = {
      name           = "node-group-1"
      instance_types = ["m5a.xlarge"]
      ami_type       = "AL2_x86_64"
      capacity_type  = "ON_DEMAND"

      # By default, the module creates a launch template to ensure tags are propagated to instances, etc.,
      # so we need to disable it to use the default template provided by the aws eks managed node group service
      # use_custom_launch_template = false

      min_size     = 2
      max_size     = 4
      desired_size = 3

      root_volume_type = "gp2"
      key_name         = var.keypair_name

      subnets = [
        "${var.subnet_ids[0]}",
        "${var.subnet_ids[1]}",
        "${var.subnet_ids[2]}"
      ]
    }
  }
}

// aws/private-saas/opsverse-eks-iam/iam.tf
# Creates a role for the Loki pods to access the pre-created S3 bucket
# for the Loki backend.
#
# Assumption: the bucket var.s3_bucket is already created in the same region.
#
# Note: if you changed the module name in eks.tf from "opsverse_eks_cluster", please
# update this script to replace "opsverse_eks_cluster".
resource "aws_iam_role" "iam_for_loki_pods" {
  name = "eks-opsverse-s3-pod-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.opsverse_eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${replace(module.opsverse_eks_cluster.oidc_provider_arn, "${element(split("/", module.opsverse_eks_cluster.oidc_provider_arn), 0)}/", "")}:sub": "system:serviceaccount:*"
        }
      }
    }
  ]
}
EOF
}

resource "aws_iam_policy" "loki_pod_permissions" {
  name = "opsverse-eks-pod-permissions"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "loki_pod_permissions" {
  role       = aws_iam_role.iam_for_loki_pods.name
  policy_arn = aws_iam_policy.loki_pod_permissions.arn
}

output "loki_pod_role_arn" {
  value = aws_iam_role.iam_for_loki_pods.arn
}

// aws/private-saas/opsverse-eks-iam/provider.tf
terraform {
  required_providers {
    aws = {
      # region = "us-west-2"
      source  = "hashicorp/aws"
      version = "~> 5.33.0"
    }
  }

  required_version = ">= 1.3"
}

// aws/private-saas/opsverse-eks-iam/s3.tf
module "s3_bucket_opsverse" {
  source = "./modules/s3"

  bucket_name = "opsverse-bucket"
  bucket_tags = {
    name        = "opsverse-bucket"
    environment = "production"
  }
}

// aws/private-saas/opsverse-eks-iam/variables.tf
variable "cluster_name" {}
variable "aws_region" {}
variable "keypair_name" {}
variable "s3_bucket" {}
variable "subnet_ids" {
  type = list
}
variable "vpc_id" {}
variable "aws_profile" {}

// aws/private-saas/opsverse-eks-iam/vars.tfvars
aws_profile  = "default"
aws_region   = "us-west-2"
cluster_name = "opsverse-eks-cluster"
s3_bucket    = "opsverse-bucket"
subnet_ids   = ["<subnet-id-1>", "<subnet-id-2>", "<subnet-id-3>"]
vpc_id       = "<vpc-id>"
keypair_name = "bastion"
```
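In the snippet above, the existing VPC and subnet IDs are passed in literally via vars.tfvars. If you would rather have Terraform look them up (for example, by tag), a data-source lookup is a possible alternative; the tag keys and values below are placeholders, so adjust the filters to whatever convention your existing network uses.

```hcl
# Optional alternative: discover an existing VPC and its private subnets by tag
# instead of hard-coding IDs in vars.tfvars. Tag keys/values are placeholders.
data "aws_vpc" "existing" {
  tags = {
    Name = "<your-existing-vpc-name>"
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }

  tags = {
    Tier = "private" # placeholder tag; use whatever marks your private subnets
  }
}

# These can then replace var.vpc_id and var.subnet_ids in eks.tf:
#   vpc_id     = data.aws_vpc.existing.id
#   subnet_ids = data.aws_subnets.private.ids
```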
Option 2: Create a new VPC and subnets and proceed with the cluster creation

If a VPC and subnets don't exist in AWS and have to be created from scratch, follow the steps below.

Example Snippet

Note: this is a generic working example snippet that creates an EKS cluster with the following resources:

- A VPC in at least 2 availability zones, with multiple subnets per availability zone (at least 1 public subnet and 'n' private subnets)
- EKS cluster with 1 worker node group that will have 3 nodes (4 vCPUs and 16 GB memory each)
- S3 bucket for Loki to store the logs and for the backups of VictoriaMetrics, ClickHouse, etc.
- IAM role to access the created S3 bucket
- IAM policy that defines the scope of the IAM role

Please feel free to add more granular resources (IGW/NAT gateways, route tables, etc.) as per your organization's security and networking standards.
```hcl
// aws/private-saas/modules/s3/main.tf
resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name
  acl    = var.public_access ? "public-read" : "private"

  tags = merge(var.bucket_tags)
}

resource "aws_s3_bucket_public_access_block" "public_access_policy" {
  bucket = aws_s3_bucket.bucket.id

  block_public_acls       = var.public_access ? false : true
  block_public_policy     = var.public_access ? false : true
  ignore_public_acls      = var.public_access ? false : true
  restrict_public_buckets = var.public_access ? false : true
}

// aws/private-saas/modules/s3/variables.tf
variable "bucket_name" {}
variable "bucket_tags" {
  type    = map(string)
  default = {}
}
variable "public_access" {
  default = false
}

// aws/private-saas/opsverse-eks-iam/eks.tf
# Creates a 3-node EKS cluster. You may additionally want to
# add more subnets to span whichever networks you want.
# Add manage_aws_auth = "true" in case you do auth maps here too.
# Change cluster/module name to one that fits your org conventions.
provider "aws" {
  region = var.aws_region
}

module "opsverse_eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.28"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  enable_irsa = "true"

  eks_managed_node_group_defaults = {
    disk_size = 50
  }

  eks_managed_node_groups = {
    user_group_one = {
      name           = "node-group-1"
      instance_types = ["m5a.xlarge"]
      ami_type       = "AL2_x86_64"
      capacity_type  = "ON_DEMAND"

      # By default, the module creates a launch template to ensure tags are propagated to instances, etc.,
      # so we need to disable it to use the default template provided by the aws eks managed node group service
      # use_custom_launch_template = false

      min_size     = 2
      max_size     = 4
      desired_size = 3

      root_volume_type = "gp2"
      key_name         = var.keypair_name
      subnet_ids       = module.vpc.private_subnets
    }
  }
}

// aws/private-saas/opsverse-eks-iam/iam.tf
# Creates a role for the Loki pods to access the pre-created S3 bucket
# for the Loki backend.
#
# Assumption: the bucket var.s3_bucket is already created in the same region.
#
# Note: if you changed the module name in eks.tf from "opsverse_eks_cluster", please
# update this script to replace "opsverse_eks_cluster".
resource "aws_iam_role" "iam_for_loki_pods" {
  name = "eks-opsverse-s3-pod-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.opsverse_eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${replace(module.opsverse_eks_cluster.oidc_provider_arn, "${element(split("/", module.opsverse_eks_cluster.oidc_provider_arn), 0)}/", "")}:sub": "system:serviceaccount:*"
        }
      }
    }
  ]
}
EOF
}

resource "aws_iam_policy" "loki_pod_permissions" {
  name = "opsverse-eks-pod-permissions"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "loki_pod_permissions" {
  role       = aws_iam_role.iam_for_loki_pods.name
  policy_arn = aws_iam_policy.loki_pod_permissions.arn
}

output "loki_pod_role_arn" {
  value = aws_iam_role.iam_for_loki_pods.arn
}

// aws/private-saas/opsverse-eks-iam/network.tf
# Creates one VPC in at least 2 availability zones with multiple subnets per
# availability zone (at least 1 public subnet and 'n' private subnets)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.1"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = var.vpc_network_azs
  private_subnets = var.private_subnet_cidr
  public_subnets  = var.public_subnet_cidr

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "terraform"   = "true"
    "environment" = "opsverse-cluster"
  }

  private_subnet_tags = {
    "terraform"   = "true"
    "environment" = "opsverse-cluster"
  }
}

// aws/private-saas/opsverse-eks-iam/provider.tf
terraform {
  required_providers {
    aws = {
      # region = "us-west-2"
      source  = "hashicorp/aws"
      version = "~> 5.33.0"
    }
  }

  required_version = ">= 1.3"
}

// aws/private-saas/opsverse-eks-iam/s3.tf
module "s3_bucket_opsverse" {
  source = "./modules/s3"

  bucket_name = "opsverse-bucket"
  bucket_tags = {
    name        = "opsverse-bucket"
    environment = "production"
  }
}

// aws/private-saas/opsverse-eks-iam/variables.tf
variable "cluster_name" {}
variable "aws_region" {}
variable "keypair_name" {}
variable "s3_bucket" {}
variable "vpc_id" {}
variable "aws_profile" {}
variable "vpc_name" {}
variable "vpc_cidr" {}
variable "vpc_network_azs" {
  type = list
}
variable "private_subnet_cidr" {
  type = list
}
variable "public_subnet_cidr" {
  type = list
}

// aws/private-saas/opsverse-eks-iam/vars.tfvars
aws_profile  = "default"
aws_region   = "us-west-2"
cluster_name = "opsverse-eks-cluster"
s3_bucket    = "opsverse-bucket"
keypair_name = "bastion"

# This is relevant if the VPC and subnets have to be created by Terraform; ignore if these are already present
vpc_name            = "opsverse-vpc"
vpc_network_azs     = ["us-west-2a", "us-west-2b"]
vpc_cidr            = "10.242.0.0/16"
private_subnet_cidr = ["10.242.0.0/18", "10.242.64.0/18"]
public_subnet_cidr  = ["10.242.128.0/18", "10.242.192.0/18"]
```
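The notes above leave certificate management (ACM) to your organization's standards. If you do want Terraform to request a certificate alongside these resources, a minimal, hypothetical sketch is shown below; the domain name is a placeholder, and the DNS validation records still need to be created in a DNS zone you control.

```hcl
# Optional, illustrative only: request an ACM certificate via DNS validation.
# The domain below is a placeholder; replace it with a domain your org controls.
resource "aws_acm_certificate" "opsverse" {
  domain_name       = "*.opsverse.example.com"
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# The validation CNAMEs to create in your DNS zone are exposed via:
#   aws_acm_certificate.opsverse.domain_validation_options
```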
Please refer to this working example for more details: https://github.com/opsverseio/private-saas

GCP

Please work with your customer success rep to get Private SaaS enabled on your GCP account.

Azure

Please work with your customer success rep to get Private SaaS enabled on your Azure account.