# Add A Remote Cluster
## Bootstrap the remote cluster

OpsVerse supports deployments to remote clusters hosted and managed by the customer (also referred to as "private SaaS" or "remote"). This section covers the steps to bootstrap a new remote cluster. Once a remote cluster is bootstrapped, it can be used to deploy any of the OpsVerse apps.

### Prerequisites

- Check the context/kubeconfig and make sure that it is pointing to the right cluster.
- Ensure that the `kubectl` and `helm` binaries are installed.
- (AWS only) Check that the private subnets associated with this EKS cluster have the tag `kubernetes.io/role/internal-elb` set to `1`.
- (AWS only) SSL cert: if the load balancer cert is maintained in AWS ACM, send the associated ARN to the OpsVerse POC.

Please note that this needs supervision from the OpsVerse team and is a collaborative step. Please coordinate with the OpsVerse team before proceeding further. Quick ways to verify the first three prerequisites are sketched below.
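If you want to verify these prerequisites from a terminal, the following is a minimal sketch. The subnet IDs are placeholders to substitute with the private subnets of your EKS cluster:

```
# Confirm the kubeconfig context points at the intended cluster
kubectl config current-context

# Confirm the required binaries are installed
kubectl version --client
helm version

# (AWS only) Inspect the tags on the cluster's private subnets and look for
# kubernetes.io/role/internal-elb = 1 in the output
aws ec2 describe-subnets \
  --subnet-ids <private-subnet-id-1> <private-subnet-id-2> \
  --query 'Subnets[].{ID:SubnetId,Tags:Tags}'
```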
## Install the bootstrap components

Argo CD is used as the remote agent to manage the cluster. Along with it, Bitnami Sealed Secrets is used to securely transfer secrets to the cluster. Run the following script to install these two components:

```
curl -s https://raw.githubusercontent.com/devopsnow-deployments/tools/main/scripts/remote-cluster-bootstrap.sh | \
bash -s \
  cluster-name=<cluster-name> \
  cluster-type=remote \
  cluster-provider=<cloud-provider> \
  cluster-region=<cluster-region> \
  namespace=devopsnow \
  opsverse-repo-username=<repo-username> \
  opsverse-repo-password=<repo-password> \
  opsverse-registry-username=<registry-username> \
  opsverse-registry-password=<registry-password> \
  opsverse-application-sourcerepourl=https://github.com/devopsnow-deployments/<customer-name>.git
```

Values for the keys `opsverse-repo-username`, `opsverse-repo-password`, `opsverse-registry-username`, and `opsverse-registry-password` are custom values for each customer. These credentials have a short-lived TTL (generally 7 days); please reach out to your OpsVerse POC to get them. Substitute the placeholders (marked as `<>`) with the actual values. The script is publicly accessible for review.

For instance, the following is the command if the cluster name is `opsverse-eks-cluster`, the cluster provider is `aws`, the cluster region is `us-west-2`, the OpsVerse repo username is `opsverse-user`, the OpsVerse repo password is `!dontrememberpassword`, the OpsVerse registry username is `opsverse-user`, the OpsVerse registry password is `!dontrememberpassword`, and the customer name is `opsdemo`:

```
curl -s https://raw.githubusercontent.com/devopsnow-deployments/tools/main/scripts/remote-cluster-bootstrap.sh | \
bash -s \
  cluster-name=opsverse-eks-cluster \
  cluster-type=remote \
  cluster-provider=aws \
  cluster-region=us-west-2 \
  namespace=devopsnow \
  opsverse-repo-username="opsverse-user" \
  opsverse-repo-password="!dontrememberpassword" \
  opsverse-registry-username="opsverse-user" \
  opsverse-registry-password="!dontrememberpassword" \
  opsverse-application-sourcerepourl=https://github.com/devopsnow-deployments/opsdemo.git
```

The output is expected to be something like this:

```
Validating input arguments...
All required arguments are present. Continuing...
Installing ArgoCD CRD...
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io configured
Installing the bootstrap components to the namespace devopsnow...
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: <redacted>
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: <redacted>
Release "remote-bootstrap-now" does not exist. Installing it now.
NAME: remote-bootstrap-now
LAST DEPLOYED: Tue Apr 23 19:56:18 2024
NAMESPACE: devopsnow
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** OpsVerse Remote Bootstrap **

Cluster bootstrap has been completed successfully.
You can now register this cluster as a deployment target for OpsVerse.

Some important links:
  Admin console: <redacted>
  Docs:          https://docs.opsverse.io
  Website:       https://opsverse.io

Waiting for sealed-secrets component to create the key pair...
Please send the following public key (base64-encoded) back to OpsVerse:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t... (base64-encoded certificate, truncated)
```

The above command generates a key pair in the remote cluster. Send the public key back to the OpsVerse POC.
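If the bootstrap output is lost, the Sealed Secrets public certificate can usually be re-fetched with the `kubeseal` CLI. This is a sketch under the assumption that `kubeseal` is installed locally and that the controller name and namespace below match what the bootstrap script deployed (the name is inferred from the pod listing further down this page):

```
# Fetch the Sealed Secrets public certificate and base64-encode it
# (controller name/namespace are assumptions based on this bootstrap setup)
kubeseal --fetch-cert \
  --controller-name=remote-bootstrap-now-sealedsecrets \
  --controller-namespace=devopsnow \
  | base64 | tr -d '\n'
```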
Also, when Argo CD is fully up, it will automatically pull and deploy the following additional components:

- NGINX ingress controller
- Jaeger, Prometheus, and VictoriaMetrics operators
- OpsVerse agent

## Check the status

The status of the bootstrap components can be checked with the following command (the bootstrap components are installed into the `devopsnow` namespace):

```
$ kubectl get pods -n devopsnow
NAME                                                              READY   STATUS    RESTARTS   AGE
devopsnow-agent-agent-ct2zf                                       1/1     Running   0          136m
devopsnow-agent-agent-dmlq4                                       1/1     Running   0          136m
devopsnow-agent-agent-swq2s                                       1/1     Running   0          136m
devopsnow-agent-kubestatemetrics-5c9df46dcc-9jls4                 1/1     Running   0          10h
operators-now-jaeger-operator-869c5b7c6b-wwqnz                    1/1     Running   0          10h
operators-now-prometheus-o-operator-65bc895dbb-8bhbm              1/1     Running   0          10h
operators-now-vmop-d99957474-cd7kp                                1/1     Running   0          10h
remote-bootstrap-now-argocd-application-controller-b47b5c7qvjcv   1/1     Running   0          10h
remote-bootstrap-now-argocd-redis-576c9468d7-62qzf                1/1     Running   0          10h
remote-bootstrap-now-argocd-repo-server-86f6b58cc4-rn6rt          1/1     Running   0          10h
remote-bootstrap-now-argocd-server-7cfdbb6569-7x9qm               1/1     Running   0          10h
remote-bootstrap-now-sealedsecrets-787cfb47dc-6zxpm               1/1     Running   0          10h
```

## Enable the Argo CD UI and check the apps

Running the following command will make the Argo CD UI accessible on https://localhost:8001:

```
kubectl port-forward -n devopsnow svc/remote-bootstrap-now-argocd-server 8001:80
```

Use `admin` as the username, and please reach out to your OpsVerse POC for the default password.

## Deploy the observability stack

The deployment is performed by an OpsVerse admin, based on the inputs below provided by the customer.

### Input

The following details are required for deploying the observability stack:

- DNS names
- Name of the object storage bucket to be used for log storage (e.g., S3 bucket, GCS bucket, or Azure storage container)
- ARN of the role with access to this S3 bucket (or GCP IAM service account, or Azure storage account key)

### Deployment

This is done by the OpsVerse admin remotely, by pushing the deployment configs to the GitHub repo polled by the Argo CD agent.

### DNS entries

Find out the hostname of the NGINX ingress load balancer using the following command:

```
echo `kubectl get svc -n nginx-ingress nginx-ingress-now-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'`
```

Set the above hostname as a CNAME record for all the DNS entries identified in the previous step.

### Access Grafana

Access the Grafana URL in a browser. SSO can be used to log in to Grafana. By default, SSO-based users are granted the Viewer permission in Grafana. This permission can be changed by logging in as the admin user. To find out the admin user's password, run the following command:

```
echo `kubectl get secret -n <orgname> <instancename>-observe-grafana-secret -o jsonpath='{.data.admin-password}' | base64 -d`
```

## Collect telemetry and start observing

At this point, your observability backend is fully ready to receive telemetry data. Follow the steps under the [Collection](https://docs.opsverse.io/alha-overview) section to collect telemetry from your infrastructure.
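As a quick sanity check before onboarding collectors, you can confirm that the DNS entries resolve to the ingress load balancer and that Grafana responds on its health endpoint. The hostname below is a placeholder for one of the DNS names configured earlier:

```
# Confirm the CNAME record points at the ingress load balancer
dig +short CNAME grafana.<customer-domain>

# Grafana's health endpoint does not require authentication; expect HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' https://grafana.<customer-domain>/api/health
```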