# Organizing Observability Telemetry with Standard Metadata
## Introduction

Effective observability requires not just collecting telemetry data but organizing it in a way that makes it accessible and actionable. This document outlines our recommended approach to structuring observability telemetry using four key metadata dimensions: environment, product, service, and version. This consistent metadata framework enables powerful querying, correlation, and troubleshooting capabilities across your entire technology stack.

Implementing this metadata framework is optional but highly recommended. OpsVerse pre-packaged dashboards and alerts will work even without these metadata dimensions; however, adding them significantly enhances your observability capabilities and troubleshooting efficiency.

## Core Metadata Dimensions

The following are the recommended core metadata labels:

- **environment**: Identifies the deployment context where services run (for example: dev, staging, production, dr)
- **product**: Groups related services that together deliver a specific business capability or application
- **service**: Specifies the individual deployable component or microservice within a product
- **version**: Tracks the specific code release or build running in the environment

## Benefits of Standardized Metadata

### Cross-Signal Correlation

By applying consistent metadata across different telemetry types, you can correlate related signals:

- Connect high-level application metrics with underlying infrastructure metrics such as container/host CPU and memory
- Link log entries to specific transactions and traces
- Correlate application performance with user-experience metrics
- Group all related telemetry together for easy visualization and alerting

### Simplified Querying

Standardized metadata enables powerful filtering and aggregation (a concrete example query appears at the end of this document):

- Filter by environment to focus on production issues
- Compare metrics across different versions of the same service
- Aggregate telemetry across all services in a product
- Isolate problems to specific deployment environments or specific product teams

### Enhanced Troubleshooting

When investigating incidents, standardized metadata provides critical context:

- Quickly determine the affected environments, products, and services
- Compare behavior between working and non-working versions
- Identify the blast radius of issues across service boundaries
- Trace problems from user-facing symptoms to root causes

## Enabling Metadata in Different Environments

### Traces

#### OpenTelemetry Instrumentation

For services instrumented with OpenTelemetry, add `environment`, `product`, `service`, and `version` as additional attributes in your OpenTelemetry configuration. This applies to services running in both Kubernetes and non-Kubernetes environments. If these attributes are not included, the OpsVerse ingestion pipeline automatically adds them with `default` as the value.
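The exact mechanism depends on your language and SDK. The following is a minimal sketch, assuming the OpenTelemetry Python SDK and the attribute keys as named in this document; the service name, attribute values, and span name are illustrative:

```python
# Sketch: attaching the four metadata dimensions as OpenTelemetry resource
# attributes, so they are stamped onto every span this SDK emits.
# (Hypothetical values; exporter/processor configuration omitted. Adjust keys
# if your pipeline expects OpenTelemetry semantic-convention names such as
# service.name or deployment.environment.)
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "environment": "production",
    "product": "payment-platform",
    "service": "payment-api",
    "version": "v1.2.3",
})

trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer("payment-api")

with tracer.start_as_current_span("charge-card"):
    pass  # business logic here
```

Most OpenTelemetry SDKs also honor the `OTEL_RESOURCE_ATTRIBUTES` environment variable (for example, `OTEL_RESOURCE_ATTRIBUTES=environment=production,product=payment-platform,service=payment-api,version=v1.2.3`), which lets you set the same attributes without code changes.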
### Metrics and Logs

#### Kubernetes-Based Services

For services running in Kubernetes, add the metadata as Kubernetes labels on your pods. OpsVerse agents will automatically detect these labels and include them in collected metrics.

Example pod specification:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-service
  labels:
    environment: "production"
    product: "payment-platform"
    service: "payment-api"
    version: "v1.2.3"
```

#### Services Running on VMs and Standalone Servers

For services running on virtual machines or standalone servers, add the metadata as labels in the OpsVerse agent's configuration YAML file.

#### Other Integrations (Kafka, Databases, etc.)

For third-party services and integrations, follow the documentation for the specific OpsVerse integration. If you need assistance, contact the OpsVerse support team for guidance on specific integrations.
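Once these labels are flowing, the querying benefits described above become concrete. The following is a sketch in PromQL, assuming a Prometheus-compatible metrics backend and a hypothetical counter named `http_requests_total` that carries the four labels plus a `status` label:

```promql
# Per-service, per-version 5xx error rate across the payment-platform
# product in production (metric name and label values are hypothetical).
sum by (service, version) (
  rate(http_requests_total{environment="production",
                           product="payment-platform",
                           status=~"5.."}[5m])
)
```

A query like this isolates production, scopes to a single product, and breaks results down by service and version in one expression, which is exactly the kind of slicing the standardized labels are designed to enable.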