google-cloud-stackdriver

How to use the latency of a service deployed on Kubernetes to scale the deployment?

痴心易碎 submitted on 2020-02-04 03:50:05
Question: I have a simple Spring Boot application deployed on Kubernetes on GCP. The service is exposed on an external IP address, and I am load testing it with JMeter; it is just an HTTP GET request that returns True or False. I want to collect latency metrics over time and feed them to a HorizontalPodAutoscaler to implement a custom auto-scaler. How do I implement this?

Answer 1: Since you mentioned a custom auto-scaler, I would suggest this simple solution, which makes use of some of the tools which you …
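The answer above is truncated, but the usual first building block for a latency-driven custom auto-scaler is reading the latency time series out of Cloud Monitoring and then acting on it (patching the Deployment's replica count, or exposing the value through an external-metrics adapter so an HPA can consume it). Below is a minimal, hypothetical Java sketch of the read side only; the project id and the load-balancer latency metric type are assumptions and must match your own setup.

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

public class LatencyProbe {
  public static void main(String[] args) throws Exception {
    try (MetricServiceClient client = MetricServiceClient.create()) {
      long now = System.currentTimeMillis();
      ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
          .setName(ProjectName.of("my-gcp-project").toString())        // assumed project id
          // Assumed metric: HTTP(S) load balancer latency; swap in whatever records your service's latency.
          .setFilter("metric.type=\"loadbalancing.googleapis.com/https/total_latencies\"")
          .setInterval(TimeInterval.newBuilder()
              .setStartTime(Timestamps.fromMillis(now - 60_000))       // last minute
              .setEndTime(Timestamps.fromMillis(now)))
          .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
          .build();
      for (TimeSeries ts : client.listTimeSeries(request).iterateAll()) {
        if (ts.getPointsCount() > 0) {
          System.out.println(ts.getMetric().getLabelsMap() + " -> " + ts.getPoints(0).getValue());
        }
      }
      // A custom auto-scaler would compare this latency to a target and then either
      // patch the Deployment's replicas or publish the value for an HPA to read.
    }
  }
}
```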

How to call the Google time series monitoring API with an SSLSocketFactory

☆樱花仙子☆ submitted on 2020-01-25 09:37:25
Question: I'm getting the SSL error below while fetching the time series list from the Google Monitoring API. Is there any way to add a custom SSL factory or custom trust material to the Google monitoring client so it can make a secure call?

ListTimeSeriesResponse response = mServiceClient.listTimeSeriesCallable().call(request);

sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
    at sun…
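One way to hand the generated gRPC client custom trust material (the truncated answer does not show this, so treat it as a sketch) is to register a channel configurator on the transport provider and give Netty an SslContext built from the CA certificate your proxy presents. This assumes the client library uses the shaded grpc-netty transport, and the certificate path is a placeholder.

```java
import com.google.api.gax.grpc.InstantiatingGrpcChannelProvider;
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.cloud.monitoring.v3.MetricServiceSettings;
import io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder;
import io.grpc.netty.shaded.io.netty.handler.ssl.SslContext;
import java.io.File;

public class CustomTrustMonitoringClient {
  public static void main(String[] args) throws Exception {
    // Trust the CA that signs the certificate presented to the client (e.g. a corporate proxy CA).
    SslContext sslContext = GrpcSslContexts.forClient()
        .trustManager(new File("/path/to/corporate-ca.pem"))   // placeholder path
        .build();

    InstantiatingGrpcChannelProvider channelProvider =
        MetricServiceSettings.defaultGrpcTransportProviderBuilder()
            .setChannelConfigurator(builder ->
                ((NettyChannelBuilder) builder).sslContext(sslContext))
            .build();

    MetricServiceSettings settings = MetricServiceSettings.newBuilder()
        .setTransportChannelProvider(channelProvider)
        .build();

    try (MetricServiceClient mServiceClient = MetricServiceClient.create(settings)) {
      // mServiceClient.listTimeSeriesCallable().call(request) as in the question.
    }
  }
}
```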

Stackdriver Monitoring API throwing an error while getting time series metrics data

こ雲淡風輕ζ submitted on 2020-01-25 06:46:12
Question: I'm getting the error "unable to find valid certification path" from this piece of code:

MetricServiceClient mServiceClient = MetricServiceClient.create();
ListTimeSeriesResponse response = mServiceClient.listTimeSeriesCallable().call(request);

Error stack trace:

java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    ... 3 more
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
    at com.google.api.gax.rpc…
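Since this is the same "unable to find valid certification path" failure, a coarser workaround is to make sure the JVM's default trust store contains the missing CA, either by importing it with keytool or by pointing the javax.net.ssl.trustStore properties at a prepared store before the client is created. A minimal sketch, assuming the gRPC transport ends up using the JDK's default trust manager; the path and password are placeholders.

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;

public class TrustStoreBootstrap {
  public static void main(String[] args) throws Exception {
    // A trust store prepared in advance, e.g.:
    //   keytool -importcert -alias proxy-ca -file proxy-ca.pem -keystore cacerts-with-proxy-ca
    System.setProperty("javax.net.ssl.trustStore", "/path/to/cacerts-with-proxy-ca"); // placeholder
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");               // placeholder

    try (MetricServiceClient mServiceClient = MetricServiceClient.create()) {
      // mServiceClient.listTimeSeriesCallable().call(request) should now be able to
      // complete the TLS handshake, because the missing CA is trusted.
    }
  }
}
```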

Stackdriver-trace on Google Cloud Run failing, while working fine on localhost

ⅰ亾dé卋堺 submitted on 2020-01-24 21:01:47
Question: I have a Node server running on Google Cloud Run, and I want to enable Stackdriver tracing. When I run the service locally, I am able to see the traces in GCP. However, when I run the service on Google Cloud Run, I get an error:

"@google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."

I made sure that the service account has the tracing agent role.

How to export previous logs in Stackdriver

孤人 submitted on 2020-01-12 14:07:49
Question: I have a log in Stackdriver that records every request that goes into my API and fails, and I want to write a script that counts the number of times each error message appears. The problem is that the export feature in Stackdriver V2 only lets me sink upcoming entries, but I only care about the log entries that already live in the log. Is there a way to download the complete log from Stackdriver?

Answer 1: You can now do this from the gcloud CLI tool, with gcloud logging read: https://cloud…
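The answer points at gcloud logging read, which reads entries that are already stored. As an alternative sketch (not part of the quoted answer), the same historical entries can be pulled and counted with the google-cloud-logging Java client; the project, log name, filter, and timestamp below are assumptions.

```java
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import java.util.HashMap;
import java.util.Map;

public class CountPastErrors {
  public static void main(String[] args) throws Exception {
    // Assumed log name and lookback window; adjust to match your API's log.
    String filter = "logName=\"projects/my-gcp-project/logs/my-api\""
        + " AND severity>=ERROR"
        + " AND timestamp>=\"2020-01-01T00:00:00Z\"";
    Map<String, Integer> counts = new HashMap<>();
    try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
      for (LogEntry entry : logging.listLogEntries(EntryListOption.filter(filter)).iterateAll()) {
        String message = String.valueOf(entry.getPayload().getData());
        counts.merge(message, 1, Integer::sum);
      }
    }
    counts.forEach((message, n) -> System.out.println(n + "  " + message));
  }
}
```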

How do I coalesce Stackdriver logs/sinks into one BigQuery project/dataset?

眉间皱痕 submitted on 2019-12-24 05:37:10
Question: Setting up Stackdriver log sinks to BigQuery is straightforward. However, I have lots of projects, and instead of each export sink going to its corresponding project, I'd like to coalesce the logs from all my projects into one dedicated project. The Stackdriver sink configuration doesn't appear to let me select a different project to send the logs to. How do I select a different project/dataset?

Answer 1: You need to select the 'Custom destination' option. This will allow you to plug…
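To sketch the same idea outside the console UI (hedged, since the quoted answer is cut off): a sink created in each source project can point at a BigQuery dataset that lives in the central project, and the sink's writer identity is then granted write access on that dataset. The project names, sink name, and filter below are placeholders.

```java
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Sink;
import com.google.cloud.logging.SinkInfo;
import com.google.cloud.logging.SinkInfo.Destination.DatasetDestination;

public class CreateCrossProjectSink {
  public static void main(String[] args) throws Exception {
    // The sink lives in the source project; the destination dataset lives in the central project.
    try (Logging logging = LoggingOptions.newBuilder()
        .setProjectId("source-project")                                        // placeholder
        .build().getService()) {
      SinkInfo sinkInfo = SinkInfo.newBuilder(
              "central-bq-sink",                                               // placeholder sink name
              DatasetDestination.of("central-logging-project", "all_project_logs")) // placeholders
          .setFilter("severity>=WARNING")                                      // placeholder filter
          .build();
      Sink sink = logging.create(sinkInfo);
      // Remember to grant the sink's writer identity (a service account shown on the
      // created sink) BigQuery Data Editor on the central dataset.
      System.out.println("Created sink " + sink.getName());
    }
  }
}
```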

GCP Stackdriver Logs-Based Metrics for custom payload value

感情迁移 submitted on 2019-12-20 03:54:14
Question: I've already created my filters with a specific value from the jsonPayload. I want to create a metric that plots the values of Process_time. Can anyone please help with how to create such a metric?

Answer 1: Click the "CREATE METRIC" button above the filter box and use jsonPayload.process_time as the Field name. The other attributes are described in the instructions at https://cloud.google.com/logging/docs/logs-based-metrics/distribution-metrics

Source: https://stackoverflow.com/questions/56821181
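For completeness, roughly the same distribution metric can be created through the API instead of the console, using a value extractor on jsonPayload.process_time. This is only a sketch against the Logging v2 GAPIC client (class names vary a little between client-library versions); the project id, metric name, filter, unit, and bucket layout are assumptions.

```java
import com.google.api.Distribution;
import com.google.api.MetricDescriptor;
import com.google.cloud.logging.v2.MetricsClient;
import com.google.logging.v2.LogMetric;

public class CreateProcessTimeMetric {
  public static void main(String[] args) throws Exception {
    try (MetricsClient client = MetricsClient.create()) {
      LogMetric metric = LogMetric.newBuilder()
          .setName("process_time")                                          // assumed metric name
          .setFilter("jsonPayload.process_time>0")                          // assumed filter
          .setValueExtractor("EXTRACT(jsonPayload.process_time)")
          .setMetricDescriptor(MetricDescriptor.newBuilder()
              .setMetricKind(MetricDescriptor.MetricKind.DELTA)
              .setValueType(MetricDescriptor.ValueType.DISTRIBUTION)
              .setUnit("ms"))                                               // assumed unit
          .setBucketOptions(Distribution.BucketOptions.newBuilder()
              .setExponentialBuckets(Distribution.BucketOptions.Exponential.newBuilder()
                  .setNumFiniteBuckets(64)
                  .setGrowthFactor(2.0)
                  .setScale(0.01)))                                         // assumed buckets
          .build();
      client.createLogMetric("projects/my-gcp-project", metric);            // assumed project id
    }
  }
}
```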

Set up “Stackdriver Kubernetes Monitoring” for AWS

拈花ヽ惹草 submitted on 2019-12-14 01:40:26
Question: Google Cloud Platform announced "Stackdriver Kubernetes Monitoring" at KubeCon 2018, and it looks awesome. I am an AWS user running a few Kubernetes clusters and was immediately envious, until I saw that it also supports AWS and on-prem: Stackdriver Kubernetes Engine Monitoring. This is where I am getting a bit lost. I cannot find any documentation to help me deploy the agents onto my Kubernetes clusters. The closest example I could find was here: Manual installation of Stackdriver support, …

StackDriver alert when there's no data

元气小坏坏 submitted on 2019-12-11 16:46:08
Question: I've set up an alerting policy in Stackdriver on the "instance/uptime" metric, to alert when it is less than 1 for 1 minute. Then I deleted the instance and got no alerts. Is that because in the following time window the data isn't 0 but rather absent, so no alerts are sent?

Answer 1: I reproduced your situation. If the instance is deleted, no alerts are generated; this is expected behavior, since the resource (VM) doesn't exist anymore. Uptime alerts are generated only when VM instances are up and…
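If the goal is to be notified when the metric stops reporting at all (which is what happens once the VM is deleted), the usual tool is a metric-absence condition rather than a threshold condition. The quoted answer doesn't spell this out, so treat the sketch below as an illustration; the project id, display names, and 5-minute window are assumptions, and notification channels are omitted.

```java
import com.google.cloud.monitoring.v3.AlertPolicyServiceClient;
import com.google.monitoring.v3.AlertPolicy;
import com.google.monitoring.v3.AlertPolicy.Condition;
import com.google.monitoring.v3.AlertPolicy.Condition.MetricAbsence;
import com.google.monitoring.v3.ProjectName;
import com.google.protobuf.Duration;

public class AbsenceAlertSketch {
  public static void main(String[] args) throws Exception {
    try (AlertPolicyServiceClient client = AlertPolicyServiceClient.create()) {
      // Fire when no uptime data arrives for 5 minutes.
      MetricAbsence absence = MetricAbsence.newBuilder()
          .setFilter("metric.type=\"compute.googleapis.com/instance/uptime\""
              + " AND resource.type=\"gce_instance\"")
          .setDuration(Duration.newBuilder().setSeconds(300))
          .build();
      AlertPolicy policy = AlertPolicy.newBuilder()
          .setDisplayName("uptime data stopped")                 // assumed name
          .setCombiner(AlertPolicy.ConditionCombinerType.AND)
          .addConditions(Condition.newBuilder()
              .setDisplayName("instance/uptime metric absent")   // assumed name
              .setConditionAbsent(absence))
          .build();
      client.createAlertPolicy(ProjectName.of("my-gcp-project"), policy);
    }
  }
}
```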