How do I enable Application Insights server telemetry on WebApi project that uses OWIN?

Backend · 3 answers · 454 views

一整个雨季 asked 2021-02-05 09:37

We are having a bunch of problems (read: long response times) with a couple of projects in production and wanted to see exactly what was happening on the server. I then proceeded

3 Answers
  • 2021-02-05 10:09

    This is an old question, but it was still in the top 3 search results for "web api application insights owin". After lots of searching, and not a lot of answers that didn't require us to write our own middleware or explicitly instrument everything, we came across an extension package that made things super simple:

    Here's the GitHub repository for it and the associated NuGet package.

    For those too lazy to look at the links, all that needed to be added was:

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseApplicationInsights();
    
            // rest of the config here...
        }
    }
    

    and add this to your ApplicationInsights.config:

    <TelemetryInitializers>
        <!-- other initializers.. -->
        <Add Type="ApplicationInsights.OwinExtensions.OperationIdTelemetryInitializer, ApplicationInsights.OwinExtensions"/>
    </TelemetryInitializers>
    
  • 2021-02-05 10:16

    Below is our implementation of an OWIN middleware for Application Insights.

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.DataContracts;
    using Microsoft.Owin;
    using Owin;

    /// <summary>
    /// Extensions to help adding middleware to the OWIN pipeline
    /// </summary>
    public static class OwinExtensions
    {
        /// <summary>
        /// Add Application Insight Request Tracking to the OWIN pipeline
        /// </summary>
        /// <param name="app"><see cref="IAppBuilder"/></param>
        public static void UseApplicationInsights(this IAppBuilder app) => app.Use(typeof(ApplicationInsights));
    
    }
    
    /// <summary>
    /// Allows for tracking requests via Application Insight
    /// </summary>
    public class ApplicationInsights : OwinMiddleware
    {
    
        /// <summary>
        /// Allows for tracking requests via Application Insight
        /// </summary>
        /// <param name="next"><see cref="OwinMiddleware"/></param>
        public ApplicationInsights(OwinMiddleware next) : base(next)
        {
        }
    
        /// <summary>
        /// Tracks the request and sends telemetry to application insights
        /// </summary>
        /// <param name="context"><see cref="IOwinContext"/></param>
        /// <returns></returns>
        public override async Task Invoke(IOwinContext context)
        {
            // Start Time Tracking
            var sw = new Stopwatch();
            var startTime = DateTimeOffset.Now;
            sw.Start();
    
            await Next.Invoke(context);
    
            // Send tracking to AI on request completion
            sw.Stop();
    
            var request = new RequestTelemetry(
                name: context.Request.Path.Value,
                startTime: startTime,
                duration: sw.Elapsed,
                responseCode: context.Response.StatusCode.ToString(),
                success: context.Response.StatusCode >= 200 && context.Response.StatusCode < 300
                )
            {
                Url = context.Request.Uri,
                HttpMethod = context.Request.Method
            };
    
            var client = new TelemetryClient();
            client.TrackRequest(request);
    
        }
    }
    
  • 2021-02-05 10:20

    AI uses an HttpModule to collect information on begin request and send it on end request. As described here, Owin/Katana uses middleware to execute logic at different stages. Since most of AI's auto-collection logic is internal, you cannot reuse it in your middleware. But you can instrument your code yourself: create a TelemetryClient in your code and start sending Requests, Traces, and Exceptions (as described here).
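    For example, a minimal sketch of that manual approach might look like the following. The class and method names (OrderService, ProcessOrder) are hypothetical placeholders; the TrackTrace, TrackRequest, and TrackException calls are the standard TelemetryClient API from the Application Insights SDK.

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.DataContracts;

    public class OrderService
    {
        // TelemetryClient is thread-safe; a single instance can be reused.
        private static readonly TelemetryClient Telemetry = new TelemetryClient();

        public void ProcessOrder(string orderId)
        {
            var startTime = DateTimeOffset.Now;
            var timer = Stopwatch.StartNew();
            try
            {
                // Send a trace for diagnostic context
                Telemetry.TrackTrace($"Processing order {orderId}", SeverityLevel.Information);

                // ... your business logic here ...

                timer.Stop();
                // Report the operation as a request, with duration and outcome
                Telemetry.TrackRequest("ProcessOrder", startTime, timer.Elapsed,
                    responseCode: "200", success: true);
            }
            catch (Exception ex)
            {
                // Report failures so they show up under Exceptions in the portal
                Telemetry.TrackException(ex);
                throw;
            }
        }
    }

    The same pattern (start a Stopwatch, run the work, then TrackRequest or TrackException) is what the middleware in the previous answer applies automatically to every incoming HTTP request.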
