Application logging is ubiquitous and invaluable for troubleshooting. Structured logging enables you to log formatted messages and the data fields separately so that you can see the messages but also filter on the data fields. Tracing takes this a step further, where you can correlate many log entries together as you follow a trace of execution through an application. Traces also include additional information about the execution process, such as the sequence of calls to dependencies and how long any given call may take.
Application Insights lets you see all of this data correlated together in an application. You can search for an error log and then see in the execution flow that the log entry was added right after a failed call to another service. Or you can see that a certain web request is slower than others because it spends a lot of time on many redundant data access calls.
OpenTelemetry is great: instead of adding calls to a vendor-specific tracing library in your code, you can use an open standard. However, the AppInsights support for it is still in preview, and until the exporter is fully working you really only get dependency tracking into AppInsights today. Write your code with that in mind: keep a lightweight abstraction around the places that need to call the ApplicationInsights SDK, and follow the progress of that effort.
To get started, we will reference some packages and open the namespaces needed to configure the application for logging and tracing. To keep this as an interactive session, it uses the AppInsights WorkerService package, which can instrument a simple executable. Most of these namespaces are needed only at your application's entry point so it can fully configure the .NET service provider to create a general-purpose ILogger as well as an AppInsights TelemetryClient for setting the boundaries of the trace.
#r "nuget: Microsoft.Extensions.Configuration"
#r "nuget: Microsoft.Extensions.Logging"
#r "nuget: Microsoft.Extensions.Logging.Console"
#r "nuget: Microsoft.ApplicationInsights"
#r "nuget: Microsoft.ApplicationInsights.WorkerService"
#r "nuget: Microsoft.Extensions.Logging.ApplicationInsights"
open System
open System.Diagnostics
open Microsoft.Extensions.DependencyInjection
open Microsoft.Extensions.Logging
open Microsoft.Extensions.Logging.ApplicationInsights
open Microsoft.ApplicationInsights
open Microsoft.ApplicationInsights.DataContracts
let appInsightsConnStr = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://eastus-8.in.applicationinsights.azure.com/"
let services = ServiceCollection()
// Lowers the default LogLevel to debug and filters anything below information level from AppInsights.
services.AddLogging(fun builder ->
    builder.SetMinimumLevel(LogLevel.Debug)
        .AddConsole()
        .AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Information)
        .AddFilter<ApplicationInsightsLoggerProvider>("Category", LogLevel.Information)
    |> ignore)
// Configures the telemetry client as needed when the logger is created.
services.AddApplicationInsightsTelemetryWorkerService(fun options -> options.ConnectionString <- appInsightsConnStr)
let serviceProvider = services.BuildServiceProvider()
// For the purposes of this script, the logger and telemetry client are resolved directly.
let logger = serviceProvider.GetRequiredService<ILogger<FSharp.Compiler.Interactive.InteractiveSession>>()
let telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>()
(* Application code goes here *)
logger.LogDebug "Shutting down... (5 second delay for flush)"
telemetryClient.Flush()
System.Threading.Thread.Sleep(TimeSpan.FromSeconds 5.)
logger.LogDebug "Done."
In a normal service you wouldn't shut down right after a request, and the telemetry client would have time to flush on its own. Since this is just an F# script, we flush explicitly and give it a few seconds before exiting.
Using the standard logging interface will result in traces sent to AppInsights with structured data in fields that can be used to search and filter traces. The log level is translated to a 'Severity Level' field.
// Log some structured and unstructured data
let beachTrip (family:string, beachItems:string list) =
    logger.LogInformation("{Family} heading to the beach", family)
    for item in beachItems do
        logger.LogInformation("Bringing {Item}", item)
    logger.LogError("It started raining")
    logger.LogInformation("Heading back inside")
beachTrip ("Brown", ["Shovel"; "Beach Toys"; "Chairs"; "Towels"])
These each appear as individual trace statements in AppInsights. You can also filter by the structured data passed to the ILogger instance, such as the Item field from this call:
logger.LogInformation("Bringing {Item}", item)
An application will often perform many operations concurrently which results in the traces from different operations interleaving. They can be grouped by starting an operation, which creates a telemetry context. This is a disposable resource and any log statements within that context will be grouped under that operation until it goes out of scope and is disposed.
let beachTripCorrelated (family:string, beachItems:string list) =
    use op = telemetryClient.StartOperation<RequestTelemetry>("Beach Trip")
    // Structured data on the request telemetry.
    op.Telemetry.Properties.Add("Family", family)
    logger.LogInformation("{Family} heading to the beach", family)
    for item in beachItems do
        logger.LogInformation("Bringing {Item}", item)
    logger.LogError("It started raining")
    logger.LogInformation("Heading back inside")
Now we can click on a specific error and get all of the telemetry - all the log statements - that are correlated with it. This gives us a nice picture of what happened in this operation right up until the error. You can also add structured data to the logging scope, which will be attached to every trace the logger emits in that scope, by using ILogger.BeginScope.
let beachTripCorrelated (family:string, beachItems:string list) =
    use op = telemetryClient.StartOperation<RequestTelemetry>("Beach Trip")
    op.Telemetry.Properties.Add("Family", family)
    // Add some structured data to all the logs
    use loggingScope =
        seq { "TripType", box "Beach" }
        |> readOnlyDict<string,obj>
        |> logger.BeginScope
    logger.LogInformation("{Family} heading to the beach", family)
    for item in beachItems do
        logger.LogInformation("Bringing {Item}", item)
    logger.LogError("It started raining")
    logger.LogInformation("Heading back inside")
In some cases, the operation needs to be disposed but later traces should still be correlated with it. This can be accomplished by saving the op.Telemetry.Id from the original operation, creating an Activity for the later operation, and then setting its parent to the original operation's Id.
let familyTelemetry = System.Collections.Concurrent.ConcurrentDictionary<string,string>()

let beachTripCorrelated (family:string, beachItems:string list) =
    use op = telemetryClient.StartOperation<RequestTelemetry>("Beach Trip")
    // Other logic snipped
    familyTelemetry.TryAdd(family, op.Telemetry.Id) |> ignore

let laterActivity (family:string, traceId:string) =
    use a = new Activity("Followup Actions")
    // This line sets the traceId from the other trace so this is a child activity.
    a.SetParentId(traceId) |> ignore
    use op = telemetryClient.StartOperation<RequestTelemetry>(a)
    op.Telemetry.Properties.Add("Family", family)
    // Use a URL that makes sense for your activity, many schemes are
    // here: https://en.wikipedia.org/wiki/List_of_URI_schemes
    op.Telemetry.Url <- Uri $"example://done-with-{family}"
    logger.LogInformation("Done with {Family}", family)
// First the operations happen and their telemetry operations are disposed, but the `Telemetry.Id` is saved.
[
    "Brown", ["Shovel"; "Beach Toys"; "Chairs"; "Towels"]
    "Smith", ["Chairs"; "Towels"; "Umbrella"]
    "Patterson", ["Towels"; "Sunscreen"]
]
|> List.map (fun trip -> async { beachTripCorrelated trip })
|> Async.Parallel
|> Async.RunSynchronously
|> ignore
// Now the later operations run and their telemetry is correlated.
familyTelemetry |> Seq.iter (fun kvp -> laterActivity(kvp.Key, kvp.Value))
And now each correlated trace will show two operations, the parent "Beach Trip" and the child "Followup Actions":
At this point, we've done quite a lot with just the standard logger and a telemetry operation. Most of the code can simply use the ILogger. You'll usually start the operation at the service boundary of your code, such as an API entry point, which is all that really needs awareness of the AppInsights SDK.
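One way to keep that boundary thin is a small helper that owns the operation and lets everything beneath it depend only on the ILogger. Here is a minimal sketch (the withRequestOperation helper and its shape are illustrative, not an SDK API), reusing the logger and telemetryClient defined earlier:

// Minimal sketch of a boundary helper: the only code that touches the AppInsights SDK.
// `withRequestOperation` is an illustrative name, not part of the SDK.
let withRequestOperation (operationName: string) (properties: (string * string) list) (handler: unit -> 'T) =
    use op = telemetryClient.StartOperation<RequestTelemetry>(operationName)
    for key, value in properties do
        op.Telemetry.Properties.Add(key, value)
    try
        handler ()
    with ex ->
        // Mark the request as failed so the error shows up on the operation itself.
        op.Telemetry.Success <- Nullable false
        logger.LogError(ex, "Unhandled error in {Operation}", operationName)
        reraise ()

// Code below the boundary only needs the ILogger.
withRequestOperation "Beach Trip" [ "Family", "Brown" ] (fun () ->
    logger.LogInformation("{Family} heading to the beach", "Brown"))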
Dependency tracking comes automatically for HTTP, AMQP, and database requests through instrumentation built into those libraries. That means that, without doing anything else, the calls your code makes will emit dependency traces. Dependency telemetry includes the result and the duration of each call. These are correlated automatically into the request operation scope along with the logging statements to give a detailed trace of the application's behavior.
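For calls that have no built-in instrumentation (a custom TCP protocol, a spawned process, and so on), you can record the dependency yourself with StartOperation<DependencyTelemetry>; the duration is captured when the operation is disposed. A minimal sketch, where callLegacyInventoryService is a hypothetical stand-in for whatever work you are timing:

// Minimal sketch: manually record a dependency call that has no automatic instrumentation.
// `callLegacyInventoryService` is a hypothetical stand-in for the real call being timed.
let callLegacyInventoryService () = System.Threading.Thread.Sleep 100

let checkInventory () =
    use dep = telemetryClient.StartOperation<DependencyTelemetry>("Check inventory")
    dep.Telemetry.Type <- "Legacy inventory protocol"
    dep.Telemetry.Target <- "inventory-host"
    try
        callLegacyInventoryService ()
        dep.Telemetry.Success <- Nullable true
    with _ ->
        dep.Telemetry.Success <- Nullable false
        reraise ()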
Dependency tracking is part of the distributed trace, but it is a very application-centric view. There may be multiple services involved in an overall operation, and each may create its own traces. A distributed trace shows the operation across all of these services and is enabled by passing a traceparent header when calling a dependency. The called service can include this as the parent of its trace so that AppInsights can correlate the traces together.
// Example here with a Suave web part, but this works just as well with any HTTP framework.
#r "nuget: Suave"
open Suave
open Suave.Successful

let tracingWebPart : WebPart =
    fun (ctx:HttpContext) ->
        async {
            use a = new Activity("Service: Order Lunch")
            // The 'traceparent' header is passed when the request is made in
            // the context of another operation.
            match ctx.request.header "traceparent" with
            | Choice1Of2 traceParent ->
                a.SetParentId(traceParent) |> ignore
            | _ -> ()
            use op = telemetryClient.StartOperation<RequestTelemetry>(a)
            op.Telemetry.Url <- ctx.request.url
            logger.LogInformation("Accepted lunch order.")
            return! OK $"Order {Guid.NewGuid()}." ctx
        }

// Start the server in the background; Suave's defaultConfig listens on http://127.0.0.1:8080.
startWebServerAsync defaultConfig tracingWebPart |> snd |> Async.Start
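The dependency tracking registered by the WorkerService package normally injects this header into outgoing HttpClient calls for you. If a caller is not instrumented, you can attach the header yourself from the current Activity, whose Id is the traceparent value when the default W3C id format is in use. A minimal sketch, assuming an Activity that has already been started (for example by StartOperation):

// Minimal sketch: manually propagate the W3C `traceparent` header from a started Activity.
// Assumes the activity has been started so its Id is populated.
let callWithTraceparent (activity: Activity) =
    task {
        use client = new Net.Http.HttpClient()
        use request = new Net.Http.HttpRequestMessage(Net.Http.HttpMethod.Get, "http://localhost:8080")
        // In the W3C id format, Activity.Id is the "00-<trace-id>-<span-id>-<flags>" traceparent value.
        request.Headers.Add("traceparent", activity.Id)
        let! response = client.SendAsync(request)
        return! response.Content.ReadAsStringAsync()
    }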
Now let's have our laterActivity
actually call that service to "order lunch":
// The HttpClient used to call the service; its requests are tracked as dependencies automatically.
let http = new System.Net.Http.HttpClient()

let laterActivity (family:string, traceId:string) =
    task {
        use a = new Activity("Followup Actions")
        a.SetParentId(traceId) |> ignore
        use op = telemetryClient.StartOperation<RequestTelemetry>(a)
        op.Telemetry.Properties.Add("Family", family)
        // All we added is the HTTP call, but it will be tracked automatically.
        let! order = http.GetStringAsync("http://localhost:8080")
        logger.LogInformation("Placed {Order}.", order)
        // Use a URL that makes sense for your activity, many schemes are
        // here: https://en.wikipedia.org/wiki/List_of_URI_schemes
        op.Telemetry.Url <- Uri $"example://done-with-{family}"
        logger.LogInformation("Done with {Family}", family)
    }
familyTelemetry
|> Seq.map (function KeyValue (family, traceId) -> laterActivity(family, traceId))
|> Seq.cast<Threading.Tasks.Task>
|> Array.ofSeq
|> System.Threading.Tasks.Task.WaitAll
Emitting telemetry is a necessary part of operating a service in production, as things will inevitably go wrong and you will want to piece together the flow of events that occurred. Application Insights makes great use of runtime features like a disposable context and support for the common ILogger interface to let you use the SDK with minimal impact on your code while gaining correlated - and even implicit - telemetry. This allows you to search for an error message and immediately pull up a fully correlated trace of the activities and logs, or look at slow requests and determine all the database and service calls that were made. The growing support for OpenTelemetry will only make this better as we are able to gather telemetry from more components that interact in the distributed systems we build today.