_From @StevenTaylor on October 5, 2018 5:50_
Documentation (https://docs.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling) suggests that ITelemetryProcessor can be used to filter/sample the telemetry before it is sent to App Insights.
But the implementation code is never called in our Function App.
The code to set up the builder is called, which looks like:
```c#
var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
builder.Use((next) => new AppInsightsFilter(next));
builder.Build();
```
But the 'Process' method implemented in the class is never called, the class looks like:
```c#
class AppInsightsFilter : ITelemetryProcessor
{
    private ITelemetryProcessor Next { get; set; }

    // Link processors to each other in a chain.
    public AppInsightsFilter(ITelemetryProcessor next)
    {
        this.Next = next;
    }

    public void Process(ITelemetry item)
    {
        // To filter out an item, just return
        if (!OKtoSend(item)) { return; }

        // Modify the item if required
        //ModifyItem(item);

        this.Next.Process(item);
    }

    private bool OKtoSend(ITelemetry item)
    {
        // try something!
        return (DateTime.Now.Second <= 20); // only send for the first 20 seconds of a minute
    }
}
```
PS. We have tried setting config values SamplingPercentage (App Insights config) and maxTelemetryItemsPerSecond (host.json) as low as possible to reduce the telemetry data, but there is still too much telemetry.
_Copied from original issue: Azure/Azure-Functions#981_
_From @roryprimrose on November 2, 2018 4:13_
This is a big issue for me as well. The amount of trace records being written to AI is large and increases the cost. Being able to filter out the unnecessary entries would save a lot of bandwidth and cost.
Curious whether the work around dependency injection ( azure/azure-functions-host#3736 ) will help with this or anything else? Tagging @brettsam as well, as he has tons of AI experience.
My scenario is that I want to remove trace messages written to AI, primarily from the storage extension while allowing custom metrics to be sent to AI. I've tried just about everything to get a resolution to this without success. I'm also using DI for my C# functions.
Here are some of the things I've found:
I get a TelemetryClient from the host in an extension at the first available opportunity. I modify it as per the code from @StevenTaylor. I found that the client did appear to have the processors registered, but the code was never executed.
I tried disabling AI logging of Information entries in config:
"Logging": {
"LogLevel": {
"Default": "Information"
},
"ApplicationInsights": {
"LogLevel": {
"Default": "Warning"
}
}
}
This wiped out all Information tracing to AI. I would prefer to be more targeted than that, but it solved the first problem. Unfortunately it also wiped out reporting of customMetrics to AI. I suspect this is because customMetric records are written with a trace level of Information. This was a little weird though because on reflection I realised I was sending custom metrics directly from the TelemetryClient rather than via any injected ILogger.
This is a great scenario, so it'd be interesting to walk through it here, and I can build these up on the wiki to help everyone else, too.
I'm guessing that the processor isn't working b/c the processor chain gets built here: https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/Extensions/ApplicationInsightsServiceCollectionExtensions.cs#L270. Once that happens, I believe it's immutable. If we wanted to allow custom processors, we'd need to expose some other builder mechanism that allowed you to plug in before we build. But we may be able to solve this with filters.
What are the specific messages that you're trying to silence? You can filter anything based on the category (or even the category prefix). So if you know what you want to let through and what you want to prevent, we should be able to craft the right filter. To see the category of a particular message, check the customDimensions.Category value in App Insights Analytics

@brettsam Yes, the categories between traces and customMetrics are different. I'll try using a config filter against the traces category.
@brettsam Nope, sorry, I was right the first time. My custom metrics, written directly via TelemetryClient, are being recorded with the same category used to write trace records from the queue trigger.
I have a queue trigger AnalyticsHintsProcessor which logs with the category Function.AnalyticsHintsProcessor. I also have a timer trigger AnalyticsScavenger which logs with the category Function.AnalyticsScavenger.
These are the customMetrics that are recorded for both scenarios.

These are the trace records written by the triggers.

So I can't use a logging config filter to wipe out the trace records for the storage triggers without also wiping out the custom metrics.
Why is TelemetryClient coupled to ILogger configuration filters?
The OOTB metrics for the function are the ones that use a different category being Host.Aggregator.

Can you share how you're logging your metrics? We've added a LogMetric() extension to ILogger that logs metrics to App Insights with a Category of Function.FunctionName.User -- which is how we indicate logging coming from within your code. You could then filter based on that, which would remove all of those host logs you're seeing (the ones without .User).
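For example, a minimal sketch of using that extension from a function body (assuming the LogMetric extension from the WebJobs SDK is in scope; the function and queue names here are illustrative):

```c#
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderFunctions
{
    [FunctionName("ProcessOrder")]
    public static void Run([QueueTrigger("orders")] string order, ILogger log)
    {
        // Shows up in App Insights customMetrics with category Function.ProcessOrder.User
        log.LogMetric("OrdersProcessed", 1);
    }
}
```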
I'm not using the ILogger.LogMetric extension because I wanted to do pre-aggregation of my custom metrics. I'm using some code that was originally on MSDN somewhere (for the life of me I can't find it now). The logic calculates aggregated metric data for a period of time before sending the metrics to AI.
For example, I have some processes where the metric data can be aggregated to one or five minute blocks of data where hundreds of individual metrics could be tracked within those periods. This saves a lot of raw metrics being sent to AI.
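Since the original sample seems to be gone, here is a rough sketch of the idea (not the original MSDN code; names are illustrative): buffer raw values and flush a single aggregated MetricTelemetry item per window.

```c#
using System.Collections.Generic;
using System.Linq;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Illustrative aggregator: buffers raw values and flushes one MetricTelemetry per window.
public class MetricAggregator
{
    private readonly TelemetryClient _client;
    private readonly List<double> _values = new List<double>();

    public MetricAggregator(TelemetryClient client)
    {
        _client = client;
    }

    public void Track(double value)
    {
        lock (_values) { _values.Add(value); }
    }

    // Call this from a timer (e.g. every one or five minutes).
    public void Flush(string metricName)
    {
        double[] snapshot;
        lock (_values)
        {
            snapshot = _values.ToArray();
            _values.Clear();
        }
        if (snapshot.Length == 0) return;

        var metric = new MetricTelemetry
        {
            Name = metricName,
            Count = snapshot.Length,
            Sum = snapshot.Sum(),
            Min = snapshot.Min(),
            Max = snapshot.Max()
        };
        _client.TrackMetric(metric);
    }
}
```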
As far as getting the TelemetryClient in this scenario, this is how I hook it up in a function. This is part of my custom dependency injection configuration.
```c#
[assembly: WebJobsStartup(typeof(DependencyInjectionStartup))]

public class DependencyInjectionStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddExtension<InjectExtensionConfigProvider>();
    }
}

public class InjectExtensionConfigProvider : IExtensionConfigProvider
{
    private readonly InjectBindingProvider _bindingProvider;

    public InjectExtensionConfigProvider(TelemetryClient telemetryClient, ILoggerFactory loggerFactory)
    {
        Ensure.Any.IsNotNull(telemetryClient, nameof(telemetryClient));
        Ensure.Any.IsNotNull(loggerFactory, nameof(loggerFactory));

        _bindingProvider = new InjectBindingProvider(telemetryClient, loggerFactory);
    }

    public void Initialize(ExtensionConfigContext context)
    {
        context.AddBindingRule<InjectAttribute>().Bind(_bindingProvider);
    }
}
```
The InjectBindingProvider class holds an Autofac container which is configured with the TelemetryClient and ILoggerFactory values. My custom metric aggregation logic is created by the container using the TelemetryClient.
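The container wiring amounts to something like the following (an illustrative sketch, not the actual InjectBindingProvider code):

```c#
using Autofac;
using Microsoft.ApplicationInsights;
using Microsoft.Extensions.Logging;

// Illustrative only: register the host-provided instances so resolved services can depend on them.
public static class ContainerFactory
{
    public static IContainer Build(TelemetryClient telemetryClient, ILoggerFactory loggerFactory)
    {
        var builder = new ContainerBuilder();
        builder.RegisterInstance(telemetryClient);
        builder.RegisterInstance(loggerFactory).As<ILoggerFactory>();
        return builder.Build();
    }
}
```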
I have a related problem so I thought I'd write about it here rather than make a new issue.
In my function app, there are many calls to dependent services. These calls are all tracked as Dependencies in Application Insights. The vast majority of data in my Application Insights instances are from successful dependency calls. The cost of tracking these calls has become substantial.
I've tried implementing the ITelemetryProcess interface to filter them out, and then later found here that this doesn't work.
Is there some other way to disable Dependency Tracking in Application Insights for Function Apps? Currently, my two options are 1) Pay a substantial amount of money to needlessly track dependency successes, or 2) get rid of Application Insights.
I am also having this same problem. I have been trying to filter our Dependency Tracking as it's costing a large sum of money for data I don't really need.
@michaeldaw hit the nail on the head. We're having the exact same issue and would love to be able to reduce our AI bill by filtering out records. Particularly filtering out successful dependency calls would be a big positive.
For now we're going to have to replace our logging implementation to write useful data to another logging service until this gets some traction.
@brettsam do you have a good feel for what the right feature / fix is that could resolve this? Seems to be a fairly common type ask and not sure I'm clear on exactly if just a doc gap, feature gap, or something that could be solved with other in flight work.
I apologize for the delay -- I lost track of this issue. Looping in @lmolkova for her thoughts as well.
For the core issue filed -- registering a custom ITelemetryProcessor should be do-able and fairly easy. You could register one or more with DI and we could stick it at the end of the processor chain, after all of our built-in processors run. That shouldn't be too difficult and as long as we clearly document where it runs in the chain, I think it makes sense. It would allow a lot of what I've listed out below to be done in a custom way by anyone.
There's a couple of other issues brought up that are good as well:
- The incorrect category comes from using the TelemetryClient that we create via DI, which runs a custom ITelemetryInitializer. This will grab the current logging scope and apply it to the telemetry being logged. Because you're doing this in a background task, that likely explains why you're getting an incorrect category. If you, instead, created your own TelemetryClient there (using the same instrumentation key -- which can be pulled from an injected IConfiguration) -- you should get category-less metrics that you can log however you want.
- We could log custom metrics with a category like Function.FunctionName.Metric to allow for more granular filtering. They currently get lumped in as Function.FunctionName.User, which is workable but may not give enough switches for everyone.
- If the dependency fails, we don't flip the LogLevel to Error, which we should. That would allow you to filter and only show failed dependencies.
- Dependency categories are inconsistent between Host.Bindings and scenarios where the category will be Function.FunctionName. We should try to reconcile this for better filtering.
- We could also log (as Warning?) ones that take longer than 1 second (configurable).

@brettsam For me, the category seems to always be Function.FunctionName. I don't see any categorized as Host.Bindings.
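Going back to the first bullet above, a sketch of creating an independent TelemetryClient (instrumentation key pulled from an injected IConfiguration; the class name is illustrative, and the key is assumed to be in the standard APPINSIGHTS_INSTRUMENTATIONKEY setting) might look like:

```c#
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.Configuration;

public class BackgroundMetricPublisher
{
    private readonly TelemetryClient _client;

    public BackgroundMetricPublisher(IConfiguration configuration)
    {
        // APPINSIGHTS_INSTRUMENTATIONKEY is the standard Functions app setting.
        var config = new TelemetryConfiguration(configuration["APPINSIGHTS_INSTRUMENTATIONKEY"]);
        _client = new TelemetryClient(config);
    }

    public void Publish(string name, double value)
    {
        // No ITelemetryInitializer from the host runs here, so no category is attached.
        _client.TrackMetric(name, value);
    }
}
```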
Here is a code sample that demonstrates how to add a processor to the chain. We should probably simplify this and enable the same approach as we have for TelemetryInitializers.
```c#
using System.Linq;
using FunctionApp12;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;
[assembly: WebJobsStartup(typeof(Startup))]
namespace FunctionApp12
{
public class Startup : IWebJobsStartup
{
public void Configure(IWebJobsBuilder builder)
{
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory == null)
return;
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (!(implFactory.Invoke(provider) is TelemetryConfiguration config))
return null;
config.TelemetryProcessorChainBuilder.Use(next => new AiErrorFilteringProcessor(next));
config.TelemetryProcessorChainBuilder.Build();
return config;
});
}
}
internal class AiErrorFilteringProcessor : ITelemetryProcessor
{
private readonly ITelemetryProcessor _next;
public AiErrorFilteringProcessor(ITelemetryProcessor next)
{
_next = next;
}
public void Process(ITelemetry item)
{
if (!(item is TraceTelemetry trace && trace.Message == "AI (Internal): Current Activity is null for event = 'System.Net.Http.HttpRequestOut.Stop'"))
{
_next.Process(item);
}
}
}
}
```
> If the dependency fails, we don't flip the `LogLevel` to `Error`, which we should. That would allow you to filter and only show failed dependencies.
Agree on this, but the failure criteria problem would immediately arise after that (are 404, 401, 409, 429 failures?).
Normally we approach this problem by sampling: you either want everything for the particular transaction to be sampled in or nothing at all.
If you don't track a dependency call, other features may be broken like end-to-end transaction viewer or app map. Even though dependency call is successful, you might still be interested in what happened for the whole transaction.
So I'd suggest carefully weighing everything before deciding to disable successful dependency tracking.
Please factor the following while weighing the pros and cons of tracking successful dependencies:
These are screenshots of the Application Insights instance and cost analysis for a particular Function App and related storage account in our system.

Above is a capture of the default 24-hour period search for the service in question. You can see that dependency tracking accounts for 1.4 million items, while Trace, Request, and Exception account for 40K, 18K, and 1.9K (really must look into those exceptions), respectively. Dependency events account for approximately 96% of all events.

This is the cost projection of the Application Insights instance. As before, the image shows that "REMOTEDEPENDENCY" events make up the vast majority of recorded events.

Finally, the above screenshot is a filtered selection from the "Cost by Resource" view, showing the cost of the Function App, Storage Accounts, and Application Insights instances in question. The cost of the Application Insights instance is 1252% that of the cost of the Function App it is monitoring.
These costs are woefully unsustainable for us. Of late, my decision to use Function Apps, which I'd touted to my colleagues as extremely cost effective, is being called into question by my teammates and superiors. Application Insights has been an invaluable tool for diagnostics and troubleshooting. I'd liken using a Function App without an associated Application Insights instance to flying by the seat of my pants. That said, I will eventually have to choose to either stop using Application Insights or stop using Function Apps. I'm sure I'm not the only one who would really appreciate it if the Functions and Application Insights teams could find a solution by which that choice doesn't have to be made.
What @lmolkova has above will work -- it lets you grab the TelemetryConfiguration that we generate and append an ITelemetryProcessor to it. From there, you can filter out any DependencyTelemetry however you want. Thanks for putting that together @lmolkova! That at least gives folks an escape hatch while we work through the design here. We'll keep this issue open and work to come up with better configuration and extensibility for this.
And thanks a ton @michaeldaw (and others) -- having all this data makes it really easy to see how important this is.
A couple of other notes to anyone trying the DI approach above (neither of these will be required long-term, but are right now):
- Make sure to reference Microsoft.ApplicationInsights 2.7.2 in your project; anything newer will end up with type mismatches.
- Register your startup class via the WebJobsStartup assembly attribute.

@brettsam @lmolkova Thanks for looking into this, everyone. I've implemented the code @lmolkova posted above.
Strangely, the code seems to work as expected when I debug the function app locally, but the filtering does not occur when I publish the app to Azure. I was hoping someone might have some suggestions as to why.
I know there are Application Insights configuration options available in the host.json file, but I don't have anything there except the "version": "2.0" statement. I also do not have an ApplicationInsights.config file.
Is there something else I might be missing?
Here's my implementation of ITelemetryProcessor specifically for filtering out DependencyTelemetry items:
```c#
internal class AiDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public AiDependencyFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        var request = item as DependencyTelemetry;
        if (request?.Name != null)
        {
            return;
        }
        _next.Process(item);
    }
}
```
What version of the host are you running locally? Can you check the bin\extensions.json file both locally and in Azure?
Azure Functions Core Tools (2.4.379 Commit hash: ab2c4db3b43f9662b82494800dd770698788bf2d)
Function Runtime Version: 2.0.12285.0
I wasn't aware of the extensions.json file, but it looks like that was the problem.
Here's the local extensions.json file. The "Startup" class mentioned is the one I added that replicates the functionality @lmolkova mentioned.
```json
{
  "extensions": [
    { "name": "DurableTask", "typeName": "Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableTaskWebJobsStartup, Microsoft.Azure.WebJobs.Extensions.DurableTask, Version=1.0.0.0, Culture=neutral, PublicKeyToken=014045d636e89289" },
    { "name": "AzureStorage", "typeName": "Microsoft.Azure.WebJobs.Extensions.Storage.AzureStorageWebJobsStartup, Microsoft.Azure.WebJobs.Extensions.Storage, Version=3.0.2.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" },
    { "name": "Startup", "typeName": "Scadavantage.Startup, Scadavantage.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" }
  ]
}
```
The file in Azure did not have that last line mentioning the Startup class. I deploy to this Function App from Azure DevOps. Something there must ignore the file when deploying. I can look further in to that. I added the last line manually using Kudu.
With that last line in place, it seems to work! I'm not seeing Dependency calls in either the Live Metrics Stream or when I do a search for events. What a relief!
I wanted to mention that with this filter in place, the Live Metrics view seems to be adversely affected. I no longer see values in the Request Rate or Request Duration graphs in the Live Metrics view. Could this be related to the concerns raised by @lmolkova about related events being affected by the absence of dependency events? Requests and traces still appear fine in the Search section.
In any event, I'm very much willing to live with a diminished experience with the Live Metrics Stream if it means so drastically reducing our costs. Thank you very much to everyone who's been looking in to this! It's a huge help.
Not sure if it still matters, but here's my local host version:
Azure Functions Core Tools (2.3.199 Commit hash: fdf734b09806be822e7d946fe17928b419d8a289)
Function Runtime Version: 2.0.12246.0
Can you share the package references in your .csproj? And what is your TargetFramework? I'm seeing several reports of build servers incorrectly generating the extensions.csproj so I'm trying to narrow it down.
Absolutely. The target framework is .NET Core 2.1.
Here are the package references from the Function App:
```xml
<ItemGroup>
  <PackageReference Include="Microsoft.ApplicationInsights" Version="2.7.2" />
  <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.DurableTask" Version="1.7.1" />
  <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.2" />
  <PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.0.2" />
  <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.1.1" />
  <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.24" />
  <PackageReference Include="Scadavantage.Common.Core" Version="1.0.0-prerelease067" />
  <PackageReference Include="Twilio" Version="5.26.1" />
</ItemGroup>
```
It mentions a "scadavantage.common.core". Here are the package references from that package:
```xml
<ItemGroup>
  <PackageReference Include="Microsoft.ApplicationInsights" Version="2.7.2" />
  <PackageReference Include="Microsoft.AspNetCore" Version="2.1.7" />
  <PackageReference Include="Microsoft.Azure.DocumentDB.Core" Version="2.2.2" />
  <PackageReference Include="System.Security.Cryptography.Algorithms" Version="4.3.1" />
  <PackageReference Include="WindowsAzure.Storage" Version="9.3.3" />
</ItemGroup>
```
Let me know if I can help further.
Thanks that helped confirm the problem. I just opened this issue (feel free to comment over there so we don't derail this issue :-)) https://github.com/Azure/azure-functions-vs-build-sdk/issues/277
@michaeldaw
thanks for the great write up! Now I understand your motivation for disabling dependencies better.
This is something we should eventually handle in the ApplicationInsights SDK (for Functions and any other applications) and we need to find the right approach for it. I'll start this discussion internally.
Not tracking successful calls to the majority of bindings (e.g. tables) is reasonable and would not create any obvious issues with UX as long as these dependencies are leaf nodes.
Not tracking http calls to your own services or some output bindings (e.g queues) would break end-to-end tracing. Think about transaction traces as a tree. If one node is missing, reference is lost and instead of one tree, we now have two. We still know they are correlated, but causation is lost. In some cases, it is still good enough and for sure much better than the costs associated with redundant data.
As @brettsam mentioned (https://github.com/Azure/azure-webjobs-sdk/issues/2123), we'll provide better configuration to turn off the collection.
I'll also check why sampling was not useful for original issue author, maybe something is broken here:
> PS. We have tried setting config values SamplingPercentage (App Insights config) and maxTelemetryItemsPerSecond (host.json) as low as possible to reduce the telemetry data, but there is still too much telemetry.
@lmolkova: thank you for explaining this. In our case, the functions in question are HTTP-triggered. The dependencies in question are, for the most part, calls to tables using the .NET storage API. It sounds like these kinds of calls would fall under the second part of your explanation (HTTP calls, etc.).
I've modified the original AiDependencyFilter class that I posted earlier to allow failed dependency calls through the filter. I've only tested it a little bit, but it seems to restore the end-to-end tracing at least for those function calls. This is entirely sufficient for my purposes. It's really only when the calls fail that I'm interested in the tracing.
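The modified filter boils down to something like this (a sketch, not the exact code; checking DependencyTelemetry.Success is the key part):

```c#
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

internal class AiDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public AiDependencyFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Drop successful dependency calls only.
        if (item is DependencyTelemetry dependency && dependency.Success == true)
        {
            return;
        }

        // Failed dependencies and all other telemetry pass through.
        _next.Process(item);
    }
}
```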
Thank you for your help.
@brettsam I am also running into a situation where I want to grab a reference to the TelemetryClient created by the runtime. Is there another way to get hold of the instance without hooking into a custom IExtensionConfigProvider? It's my understanding that without any custom bindings the extension would never be loaded. Here is what I have so far (I have a startup as well):
```c#
namespace Infrastructure.Logging
{
    using DryIoc;
    using Microsoft.ApplicationInsights;
    using Microsoft.Azure.WebJobs.Host.Config;
    using System;

    public class TelemetryClientExtensionConfigProvider : IExtensionConfigProvider
    {
        private readonly TelemetryClient telemetryClient;

        public TelemetryClientExtensionConfigProvider(TelemetryClient telemetryClient)
        {
            this.telemetryClient = telemetryClient ?? throw new ArgumentNullException(nameof(telemetryClient));
        }

        public void Initialize(ExtensionConfigContext context)
        {
            Main.Container.UseInstance(telemetryClient);
        }
    }
}
```
@michaeldaw The Live Metrics Stream will stop working when you simply replace TelemetryConfiguration in IWebJobsBuilder.Services. This is because when you call TelemetryProcessorChainBuilder.Build(), the QuickPulseTelemetryProcessor is recreated, so you have to register it again.
Here is my solution.
Add Microsoft.ApplicationInsights.PerfCounterCollector v2.7.2 through NuGet
```c#
var quickPulseProcessor = config.TelemetryProcessors
    .OfType<QuickPulseTelemetryProcessor>()
    .FirstOrDefault();
if (!(quickPulseProcessor is null))
{
    var quickPulseModule = provider
        .GetServices<ITelemetryModule>()
        .OfType<QuickPulseTelemetryModule>()
        .FirstOrDefault();
    quickPulseModule?.RegisterTelemetryProcessor(quickPulseProcessor);
}
config.TelemetryProcessors
    .OfType<ITelemetryModule>()
    .ToList()
    .ForEach(module => module.Initialize(config));
```
@mpaul31 -- have you tried putting a TelemetryClient in your function class's constructor? It should be injected there as TelemetryClient is a service registered with DI.
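For reference, a sketch of that constructor-injection approach (the function and queue names here are illustrative):

```c#
using Microsoft.ApplicationInsights;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class MetricsFunction
{
    private readonly TelemetryClient _telemetryClient;

    // TelemetryClient is registered with DI by the host, so it can be constructor-injected.
    public MetricsFunction(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    [FunctionName("ProcessWorkItem")]
    public void Run([QueueTrigger("work-items")] string message, ILogger log)
    {
        _telemetryClient.TrackMetric("WorkItemsProcessed", 1);
        log.LogInformation("Processed {message}", message);
    }
}
```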
@dzhang-quest Thank you very much for posting that code. It's working great. That was really the final piece of the puzzle.
I thought I'd post the complete class in case anyone is interested. You should be able to just add this class to your Azure Function project.
```c#
using MyNamespace;
using System.Linq;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
[assembly: WebJobsStartup(typeof(Startup))]
namespace MyNamespace
{
public class Startup : IWebJobsStartup
{
public void Configure(IWebJobsBuilder builder)
{
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory == null)
return;
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (!(implFactory.Invoke(provider) is TelemetryConfiguration config))
return null;
config.TelemetryProcessorChainBuilder.Use(next => new AiDependencyFilter(next));
config.TelemetryProcessorChainBuilder.Build();
var quickPulseProcessor = config.TelemetryProcessors
.OfType<QuickPulseTelemetryProcessor>()
.FirstOrDefault();
if (!(quickPulseProcessor is null))
{
var quickPulseModule = provider
.GetServices<ITelemetryModule>()
.OfType<QuickPulseTelemetryModule>()
.FirstOrDefault();
quickPulseModule?.RegisterTelemetryProcessor(quickPulseProcessor);
}
config.TelemetryProcessors
.OfType<ITelemetryModule>()
.ToList()
.ForEach(module => module.Initialize(config));
return config;
});
}
}
internal class AiDependencyFilter : ITelemetryProcessor
{
private readonly ITelemetryProcessor _next;
public AiDependencyFilter(ITelemetryProcessor next)
{
_next = next;
}
public void Process(ITelemetry item)
{
var request = item as DependencyTelemetry;
if (request?.Name != null)
{
return;
}
_next.Process(item);
}
}
}
```
I am also having an issue getting this working in Azure.
I have implemented a small Azure Function using a version of the code outlined by @michaeldaw (thank you!) and can see that it is working when I execute locally, but when it is deployed to Azure it has no effect and all the dependency messages are appearing in AI.
**Configuration**
- Directory.Build.targets to ensure the correct extensions.json gets deployed (see https://github.com/Azure/azure-functions-host/issues/3386)

**Packages**
<PackageReference Include="Microsoft.ApplicationInsights" Version="2.7.2" />
<PackageReference Include="Microsoft.ApplicationInsights.PerfCounterCollector" Version="2.7.2" />
<PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.1.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.28" />
<PackageReference Include="MimeTypesMap" Version="1.0.7" />
<PackageReference Include="System.Data.SqlClient" Version="4.4.0" />
The reason I want to do this is that we make calls to Azure Blob storage to add/remove files, including a call to await blob.CheckIfExistsAsync(), which raises a 404 dependency error, filling up our logs with issues that are not real (and wasting my time trying to figure out where these "errors" were coming from).
I have tried using newer and older versions of the Microsoft.ApplicationInsights libraries, using .NET Core 2.1 and .NET Standard 2.0; nothing affects the AI logging and I am not sure why.
As a side note, I have a related issue where the logging level in the host.json is ignored by AI as well (raised as a separate query here: https://github.com/Azure/azure-functions-host/issues/4474). I include it here as it may indicate some configuration problem that may be causing this issue.
Thanks for all your hard work, @michaeldaw! I've tried implementing this code snippet into our project, but there doesn't seem to be a TelemetryConfiguration IWebJobsBuilder service in our host when locally debugging. I've tried using both AI extension versions 2.10.0 and 2.9.1, but to no avail. Currently targeting the .NET Core 2.2 framework. Am I perhaps missing something here?
Edit: Silly mistake on my part, just forgot to include an instrumentation key in my local.settings.json file! I'll leave my question here in case others are curious 😄
According to an earlier post from @brettsam:
> Make sure to reference Microsoft.ApplicationInsights 2.7.2 in your project. Anything newer will end up with type mismatches.
I'm still using 2.7.2. Haven't had the courage to upgrade yet, though I've been meaning to come back here and ask about it. Could your issue be that you are using the newer versions?
@marshall76963 I'm afraid I only just saw this now, sorry. Did you get it sorted out?
Bingo! Downgrading did the trick. Thanks so much!
Glad to hear. I guess that answers the question on whether or not I'll upgrade. @brettsam: I don't suppose there's any indication as to whether or not we'll be able to upgrade in the future?
@michaeldaw
We have found a solution for now. I have a modified version of your code (below) that is performing all the filtering correctly.
We reference Microsoft.Azure.WebJobs.Logging.ApplicationInsights NuGet package which includes the correct/supported version of the Microsoft.ApplicationInsights components (and NuGet does not complain that I am not using the latest version).
The filtering is covering two things:
- The 404 dependency "errors" raised by calls like await blob.CheckIfExistsAsync()
- Trace messages that the logging level in host.json should have filtered out

Code sample below - hopefully it will help someone else:
```C#
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;
using System.Linq;
[assembly: WebJobsStartup(typeof(MyAzureFunctions.Startup))]
namespace MyAzureFunctions
{
public class Startup : IWebJobsStartup
{
public void Configure(IWebJobsBuilder builder)
{
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory != null)
{
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (implFactory.Invoke(provider) is TelemetryConfiguration config)
{
var newConfig = TelemetryConfiguration.Active;
newConfig.ApplicationIdProvider = config.ApplicationIdProvider;
newConfig.InstrumentationKey = config.InstrumentationKey;
newConfig.TelemetryProcessorChainBuilder.Use(next => new CustomTelemetryProcessor(next));
foreach (var processor in config.TelemetryProcessors)
{
newConfig.TelemetryProcessorChainBuilder.Use(next => processor);
}
var quickPulseProcessor = config.TelemetryProcessors.OfType<QuickPulseTelemetryProcessor>().FirstOrDefault();
if (quickPulseProcessor != null)
{
var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
newConfig.TelemetryProcessorChainBuilder.Use(next => quickPulseProcessor);
}
newConfig.TelemetryProcessorChainBuilder.Build();
newConfig.TelemetryProcessors.OfType<ITelemetryModule>().ToList().ForEach(module => module.Initialize(newConfig));
return newConfig;
}
return null;
});
builder.Services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
}
}
}
public class CustomTelemetryInitializer : ITelemetryInitializer
{
public void Initialize(ITelemetry telemetry)
{
if (telemetry is DependencyTelemetry dependency && dependency != null && dependency.ResultCode.Equals("404"))
{
dependency.Success = true;
}
}
}
public class CustomTelemetryProcessor : ITelemetryProcessor
{
private readonly ITelemetryProcessor _next;
public CustomTelemetryProcessor(ITelemetryProcessor next)
{
_next = next;
}
public void Process(ITelemetry item)
{
if (ShouldFilterTrace(item))
{
return;
}
_next.Process(item);
}
private bool ShouldFilterTrace(ITelemetry item)
{
var result = false;
if (item is TraceTelemetry trace)
{
result = true;
var category = trace.Properties["Category"];
var logLevel = trace.Properties["LogLevel"];
if (!string.IsNullOrWhiteSpace(category) && !string.IsNullOrWhiteSpace(logLevel))
{
switch (logLevel)
{
case "Error":
case "Warning":
result = false;
break;
case "Information":
switch (category)
{
case "Function.MyFunction1.User":
case "Function.MyFunction2.User":
result = false;
break;
}
break;
}
}
}
return result;
}
}
}
```
I tried all these solutions.
They certainly work to some degree, but I'm unable to allow failed dependency calls through. If I add my custom telemetry processor, all dependency calls are removed, no matter what I do.
Even with no logic at all in my telemetry processor:
```c#
public void Process(ITelemetry item)
{
    this.Next.Process(item);
}
```
So how can I allow failed dependency calls but suppress succeeded dependency calls?
```c#
private bool ShouldSend(ITelemetry item)
{
    var dependency = item as DependencyTelemetry;
    if (dependency == null)
    {
        /* Not a dependency, so allow */
        return true;
    }
    if (dependency.Success.GetValueOrDefault(false) != true)
    {
        /* Dependency failed, so allow */
        return true;
    }
    /* Suppress succeeded dependency logging */
    return false;
}
```
Also tried to add this:
```c#
DependencyTrackingTelemetryModule depModule = new DependencyTrackingTelemetryModule();
depModule.Initialize(config);
```
This had no effect; no dependency calls are logged when a dependency call fails.
And for some reason I had to change the code from:
```c#
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration)); // Does not work
```
to:
```c#
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType.FullName == "Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration"); // Works
```
I have no idea why the first method doesn't work.
I'm using Microsoft.NET.Sdk.Functions v 1.0.29 (latest)
And Microsoft.ApplicationInsights.AspNetCore v.2.7.1 (latest)
Azure functions v2 ( Core v2.2.5) (latest)
The behavior is the same when debugging locally and when deployed to Azure.
`var telemetryConfigurationDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));` works fine, I am using it.
If you inspect builder.Services you will find the TelemetryConfiguration ServiceDescriptor. Note the assembly version.
Microsoft.ApplicationInsights.AspNetCore v2.7.1 won't work. You need to use Microsoft.ApplicationInsights, Version=2.9.1.0.
That's why Microsoft.Azure.WebJobs.Logging.ApplicationInsights 3.0.8 (latest) worked for @marshall76963.
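A quick diagnostic you can drop into Configure (a sketch, not production code; it assumes `using System.Linq;`) to confirm which assembly the registered TelemetryConfiguration comes from:

```c#
// Find the registered TelemetryConfiguration by name and print which
// Microsoft.ApplicationInsights assembly it came from. If that version differs
// from the one your project references, typeof(TelemetryConfiguration) won't match.
var descriptor = builder.Services.FirstOrDefault(d =>
    d.ServiceType.FullName == "Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration");

if (descriptor != null)
{
    System.Console.WriteLine(descriptor.ServiceType.AssemblyQualifiedName);
}
```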
Thanks @espray
Finally got this working.
What I did was just downgrade Microsoft.ApplicationInsights.AspNetCore from v2.7.1 to v2.6.1; v2.6.1 includes Application Insights v2.9.1.
Then this code also started working (with v2.7.1 it returned null for some reason):
```c#
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration)); // Works with 2.6.1
```
And filtering works beautifully now: all successful dependency calls are gone, and failed dependency calls are logged.
Perfect.
Hopefully, there will be a much smoother solution in the near future :)
Thanks so very much to @marshall76963 and @dzhang-quest for putting this all together.
I needed a slightly different solution that used sampling but I had some auto-tracked dependencies (e.g. blob storage/HTTP) and some custom dependencies (e.g. Redis) so I had to come up with a few modifications.
It took me several hours to piece this all together, so I'm posting my final solution in case it helps anyone else.
First I added the following package refs via NuGet:
```xml
<PackageReference Include="Microsoft.ApplicationInsights.PerfCounterCollector" Version="2.10.0" />
<PackageReference Include="Microsoft.Azure.WebJobs.Logging.ApplicationInsights" Version="3.0.13" />
```
Then I turned off all auto-tracked dependencies by adding the following logLevel config to my host.json file:
```json
"Host.Bindings": "Warning"
```
I'm very interested in keeping my telemetry costs in check so I'm pretty conservative with what I allow logged. My host.json file looks like this in case anyone is interested in how I achieved this.
```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "Host.Results": "Information",
      "Function": "Information",
      "Host.Aggregator": "Trace",
      "Host.Bindings": "Warning"
    }
  }
}
```
Next I added the following code which was highly inspired by the code posted above by @marshall76963
```c#
internal static class WebJobsBuilderExtensions
{
public static void AddDependencyTelemetrySamplingProcessor(this IWebJobsBuilder builder, int percentage = 100)
{
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory != null)
{
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (implFactory.Invoke(provider) is TelemetryConfiguration config)
{
var newConfig = TelemetryConfiguration.Active;
newConfig.ApplicationIdProvider = config.ApplicationIdProvider;
newConfig.InstrumentationKey = config.InstrumentationKey;
newConfig.TelemetryProcessorChainBuilder.Use(next => new DependencyTelemetrySamplingProcessor(next, percentage));
foreach (var processor in config.TelemetryProcessors)
{
newConfig.TelemetryProcessorChainBuilder.Use(next => processor);
}
var quickPulseProcessor = config.TelemetryProcessors.OfType<QuickPulseTelemetryProcessor>().FirstOrDefault();
if (quickPulseProcessor != null)
{
var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
newConfig.TelemetryProcessorChainBuilder.Use(next => quickPulseProcessor);
}
newConfig.TelemetryProcessorChainBuilder.Build();
newConfig.TelemetryProcessors.OfType<ITelemetryModule>().ToList().ForEach(module => module.Initialize(newConfig));
return newConfig;
}
return null;
});
}
}
}
public class DependencyTelemetrySamplingProcessor : ITelemetryProcessor
{
private readonly ITelemetryProcessor next;
private readonly int percentage;
public DependencyTelemetrySamplingProcessor(ITelemetryProcessor next, int percentage)
{
this.next = next;
this.percentage = percentage;
}
public void Process(ITelemetry telemetry)
{
if (!(telemetry is DependencyTelemetry dependency)) return;
var sampled = MoreEnumerable.Random(100).First() >= 100 - percentage;
if (sampled)
{
Debug.WriteLine($"Sampled Dependency: {dependency.Name}:{dependency.Type}");
next.Process(telemetry);
}
else
{
Debug.WriteLine($"Skipped Dependency: {dependency.Name}:{dependency.Type}");
}
}
}
```
Then I use this code by adding the following to my Startup class:
```c#
[assembly: WebJobsStartup(typeof(Startup))]

namespace FxApp
{
    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddDependencyTelemetrySamplingProcessor(20);
        }
    }
}
```
Now I control all dependency tracking myself. I created implementations like this to make things easier downstream. You can do this any way you like; such classes are not necessary at all for this to work. You just need to perform custom dependency tracking however you like.
```c#
internal class BlobStorageDependencyTracker : IDependencyTracker
{
private readonly CloudBlobContainer container;
private readonly TelemetryClient telemetryClient;
private readonly string data;
private IOperationHolder<DependencyTelemetry> operationHolder;
public BlobStorageDependencyTracker(CloudBlobContainer container, TelemetryClient telemetryClient, string data)
{
this.container = container ?? throw new ArgumentNullException(nameof(container));
this.telemetryClient = telemetryClient ?? throw new ArgumentNullException(nameof(telemetryClient));
this.data = data ?? throw new ArgumentNullException(nameof(data));
}
public IDependencyTracker TrackOperation(string operationName)
{
if (operationName == null) throw new ArgumentNullException(nameof(operationName));
operationHolder = telemetryClient.StartOperation<DependencyTelemetry>(operationName);
operationHolder.Telemetry.Type = "Azure blob";
// E.g. HEAD fooaccount
operationHolder.Telemetry.Name = $"{operationName} {container.ServiceClient.Credentials.AccountName}";
operationHolder.Telemetry.Data = data;
return this;
}
public void SetResult(bool success)
{
operationHolder.Telemetry.Success = success;
}
public void Dispose()
{
if (operationHolder != null)
{
telemetryClient.StopOperation(operationHolder);
}
}
}
```
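A hypothetical usage from a function body might look like this (the helper method, the HEAD operation name, and passing the blob name as the dependency data are illustrative):

```c#
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobDependencyExample
{
    // Hypothetical helper; 'container', 'blob' and 'telemetryClient' come from the caller.
    public static async Task<bool> ExistsWithTrackingAsync(
        CloudBlobContainer container, CloudBlockBlob blob, TelemetryClient telemetryClient)
    {
        var tracker = new BlobStorageDependencyTracker(container, telemetryClient, blob.Name);
        tracker.TrackOperation("HEAD");
        try
        {
            bool exists = await blob.ExistsAsync();
            tracker.SetResult(true);
            return exists;
        }
        catch
        {
            tracker.SetResult(false);
            throw;
        }
        finally
        {
            tracker.Dispose();
        }
    }
}
```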
A couple of things to take note of.
The package versions I used above worked for me but it somewhat depends on where you are at with your own WebJobs assembly versions. Look through the dependency chain and find compatible versions.
The use of "Host.Bindings" : "Warning" was something I reversed engineered by following the logs and just trying it. I could not find any documentation on this log category. It just worked for me but I admit I do not know if this is supported which means it could break at any time. It could also have side-effects that I have not discovered yet. [Read: Don't put this into production until you have tested it yourself and are confident.]
That being said, it has a dramatic effect.

HTH
A new wrinkle has appeared with the latest version of Microsoft.Azure.WebJobs.Logging.ApplicationInsights (version 3.0.14) that was released recently...
> Warning CS0618 'TelemetryConfiguration.Active' is obsolete: 'We do not recommend using TelemetryConfiguration.Active on .NET Core. See https://github.com/microsoft/ApplicationInsights-dotnet/issues/1152 for more details'...
As the code that I, @DarinMacRae (and I assume others) are using to register a custom ITelemetryProcessor relies on getting the active TelemetryConfiguration in order to construct a new configuration based off it, I am a bit stumped now!
The code in question is:
```c#
builder.Services.AddSingleton(provider =>
{
if (implFactory.Invoke(provider) is TelemetryConfiguration config)
{
var newConfig = TelemetryConfiguration.Active;
newConfig.ApplicationIdProvider = config.ApplicationIdProvider;
newConfig.InstrumentationKey = config.InstrumentationKey;
newConfig.TelemetryProcessorChainBuilder.Use(next => new MyCustomProcessor(next));
foreach (var processor in config.TelemetryProcessors)
{
newConfig.TelemetryProcessorChainBuilder.Use(next => processor);
}
var quickPulseProcessor = config.TelemetryProcessors.OfType<QuickPulseTelemetryProcessor>().FirstOrDefault();
if (quickPulseProcessor != null)
{
var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
newConfig.TelemetryProcessorChainBuilder.Use(next => quickPulseProcessor);
}
newConfig.TelemetryProcessorChainBuilder.Build();
newConfig.TelemetryProcessors.OfType<ITelemetryModule>().ToList().ForEach(module => module.Initialize(newConfig));
return newConfig;
}
return null;
});
```
For now I am not upgrading my reference, but that is not a sustainable solution - can anyone tell me what the alternative/correct way of getting hold of the active config is now, or better yet if there is a way of injecting our custom processor(s) early in the chain so that we do not need to?
Without this, I cannot see a way to implement our custom processing AND upgrade to the latest version of ApplicationInsights.
I have got it working again with the latest Microsoft.Azure.WebJobs.Logging.ApplicationInsights (version 3.0.14) without needing to call the now obsolete TelemetryConfiguration.Active.
For the benefit of others, here is the sample working code:
```c#
public class Startup : IWebJobsStartup
{
public void Configure(IWebJobsBuilder builder)
{
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory != null)
{
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (implFactory.Invoke(provider) is TelemetryConfiguration config)
{
// Construct a new TelemetryConfiguration based off the existing config
var newConfig = new TelemetryConfiguration(config.InstrumentationKey, config.TelemetryChannel)
{
ApplicationIdProvider = config.ApplicationIdProvider
};
// Add the telemetry initializers (including any custom ones added above)
config.TelemetryInitializers.ToList().ForEach(initializer => newConfig.TelemetryInitializers.Add(initializer));
// Add custom telemetry processor first
newConfig.TelemetryProcessorChainBuilder.Use(next => new CustomTelemetryProcessor(next));
// Add existing telemetry processors next (excluding any default processors that are already registered)
foreach (var processor in config.TelemetryProcessors.Where(processor => !newConfig.TelemetryProcessors.Any(x => x.GetType() == processor.GetType())))
{
newConfig.TelemetryProcessorChainBuilder.Use(next => processor);
if (processor is QuickPulseTelemetryProcessor)
{
// Re-register the QuickPulseTelemetryProcessor (reguired as we will re-build the chain below)
var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.RegisterTelemetryProcessor(processor);
}
}
// Build the chain after adding the telemetry processors
newConfig.TelemetryProcessorChainBuilder.Build();
newConfig.TelemetryProcessors.OfType<ITelemetryModule>().ToList().ForEach(module => module.Initialize(newConfig));
return newConfig;
}
return null;
});
}
}
}
```
Key changes are:
- TelemetryConfiguration.Active replaced with new TelemetryConfiguration(config.InstrumentationKey, config.TelemetryChannel)
- Copying across the TelemetryInitializers is now required, as they are not pulled across from Active

Hope this helps someone, and it would be useful if the MS docs were updated to reflect this (there are still a lot recommending the use of TelemetryConfiguration.Active, especially with regard to Azure Functions).
I've been able to use this approach to filter out traces, but I'm unable to filter any dependencies. The dependencies show up in my custom telemetry initializer but not in the telemetry processor. Looking at a stack trace, I see that the dependency collector (under the namespace Microsoft.ApplicationInsights.DependencyCollector) has a reference to the original TelemetryConfiguration instance as well as a TelemetryClient instance based on that configuration.
Looking at the services collection during the startup method, I can see that there is already an instance of the TelemetryClient at that point, as well as all of the dependency collectors. Other people seem to have had some success filtering out dependencies, so I'm curious what I'm missing here. For reference, here is what I've got set up so far:
```C#
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
[assembly: WebJobsStartup(typeof(Example.StartupWebJobs))]
namespace Example
{
public class StartupWebJobs : IWebJobsStartup
{
public void Configure(IWebJobsBuilder builder)
{
// Configure telemetry filter
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
if (configDescriptor?.ImplementationFactory != null)
{
var implFactory = configDescriptor.ImplementationFactory;
builder.Services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
builder.Services.Remove(configDescriptor);
builder.Services.AddSingleton(provider =>
{
if (implFactory.Invoke(provider) is TelemetryConfiguration config)
{
//var newConfig = TelemetryConfiguration.Active;
var newConfig = new TelemetryConfiguration(config.InstrumentationKey, config.TelemetryChannel)
{
ApplicationIdProvider = config.ApplicationIdProvider
};
// Add the telemetry initializers (including any custom ones added above)
config.TelemetryInitializers.ToList().ForEach(initializer => newConfig.TelemetryInitializers.Add(initializer));
newConfig.TelemetryProcessorChainBuilder.Use(next => new AzureFunctionTelemetryFilter(next));
foreach (var processor in config.TelemetryProcessors)
{
newConfig.TelemetryProcessorChainBuilder.Use(next => processor);
if (processor is QuickPulseTelemetryProcessor quickPulseProcessor)
{
var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
}
}
newConfig.TelemetryProcessorChainBuilder.Build();
newConfig.TelemetryProcessors.OfType<ITelemetryModule>().ToList().ForEach(module => module.Initialize(newConfig));
return newConfig;
}
return null;
});
}
}
}
}
```
Hi @Areson, I was able to filter out dependencies using a custom ITelemetryInitializer in the code above (based on the builder.Services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>(); line that you have). I implemented a CustomTelemetryInitializer like this:
```c#
public class CustomTelemetryInitializer : ITelemetryInitializer
{
/// <summary>
/// Ignore the following as they are noise and skew the telemetry data:
/// - 404 errors from dependencies (such as Azure Blob storage) as these are triggered when checking if items exists
/// Checks if the value passed is a <see cref="DependencyTelemetry"/> and the <see cref="DependencyTelemetry.ResultCode"/> = 404
/// </summary>
/// <param name="telemetry">The ITelemetry value to test</param>
public void Initialize(ITelemetry telemetry)
{
if (telemetry is DependencyTelemetry dependency && dependency != null && dependency.ResultCode.Equals("404"))
{
dependency.Success = true;
}
}
}
```
That coupled with the custom ITelemetryProcessor to filter specific trace messages enabled me to capture just the specific info I wanted.
Hope this helps :)
P.S. See my previous post on replacing TelemetryConfiguration.Active which is deprecated in the latest version of Microsoft.Azure.WebJobs.Logging.ApplicationInsights
@marshall76963 I heavily referenced your previous post as I was needing the exact same type of filtering (thanks for that!). To that end my CustomTelemetryInitializer looks exactly like you specified. This had the effect of the erroneous "errors" from the 404 in the dependencies no longer showing up as errors, but they do still show up in my telemetry feeds, just as successes. In fact, _all_ of my dependencies (that make it past sampling) show up in Application Insights. A number of my functions have a large number of dependency calls and the amount of data generated by logging the successes is too high. I need to filter those out and focus on errors if and when they occur.
I haven't been able to do that because none of the DependencyTelemetry objects show up in my TelemetryProcessor instance that I added to the TelemetryConfiguration. As I mentioned, I can see in the stack trace that the dependency collectors all contain a reference to a TelemetryClient instance that is based on the original TelemetryConfiguration and so do not have my TelemetryProcessor in the processor chain. I need to get those collectors to use a client based on my version of the configuration, but since they are already instantiated by the time my startup code runs I'm not sure what to do about it.
@Areson - Ahh, sorry I understand your meaning now.
I think the "correct" way would be to ammend your AzureFunctionTelemetryFilter so that the Process method filters out the item.
I have not coded this, so sorry if this is incorrect, but something like the following should do it:
```c#
public void Process(ITelemetry item)
{
if (item is DependencyTelemetry dependency && dependency != null)
{
return; // should filter out all dependency items
}
_next.Process(item);
}
```
If this does not work, someone else will need to provide more insight into the issue (pun intended).
@marshall76963 Agreed, and that is essentially what I have in my Process method. The problem is that none of the ITelemetry objects that are passed into the Process method are DependencyTelemetry objects. I know they are getting created, as I can see them in the live metrics and in my Application Insights metrics in the Azure portal. They just aren't getting passed to the Process method in my AzureFunctionTelemetryFilter class. Other items are, such as custom metrics or TraceTelemetry objects, but no dependencies.
As I noted, they aren't showing up because the classes that are creating those telemetry events aren't using a TelemetryClient that was setup using the configuration I provided. They are all using the original configuration and the client created from that. That setup happens well before I get a chance to replace the TelemetryConfiguration so I'm stuck at the moment. I'm hoping someone more savvy than me (maybe @lmolkova ?) can give some insight as to if this is possible or not.
@Areson I may not be correct here but what I have had to do to address this problem is to turn off all auto-tracked dependencies by setting this logLevel in the host.json as such:
```json
// This turns off auto-tracked dependencies.
"Host.Bindings": "Warning"
```
and then add custom dependency tracking for each dependency I'm interested in. This custom telemetry will get passed into your processor for sampling and filtering.
I am stumbling around in these waters the same as everyone so please take this as just another attempt to work-around the current limitations of managing telemetry costs within Azure Functions and not a solution that will work for everyone. Also, if someone on the AF teams wants to chime in here and tell me that I'm doing this all wrong and that there's a better way please take the floor. :)
I hope this helps.
After reading through the code for the function host and how it gets setup, I think I finally understand what is going on here as well as a few things that can make the process a bit funky. I'll start with a few "how tos" based on what I found.
**Custom Dependency Tracking**
There is a way to turn off dependency tracking, per the documentation (https://docs.microsoft.com/en-us/azure/azure-functions/functions-host-json#applicationinsights), by setting 'EnableDependencyTracking': false in the host.json file. This will disable dependency tracking and allow us to set up our own.
After adding code to setup our own TelemetryConfiguration we can add back in the normal dependency tracking:
```C#
builder.Services.AddSingleton<ITelemetryModule>(provider =>
{
    var telemetryConfiguration = provider.GetService<TelemetryConfiguration>();
    var dependencyTracker = new DependencyTrackingTelemetryModule();
    dependencyTracker.Initialize(telemetryConfiguration);
    return dependencyTracker;
});
```
This will start dependency tracking using our version of the `TelemetryConfiguration`, which means any `ITelemetryProcessor` instances we have set up will be used.
**Limitations**
Unfortunately there are some limitations to all of this based on what I've seen. The first is that we can only add ITelemetryProcessors to the beginning of the telemetry chain. This is due to the fact that once a TelemetryProcessor has been instantiated, the next processor it points to can't be changed. When creating our own TelemetryConfiguration we are really just co-opting the chain that was already created by the original configuration and adding a link to the beginning. I'm not 100% sure, but I'm thinking that any telemetry sent to Azure ends up using the original TelemetryConfiguration as a result of that. Based on this I'm thinking that unless you set up your own complete TelemetryConfiguration, mimicking how it is set up in the function host, you won't be able to get things set up exactly how you want.
For example, I wanted to put my custom filtering _after_ the QuickPulse processor, as I wanted my live metrics to include everything but only have Application Insights log the filtered data. This isn't possible (I tried it!) due to what I mentioned above. I considered just creating my entire TelemetryConfiguration instance from scratch to do so but decided against it for now. If someone else is interested, it may be possible to make a second QuickPulse processor, add it to the beginning of the chain before adding any filtering, and disable the built-in live metrics so that only yours is used. The EnableLiveMetrics option can be used for this. I also considered it, but there is some filtering that the Azure Functions folks have put in place to prevent a lot of internal noise from the function host from leaking through, which you'd end up having to deal with.
EDIT: Part of the reason I wanted to have filtering _after_ the QuickPulse module is that filtering out the dependencies before means I get no data in the live metrics on the dependency throughput, which is a bit frustrating.
At this point I think I'm about as good as I can get until we get better native support for controlling the telemetry chain.
Hey @Areson,
Just to add to your last comment, it turns out that the 'EnableDependencyTracking': false setting is not yet enabled in the Functions SDK: https://github.com/MicrosoftDocs/azure-docs/issues/42792
Hopefully this will be available soon so we don't have to roll our own telemetry processors going forward...
@jublair You are correct. I forgot that I was using the 1.0.30-beta1 version.
@Areson - Just to clarify: if using the 1.0.30-beta1 version of Microsoft.NET.Sdk.Functions, setting
"EnableDependencyTracking": false
e.g.
```json
"applicationInsights": {
    "samplingSettings": {
        "enableDependencyTracking": false
    }
}
```
should filter out DependencyTelemetry?
Everything under "applicationInsights" maps directly to the ApplicationInsightsLoggerOptions class. "enableDependencyTracking" needs to go under "applicationInsights", not under "samplingSettings".
I've submitted a PR to the docs: https://github.com/MicrosoftDocs/azure-docs/pull/43025.
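In other words, the fragment from the question above would end up looking something like this (a sketch, assuming a package version where the setting is actually honored; see the earlier comment about it not yet being enabled in the Functions SDK):
```json
"applicationInsights": {
    "enableDependencyTracking": false
}
```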
What @brettsam said. But yes, from my testing it does appear to turn off telemetry tracking in that version.
Is this due for fixing any time soon, without the hoops that are documented here? Those hoops aren't working for me anyway in my .NET Core 2.1 function app using the latest packages (3.0.14). The hack from @marshall76963 doesn't work for me locally in testing, because this call:
```C#
var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration));
```
returns null. Apparently in my app there is no TelemetryConfiguration already registered for this to hook into. I really don't know why :(
I'd rather there was some official answer (and backing documentation) on how to do this correctly. I'm pretty frustrated that such a fundamental thing is so lacking. Even the suggested way of using your own telemetry initializer in Azure Functions to change the RoleName:
```C#
services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
```
doesn't seem to work, even though there are other GitHub tickets that say it does.
Can we have an estimated fix for a proper way of doing this?
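For reference, the kind of RoleName initializer being referred to is roughly this (just a sketch; the class name and role name string are placeholders):
```C#
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Placeholder value; whatever should appear as the cloud role name in App Insights.
        telemetry.Context.Cloud.RoleName = "my-function-app";
    }
}
```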
What version of Microsoft.Net.Sdk.Functions are you referencing? You may be hitting this: https://github.com/Azure/azure-functions-host/issues/5530#issuecomment-582553472.
The fix will be coming soon.
@grahambunce ran into the same issue. See https://github.com/Azure/azure-functions-host/issues/3741#issuecomment-507264617
@brettsam I'm referencing what I believe to be the last available version for .net core 2.1.x, i.e. Microsoft.NET.Sdk.Functions 1.0.31.
@brettsam is there a release date? I'm hoping this will also clear up the significant amount of "noise" in the AI trace logs that Azure Functions spits out; I can't seem to prevent them appearing in the logs.
I don't care about any of these at the Trace level - all I want are my own Trace logs, and I can't find a way to stop these system messages from being logged.
Does it not work to ignore their namespaces in appsettings.json?
@grahambunce -- you should be able to filter any messages you want by their Category (or even a partial match on their Category) in your host.json. See this for details: https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring#configure-categories-and-log-levels.
For example, the host.json below would turn off all logging except:
- MyNamespace would be filtered at the Trace level (in other words, not filtered).
- Host.Results would be filtered at the Information level.
```json
{
    "version": "2.0",
    "logging": {
        "logLevel": {
            "default": "None",
            "MyNamespace": "Trace",
            "Host.Results": "Information"
        }
    }
}
```
@brettsam thanks - that does work. I was confused as I use the AI NLog adapter and was trying to filter it there via NLog but it was making no difference. I guess these host logs use their own logging mechanisms.
A quick summary of what the flow looks like (hopefully this helps):
RAW LOGS --> LIVE METRICS --> FILTER --> "PERSISTED" APP INSIGHTS
I think there are two issues outstanding with the filtering for me, though I have hacked around them with the solution I outlined above:
1. There is no supported way to plug custom telemetry processors into the chain.
2. The automatic dependency tracking reports expected "error" responses (e.g. 404s and 409s) as failures.
For number 1 -- I've repurposed an existing issue for tracking and added a proposal for an improvement here: https://github.com/Azure/azure-webjobs-sdk/issues/2447.
For number 2 -- yeah, this is a problem we've seen with the auto-dependency tracking in general. We've changed our code in a few places to reduce 404 and 409s just to address this. Sometimes "error codes" from REST calls aren't really errors, yet App Insights reports them as such by default. One option is to register your own ITelemetryInitializer with DI that can check for your APIs that you want to mark as success and do it there. @cijothomas -- do you know if there's any other way to manage this?
The recommended way in Application Insights to deal with these is to write a telemetry processor to filter out unwanted dependencies. Unfortunately there is no easy way to hook up processors in Functions.
So writing TelemetryInitializers to override fields (Success) is the best bet until telemetry processor support is enabled.
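For example, a minimal sketch of such an initializer (the result code and dependency target checks are illustrative; adjust them to the calls you want to reclassify):
```C#
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class DependencySuccessInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Treat 404s from a particular (hypothetical) storage dependency as successful
        // rather than letting them show up as failures.
        if (telemetry is DependencyTelemetry dependency
            && dependency.ResultCode == "404"
            && dependency.Target != null
            && dependency.Target.Contains("mystorageaccount"))
        {
            dependency.Success = true;
        }
    }
}
```
It would then be registered with DI, e.g. `builder.Services.AddSingleton<ITelemetryInitializer, DependencySuccessInitializer>();` in the function app's Startup.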
@brettsam Any development on this, is it likely to be added to a sprint any time soon?
We're running into the issue described in https://github.com/Azure/azure-functions-host/issues/3741#issuecomment-584902726 locally (haven't tried Azure yet) when using Functions V3. We are trying to filter out synthetic traffic (Front Door/Traffic Manager probes) using a custom telemetry processor. Currently, since there is no service of type TelemetryConfiguration being registered, we can't hook into it and add our own processor. We're using the 3.0.7 SDK version, and I tried using the 2.11.0 version of App Insights, but that didn't fix it.
Note that using the extension method AddApplicationInsightsTelemetryProcessor doesn't work either (from Nuget Microsoft.ApplicationInsights.AspNetCore v2.8.0).
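For what it's worth, the kind of processor we want to plug in looks roughly like this sketch (checking SyntheticSource is one heuristic; depending on how the probe traffic is tagged you may need to match on User-Agent or URL instead):
```C#
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class SyntheticTrafficFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public SyntheticTrafficFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Drop anything Application Insights has tagged as synthetic
        // (availability tests, health probes, bots).
        if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource))
        {
            return;
        }

        _next.Process(item);
    }
}
```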
Same here - TelemetryConfiguration is always null on 3.0.7.
@vitalybibikov, some initial questions:
If both of those are yes, can you share a sample application that reproduces this? Even a csproj file may be enough.
Adding the "APPINSIGHTS_INSTRUMENTATIONKEY" setting fixed it for me. I had previously only tested it locally, not in Azure.
I've checked it out: when APPINSIGHTS_INSTRUMENTATIONKEY is set, the TelemetryConfiguration instance is no longer null.
Is this reflected in the docs? If it's not, maybe it should be, as it's not quite obvious.
Thanks.
What's the current state of this issue?
I've tried the code posted by @michaeldaw and my custom TelemetryProcessor was indeed called, but then Live Metrics in the portal is broken: it now shows only CPU/memory usage. I cannot see traces/dependencies anymore (they are definitely being sent, because after a few minutes I can see them in the Performance -> Dependencies UI).
Has there been any progress on fixing this issue? I'm still seeing the issue with Azure Functions V3.
This change appears to resolve the issue:
https://github.com/luthus/azure-webjobs-sdk/commit/3137c5c8e59fdd2495c4ad0b4a09b8748f7ee1f9
With this change we should be able to use `builder.Services.AddApplicationInsightsTelemetryProcessor`.
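i.e. something along these lines (a sketch; MyCustomTelemetryProcessor stands in for your own ITelemetryProcessor implementation):
```C#
builder.Services.AddApplicationInsightsTelemetryProcessor<MyCustomTelemetryProcessor>();
```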
@luthus's solution above is the correct one, but if you don't want to fork the WebJobs SDK, you can get this working correctly in Durable Functions (or Azure Functions in general) WITHOUT breaking Live Metrics dependency and error logging, which @kamilzzz saw and I did as well with @lmolkova's solution.
Registering an ITelemetryModule and inserting your ITelemetryProcessors into the chain builder there works as expected. This gets around `builder.Services.AddApplicationInsightsTelemetryProcessor()` not working as expected.
```C#
// startup.cs
builder.Services.AddSingleton<ITelemetryModule, MyCustomTelemetryModule>();
builder.Services.AddApplicationInsightsTelemetry(Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY"));

// custom module
public class MyCustomTelemetryModule : ITelemetryModule
{
    public void Initialize(TelemetryConfiguration configuration)
    {
        // add custom processors
        configuration.TelemetryProcessorChainBuilder.Use(next => new MyCustomTelemetryProcessor(next));
        configuration.TelemetryProcessorChainBuilder.Build();
    }
}

// custom processor
public class MyCustomTelemetryProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public MyCustomTelemetryProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        bool myCustomSkipTelemetry = false;
        if (myCustomSkipTelemetry)
            return;

        _next.Process(item);
    }
}
```
@jschieck that works as a workaround for now.
@jschieck This is good!
I just want to clarify something though.
The docs specifically say not to add AddApplicationInsightsTelemetry...
here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#logging-services
...but in your testing you found that it is ok to do so when customizing the processors?
The app we're doing this in runs in a Kubernetes cluster, not default Azure Functions, so I'm not sure if different rules apply. But we are still getting all of the built-in ILogger/App Insights functionality from the Functions host runtime without any side effects (that we've noticed).
Also, I haven't tested whether `builder.Services.AddSingleton<ITelemetryModule, MyCustomTelemetryModule>();` is all you would need to get it working in a normal function app.
@jschieck I tested your code sample in a "normal" Azure Function. I can confirm that the telemetry initializers and telemetry processors are called as expected (whereas telemetry processors are not being called for request telemetry items using the previous solution posted in this thread).
The drawback of your approach is that the telemetry processors are also called for dependencies that are internal to the Azure Functions SDK. These dependencies were being discarded beforehand with the approach I linked to above. Examples of such internal dependencies are retrieving new messages for the Service Bus trigger or the blob used for locking. In my sample Function App with a single Service Bus trigger, 135 "internal" dependencies were recorded in 60 seconds. These dependencies do end up being discarded before being ingested by Application Insights, but that is still a lot of telemetry going through the processors.
Would it be possible to get the PR for this (https://github.com/Azure/azure-webjobs-sdk/pull/2657) reviewed and merged so users can start using custom telemetry processors? I am in the process of standing up a new function app that is recording a very large amount of dependency telemetry, so a custom filter to exclude it is a must-have.
Please factor the following while weighing the pros and cons of tracking successful dependencies:
These are screenshots of the Application Insights instance and cost analysis for a particular Function App and related storage account in our system.
Above is a capture of the default 24-hour period search for the service in question. You can see that dependency tracking accounts for 1.4 million items, while Trace, Request, and Exception account for 40K, 18K, and 1.9K (really must look into those exceptions), respectively. Dependency events account for approximately 96% of all events.
This is the cost projection of the Application Insights instance. As before, the image shows that "REMOTEDEPENDENCY" events make up the vast majority of recorded events.
Finally, the above screenshot is a filtered selection from the "Cost by Resource" view, showing the cost of the Function App, Storage Accounts, and Application Insights instances in question. The cost of the Application Insights instance is 1252% of the cost of the Function App it is monitoring.
These costs are woefully unsustainable for us. Of late, my decision to use Function Apps, which I'd touted to my colleagues as extremely cost effective, is being called into question by my teammates and superiors. Application Insights has been an invaluable tool for diagnostics and troubleshooting; I'd liken using a Function App without an associated Application Insights instance to flying by the seat of my pants. That said, I will eventually have to choose to either stop using Application Insights or stop using Function Apps. I'm sure I'm not the only one who would really appreciate it if the Functions and Application Insights teams could find a solution by which that choice doesn't have to be made.