At a customer we use Hybrid Connections to connect App Services to on-premises applications. Today I was working on changing those hybrid connections, as we are migrating the on-premises services, which involves changing the hostnames.

I was cleaning up old connections, but the number of used connections in the App Service plan wouldn’t come down. We were hitting the limit of 25 connections, so I wasn’t able to add new ones for the new hostnames.

Obviously some connections were still in use by other App Services I wasn’t aware of. And there are a lot of web apps in this subscription, which I didn’t want to check one by one. So I needed an easy way to list the Hybrid Connections in the subscription and where they were used.

First I tried Azure Resource Graph Explorer to find the Hybrid Connections in use, but unfortunately it doesn’t contain the child resources of the App Services that I was looking for. So I resorted to Azure CLI and PowerShell Core to do the job.

First, make sure you are logged in with Azure CLI and then list all the web apps in the subscription you want to query. We let Azure CLI output a JSON array of objects with the web app name and resource group as properties, and then pipe it to ConvertFrom-Json.

az login
$subscriptionId = '11111111-1111-1111-1111-111111111111'

$sites = az webapp list --subscription $subscriptionId --query "[].{WAName:name, WARg:resourceGroup}" -o json | ConvertFrom-Json

Then we can loop through all the sites and retrieve the linked Hybrid Connections. We add each Hybrid Connection to a hashtable, with the name of the web app in an array as its value. If the hybrid connection was already added to the hashtable before, we append the name of the web app to the existing value:

$h = @{}

foreach ($site in $sites)
{
    $hybridConnections = az webapp hybrid-connection list --name $site.WAName --resource-group $site.WARg --subscription $subscriptionId | ConvertFrom-Json
    
    foreach($hybridConnection in $hybridConnections){
        if ($h.ContainsKey($hybridConnection.name)){
            $h[$hybridConnection.name] += $site.WAName
        }
        else {
            $h.Add($hybridConnection.name, @($site.WAName))
        }
    }    
}

Then we can write out all the connections and the apps they are used in:

foreach ($key in $h.keys)
{
    Write-Host $key
    foreach ($webapp in $h[$key])
    {
        Write-Host "  -- $webapp"
    } 
}

So a little bit of PowerShell and Azure CLI goodness can help you out if you have a lot of web apps you don’t want to check one by one.

One of the services I’m building at one of my customers is an API that provides invoice information for a customer self-service portal. The invoice information is stored (of course) in Azure CosmosDb. Invoices are partitioned by customerid, but those partitions can still contain a lot of items.

When querying for the total outstanding amount of unpaid invoices, you can use an aggregate query:

SELECT SUM(c.OutstandingAmount) AS TotalOutstandingAmount FROM c WHERE c.Status <> 1

We executed the query with the following code:

var feedOptions = new FeedOptions
{
    EnableCrossPartitionQuery = false,
    PartitionKey = new PartitionKey(partitionKey)
};

var querySpec = new SqlQuerySpec() 
{ 
    QueryText = queryText, 
    Parameters = new SqlParameterCollection(queryParameters.Select(pair => new SqlParameter(pair.Key, pair.Value))) 
};

using (var query = _documentClient.Value.CreateDocumentQuery<T>(collectionUri, querySpec, feedOptions).AsDocumentQuery())
{
    // Only the first page of results is read here
    var response = await query.ExecuteNextAsync<T>(cancellationToken);

    return response.First();
}

But sometimes this returned an item with an amount set to 0 where I knew this should not be the case. When I ran the same code against a test database, the issue did not arise.

So I resorted to Fiddler to help me find the difference between the two queries.

First I ran the query against the test database, and then against the acceptance database. Comparing the two responses, the latter returned a continuation token through a response header (x-ms-continuation). So this means we should keep asking for more results in our code:

var items = new List<T>();

using (var query = _documentClient.Value.CreateDocumentQuery<T>(collectionUri, querySpec, feedOptions).AsDocumentQuery())
{
    // Keep fetching pages until the continuation is exhausted
    while (query.HasMoreResults)
    {
        var response = await query.ExecuteNextAsync<T>(cancellationToken);
        
        items.AddRange(response);
    }
}

return items;

Then you can create the aggregated result by using LINQ to sum up the values of the returned items.
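
The items returned by the code above are the per-page values. A minimal sketch of summing them with LINQ, assuming a (hypothetical) result type whose property matches the TotalOutstandingAmount alias from the query above:

// Hypothetical projection type matching the TotalOutstandingAmount alias in the aggregate query
public class OutstandingAmountResult
{
    public decimal TotalOutstandingAmount { get; set; }
}

// 'items' is the list returned by the query code above; summing the per-page values
// gives the final aggregate (requires using System.Linq;)
var totalOutstandingAmount = items.Sum(item => item.TotalOutstandingAmount);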

For a project I’m working on I needed to specify an outgoing proxy for accessing Azure Table Storage in a .NET console application.

Unfortunately the default way of setting a proxy in the app.config of classic .NET Framework applications doesn’t work for .NET Core.

After fiddling around for a bit I found the solution for setting it in a .NET Core application (based on an answer on Stack Overflow). If you use the Microsoft.Azure.Cosmos.Table NuGet package instead of the old WindowsAzure.Storage package (I’m using version 1.0.1), the CloudTableClient allows you to pass in a TableClientConfiguration with a DelegatingHandler:

public TableStorage(string accountName, string keyValue, IWebProxy proxy)
{
    _storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, keyValue), true);
    var storageDelegatingHandler = new StorageDelegatingHandler(proxy);
    _tableClient = _storageAccount.CreateCloudTableClient(
        new TableClientConfiguration
        {
            RestExecutorConfiguration = new RestExecutorConfiguration
            {
                DelegatingHandler = storageDelegatingHandler
            }
        });

    // further config
}

In the DelegatingHandler you can set the proxy for the HttpClientHandler:

public class StorageDelegatingHandler : DelegatingHandler
{
    private readonly IWebProxy _proxy;

    private bool _firstCall = true;

    public StorageDelegatingHandler()
        : base()
    {
    }

    public StorageDelegatingHandler(HttpMessageHandler httpMessageHandler)
        : base(httpMessageHandler)
    {
    }

    public StorageDelegatingHandler(IWebProxy proxy)
    {
        _proxy = proxy;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // The inner HttpClientHandler can only be configured before its first request,
        // so set the proxy on the first call only
        if (_firstCall && _proxy != null)
        {
            HttpClientHandler inner = (HttpClientHandler)InnerHandler;
            inner.Proxy = _proxy;
        }

        _firstCall = false;
        return base.SendAsync(request, cancellationToken);
    }
}

Now you can configure the proxy where you’re setting up dependency injection:

public class ProxySettings
{
    public bool Use => !string.IsNullOrWhiteSpace(Address);

    public string Address { get; set; }

    public bool BypassOnLocal { get; set; }
}

var proxySettings = Configuration
    .GetSection(nameof(ProxySettings))
    .Get<ProxySettings>();

IWebProxy proxy = null;
if (proxySettings != null && proxySettings.Use)
{
    proxy = new WebProxy(proxySettings.Address, proxySettings.BypassOnLocal);
    WebRequest.DefaultWebProxy = proxy;
}

services.AddSingleton<IStorage>(new TableStorage(Configuration["StorageAccountName"], Configuration["StorageAccountKey"], proxy));

After a certificate in Azure KeyVault is renewed, you might need to push it to the App Services that are using it. Certificates are stored in Azure as separate resources in the same resource group as the App Service plan.

If you’re using Infrastructure-as-Code (and you should) through ARM templates, you can redeploy the template that deploys the certificate resources. But there’s another way if you don’t want to redeploy the templates (e.g. because it takes a long time depending on the number of resources).

If you look up the certificate through Azure Resource Explorer, you can update it through the UI. Just click the “Edit” button.

Don’t change anything in the request body and just click the “PUT” button. This will trigger Azure Resource Manager to get the renewed certificate from Azure KeyVault and update it for the App Service plan.
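
If you would rather script this than click through Resource Explorer, the same “read it and PUT it back unchanged” trick can probably be reproduced with Azure PowerShell. A minimal sketch, assuming the AzureRM cmdlets and with the resource id as a placeholder:

# Placeholder resource id of the Microsoft.Web/certificates resource
$certificateResourceId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/certificates/<certificate-name>'

# Read the existing certificate resource...
$certificate = Get-AzureRmResource -ResourceId $certificateResourceId

# ...and PUT it back unchanged, so Azure Resource Manager fetches the renewed certificate from KeyVault
Set-AzureRmResource -ResourceId $certificateResourceId -Properties $certificate.Properties -Force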

Last week I was busy updating the certificates stored in Azure KeyVault for a project I’m working on. Previously we added the certificates that were referenced in App Services as secrets in the KeyVault with an application/x-pkcs12 content type.

We now wanted to change those to real certificates, so the renewal of the certificates can be managed from Azure KeyVault.

After storing the certificates in KeyVault and modifying the ARM templates for the certificate resources to reference the new secretNames, I ran into an ARM deployment error with the following message: “The parameter KeyVaultId & KeyVaultSecretName has an invalid value.”.

It turns out the new certificate we were referencing was already newer than the certificate stored in the previously referenced secret. Apparently this results in the very cryptic error message above. The solution is to make sure the certificates are the same before deploying the ARM template with the updated secretName.

You can download the original certificate as a .pfx with some PowerShell you can find here.

After that, import the .pfx into the new KeyVault certificate with the Import-AzureKeyVaultCertificate cmdlet.
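
A minimal sketch of those two steps with the AzureRM cmdlets, assuming the old certificate is stored base64-encoded in the secret and the .pfx has no password (add -Password otherwise); vault, secret and certificate names are placeholders:

# Download the certificate that was stored as a secret and write it out as a .pfx
$secret = Get-AzureKeyVaultSecret -VaultName '<vault-name>' -Name '<old-secret-name>'
$pfxBytes = [Convert]::FromBase64String($secret.SecretValueText)
[IO.File]::WriteAllBytes("$PWD\original-certificate.pfx", $pfxBytes)

# Import the .pfx into the new KeyVault certificate
Import-AzureKeyVaultCertificate -VaultName '<vault-name>' -Name '<new-certificate-name>' -FilePath "$PWD\original-certificate.pfx"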

Now you can redeploy the ARM template to update the keyVaultSecretName. After that you can update the certificate again.