Azure CosmosDb doesn’t have a way to create an auto-incrementing id for new documents, such as SQL Server has for rows with an identity column.

However, in one of the solutions I’m working on, I needed to make sure newly created documents would get an Id based on an auto-incrementing integer. In this post I will share a solution I came up with to provide this functionality in a setup where multiple instances of the same web API can create items simultaneously.

I’ve created a small sample application that inserts multiple Todo items into a collection in parallel.

Please note that if you have multi-region writes enabled, it will take more effort to create a solution that also handles conflict resolution.

The first step is to create a document class that contains the last used id.
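A minimal sketch of what that class could look like (the property names are my own; MaxNumber holds the last issued number, and the ETag is needed later for the optimistic concurrency check):

using Newtonsoft.Json;

public class TodoAutonumber
{
    [JsonProperty("id")]
    public string Id { get; set; } = "autonumber";

    // Must live in the same logical partition as the Todo items.
    public string PartitionKey { get; set; }

    // The last issued sequence number.
    public int MaxNumber { get; set; }

    // Populated by Cosmos DB; used for the optimistic concurrency check.
    [JsonProperty("_etag")]
    public string ETag { get; set; }
}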

And a document class for a Todo item.
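Again a sketch; the item shares the partition key with TodoAutonumber and derives its Id from the issued number:

using Newtonsoft.Json;

public class TodoItem
{
    // Based on the auto-incremented number (here simply its string representation).
    [JsonProperty("id")]
    public string Id { get; set; }

    public string PartitionKey { get; set; }

    public string Title { get; set; }
}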

By using a TransactionalBatch we can insert the Todo item and update the TodoAutonumber document in an atomic transaction. This does require that both documents are stored in the same logical partition.

  • Retrieve the TodoAutonumber document with a point read; if it doesn’t exist, create it
  • Increment the MaxNumber property and use that as the value for the new TodoItem
  • Create a TransactionalBatch that replaces the TodoAutonumber document and inserts the TodoItem
  • Check for errors and throw an exception if one of the operations fails

By passing the ETag of the TodoAutonumber document, the operation will fail if another update to TodoAutonumber has happened in the meantime.
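Put together, the flow could look like the sketch below, written against the .NET SDK v3 (Microsoft.Azure.Cosmos); the method and variable names are illustrative, not taken verbatim from the sample:

using System;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public async Task<TodoItem> CreateTodoItemAsync(Container container, string partitionKey, string title, CancellationToken cancellationToken)
{
    var pk = new PartitionKey(partitionKey);

    // 1. Point read the counter document; create it when it doesn't exist yet.
    TodoAutonumber autonumber;
    try
    {
        var read = await container.ReadItemAsync<TodoAutonumber>("autonumber", pk, cancellationToken: cancellationToken);
        autonumber = read.Resource;
        autonumber.ETag = read.ETag;
    }
    catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
    {
        autonumber = new TodoAutonumber { PartitionKey = partitionKey, MaxNumber = 0 };
        var created = await container.CreateItemAsync(autonumber, pk, cancellationToken: cancellationToken);
        autonumber.ETag = created.ETag;
    }

    // 2. Claim the next number and use it for the new Todo item.
    autonumber.MaxNumber++;
    var todoItem = new TodoItem
    {
        Id = autonumber.MaxNumber.ToString(),
        PartitionKey = partitionKey,
        Title = title
    };

    // 3. Replace the counter and insert the item in one atomic batch.
    //    IfMatchEtag makes the batch fail when someone else updated the counter in between.
    var response = await container.CreateTransactionalBatch(pk)
        .ReplaceItem(autonumber.Id, autonumber, new TransactionalBatchItemRequestOptions { IfMatchEtag = autonumber.ETag })
        .CreateItem(todoItem)
        .ExecuteAsync(cancellationToken);

    // 4. ExecuteAsync doesn't throw on failure, so check the status ourselves.
    if (!response.IsSuccessStatusCode)
    {
        throw new CosmosException("Creating the todo item failed", response.StatusCode, 0, response.ActivityId, response.RequestCharge);
    }

    return todoItem;
}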

We can then use Polly (and Polly.Contrib.WaitAndRetry) to retry the operation if it fails because the Id was already claimed during execution (e.g. by another thread or web service instance). I’m using the DecorrelatedJitter back-off strategy so that not all retries run again at approximately the same time.
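A retry policy along those lines could look like this (a sketch; the delays and retry count are assumptions):

using System;
using System.Net;
using Polly;
using Polly.Contrib.WaitAndRetry;

// Decorrelated jitter back-off, so competing writers don't all retry in lockstep.
var delays = Backoff.DecorrelatedJitterBackoffV2(
    medianFirstRetryDelay: TimeSpan.FromMilliseconds(100),
    retryCount: 5);

// Retry when the ETag check failed (412) or the counter was created concurrently (409).
var retryPolicy = Policy
    .Handle<CosmosException>(ex =>
        ex.StatusCode == HttpStatusCode.PreconditionFailed ||
        ex.StatusCode == HttpStatusCode.Conflict)
    .WaitAndRetryAsync(delays);

var item = await retryPolicy.ExecuteAsync(
    ct => CreateTodoItemAsync(container, partitionKey, "Buy milk", ct),
    cancellationToken);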

When we now start 100 tasks at once, in the end all items are created with sequential Id values.
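In the sample that boils down to something like this (sketch, reusing the retry policy from above):

var tasks = Enumerable.Range(1, 100)
    .Select(i => retryPolicy.ExecuteAsync(
        ct => CreateTodoItemAsync(container, partitionKey, $"Todo {i}", ct),
        cancellationToken));

await Task.WhenAll(tasks);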

The full sample application can be found here.

The solution isn’t battle-tested in high volume applications, but seems to work ok for the application we’re building.

At a customer we make use of Hybrid Connections to connect App Services to on-premise applications. Today I was working on changing those hybrid connections, as we are migrating the on-premise services, which involves changing the hostnames.

I was cleaning up old connections, but the number of used connections in the App Service plan wouldn’t come down. We were hitting the limit of 25 connections, so I wasn’t able to add new ones for the new hostnames.

Obviously some connections were still in use in other App Services I wasn’t aware of. And there are a lot of web apps in this subscription, which I didn’t want to check one-by-one. So I needed an easy way to list the Hybrid connections in the subscription and where they were used.

First I tried Azure Resource Graph Explorer to find the used Hybrid connections, but unfortunately that doesn’t contain the child resources of the App Services that I was looking for. So I resorted to Azure CLI and PowerShell Core to do the job.

First, make sure you are logged in with Azure CLI and then list all the web apps in the subscription you want to query. We let Azure CLI output a JSON array of objects with the web app name and resourceGroup as properties, and then pipe it to ConvertFrom-Json.

az login
$subscriptionId = '11111111-1111-1111-1111-111111111111'

$sites = az webapp list --subscription $subscriptionId --query "[].{WAName:name, WARg:resourceGroup}" -o json | ConvertFrom-Json

Then we can loop through all the sites and retrieve the linked Hybrid connections. We add each Hybrid connection to a hashtable, with the name of the web app in an array as its value. If the hybrid connection was already added to the hashtable before, we append the name of the web app to the value:

$h = @{}

foreach ($site in $sites)
{
    $hybridConnections = az webapp hybrid-connection list --name $site.WAName --resource-group $site.WARg --subscription $subscriptionId | ConvertFrom-Json
    
    foreach($hybridConnection in $hybridConnections){
        if ($h.ContainsKey($hybridConnection.name)){
            $h[$hybridConnection.name] += $site.WAName
        }
        else {
            $h.Add($hybridConnection.name, @($site.WAName))
        }
    }    
}

Then we can write out all the connections and the apps they are used in:

foreach ($key in $h.keys)
{
    Write-Host $key
    foreach ($webapp in $h[$key])
    {
        Write-Host "  -- $webapp"
    } 
}

So a little bit of PowerShell and Azure CLI goodness can help you out if you have a lot of web apps you don’t want to check one by one.

One of the services I’m building at one of my customers is an API that provides invoice information for a customer self-service portal. The invoice information is stored (of course) in Azure CosmosDb. Invoices are partitioned by customerid, but those partitions can still contain a lot of items.

When querying for the total outstanding amount of unpaid invoices you can use an aggregate query:

SELECT SUM(c.OutstandingAmount) AS TotalOutstandingAmount FROM c WHERE c.Status <> 1

We executed the query with the following code:

var feedOptions = new FeedOptions
{
    EnableCrossPartitionQuery = false,
    PartitionKey = new PartitionKey(partitionKey)
};

var querySpec = new SqlQuerySpec() 
{ 
    QueryText = queryText, 
    Parameters = new SqlParameterCollection(queryParameters.Select(pair => new SqlParameter(pair.Key, pair.Value))) 
};

using (var query = _documentClient.Value.CreateDocumentQuery<T>(collectionUri, querySpec, feedOptions).AsDocumentQuery())
{
    var response = await query.ExecuteNextAsync<T>(cancellationToken);

    return response.First();
}

But sometimes this returned an item with the amount set to 0, where I knew that should not be the case. When I ran the same code against a test database, the issue did not occur.

So I resorted to Fiddler to help me find the difference between the two queries.

First I ran the query against the test database, and then against the acceptance database. Comparing the two responses shows that the latter returns a continuation token through a response header. So this means we should continue asking for results in our code:

var items = new List<T>();

using (var query = _documentClient.Value.CreateDocumentQuery<T>(collectionUri, querySpec, feedOptions).AsDocumentQuery())
{
    while (query.HasMoreResults)
    {
        var response = await query.ExecuteNextAsync<T>(cancellationToken);
        
        items.AddRange(response);
    }
}

return items;

Then you can create an aggregated result by using LINQ to sum up the values of the returned items.
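For this query that is a one-liner (assuming a result type with a TotalOutstandingAmount property, matching the alias in the SELECT):

var totalOutstandingAmount = items.Sum(i => i.TotalOutstandingAmount);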

For a project I’m working on I needed to specify an outgoing proxy for accessing Azure Table Storage in a .NET Core console application.

Unfortunately the default way of setting a proxy in the app.config of classic .NET applications doesn’t work for .NET Core.

After fiddling around for a bit I found the solution for setting it in a .NET Core application (based on an answer on Stack Overflow). If you use the Microsoft.Azure.Cosmos.Table NuGet package instead of the old WindowsAzure.Storage package (I’m using version 1.0.1), the CloudTableClient allows you to pass in a TableClientConfiguration with a DelegatingHandler:

public TableStorage(string accountName, string keyValue, IWebProxy proxy)
{
    _storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, keyValue), true);
    var storageDelegatingHandler = new StorageDelegatingHandler(proxy);
    _tableClient = _storageAccount.CreateCloudTableClient(
        new TableClientConfiguration
        {
            RestExecutorConfiguration = new RestExecutorConfiguration
            {
                DelegatingHandler = storageDelegatingHandler
            } 
        });

    // further config
}

In the DelegatingHandler you can set the proxy for the HttpClientHandler:

public class StorageDelegatingHandler : DelegatingHandler
{
  private readonly IWebProxy _proxy;

  private bool _firstCall = true;

  public StorageDelegatingHandler() 
  : base()
  {
  }

  public StorageDelegatingHandler(HttpMessageHandler httpMessageHandler)
  : base(httpMessageHandler)
  {
  }

  public StorageDelegatingHandler(IWebProxy proxy)
  {
    _proxy = proxy;
  }

  protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
  {
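    // HttpClientHandler properties can't be changed once it has sent its first request,
    // so set the proxy lazily, just before the first call goes out.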
    if (_firstCall && _proxy != null)
    {
      HttpClientHandler inner = (HttpClientHandler)InnerHandler;
      inner.Proxy = _proxy;
    }

    _firstCall = false;
    return base.SendAsync(request, cancellationToken);
  }
}

Now you can configure the proxy where you’re setting up dependency injection:

public class ProxySettings
{
  public bool Use => !string.IsNullOrWhiteSpace(Address);

  public string Address { get; set; }

  public bool BypassOnLocal { get; set; }
}

var proxySettings = Configuration
    .GetSection(nameof(ProxySettings))
    .Get<ProxySettings>();

IWebProxy proxy = null;
if (proxySettings != null && proxySettings.Use)
{
    proxy = new WebProxy(proxySettings.Address, proxySettings.BypassOnLocal);
    WebRequest.DefaultWebProxy = proxy;
}

services.AddSingleton<IStorage>(new TableStorage(Configuration["StorageAccountName"], Configuration["StorageAccountKey"], proxy));            

After a certificate in Azure KeyVault is renewed, you might need to push it to the App Services that are using it. Certificates are stored in Azure as separate resources in the same resource group as the App Service Plan.

If you’re using Infrastructure-as-Code (and you should) through ARM templates, you can redeploy the template that deploys the certificate resources. But there’s also another way if you don’t want to redeploy the templates (e.g. because it takes a long time depending on the number of resources).

If you look up the certificate through Azure Resource Explorer you can update the certificate through the UI. Just click the “Edit” button:

Don’t change anything in the request message and just click the “PUT” button. This will trigger Azure Resource Manager to get the renewed certificate from Azure KeyVault and update it in the App Service Plan.