
Azure API Management – User Migration Tooling – Internal Users

In Azure API Management, users are managed in the Publisher Portal, which will one day be deprecated. Until then we have the Azure Portal, which is slowly being extended to let us manage groups, users and permissions. One element that is not as easy as it may seem is the migration of those users from one instance to another. Some API programs run up to six instances of API Management, and migrating test users and onboarded users across the various boxes is tedious; doing so manually is not a consideration.

Here are some things to be considered for our scenario:

a) Users are stored in an internal database and not in AAD.

b) We will not use ARM templates to move the users over.

c) We wish to use the underlying APIM REST API to access users, groups and more, in order to migrate new users and update certain elements as part of the CI/CD pipeline.

Let’s take a look at a solution.
  1. Get all users in the source zone
    1. API call to https://management.azure.com/subscriptions/{SubscriptionIDSource}/resourceGroups/{ResourceGroupNameSource}/providers/Microsoft.ApiManagement/service/{APIMInstanceNameSource}/users?api-version=2017-03-01
  2. Get all users in the target zone
    1. API call to https://management.azure.com/subscriptions/{SubscriptionIDTarget}/resourceGroups/{ResourceGroupNameTarget}/providers/Microsoft.ApiManagement/service/{APIMInstanceNameTarget}/users?api-version=2017-03-01
  3. Compare and determine which users are to be moved
    1. Use the returned lists with Intersect and Except to find out what is in both lists and what is not
  4. Assign the user to a group
    1. Get the user's groups with https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.ApiManagement/service/{APIMInstanceName}/users/{userID}/groups?api-version=2017-03-01
  5. Assign the user to a subscription
    1. Use https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.ApiManagement/service/{APIMInstanceName}/users/{userID}/subscriptions?api-version=2017-03-01
  6. Report on the interaction
    1. Prepare a report.
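The list-and-compare steps above can be sketched in a few lines. This is a minimal Python illustration, not the full tool: `fetch_users` is a hypothetical helper wrapping the ARM REST call, and `diff_users` mirrors the Intersect/Except comparison (keying users by email address is an assumption; any unique property would do).

```python
import json
import urllib.request

API_VERSION = "2017-03-01"
ARM_BASE = "https://management.azure.com/subscriptions"

def fetch_users(token, sub_id, resource_group, apim_name):
    """Hypothetical helper: list the users of one APIM instance via the ARM REST API."""
    url = (f"{ARM_BASE}/{sub_id}/resourceGroups/{resource_group}/providers/"
           f"Microsoft.ApiManagement/service/{apim_name}/users?api-version={API_VERSION}")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def diff_users(source_users, target_users):
    """Step 3: the Intersect/Except comparison, here done with Python sets."""
    source = {u["properties"]["email"] for u in source_users}
    target = {u["properties"]["email"] for u in target_users}
    return {
        "to_migrate": source - target,       # Except: in source, missing from target
        "already_present": source & target,  # Intersect: present in both instances
    }
```

The same diff result then drives the group and subscription assignment calls in steps 4 and 5, and feeds the report in step 6.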

This technique has been demonstrated at many of the talks I do, and I have a full project for you if you wish. Let me know and I can send you the source if you would like to use it.

For the CI/CD Pipeline I use the release constructs to fire a unit test which contains the migration tooling. Great for test and dev zones.

Happy coding!



Azure API Management – x509 Policies and Security Constraints

When working with x509 certificates in Azure API Management, it is possible to accept an x509 certificate on the initial call to identify the client.

This means the POST to Azure API Management includes the x509 certificate, and the policies should include a validation to ensure that the certificate is present.

Where things go astray is when we also have an x509 certificate to secure the back end channel. Now we have the possibility of two certificates.

One to identify the client. One to secure the back end channel.

Great! No issues so far: we can use a check to validate the certificate as it comes in, and we can attach an x509 certificate to secure the back end with a one-liner in an APIM policy.

Here is where issues arise!

What are the issues which can present themselves in this scenario?

Unable to update API definition manually

a) When securing the back end channel from APIM, try to update your API definition from the GUI (Portal) and let me know if you can attach the x509 certificate so that the API does not complain about a missing certificate before it renders the swagger definition for APIM to consume….

Move two certificates to the API

b) What if your first x509 certificate is used to identify a particular client and match a record in a database in the API? Now we have to send down two x509 certificates.

Here are fixes to both these issues in APIM:

Unable to update API definition manually

For A, where we have issues updating manually due to x509 certificates not being able to be attached in the Portal. (For that matter, it is also not possible to do so in the Developer Portal when you use the Try It! feature, so your clients are stuck using unit tests and cannot use the tooling manually.)

Move two certificates to the API

<!-- relay the client cert -->
<choose>
  <when condition="@(context.Request.Certificate != null)">
    <set-header name="X-APIM-ClientCert" exists-action="override">
      <value>@(Convert.ToBase64String(context.Request.Certificate.RawData))</value>
    </set-header>
  </when>
</choose>

<!-- send an x509 certificate to secure the back end -->
<authentication-certificate thumbprint="your guid here" />

The relay will take the client cert (x509) the client sent and move it into the X-APIM-ClientCert (custom) header, and authentication-certificate with your thumbprint will present your cert on the back end channel, which surfaces in the X-ARR-ClientCert header. You have two headers going downstream… ensure you enforce HTTPS.

X-ARR-ClientCert is for the APIM-to-API/webapp mutual TLS authentication.
X-APIM-ClientCert (or whatever you choose to call it) relays the client cert downstream.
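On the API side, the relayed header can be turned back into certificate bytes and matched against a whitelist or database record. A minimal Python sketch, assuming the policy above placed the base64-encoded DER bytes of the cert in the X-APIM-ClientCert header (the header name and encoding are this post's convention, not a standard):

```python
import base64
import hashlib

def client_cert_thumbprint(headers):
    """Decode the relayed client cert header and compute its SHA-1 thumbprint,
    which can then be compared against known client thumbprints."""
    b64 = headers.get("X-APIM-ClientCert")
    if b64 is None:
        return None  # no client cert was relayed on this request
    der = base64.b64decode(b64)
    return hashlib.sha1(der).hexdigest().upper()
```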

Happy Coding!

Azure API Management x509 Certificates Demystified

x509 certificates are heaven-sent. They give us the capability to do mutual authentication, secure back end channels and validate clients; in the end they are just a great security construct.

When building a professional API program, one must go over the capabilities of the API Management appliance (SaaS) to secure the payloads. Having done some work with Azure API Management and x509 certificates, I know the shortcomings and the key features, as well as techniques to relay the initial certificate to the back end channel even if that back end channel is also using an x509 certificate.

First and foremost, we will go over the different use cases.

  1. Identifying the client in APIM
  2. APIM securing a back end channel via an x509 certificate
  3. Both 1 and 2, where the API will receive both x509 certificates: one to secure the back end channel, and one to identify a client and possibly make decisions based on it
  4. A developer trying to attach an x509 certificate in the developer portal in order to test an API

Web Application Fortification with ModSecurity over IIS : OWASP ZAP Zed Attack Proxy


Who will prevail? Two of my favorite tools at hand: one is for offence and one is for defence, though we can argue that both are for defence if you analyse it from another point of view.

Sporting what looks like one of the coolest logos around, the ZAP Attack Proxy is a free tool from OWASP that can act as a proxy intercepting traffic for analysis and also performs scans. Not to mention it can integrate with a large number of other tools.

From their site:

The OWASP Zed Attack Proxy (ZAP) is one of the world's most popular free security tools and is actively maintained by hundreds of international volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It's also a great tool for experienced pentesters to use for manual security testing.

Let's get into the action. I have two sites on IIS: one is secured by ModSecurity and the other isn't. Let's see the variances.

Port 80 is secured, and port 2016 has just seen a run.

When firing the execution against the port 80 rendition we get a 403, which is what we want. Not long ago a consultant came in and advised us to use this tool, which I also advise! However, I asked: what do you do if we have an IDS that knows this signature and stops the traffic instantly? She had not seen such a thing before, usually seeing the tooling continue on with the scan. Now, using this tool as a proxy to intercept is a different ball game.

The takeaway: how many different scanners have you tested against your site? Do you stop them instantly? Do you have forensics telling you that someone at X IP address is constantly scanning?

Food for thought!

Happy Defending!

Web Application Fortification with ModSecurity over IIS : HTTrack Website Copier

Excessive recursion is the number one problem plaguing modern web applications and APIs. I always use the analogy of the bank where the client keeps going back to the teller trying credentials in order to have his card authenticated. One of the elements I like to utilise is the module that Mads did in 2007!

What, 2007?! Yes, 2007! It works wonders for customers who refuse to add ModSecurity or a SNORT IDS, or to have any appliances. Then, whether on classic WebForms / MVC / APIs, we integrate this module and I can tweak it to allow only enough traffic to mimic a human user.
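The throttling idea can be sketched as a sliding-window counter per client. This is a Python illustration of the technique only, not Mads' actual module; the window size and hit limit are assumptions you would tune to what a human can plausibly produce.

```python
import time
from collections import defaultdict, deque

class SlidingWindowThrottle:
    """Allow at most max_hits requests per client IP in any window_seconds span;
    a scanner firing hundreds of requests per second trips this immediately."""
    def __init__(self, max_hits=10, window_seconds=10.0):
        self.max_hits = max_hits
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client IP -> timestamps of recent hits

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # drop hits that have fallen out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_hits:
            return False  # excessive recursion: block or tarpit this client
        q.append(now)
        return True
```

A module like this sits in the request pipeline and simply refuses (or delays) any client whose rate exceeds the configured ceiling.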

ModSecurity over IIS is excellent when dealing with excessive recursion. I have seen it stop the OWASP ZAP Zed Attack Proxy in its tracks, stop Brutus from cycling its usual credential attacks, and stop SQLMap from trying to pull databases from vulnerable SQLi sites. One case where it allowed the traffic through was with the HTTrack Website Copier.

What is the HTTrack Website Copier? From their site:

HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility.

It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site’s relative link-structure. Simply open a page of the “mirrored” website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.

I first utilised this when our product manager was going to a remote part of Africa where internet access would be scarce. I downloaded the utility and, presto, I had a nice offline rendition of the site.

Then it dawned on me: isn't this a nice reconnaissance tool for sites that perhaps do not have a lot of forensics, allowing us to download a lot of elements and then do some analysis offline?

Evidently not everything can be pulled, and observing something live is better, but we don't register hits on things that are local.

Notice that ModSecurity and IIS allowed me to fire excessive recursions against the site; I had to add custom rules in order to halt the attack.

Most likely no one is testing against it!

Happy defending!


ASP.Net Web API and ModSecurity over IIS

ModSecurity is a great tool and a great complement to IIS. The best thing is that it can secure all sites or just some sites; regardless of what you want to secure, as long as you can run the HTTPModule you can secure the inbound and outbound payloads.

From their site:

What Can ModSecurity Do?

ModSecurity is a toolkit for real-time web application monitoring, logging, and access control. I like to think about it as an enabler: there are no hard rules telling you what to do; instead, it is up to you to choose your own path through the available features. That’s why the title of this section asks what ModSecurity can do, not what it does.

In order to install ModSecurity, head over to the project site to get the latest installer:

ModSecurity: Open Source Web Application Firewall
Here are the install steps and the discovery and startup of your first site, on premise and in the cloud.
First and foremost: use the double click! It's what us devs do best!
Away we go…

There are 64-bit and 32-bit renditions, and a repository for the OWASP CRS (Core Rule Set), which you will want.

Next is the ability to configure the instance. You will want to say yes unless you are doing more of a silent install or want to PowerShell these permissions/additions… otherwise select the box and move along.

We are now complete; finish and go explore.

This said, the first thing to look at is IIS itself.

Notice the addition of two new HTTPModules:

Excellent, now off to the root! Which should reside at:

 C:\Program Files\ModSecurity IIS

Peruse the files and concentrate on the .conf files.

Then, for the site you want enabled, use this in your web.config:

<system.webServer>
  <ModSecurity enabled="true"
               configFile="C:\Program Files\ModSecurity IIS\modsecurity_iis.conf" />
  <modules>
    <!--<remove name="ModSecurity IIS" />-->
    <add name="ModSecurity IIS (64bits)" preCondition="bitness64" />
  </modules>
</system.webServer>



Away you go… in my next post I will be attacking a localhost site with various tools to see how ModSecurity and IIS react.

Happy Defence!

Microsoft Dynamics for DotNet Developers

With the somewhat recent announcement that Dynamics is going to be the CRM of choice at the GOC, we are announcing a presentation on Microsoft Dynamics for .Net Developers. When we discussed doing a series to start up a study group, the masses wanted BA, functional and testing focused areas; however, being that our user group is more technical in nature, we will be concentrating on the .Net side of things, with a lot of examples coming from either ASP.Net Web API or other elements.

Here is the event:

Microsoft Dynamics Certification Study Group Planning

Tuesday, Feb 14, 2017, 12:00 PM

Microsoft Canada Co. (Ottawa)
100 Queen Street, Suite 500 Ottawa, ON

26 IT Community Members Went

Planning and orchestration of a new study group for a series of MS Dynamics certification exams. We are pleased to announce that we will work as a group to create a new Study Group for Microsoft Dynamics exams. Previously we have had success with MCAD and MCSD study groups and we wish to continue with this new series. Planning: • Which exams • Exam…

Check out this Meetup →


Azure Lunch and Learn: Azure Api Management Showcase

I have great news to share with the community. I was able to secure a room for 12 engagements in order to go forward with an Azure Monthly Series.

As an ASP.Net and ASP.Net Web API specialist, I will be doing the demos around these constructs.

The first is on Azure API Management, where we will also see renditions of MuleSoft and Apigee for API Management. For the first lunch and learn we will concentrate on modeling ASP.Net Web APIs and creating ASP.Net Web APIs. Once this base is complete, we will continue and aggregate the API with API Management. I would say the session will be 80% Web API and 20% Azure API Management.

See you there:

Azure Lunch and Learn: Azure Api Management Showcase

Tuesday, Feb 28, 2017, 12:00 PM

Microsoft Canada Co. (Ottawa)
100 Queen Street, Suite 500 Ottawa, ON

41 IT Community Members Went

How: This first segment in the Azure Lunch and Learn Series will focus on API Management, and this in a nutshell. The goal is to go over a product and its intrinsics, but just enough over a lunch hour for you to take away key concepts and to start guiding your research. We are going to offer this series once a month for you to come in and learn Azure…

Check out this Meetup →


Web Application Fortification with ModSecurity over IIS : The classic three thwarted!

XSS, SQLi and the path traversal attack are the golden three payloads we see over and over again. In this segment we will see how ModSecurity securing IIS reacts to these payloads.

First and foremost, we will fire the payloads at ModSecurity's demo site and see that the information is reflective, in that it is sent back to the user. Evidently this is because we are using a demo site and a dashboard or log viewer is not available to review the errors. In the real world we would be exposing ourselves, as we would let the attacker know that we are using brand X of IDS/IPS, or in this case a WAF that is augmented with the OWASP CRS to become an IDS.

Classic payloads and the online model.

Classic payloads and the on-premise solution housed in IIS.
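To give a feel for what a rule set reacts to, here is a toy classifier for the golden three payloads. This is a deliberately naive Python sketch; the real OWASP CRS decodes, normalises and scores input far more thoroughly before rules like these ever fire.

```python
import re

# Toy signatures for the classic three payload families; illustrative only.
SIGNATURES = {
    "xss": re.compile(r"<\s*script|on\w+\s*=", re.IGNORECASE),
    "sqli": re.compile(r"('|\b)(or|union|select)\b.*(=|\bfrom\b)", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./|\.\.\\"),
}

def classify(payload):
    """Return the names of the classic attacks a payload resembles."""
    return [name for name, rx in SIGNATURES.items() if rx.search(payload)]
```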

Happy defending!

ASP.Net WebAPI and CorrelationID/RefCorrelationID

Some things can be foreshadowed and some things are always expected, as is the case of looking in log files and not seeing a Correlation ID nor any RefCorrelation IDs. The thing is, when your logs do not have a lot of throughput, you can find your customer's/client's/element's/daemon's log entry with ease with just a log stamp (datetime). However, even with low latency, Correlation IDs should always be utilised.

What is a Correlation Identifier? A simple GUID that is unique and identifies a transaction. See the link for a more enterprise-integration perspective, but in the end a GUID is all this is.

A curious thing that I always see is that in all the logging examples:

1) No provisioning for Correlation IDs
2) No provisioning for Ref Correlation IDs

I then ponder on …

how is it that you correlate?

(Pun intended)

Inbound Web API calls should have a header containing a Correlation ID: a GUID that can clearly and uniquely identify the transaction.

This in turn becomes the RefCorrelation ID in your log, as you spawn a Correlation ID of your own based on the Request.CorrelationID, which is a unique identifier for the transaction. The client should see his/her payload coming back with an echo of the ID they sent in. At times the RefCorrelationID is also sent as an echo, all depending on requirements and standards.

I very much like this implementation:

The only variant I advise is to accept a correlation ID from the client and to use the correlation ID from the request [request.GetCorrelationId()] as the unique ID in the logs.



In the first example of the standard flow, a request comes in and is processed internally via multiple logging points. The RefCorrelation ID is what the client passed in; the Correlation ID is what we generated or decided was going to be the unique identifier.

In the second example another service is called, say to process a payment.

Which ID is passed?

Your correlation ID is passed, and the new service uses yours as a REF and generates a new unique ID for its transaction; so on and so forth, as many times as we have subsequent calls.
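The hand-off described above can be sketched as a small helper. This is an illustrative Python sketch; the X-Correlation-ID header name is an assumption (a common convention, not a standard).

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"

def correlation_ids(incoming_headers):
    """The hand-off: the caller's ID becomes our RefCorrelationID,
    and we mint a fresh CorrelationID for our own transaction."""
    ref = incoming_headers.get(CORRELATION_HEADER)  # may be None on the first hop
    own = str(uuid.uuid4())
    return {"RefCorrelationID": ref, "CorrelationID": own}

def outgoing_headers(ids):
    """When calling the next service, relay OUR CorrelationID; it becomes
    that service's RefCorrelationID, and so on down the chain."""
    return {CORRELATION_HEADER: ids["CorrelationID"]}
```

Each service in the chain logs both fields, so any transaction can be traced end to end across hops.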

Having worked in integration for over 12 years, I know the value of being able to correlate all data, especially when executions can go from Java to ASP.Net Web API to IBM MQ to BizTalk to appliances.

Food for thought!