Tuesday, August 27, 2013

Windows Azure Virtual Machine Single instance scheduled maintenance


Dear Customer,

Upcoming maintenance will affect single instance deployments of Windows Azure Virtual Machines: "…impact single instance deployments of Virtual Machines that are not using availability sets…"

When do I have problems? In the weekend? Nope… during the week!

What is my problem: roles, temporary storage, reboots?

  • Temporary storage?



  • Roles

“Please note that Cloud Services using Web or Worker roles aren’t impacted by this maintenance operation. “


  • Reboots

“Single instance virtual machine deployments that are not in availability sets will reboot once during this maintenance operation. “


So… I need an availability set!

  • Aiai… do I have one? Go to your virtual machine, select ‘Configure’ and check whether an availability set is mentioned


  • If this is not the case, we need to create an availability set. This is very easy: select ‘Create an Availability set’ and type in a name (note: the machine must be running!)


  • Click Save and confirm



  • Watch the magic happen


  • Verifying the machine state
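
Instead of clicking through the portal, the same assignment can be scripted. A minimal sketch using the Azure (Service Management) PowerShell cmdlets of that era; the cloud service, VM, and availability set names are placeholders:

```powershell
# Assign an existing VM to an availability set (names are placeholders).
# Note: this updates the VM configuration, which may cause a restart.
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Set-AzureAvailabilitySet -AvailabilitySetName "myavailabilityset" |
    Update-AzureVM
```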


So… what is happening, and what is going to happen when I create an availability set?

  • The state of the machine will be persisted to disk
  • A copy of this disk will be stored in your Blob Storage container
  • Another Machine will be instantiated (with the name of your Availability set)
  • This machine will be running and will be used as the ‘switchover’ machine when Faults are detected/Updates are performed



So….we are basically achieving high availability! I want to know more….

  • Some slides (see the page referenced below), with additional explanation and a graphical view of the inner workings


  • How does this relate to the Azure Data center?


  • What components are in place to detect these faults and ensure that everything works? Say hello to the "Fabric Controller"

In Microsoft’s production implementation, the fabric concept exhibits itself in the so-called Fabric Controller (FC), an internal subsystem of Windows Azure. The FC, which also acts as a distribution point in the cloud, inventories and stores images in a repository, and:

  • Manages all Compute and Storage resources
  • Deploys and activates services
  • Monitors the health of each deployed service
  • Provisions the necessary resources for a failed service and re-deploys the service from bare metal, as needed



It’s amazing what happens underneath when you click through the portal, or when you are deploying a service into the cloud!


  • See more detailed deep dive material on the inner workings here;





For now… I’m safe and happy with my availability set, and learned a lot doing it as well Smile.




Friday, August 23, 2013

ESB Series – Final thoughts


This is the last post in the ESB Series I’ve been doing;

Hopefully this has shown some ways of using the ESB Toolkit, and ways of extending it so that it is better suited for various scenarios.

This post closes the gaps I left open in earlier posts. In the end, I have hopefully demonstrated some common scenarios that are possible with the ESB Toolkit. My final thought on this subject is that the ESB Toolkit is designed to work in a more abstract/generic way, which is a good thing. After using the ESB Toolkit I think it is a nice addition to BizTalk, but I only recommend using it when the problem demands this type of solution. There is a learning curve involved: it works differently from classic BizTalk, it means you have to define your solution differently, and it sometimes even makes the components more complex. In the end it brings flexibility, maintainability and re-use, but this requires an investment.

The ESB Toolkit is not a golden hammer; don’t use it by default. Think about the problem, the possible solutions, the process and possible changes, and decide which approach would be best.

ESB Toolkit Itinerary scenarios

In the first post, ‘Itinerary’, I showed the difference between a Receive-Send port and the same set-up using an Itinerary. Below you can see the different scenarios possible, and the comparison with a plain BizTalk set-up.

  • Pub / sub (Port 2 Port)


This Itinerary basically receives a message from the receive pipeline, performs tracing and routing, and sends the message to a send port. The subscription is managed by BizTalk; the send port, however, is configured dynamically.

Advantage: No Orchestration required, no Pipeline component required, just a BRE call with the Routing information.


  • Pub / Sub (Port 2 Orchestration)


Let’s say we have a backend system which needs to be called. By managing this in the Itinerary, we don’t have to manage the subscription based on the MessageType of the orchestration’s receive port. This means we don’t have the MessageType dependency and can work with plain Xml.

Advantage: The process is more flexible and the orchestration does not depend on a schema… thus deployment is easier.

Advantage: We can change our process by adding another Orchestration Itinerary Service, which allows us to manage our process dynamically. The orchestration has become an abstract ESB Service component which we call. So instead of defining our process in the Orchestration, we isolate the functionality and link the pieces using the Itinerary. If the process flow changes, we don’t need to redeploy our processes, only our Itinerary, which is an Xml file without any dependencies.

  • Pub / sub (Orchestration to MessageBox)


By starting the process from an Orchestration, we can dynamically chain processes together and start Itineraries based on preconditions.

Advantage: This allows for more flexibility, as we can control the process flow, for example based on a backend result. Depending on that result, we can determine that we need to start a human workflow (ItineraryA) or that we are able to automate the process (ItineraryB).

  • Repair and Resubmit from the ESB Management Portal Sample


This shows an example of the default error-handling capabilities, which can already be included in your custom portal.

  • Resubmit from ItineraryStep X


This shows how to extend the (your) portal so that restarting processes can be even more sophisticated.

ESB Toolkit - top reasons for using it

I discussed with Tomasso Groenendijk what the most important features provided by the ESB Toolkit are, and in which scenarios BizTalk alone is limited while the ESB Toolkit adds functionality. This is a quick list, partly explained by the aforementioned examples and by the use cases mentioned below the list;

  • Components are developed to be generic, as opposed to just for one process
  • Pipelines are developed to be generic, as opposed to just for one process
  • The ESB Toolkit allows for better performance in low-latency scenarios because it is easier to use pipeline components
  • BAM support is out of the box (although limited)
  • Generic error handling
  • Runtime flexibility by using business rules (which can be changed without impact on the BizTalk deployment)
  • Deployment: orchestrations are untyped (not depending on maps/schemas), so schemas and maps can be redeployed without affecting the orchestration

Example: Flexibility
[Add generic <functionality> in each process]
BizTalk only: create a pipeline component, add this in all pipelines in all ports (redeploy interfaces)
+ESB Toolkit: add a generic ItineraryService / update Itineraries (no redeploy of interfaces required)

Example: [A new version of the map is available]
BizTalk only: redeploy the interfaces (due to dependencies)
+ESB Toolkit: only update the map + itinerary (no redeploy of interfaces)

Example: [Determine the process flow at runtime]
BizTalk only: Orchestrations are fixed and not flexible
+ESB Toolkit: Itinerary can be started based on message properties, rules, and can contain dynamic mappings

Example: [Restart a process]
BizTalk only: Not possible, messages must be stored somewhere, custom plumbing is required to resubmit messages
+ESB Toolkit: Message goes to the ESB Db, message can be resubmitted (even at the point of failure)


Before using the toolkit, I would like to summarize with the following considerations;

1) The toolkit can be quite complex; there is a learning curve involved
2) Some features are not available out of the box; for these, custom plumbing needs to be done

3) ESB Portal
Exception handling is sophisticated, but the ESB Portal is a sample; the Toolkit is therefore most useful when you integrate the error handling within your own dashboard.

The portal is a sample application, is not production ready out of the box, and requires customizations.

Some common changes required:

  • Cleanup db jobs (e.g. archiving)
  • Extending the stored procedures so that additional context / fault information can be stored
  • Change the Failed messages model, so that there is a notion of State (Exception can be ignored, etc)
  • UI modifications to make the portal more flexible; do not use web services for large data sets (for performance); add features such as bulk resubmit or ignore

Known challenges

Lastly, some challenges that have already been identified and solved, and a list of great resources;


Tomasso Groenendijk will present a combination of these posts and related topics at the next BTUG on the 12th of September.

I suspect that he will also demonstrate some fascinating features, and his tools to test and use maps!


Kind regards,


Wednesday, August 21, 2013

BizTalk 2013, replace HTTP adapter by WCF-WebHttp REST


The WCF-WebHttp adapter is designed to support REST, with support for the verbs GET, POST, PUT, and DELETE. We can use the POST verb to replace the HTTP adapter functionality, and use custom WCF behaviors to tailor the adapter to our needs.

This post explains how I migrated away from the HTTP adapter and why I decided to look at this approach. It should give a summary of how to do this.

Problem with the HTTP Adapter

1) Error handling

The BizTalk HTTP adapter hides the logic of HTTP status codes, so an error is handled inside the adapter. Returning a custom error in a request-response call which calls a solicit-response HTTP port is therefore quite challenging; at this moment I have not found a way to do this, other than abandoning the messaging-only approach.


2) Flexibility

The HTTP adapter is somewhat limited in functionality. Yes, you can use a custom pipeline, client/server certificate authentication and so on, but the room for adding custom behavior is limited;


HTTP vs WCF-WebHTTP Adapter Send 



· HTTP Method: POST (set by the adapter)

· Header Content type: text/xml (set by the adapter)

· Body: xml (handled by the adapter)


After choosing the WCF-WebHttp adapter


We need to set the two aforementioned properties manually;

· HTTP Method: POST (set manually in the HTTP method and URL mapping)

· Header Content type: text/xml (set manually in the outbound HTTP headers)
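
To make the comparison concrete, here is what those defaults amount to on the wire: a plain POST with an XML body and a text/xml content type. This is a generic Python sketch against a throwaway local server (the endpoint "/Test" and the payload are made up), not BizTalk-specific code:

```python
# A plain POST with an XML body and a text/xml content type: what the
# HTTP adapter sets for you, and what WCF-WebHttp requires you to
# configure manually. Endpoint and payload are invented for the demo.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(body)  # echo the request body back

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/Test", body="<Request>ping</Request>",
             headers={"Content-Type": "text/xml"})
resp = conn.getresponse()
status = resp.status
payload = resp.read().decode()
print(status, payload)  # 200 <Request>ping</Request>
server.shutdown()
```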


HTTP vs WCF-WebHTTP Adapter Receive

Using the HTTPReceive adapter (POST) works the same way the WCF-WebHTTP adapter will be used: it is hosted as an endpoint in IIS. One of the advantages, however, is that you can add custom behaviors, which makes the migration to WCF-WebHTTP a good solution for received HTTP requests.

After running the wizard


Selecting the protocol WCF-WebHTTP


You will get an endpoint hosted in IIS, but with the WCF-WebHTTP adapter instead of the HTTPReceive dll (which requires ISAPI extensions, etc.)


In the configuration screen, I use the BtsHTTPUriMapping, which implies POST unless configured otherwise;


Here the operation name “Test” should map to the operation name in your orchestration’s receive port.

Solution with the WCF-WebHTTP Adapter

After migrating to the WCF-WebHTTP adapter, you are able to use custom error-handling functionality. How to implement this is already described in detail.

1) Error handling

Even after changing the HTTP adapter to WCF-WebHTTP, I could not get the setup to provide error handling in the request-response scenario with custom mappings. This is because the HTTP status code is not wrapped inside an error message. Although I expect that you can get it working using a custom behavior, I have not developed a solution at this point.

A good starting point is provided here, which uses an Orchestration;


2) Flexibility

The flexibility of WCF lies in the ability to add custom behaviors; how to do this is also described in detail;






Monday, August 19, 2013

Service Bus Deep Dive – personal notes


This video, ‘Service Bus Deep Dive’, together with the presentation, is a must-see. Clemens explains all the Service Bus capabilities, which can be used in various scenarios, and, most importantly, are being used in various scenarios, which he explains as well.


As most of the explanation in his video is also covered in the presentation, I’ve tried to make notes (again: PERSONAL notes), of what I think is something to highlight.

These notes are grouped per main category in the presentation.

Service Bus
Service Bus will remain available in two flavors, the Server and the Cloud version. The focus will be on delivering functionality in the Cloud version first, after which bug fixing takes place and eventually a code merge to the Server version.

- Cloud functionality (released frequently)
- On-premises (released once a year, at the end of the year)

Brokered messaging
- Flow rate is roughly determined as (throughput ÷ number of subscriptions)
    - e.g. 2000 throughput
    - subscriptions == number of copies of the message
- Messages are persisted in SQL before the ACK is returned
- MSMQ team is integrated within Azure Service Bus team (will remain supported)

ServiceBus API
- SB Messaging protocol (ports 9354 or 443/80) ==> will be replaced by AMQP (in the future)
- AMQP(S), with (S) probably being the supported method (S: 5671, plain: 5672)
- AMQP vs Relay (idea to extend AMQP with Websockets to migrate SBMP features)
- HTTP(S) does not support transactions/sessions
- Possible implication: AMQP support in BizTalk
- Possible implication: Relay support in BizTalk will change

Message dimensions
- max total properties: 64 KB (minus 4 KB of reserved properties)
- max body: 256 KB minus the size of the properties!
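
A quick sanity check of how these two limits interact (the constant and function names below are mine):

```python
# Sanity check of the message dimensions above: custom properties get
# 64 KB minus 4 KB reserved, and the body budget is 256 KB minus the
# space the properties take.
MAX_PROPS = 64 * 1024 - 4 * 1024   # usable property budget (60 KB)
MAX_MESSAGE = 256 * 1024           # total message cap

def max_body_size(props_size: int) -> int:
    """Largest body that still fits alongside props_size bytes of properties."""
    if props_size > MAX_PROPS:
        raise ValueError("properties exceed the 64 KB (minus reserved) budget")
    return MAX_MESSAGE - props_size

print(max_body_size(10 * 1024) // 1024)  # 10 KB of properties leaves 246 KB for the body
```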

Message protocol mapping:
- http (json) ==> brokered message (automatic mapping)
- AMQP ==> brokered message (automatic mapping)
- SOAP message ==> brokered message (explicit promotion required)

Delivery options:
- Peek Lock (reliable messaging) ==> possible loss of order
- Session + Peek lock (lock all messages, ensuring order)

Receive Operations:
- AMQP/SBMP session/connection is maintained ==> much more efficient
- HTTP single receive operation ==> less efficient

Pricing (1 Feb 2013)
- 1 dollar = 1 million messages
- each chunk of 64 KB is considered a transaction!
- long-polling (HTTP) means a transaction per poll, charged even for an empty body
- pricing depends on the zone (Asia is the most expensive)
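
A back-of-the-envelope cost model based on these notes (the helper names are mine, and real billing has more nuances than this):

```python
# Back-of-the-envelope pricing from the notes: 1 dollar per million
# billed transactions, and every started 64 KB chunk of a message
# counts as its own transaction.
import math

PRICE_PER_MILLION = 1.00
CHUNK = 64 * 1024

def billed_transactions(message_size_bytes: int) -> int:
    return max(1, math.ceil(message_size_bytes / CHUNK))

def cost_in_dollars(messages: int, message_size_bytes: int) -> float:
    transactions = messages * billed_transactions(message_size_bytes)
    return transactions * PRICE_PER_MILLION / 1_000_000

print(billed_transactions(200 * 1024))        # a 200 KB message bills as 4 transactions
print(cost_in_dollars(1_000_000, 200 * 1024)) # 4.0
```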

Messaging / Composite patterns ==> Must see
Correlation recommendation: Session Correlation

Messaging features
- sql subscription actions: can change message properties, cannot change message content
- topic pre-filtering:
    - check if there are subscribers
    - slower
    - ack after executing the rules
- sessions:
    - when: used to group a set of related activities, for example processing a large file split into smaller chunks
    - no time limit
    - !state! can be used to work stateless while processing a workload (state machine / workflow capabilities), e.g.:
    Client       sends processing work
    Consumer     iterates through the sessions (AcceptMessageSession)
    Consumer     receives the message (session.Receive())
    Consumer     processes part of the message and updates the state (session.SetState(...))
    Consumer2    retrieves the state and resumes
- transactions:
    - no DTC
    - System.Transactions scoping model
    - distributed transactions are possible using queues (the "via" entity)
- prefetch / batching:
    - when: high throughput
    - usage: instead of processing per message using the 'Receive' method, processing takes place using a local buffer
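
The session-state pattern in the notes above (one consumer processes part of the work and persists its progress, another consumer retrieves the state and resumes) can be sketched roughly like this. The real API is the .NET MessageSession (AcceptMessageSession, Receive, SetState/GetState); the dictionary below is only a stand-in for the broker's per-session state store:

```python
# Stand-in for the Service Bus session-state pattern sketched above.
# A real consumer would call AcceptMessageSession / Receive / SetState;
# here a dict plays the broker's per-session state storage.
session_state = {}  # broker-side state, keyed by session id
queue = [("order-42", "step-1"), ("order-42", "step-2")]

def process(session_id: str, payload: str) -> None:
    progress = session_state.get(session_id, [])
    progress.append(payload)               # do part of the work...
    session_state[session_id] = progress   # ...and persist progress (SetState)

# Consumer 1 handles the first message of the session, then stops.
process(*queue[0])

# Consumer 2 accepts the same session later, reads the state (GetState)
# and resumes exactly where consumer 1 left off.
print(session_state["order-42"])  # ['step-1']
process(*queue[1])
print(session_state["order-42"])  # ['step-1', 'step-2']
```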

Auto forwarding (Queue)
    - allows decoupling
    - allows dynamic reconfiguration (switch-over based on message properties)

Fault handling
    - things will fail
    - exceptions with '.IsTransient' can be retried safely
    - for support requests, supply the NS namespace and the TransactionID (from the exception)
    - everything should be implemented async for high throughput
      (later this year available in all APIs)



Friday, August 16, 2013

ESB Toolkit Series – Part IV ‘Resubmit’


An Itinerary determines the business flow; when the business flow is interrupted we would like to be able to restart the process. We can do this with the ESB Portal, but that portal is not always the way to go (I will post more on that in the do’s and don’ts).


If we look at the options we have to resubmit a message in the ESB Toolkit context, we can;




I am now talking about the last option – Restart the Itinerary at step X;




Note: The ESB Portal allows you to resubmit a message, however, always starting from step 1.




The Itinerary is a fancy DSL diagram, and can be exported to the Xml form in which it is used inside the ESB Toolkit;




You can export to Xml or directly into the database (depending on what you prefer for your release management);




So, based on the component described in the orchestration on-ramp post, I thought it should be possible to resubmit a message not only from any process, but also from any step! This can be useful to manually ‘fix’ a step in the process when you are not able to fix the request, or when you know that a certain step fails. It also opens the door to adding an ‘Itinerary Submitter’ which can replace all the default ‘On-Ramps’, which use various techniques (asmx allows for resubmit using the Itinerary…, WCF uses the Itinerary from the database and thus has no state).




We want to specify at which ESB Toolkit Itinerary step to start when performing a resubmit.


Attempt #1 – Change the state of the Itinerary services (assumption: Itinerary is processed based on the service state of any of the services)




Attempt #2 – Change the position of the Itinerary metadata (assumption: Itinerary is processed based on the position of the Itinerary metadata)




Attempt #3 – Change the name of the Itinerary metadata (assumption: the name of the Itinerary metadata is used to start the correct itinerary service)




Errors for #1-3: There was a failure executing the receive pipeline:.....Reason: The service instance does not have the same properties as the first pending service.


Attempt #4 – Change the position, the state of the services before that position, and the name of the Itinerary metadata
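
In spirit, attempt #4 boils down to the following transformation of the itinerary Xml. The element and attribute names in this sketch are invented for illustration; the real ESB itinerary schema is more involved:

```python
# Invented-schema sketch of attempt #4: mark every itinerary service
# before the target step as complete, so processing resumes at that step.
# Element/attribute names are illustrative, not the real ESB schema.
import xml.etree.ElementTree as ET

itinerary_xml = """
<Itinerary name="DemoItinerary">
  <Services>
    <Service position="0" name="TraceService" state="Pending"/>
    <Service position="1" name="RouteService" state="Pending"/>
    <Service position="2" name="SendService" state="Pending"/>
  </Services>
</Itinerary>
"""

def resubmit_from(xml_text: str, step: int) -> str:
    """Return itinerary Xml patched so execution starts at `step`."""
    root = ET.fromstring(xml_text)
    for svc in root.iter("Service"):
        if int(svc.get("position")) < step:
            svc.set("state", "Complete")  # pretend the earlier steps already ran
    return ET.tostring(root, encoding="unicode")

patched = resubmit_from(itinerary_xml, 2)
states = {s.get("name"): s.get("state") for s in ET.fromstring(patched).iter("Service")}
print(states)  # SendService is the only step still pending
```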




How did I do this? We need to change the Xml; it is attached in the context by the ESB pipeline components and can be manipulated using the component Tomasso has written. He originally posted about having an Orchestration as on-ramp to start an Itinerary. I figured that changing the state should be possible with that component as well.

I made a few changes;




Crucial part of code


// Serialize the updated itinerary and write it back into the resolver
// dictionary (the ESB context), so the remaining pipeline picks it up.
string itineraryData;

using (StringWriter sw = new StringWriter())
{
    ser.Serialize(sw, it);  // ser: XmlSerializer for the itinerary type, it: the modified itinerary
    sw.Flush();             // ensure all data is written
    itineraryData = sw.ToString();
}

msg.SetPropertyValue(typeof(Microsoft.Practices.ESB.Itinerary.Schemas.ItineraryHeader), itineraryData);




Testing…..start at the last step! Attempt #4 – result




What I did was create a custom Orchestration with a published WCF On-Ramp. I am now able to start an Itinerary, but also to start the Itinerary from any step. This allows me to create very flexible error-handling building blocks which can easily be integrated in existing portals.




If we then look at the ESB exception database, we can use the information from the various tables to display the faults/messages and retrieve the Itinerary;




  Fault: Metadata related to the fault

- application

- description

- errortype

- failurecategory

- faultcode

- faultdescription

- faultseverity

- scope

- faultgenerator


  Message : Metadata related to the message

- FaultID

- MessageName

- InsertedDate


  MessageData : The payload of the message

- MessageID

- MessageData


  ContextProperty : contextproperties related to the message in MessageData

- MessageID

- Name (e.g. Itinerary)

- Value

- Type

- InsertedDate


Retrieving the Itinerary is possible using the context property ‘ItineraryHeader’;
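
To sketch how these tables hang together, here is an in-memory SQLite stand-in for the EsbExceptionDb (the column set is trimmed and the data is made up; the real schema is richer):

```python
# In-memory stand-in for the EsbExceptionDb tables described above,
# joining Fault -> Message -> ContextProperty to pull the ItineraryHeader
# of a failed message. Columns are trimmed; the real schema has more.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Fault   (FaultID INTEGER PRIMARY KEY, Application TEXT, FaultDescription TEXT);
CREATE TABLE Message (MessageID INTEGER PRIMARY KEY, FaultID INTEGER, MessageName TEXT);
CREATE TABLE ContextProperty (MessageID INTEGER, Name TEXT, Value TEXT);

INSERT INTO Fault   VALUES (1, 'DemoApp', 'Routing failure at step 2');
INSERT INTO Message VALUES (10, 1, 'DemoRequest');
INSERT INTO ContextProperty VALUES (10, 'ItineraryHeader', '<Itinerary .../>');
""")

row = db.execute("""
    SELECT f.Application, m.MessageName, cp.Value
    FROM Fault f
    JOIN Message m          ON m.FaultID = f.FaultID
    JOIN ContextProperty cp ON cp.MessageID = m.MessageID
    WHERE cp.Name = 'ItineraryHeader'
""").fetchone()

print(row)  # ('DemoApp', 'DemoRequest', '<Itinerary .../>')
```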




As I cannot share any more code than this, I’m sorry the full solution is not provided; however, this should get you going! Below is an example of a tool I created to prove that the functionality works, before we integrate this into a custom portal.






  • Retrieve all the Faults and related Messages with an Itinerary attached
      • Linq datamodel to query the ESBException db easily
  • Display the Itinerary and Services (deserialize the Itinerary) and select the Itinerary step to start
      • Reference to the Itinerary OM Model (assembly)
  • Send a new request which starts at step #
      • Combination of all the information in this post



This now allows you to:

  • Use the ESB Toolkit to define your processes (I mean business processes) in a more abstract way by using the Itinerary instead of tightly coupling this in your Orchestration
  • Use the ESB Toolkit exception handling without the need to use the ESB Portal
  • Implement the functionality in your own preferred portal, using your preferred method





Kind regards,

Sander Nefs



Wednesday, August 14, 2013

Azure subscription migration process


I had a Windows Live ID linked to my MSDN subscription. When I changed jobs, my former employer motion10 was kind enough to keep the subscription active for a short period, allowing me to migrate my subscription to my new employer Caesar.

Here are the steps for this process to succeed

  • The first requirement is that the bill has been paid Smile


  • Determine the current and the new subscription IDs



  • Contact Microsoft support (from within the portal)


Choose the subscription for which the ticket is created


Choose the type of support call (subscription transfer-migration)



  • The new subscription must be bound to the same Live ID for the migration to work


  • The destination subscription may not contain any artefacts, so be careful with an Active Directory, as one is created and active for all the subscriptions linked to your Live ID


  • In my case the Active Directory was causing an issue; fortunately, the backend team is prepared for this type of migration

“I will now go ahead and work with my backend team to do a force migration so that we can complete your request. This might take 48 to 72hrs.”

  • As my previous subscription was US-based I could use the Azure Store; my new subscription is Europe-based, so these apps are not migrated (adding to the complexity)


  • In the end it took 4 days to complete the entire migration; given the priority (medium) and the complexity I think that’s good!