Using Windows Server AppFabric Caching for Storing SSO & Configuration data

Storing SSO data in cache can be very useful in a low latency scenario. Performance of services can also be improved by caching configuration data such as a Status Codes table (which has business level exception status codes and descriptions).

Using a cache in SOA and BPM solutions is not new, and neither is storing SSO data in the cache. You can read Using SSO Efficiently in the Service Oriented Solution and Business Process Management Solution for background.

Problems with Enterprise Library Caching:

Before AppFabric we could use Enterprise Library caching, which had scalability and synchronization problems: it was a single-server, in-memory cache, which is not an option with BizTalk running in a farm because it would cause inconsistency. Another problem is that the cache resides in-process in the BizTalk host instances: if the host instances are restarted, the cache is lost and has to be populated again. A further problem is that if the source data changed, the cache had to be refreshed either after the configured time interval or forcibly, by restarting the host instances. Restarting a host instance is a heavy operation for the running services and is not an option in a production environment. Refreshing after a fixed time interval is fine for a single server, but in a multi-server environment it can create inconsistent data for a period of time. Suppose the refresh interval is 5 minutes on both servers, and the next refresh on Server A is due in 3 minutes but on Server B in 15 seconds: Server B refreshes almost immediately, while Server A keeps running with the old data for 3 minutes. Windows Server AppFabric solves these issues, and now we can leverage the features of the technology and incorporate it with BizTalk.

Windows AppFabric Cache features and advantages:

The best approach is to go through this article to get an understanding of the architecture and benefits of the caching features. Here is an excerpt of the features from that article.

  • Caches any serializable CLR object and provides access through simple cache APIs.
  • Supports enterprise scale: tens to hundreds of computers.
  • Configurable to run as a service accessed over the network.
  • Supports dynamic scaling-out by adding new nodes.
  • Backup copy provides high availability.
  • Automatic load balancing.
  • Integration with administration and monitoring tools such as PowerShell, Event Tracing for Windows, System Center, etc.
  • Provides seamless integration with ASP.NET to cache session data without having to write it to source databases. It can also be used to cache application data across the entire Web farm.
  • Follows the cache-aside architecture (also known as Explicit Caching) for V1. That is, you must decide explicitly which objects to put/remove in your applications and the cache does not synchronize with any source database automatically.

Let’s see now how the problems of previous caching techniques can be solved by Windows AppFabric. First we need to understand our requirements, because there are many variations of AppFabric cache hosts and clients. You need to analyze which cache hosting option fits your scenario and what kind of client you will write for your caching. An overview is given below.

Windows AppFabric Host Configurations:

On one or more servers the AppFabric Cache service runs as a Windows service. The servers should be clustered; when using it with BizTalk I would install and configure AppFabric on all of my BizTalk Server machines and configure the cache cluster. Of course, if you have a shortage of servers like us, then you have to use the existing BizTalk servers, but if you can, dedicate separate cache servers for large cache data.

Since in BizTalk we will not be storing service data but only SSO data and some configuration data (which we get from a SharePoint list), configuring AppFabric caching on the existing BizTalk Server cluster is a good idea.

You have to follow the installation and configuration guide on how to make a cluster and where to store the configuration data of the cluster. The configuration data can be stored in an XML file on a shared folder or in a SQL Server database; I have chosen the latter. Here is the physical architecture diagram of a cache cluster.


1- Partitioned Cache:

I will assume that you are familiar with the logical hierarchy of the AppFabric cache. If caching is configured on a cluster of servers and a named cache is defined across them, then the regions can be distributed among the servers, thereby providing availability or scalability.

a) Scalability:

A cache item resides in one of the regions of the cache, and that region can reside on any one of the cache cluster nodes. A region is guaranteed to reside on one server and cannot be further partitioned across the cluster; therefore all the items in one region reside on one cluster node. Defining a region is optional when you add an item to the cache, in which case the cache service itself load balances and assigns keys to regions it creates internally on any server. There is a routing layer at the cache level which routes the put and get operations to the cluster node holding the key.

b) Availability:

In the availability scenario, one node acts as the primary node and the other cluster nodes act as secondaries, each holding a copy of the cache items. If the primary node fails when a put or get operation is called, one of the secondary nodes becomes the primary node and the applications continue accessing the cache.

It doesn’t matter on which node the get and put operations are called; the routing layer of the cache determines the primary node and routes the request to it. The primary node is responsible for the synchronization of data, as the data is updated on this node. When an item is added or updated, the primary updates itself and then sends the operation to all the secondary nodes so they can update themselves. It then waits for an acknowledgment from the secondary nodes; when the acknowledgments are received from each node, it sends the acknowledgment of the operation’s success back to the client.

2- Local Cache:

If there is no need for availability or scalability, a local cache host can be configured; on my development machine, for example, I have a local cache. The cache lives on one server, so it will be fast, as there are no network hops or deserialization of data. For this, when configuring AppFabric you create a new AppFabric cluster without any other nodes joined to it. To make a multi-server cluster, you can install AppFabric on another machine and join it by selecting the Join Existing Cluster option in the configuration wizard and applying the appropriate settings. It still depends upon the client which server it accesses the cache on.

AppFabric Cache Clients:

There are two types of clients that can be configured in AppFabric.

1- Routing Client:

The routing client has its own mechanism to keep track of and manage the cached objects: it knows on which server each region resides and which key is placed in which region. We will not be using this in the middleware, since we are just storing the SSO and configuration data in the cache, but it can be used by services, or mainly by web applications, depending upon the requirements.

2- Simple Client:

The simple client is not aware of the locations of regions in the cluster; it just tries to access the object in the cache in its respective region (if regions are used). The cache’s routing mechanism itself takes care of routing, depending on whether the cache cluster is configured for scalability or availability. The routing mechanism is described above in the host configurations section.


Having covered the basic concepts of the architecture, usage and advantages, I will use the AppFabric cache with the local cache host configuration and a simple client.

For the host you have to install AppFabric and configure the AppFabric caching services. The first configuration decision is where to store the cluster configuration (a SQL Server database in my case; it can also be a shared XML file). The second is whether to join an existing cluster or create a new one. I will have another blog post for this, but it’s pretty simple and one can follow the AppFabric installation and configuration guide.

Now it’s time to write the client, which will be BizTalk. We have a Common project which is referenced by each service for common functionality, such as reading the configuration data and getting status codes. In that same common project I will write the client code. I have provided the sample for download in the widget; it is free of our organization-specific helper functions, so it can be used on a BizTalk machine with SSO and caching configured.

The client can take its configuration from a configuration file or be configured programmatically. I will use a configuration file, which avoids recompiling when changing hosting environments. These settings can be stored in machine.config or BTSNTSvc.exe.config. I will not explain the configuration file, as it is self-explanatory with comments. Feel free to copy and modify it according to your needs; it has all the configuration sections with all the parameters.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <!-- configSections must be the FIRST element -->
  <configSections>
    <!-- Required to read the <dataCacheClient> element.
         Cache Client Settings:
         1- Client time-out (milliseconds): the requestTimeout attribute of the
            dataCacheClient element. We do not recommend specifying a value less
            than 10000 (10 seconds). Default value is 15000.
         2- Channel open time-out (milliseconds): the channelOpenTimeout attribute
            of the dataCacheClient element. This value can be set to 0 in order to
            immediately handle any network problems. For more information, see
            Configuring Cache Client Timeouts (Windows Server AppFabric Caching).
            The default value is 3000.
         3- Maximum number of connections to the server: the maxConnectionsToServer
            attribute of the dataCacheClient element. The default value is 1. -->
    <section name="dataCacheClient"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             allowLocation="true"
             allowDefinition="Everywhere"/>
  </configSections>
  <dataCacheClient>
    <!-- (optional) Specify a local cache. Remove in a multi-server farm.
         Local Cache Settings:
         1- Local cache enabled: the isEnabled attribute of the localCache element.
            Values may be true or false. The localCache element may also be missing
            to indicate that it is disabled.
         2- Local cache invalidation method: the sync attribute of the localCache
            element. Use the TimeoutBased value to indicate a time-out value should
            be used. Use NotificationBased to indicate cache notifications should
            also be used.
         3- Local cache time-out (seconds): the ttlValue attribute of the
            localCache element.
         4- Cache notifications poll interval (seconds) (optional): specified by
            the pollInterval attribute of the clientNotification element. Note that
            clientNotification is a child of the dataCacheClient element, not of
            the localCache element. If not specified, a value of 300 seconds is used.
         5- Maximum locally-cached object count (optional): specified by the
            objectCount attribute of the localCache element. Triggers when eviction
            on the local cache should start; it will then attempt to remove 20
            percent of the least recently used locally cached objects. If not
            specified, the default value of 10,000 objects is used. -->
    <localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="3000" />
    <!-- (optional) Specify the cache notifications poll interval.
         Client Notification Settings:
         1- Cache notifications poll interval (seconds): the pollInterval attribute
            of the clientNotification element. If not specified, a value of 300
            seconds is used.
         2- Maximum queue length: the maxQueueLength attribute of the
            clientNotification element. If not specified, the default value is 10000. -->
    <!-- <clientNotification pollInterval="300" /> -->
    <hosts>
      <!-- Cache Host Settings:
           1- Cache server name: the name attribute of the host element.
           2- Cache port number: the cachePort attribute of the host element. -->
      <host name="D001MWWS3" cachePort="22233"/>
      <!-- In a multi-server environment, add the second server (or more) for caching:
           <host name="CacheServer2" cachePort="22233"/> -->
    </hosts>
    <!-- Security Settings:
         1- Mode: the mode attribute of the securityProperties element. Possible
            values include Transport and None. The default value is Transport.
         2- Protection level: the protectionLevel attribute of the
            securityProperties element. Possible values include None, Sign, and
            EncryptAndSign. The default value is EncryptAndSign. -->
    <!-- <securityProperties mode="Transport" protectionLevel="EncryptAndSign" /> -->
    <!-- Transport Settings. Each attribute of the transportProperties element maps
         to the corresponding property of the DataCacheTransportProperties class,
         which is assigned to the TransportProperties property of the
         DataCacheFactoryConfiguration class:
         connectionBufferSize - connection buffer size (bytes)
         maxBufferPoolSize - maximum buffer pool size (bytes)
         maxBufferSize - maximum buffer size (bytes)
         maxOutputDelay - maximum output delay (milliseconds)
         channelInitializationTimeout - channel initialization timeout (milliseconds)
         receiveTimeout - receive timeout (milliseconds) -->
    <!-- <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
           maxBufferSize="8388608" maxOutputDelay="2" channelInitializationTimeout="60000"
           receiveTimeout="600000"/> -->
  </dataCacheClient>
</configuration>
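Once the configuration is in place, the client code is short. The sketch below is my assumption of how the pieces fit together: it presumes the Microsoft.ApplicationServer.Caching.Client and Core assemblies are referenced and that the named cache matches one you have actually created.

```csharp
using Microsoft.ApplicationServer.Caching;

public class ConfigCacheClient
{
    private readonly DataCache configCache;

    public ConfigCacheClient()
    {
        // The parameterless DataCacheFactory constructor reads the
        // <dataCacheClient> section from the application configuration file
        // (BTSNTSvc.exe.config or machine.config for BizTalk host instances).
        DataCacheFactory factory = new DataCacheFactory();

        // "MWConfigurationCache" is the named cache created later with
        // New-Cache; substitute your own cache name here.
        configCache = factory.GetCache("MWConfigurationCache");
    }
}
```

The factory is relatively expensive to construct, so in a real host you would keep one instance alive rather than creating it per call.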

Get all the data from SSO:

The function below retrieves the list of affiliate applications from the SSO database. I have used it in the CacheManager project, where you can find it in the SSOConfigManager class.

/// <summary>
/// Returns the list of applications in the SSO database.
/// </summary>
/// <returns>Dictionary with application name as key and description as value.</returns>
public static IDictionary<string, string> GetApplications()
{
    ISSOMapper ssoMapper = new ISSOMapper();
    AffiliateApplicationType appTypes = AffiliateApplicationType.ConfigStore;
    IPropertyBag propBag = (IPropertyBag)ssoMapper;

    uint appFilterFlagMask = SSOFlag.SSO_FLAG_APP_FILTER_BY_TYPE;
    uint appFilterFlags = (uint)appTypes;
    object appFilterFlagsObj = (object)appFilterFlags;
    object appFilterFlagMaskObj = (object)appFilterFlagMask;
    propBag.Write("AppFilterFlags", ref appFilterFlagsObj);
    propBag.Write("AppFilterFlagMask", ref appFilterFlagMaskObj);

    string[] apps = null;
    string[] descs = null;
    string[] contacts = null;
    ssoMapper.GetApplications(out apps, out descs, out contacts);

    Dictionary<string, string> applications = new Dictionary<string, string>(apps.Length);
    for (int i = 0; i < apps.Length; ++i)
    {
        // Skip internal applications whose names start with "{"
        if (!apps[i].StartsWith("{"))
            applications.Add(apps[i], descs[i]);
    }
    return applications;
}

Creating and managing cache:

Before we perform operations on the cache we have to make sure the cache we are going to use has been created. There are PowerShell commands for the administration of the AppFabric cache, and there is also a useful GUI-based cache management tool. I would recommend downloading it in case you are not the administrator of the UAT and production servers. I will continue with both the PowerShell commands and the tool.

1- Create the cache:

There is always a Default cache which you don’t need to create. I am creating a cache named MWConfigurationCache for storing my middleware configuration data by running the New-Cache command. You can then run the Get-CacheClusterHealth command to see its health.
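As a sketch, the commands look like this when run in the Caching Administration Windows PowerShell console; the TTL value is my own choice, not a required setting.

```powershell
# Import the caching administration module and connect to the cluster
Import-Module DistributedCacheAdministration
Use-CacheCluster

# Create the named cache (TimeToLive is in minutes; adjust to your needs)
New-Cache -CacheName MWConfigurationCache -TimeToLive 50

# Check the distribution of partitions and overall cluster health
Get-CacheClusterHealth
```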


2- Management:

Some commands will be handy during development. For a full list refer to AppFabric Caching Deployment and Management Guide.

1- First is Get-CacheStatistics, from which you can see how many items and regions the cache holds and how many requests are being made to it. You can also see the cache size in bytes.


2- The Get-CacheConfig command which gives the following output.


CacheName: The name of the cache.
TimeToLive: The default time that items reside in the cache before expiring.
CacheType: The type of cache. This is always Partitioned.
Secondaries: A value of 1 indicates that the cache uses the high availability feature.
IsExpirable: Indicates whether objects in the cache can expire.
EvictionType: Specifies an eviction type of Least-Recently-Used (LRU) or None.
NotificationsEnabled: Indicates whether notifications are enabled for this cache.

3- You can see all the caches that exist on the cluster with the Get-Cache command.


4- Stop and start the cluster with the Stop-CacheCluster and Start-CacheCluster commands respectively.



Note: Starting and stopping the cluster clears the cache. Here is the sequence of commands: first the statistics show that the cache has 3 items; after stopping and starting, the cache has no items. This can be useful when your source data has been updated and you want to reflect this in the cache, though it requires some downtime.


Inserting and retrieving Items:

There are many variations in the AppFabric caching API; I recommend going through them here. If you wish to read and write shared data between services in your middleware, do consider the concurrency models. You can also attach tags to keys; tags can be used to group items within your cache.

I am using the basic cache methods of Put and Get.
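Before the SSO population code, here is a small sketch of the Put/Add semantics; it assumes configCache is an already-initialized DataCache, and the keys and values are made-up examples.

```csharp
// Put: adds the item, or overwrites it if the key already exists.
configCache.Put("StatusCodes_404", "Not Found");
configCache.Put("StatusCodes_404", "Resource Not Found");   // silently replaces

// Add: only succeeds for a new key; it throws a DataCacheException
// (ErrorCode KeyAlreadyExists) if the key is already in the cache.
configCache.Add("StatusCodes_500", "Internal Error");
try
{
    configCache.Add("StatusCodes_500", "Server Error");
}
catch (Microsoft.ApplicationServer.Caching.DataCacheException)
{
    // Key already exists: decide whether to Put instead or to skip.
}

// Both methods also have overloads taking an item-level timeout,
// e.g. Put(key, value, TimeSpan.FromMinutes(10)).
```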

You can see the code below, where I get all the key/value pairs from all applications in SSO and add them to the cache. The Put method updates items if they already exist in the cache, or adds them otherwise. There is also an Add method, which throws an exception if the item already exists.

public void PopulateCacheFromSSO()
{
    IDictionary<string, string> apps = SSOConfigManager.GetApplications();
    foreach (string appName in apps.Keys)
    {
        string appUserAcct, appAdminAcct, description, contactInfo;
        HybridDictionary properties = SSOConfigManager.GetConfigProperties(
            appName, out description, out contactInfo, out appUserAcct, out appAdminAcct);
        System.Diagnostics.EventLog.WriteEntry("SSO Application Name", "Name = " + appName);

        foreach (DictionaryEntry appProperty in properties)
        {
            System.Diagnostics.EventLog.WriteEntry(
                "SSO Application entries",
                "Key = " + appProperty.Key.ToString() + " , Value = " + appProperty.Value.ToString());
            PutInCache(appName, appProperty.Key.ToString(), appProperty.Value.ToString());
        }
    }
}

public void PutInCache(string category, string key, string value)
{
    DataCacheItemVersion itemVersion;
    if ((itemVersion = configCache.Put(category + "_" + key, value)) != null)
        System.Diagnostics.EventLog.WriteEntry("Cache Item Added", "Key = " + key);
    else
        throw new Exception("Cache item not added");
}

After running the code you can run the Get-CacheStatistics command to see whether the items were added to the cache. Now it’s time to retrieve an item that you added. The code below gets items from the cache.

public string GetFromCache(string category, string key)
{
    string item;
    if ((item = (string)configCache.Get(category + "_" + key)) != null)
        System.Diagnostics.EventLog.WriteEntry("Cache Item Retrieved", "Key = " + key);
    else
        throw new Exception("Cache item could not be found");
    return item;
}

Try to retrieve the values from the cache after the TTL configured in the configuration file has elapsed. You will find that the cached items have expired. Also run Get-CacheStatistics from PowerShell and see what you find.

Some troubles I ran into, which would be common to any developer, are below.

ErrorCode<ERRCA0017>:SubStatus<ES0007>:There is a temporary failure. Please retry later. (The request failed because the server is in throttled state.)

If you get the error above, I couldn’t figure out the cause, and neither could the guys on MSDN; I just reset IIS and it went away. You will notice in Task Manager that the w3wp process is taking too much memory.

The type or namespace name ‘ApplicationServer’ does not exist in the namespace ‘Microsoft’ (are you missing an assembly reference?)

If you are getting the error above, maybe you have not set the target framework to 3.5/4. The second problem I had was that I was adding references to the Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core assemblies from the C:\Windows\SysNative\AppFabric path. It simply didn’t work and the error persisted. I then added the references from the GAC (I have no explanation for this). You can find the references in the sample.

Did the above solve my middleware problems?

I had to find a solution to the problems I had with Enterprise Library caching.

1- Availability and scalability are solved by the architecture of the AppFabric cache.

2- If you want changes in the source to be reflected immediately in the cache, restart the cluster services without restarting the BizTalk host instances. I mentioned that there would be downtime; this means you don’t need to stop the host instances, just stop the receive locations so that no request is entertained by BizTalk.

3- If you cannot afford the downtime, there is another trick. Create a new cache with the same configuration but a different name. In the BTSNTSvc.exe.config or machine.config file, I assume you have kept the name of the cache in the appSettings section, which means you retrieve it at runtime. Change it to the new cache you created.
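A minimal sketch of that indirection follows; the appSettings key name ConfigCacheName is my invention for illustration, so use whatever key you actually store the cache name under.

```csharp
using System.Configuration;
using Microsoft.ApplicationServer.Caching;

public static class CacheSwitcher
{
    public static DataCache GetConfigCache()
    {
        // "ConfigCacheName" is a hypothetical appSettings key. Changing its
        // value (e.g. from MWConfigurationCache to a freshly populated cache
        // with a different name) repoints clients to the new cache without
        // restarting the BizTalk host instances.
        string cacheName = ConfigurationManager.AppSettings["ConfigCacheName"];
        DataCacheFactory factory = new DataCacheFactory();
        return factory.GetCache(cacheName);
    }
}
```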

4- If you can wait until the cache expires, that’s the best option: the cache will get fresh data from the source, and in a multi-server BizTalk environment each node will have a consistent, identical copy of the cache. Great!

Security Considerations:

Without security considerations this article would be incomplete, and a BizTalk guy reading this cannot compromise on the security of the SSO data. Of course, if security is not considered, the cached SSO data can be overwritten by any client who has access to the cache. As the cache will be clustered, clear-text data on the network can also be sniffed.

In an AppFabric cache cluster, the communication between the client and the server supports encryption and signing.

A Windows account must be added which has access to the cache cluster, and this account must be used by the client application to access the cache cluster. In the BizTalk scenario we would add users such as the BizTalk Application Users group (under which the host instances run) and the SSO Administrator/Affiliate Administrator accounts. This is done with the Grant-CacheAllowedClientAccount command from PowerShell.
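A sketch of the grant step follows; the account names are placeholders, so substitute your actual BizTalk host and SSO service accounts.

```powershell
# Allow the BizTalk host instance account to use the cache cluster
Grant-CacheAllowedClientAccount "MYDOMAIN\BizTalkAppUsers"

# Allow the SSO administrators group as well
Grant-CacheAllowedClientAccount "MYDOMAIN\SSOAdministrators"

# Verify which accounts currently have access
Get-CacheAllowedClientAccounts
```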

Cluster Security options:

After allowing access to the users you have to configure the server and client for security.

To enable the security options on the server, use the Set-CacheClusterSecurity command from PowerShell.


Client Security options:

For the client you can do it programmatically, or in the configuration file: locate the securityProperties tag.

<securityProperties mode="Transport" protectionLevel="EncryptAndSign" />

There is a table in Security Model (Windows Server AppFabric Caching) that gives the matrix of combinations of cluster and client security options. Whether a given combination will work is shown below; the columns are the cluster settings (Mode, ProtectionLevel) and the rows are the client settings.

Client Settings             | None, Any | Transport, None | Transport, Sign | Transport, EncryptAndSign
None, Any                   | Pass      | Fail            | Fail            | Fail
Transport, None             | Fail      | Pass            | Fail            | Fail
Transport, Sign             | Fail      | Pass            | Pass            | Fail
Transport, EncryptAndSign   | Fail      | Pass            | Pass            | Pass

Invoking Concurrent programs and working with BizTalk Oracle E-Business Adapter


In my current project I had to call a concurrent program in the Oracle E-Business Suite which would generate a report of all the employees' payroll for the month. We were automating the payroll process in our organization, and the whole solution involved getting and validating the employee data through BizTalk, putting the data in an Excel file, and initiating a payroll approval workflow built in SharePoint.

I was new to the Oracle E-Business Suite application and had a few hiccups and surprises while connecting to it through the WCF Oracle E-Business Suite adapter. The major challenge was to get the XML that is generated at the Oracle server by the concurrent program.

Generating Oracle EBS Adapter Metadata: 

The first step is to generate the adapter metadata in order to get the port types and schemas for the concurrent programs. Right-click on the project and select Add Generated Items, then select Add Adapter Metadata. You will get the list of LOB adapters; select the Oracle EBS adapter and click Next. You will see a window where the binding type will be oracleEBSBinding. Click Configure to set the adapter URI and binding properties. In the URI Properties tab, as shown below, give the port number, the server name or IP, and the Service Name from the TNS entry.


Next, configure the binding properties. The first property in the window is ClientCredentialType, which can be Database or EBusiness. Choose whichever credential type suits you and specify those credentials in the Security tab.


For generating the metadata you need to give the correct database credentials, Responsibility Key or Name, and Organization ID. You can ignore the other properties for now; we will come back to them when configuring the physical port in the BizTalk Administration Console. The Responsibility Key/Name, Organization ID and credentials are given by the E-Business Suite guys; ask them for the correct values if you are having problems connecting to EBS.

When you are done, click OK and then Connect. If you get any errors, troubleshoot by supplying the correct binding properties and credentials; if you are lucky, you will be able to see the categories and operations.


Getting the concurrent program Application Name and parameters from Oracle E-Business application: 

At first the categories and operations might be confusing. If you have a good EBS team in your organization, they will guide you through this; if not, you share my fate. Because you may only have executed the concurrent program from the EBS interface, the categories and operations can be puzzling. I will give a simple walkthrough of the Oracle EBS interface, because to get the parameters and the status of the concurrent program you have to become familiar with it.


This is the page you will see after logging in; it is a list of EBS sections. I went to the Processes and Reports group and selected Submit Process and Reports. A popup appears and the Oracle interface opens, where you can submit a request for the concurrent program.


This depends upon your request type; mine was a single request, so I selected it and went to the second screen.


You can select the name of the concurrent program; the Oracle guys will be more helpful to you on this. Observing the next screens will help you in getting the parameters and in finding the concurrent program in the categories and operations list when generating metadata. In the screen below you can see the name of the concurrent program and the application to which it is associated. The application name will be the category and the concurrent program will be the operation when generating the metadata.


The next critical thing is the parameters which you will pass in the request message of the concurrent program in BizTalk. This is nearly a riddle, and I only noticed the actual parameter values after a long trial-and-error process. You can easily execute the concurrent program from the interface, since you can select values from the available list, and one would generally assume that these must be the values to pass to the concurrent program from BizTalk. This is not the case.



Finally, after selecting all the parameters from the list of available values, you will have populated all the parameters and be ready to execute the concurrent program, as shown in the screens below.


In the screen below I observed that the parameter values are actually 61 and may-2010, which I discovered while copying the strings from the previous screens. You can see that the parameters to the concurrent program are different from the ones shown in the interface.

You can refresh the data to see that the execution of the concurrent program is complete and see the status from the interface. 


Developing the Solution to Invoke the Concurrent program: 

After getting the parameters, the concurrent program name and the application name, you can map these in the adapter metadata wizard and generate the metadata, then design your orchestration and populate your request message with the correct values. My orchestration is below, in which I am doing the following.

1- Getting the request from the SOAP adapter and mapping the values to the request message of the concurrent program.
2- Calling the concurrent program and getting the response.
3- Getting the request ID from the response message of the concurrent program.
4- Delaying for 2 minutes.
5- Calling the status concurrent program and repeatedly checking the status until it is successful.


This is the code in my expression shape in which I am constructing the request message for the concurrent program.

xDocRequest = new System.Xml.XmlDocument();

xDocRequest.LoadXml(@"<ns0:XXMARPYRREGXML xmlns:ns0=''>
<ns1:Implicit xmlns:ns1=''>ns3:Implicit_0</ns1:Implicit>
<ns1:Protected xmlns:ns1=''>ns3:Protected_0</ns1:Protected>
<ns1:Language xmlns:ns1=''>ns3:Language_0</ns1:Language>
<ns1:Territory xmlns:ns1=''>ns3:Territory_0</ns1:Territory>
<ns1:ContinueOnFail xmlns:ns1=''>true</ns1:ContinueOnFail>
<ns1:Printer xmlns:ns1=''>ns3:Printer_0</ns1:Printer>
<ns1:Style xmlns:ns1=''>ns3:Style_0</ns1:Style>
<ns1:Copies xmlns:ns1=''>10</ns1:Copies>
<ns1:SaveOutput xmlns:ns1=''>true</ns1:SaveOutput>
<ns1:PrintTogether xmlns:ns1=''>ns3:PrintTogether_0</ns1:PrintTogether>
<ns1:ContinueOnFail xmlns:ns1=''>true</ns1:ContinueOnFail>
<ns1:RepeatTime xmlns:ns1=''>ns3:RepeatTime_0</ns1:RepeatTime>
<ns1:RepeatInterval xmlns:ns1=''>10</ns1:RepeatInterval>
<ns1:RepeatUnit xmlns:ns1=''>ns3:RepeatUnit_0</ns1:RepeatUnit>
<ns1:RepeatType xmlns:ns1=''>ns3:RepeatType_0</ns1:RepeatType>
<ns1:RepeatEndTime xmlns:ns1=''>ns3:RepeatEndTime_0</ns1:RepeatEndTime>
<ns1:ContinueOnFail xmlns:ns1=''>true</ns1:ContinueOnFail>
<ns0:StartTime>19-APR-2010 14:24:50</ns0:StartTime>
<ns0:Payroll_x0020_Name>" + MAR.Payroll.Helper.GetConfigurations.ConcurrentProgramID + @"</ns0:Payroll_x0020_Name>
<ns0:Month>" + ClientRequestMessage.Message.PayrollRq.Month + @"</ns0:Month>
</ns0:XXMARPYRREGXML>");

This is the code in my expression shape in which I get the RequestID of the concurrent program from the response message and then create the request for the status concurrent program:

RequestID = xpath(PayrollRegisterRs.parameters, "string(/*[local-name()='XXMARPYRREGXMLResponse']/*[local-name()='XXMARPYRREGXMLResult']/text())");
xDoc.LoadXml("<ns0:GetStatusForConcurrentProgram xmlns:ns0=\"\"><ns0:RequestId>" + RequestID + "</ns0:RequestId></ns0:GetStatusForConcurrentProgram>");
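The same extract-and-rebuild step can be sketched with Python's ElementTree, assuming the element names shown in the XPath above; the empty target namespace in the output is a placeholder:

```python
import xml.etree.ElementTree as ET

def build_status_request(response_xml):
    """Pull the request ID out of the concurrent program response and
    wrap it in a GetStatusForConcurrentProgram request."""
    root = ET.fromstring(response_xml)
    # local-name()-style lookup: match the element in any (or no) namespace
    result = root.find(".//{*}XXMARPYRREGXMLResult")
    request_id = result.text.strip()
    return ("<ns0:GetStatusForConcurrentProgram xmlns:ns0=''>"
            f"<ns0:RequestId>{request_id}</ns0:RequestId>"
            "</ns0:GetStatusForConcurrentProgram>")
```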

Getting the status of the Concurrent program: 

The next phase is to get the status of the concurrent program you executed from BizTalk. The program takes time to run, so after some delay you inquire about its status and, hopefully, get a completed status. You should have an estimate of the execution time of the concurrent program and set the delay in your orchestration accordingly before fetching the status. In my case I used a delay of 2 minutes, and my concurrent program took 90 seconds on average.

In order to fetch the status you have to execute another concurrent program, which returns the status of the program you executed, based on the request ID. To get the status you need the request ID of the previous concurrent program you executed. Using XPath we can fetch the request ID from the concurrent program response message and pass it to the status request message.

When the status concurrent program request is sent, we get a response message from which we can read the status of the concurrent program. You will generate the metadata for the status concurrent program as well. The idea is that in one application there is only one generic "Get Status" concurrent program, which you can run for any concurrent program in the application to get its status based on the request ID. See above for how to generate the metadata for the status concurrent program.


Connecting to the Oracle E-Business Suite using BizTalk WCF LOB EBS Adapter: 

The major issue was establishing a connection with the Oracle E-Business application. We were given a URL with a username and password to log in to the EBS application. In order to connect BizTalk with Oracle EBS you need to have the Oracle client installed and a TNS entry in the Oracle TNS file. If you can log in to the Oracle EBS database (which has different credentials than the Oracle EBS application), you can take one step further and configure the adapter to connect with Oracle EBS.

When you log in to the EBS application with the URL you were given, you are prompted again for the same username and password of the application (not the database). Of course you will wonder what is wrong with your credentials, because you land on the same login screen even after entering the right ones. You simply have to enter them again and log in, and you will see a screen similar to the one below.


In EBS you can think of a responsibility as a role in SQL Server or other Microsoft products. Your user will be a member of one or more responsibilities, and those responsibilities hold the rights over the concurrent program. So if you want to execute the concurrent program, you have to make sure that the Application ID, Responsibility and Username combination is correct. If it is not, you have to contact the EBS application administrator to resolve it. To verify the combination at your end, you can execute the query below in an Oracle client tool (Toad/PL-SQL Developer) and see whether your user is in the responsibility and the application in which you want to execute the concurrent program.

SELECT FNDRESP.*
FROM   apps.fnd_user fnduser,
       apps.fnd_user_resp_groups FNDRESPGROUP,
       apps.fnd_responsibility_tl FNDRESP
WHERE  fnduser.user_id = FNDRESPGROUP.user_id
AND    FNDRESP.responsibility_id = FNDRESPGROUP.responsibility_id
AND    upper(fnduser.user_name) LIKE upper('USERNAME')

Even if you are connected to EBS, if you execute the concurrent program without ensuring the correct combination of User ID, Responsibility ID and Application ID, BizTalk will fail to set the application context and you will get the exception details below in the Event Log.

The adapter failed to transmit message going to send port "" with URL "oracleebs://Servername/TNS/Dedicated". It will be retransmitted after the retry interval specified for this Send Port. Details:"Microsoft.ServiceModel.Channels.Common.ConnectionException: Could not retrieve User ID, Responsibility ID, and Application ID. These values are required to set the application context. Make sure that you have specified correct values in the binding properties or the message context properties for setting the application context. 

You will find a good explanation and a tool to resolve this error here, but I was still getting the same error at runtime, and I discovered that the username had to be in uppercase. I was using the username in lowercase; it worked for generating the schemas, but I was getting this error at runtime.
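Given that discovery, a defensive normalization step is worth sketching. This is purely illustrative; the property name here is a stand-in, not necessarily the adapter's exact binding key:

```python
def normalize_ebs_binding(binding):
    """Uppercase the EBS user name before it is used to set the
    application context, since a lowercase name fails at runtime."""
    fixed = dict(binding)  # leave the caller's dict untouched
    if "oracleUserName" in fixed:
        fixed["oracleUserName"] = fixed["oracleUserName"].upper()
    return fixed
```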

Configuring the WCF Oracle E-Business Adapter: 

To configure the send port, select the WCF-Custom adapter and click Configure. In the General tab you have to specify the endpoint address of the adapter service. It will be in the format oracleebs://[serverip]:[port]/[Service], as shown below.


In the SOAP action header section you have to specify the action and operation name. You can get the action value from the generated schemas in the BizTalk solution. In the schema XML you will find the value something similar to the one in the screenshot above. 

The main configuration is in the binding tab. First select oracleEBSBinding and have the correct values for oracleEBSOrganizationId, oracleEBSResponsibilityKey, oracleUserName and oraclePassword. The username and password here are the database credentials. You can see my configurations below. 


If you are selecting clientCredentialType as Ebusiness then you need to enter the EBusiness credentials in the Security Tab. 

Enterprise Integration Pattern Part 6 – Envelope Wrapper

Sometimes batches are sent with common header information related to the messages in the batch. This information is useful for routing purposes, and it can also be useful for processing the messages. It may or may not be of use to client applications or processes other than the one processing the message. Sometimes the header is preserved if it is needed; sometimes it is stripped off from the batch.

You can read more about Envelope Wrapper Integration pattern here.

Thinking and implementing in terms of BizTalk, we have to consider the disassembler component properties, especially the preserve header property. I have seen a lot of BizTalkers get in trouble with the header information, having no idea where the header goes once they specify the header schema and the document schema properties in the disassembler component.

Following is the input flat file which I have used as an example. It contains two items for the body schema and two items for the header schema. We will mainly focus on the header items: how to extract them separately and access them in a meaningful way.


The first thing you notice, and should implement, is to set the preserve header property to true. With this, the header schema value is promoted to the message context and can be extracted in the orchestration. In the figure below you can also see two other properties, HeaderValue1 and HeaderValue2, in the expression editor. These values are much more useful than the raw header schema value, which can be extracted through the XMLNORM.FlatFileHeaderDocument context property of the message.
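Conceptually, the disassembler separates the incoming file into a header part and a body part and promotes the header fields. A rough Python sketch of that split, assuming a two-line header like the sample file (the field names mirror the promoted properties):

```python
def split_flat_file(lines, header_lines=2):
    """Separate the header records from the body records and break
    the header into named fields, like the promoted
    HeaderField1/HeaderField2 context properties."""
    header, body = lines[:header_lines], lines[header_lines:]
    fields = {f"HeaderField{i + 1}": line.strip()
              for i, line in enumerate(header)}
    return fields, body
```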



Now to extract the fields separately from the header you have to create a property schema. In my case I have defined two header fields (HeaderField1 and HeaderField2).




After creating the property fields make sure to change the Property Schema Base property to MessageContextPropertyBase for each of the header fields.



Next step is to promote these fields from the header schema as shown below.




Now when the project is built and deployed, we test the solution by placing the input file in the input folder. What we expect is the two header values in the variables, written to the event log along with the raw header XML, which you can see below.





In this way we can use the header values in a more meaningful way, and if needed we can even route messages based on the header values. If you look at the output file, you will see the header items are stripped off from the batch.

You can download this sample from the widget available on the right of the page.

Enterprise Integration Pattern Part 4 – Splitter (Debatching multiple records)

In many business processes we receive a consolidated group of messages, called a batch, containing multiple messages inside it. It is a common task to debatch each single message from the batch and process it separately. We can then transform or route these messages for further processing.

You can read more about Splitter pattern here.

This will be really familiar to people who have worked on BizTalk for some time; however, on the MSDN forums I still find beginners running into problems with debatching and using XPath. I had it in mind to write a blog post explaining debatching, as I have explained it quite a few times on the forums. I guess from now on I will be redirecting them to this post.

This is an overview of the debatching orchestration.


The orchestration works in the following steps.

1-      First we count the number of items to debatch in the original message. We assign the count to an integer variable, using the count function in the XPath. Remember to copy the XPath of the repeating record node from the message schema.

TotalOrders = System.Convert.ToInt32(xpath(OrdersRqMsg, "count(/*[local-name()='Orders' and namespace-uri()='http://Splitter.OrderRq']/*[local-name()='Order' and namespace-uri()=''])"));

2-      Then we extract each message and assign it to a message of the single-message type. You have to create a new schema that defines a single message in the batch, and in the construct shape assign the Nth message from the batch to the single message. I have used the position function in the XPath and specified the index through the loop count variable, which increments by 1 in the loop.

xpath_str = System.String.Format("/*[local-name()='Orders' and namespace-uri()='http://Splitter.OrderRq']/*[local-name()='Order' and namespace-uri()='' and position()={0}]", LoopCnt);
SingleOrderMsg = xpath(OrdersRqMsg, xpath_str);

3-      Then we can process, transform or route these single messages. I have just placed them inside a folder.
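The three steps above can be sketched with Python's ElementTree, using the same Orders/Order structure as the XPaths in the orchestration (namespaces omitted for brevity):

```python
import xml.etree.ElementTree as ET

def debatch(batch_xml):
    """Split a batch of Order records into single-order documents,
    mirroring the count-and-position loop in the orchestration."""
    root = ET.fromstring(batch_xml)
    orders = root.findall("{*}Order")          # step 1: count the records
    total = len(orders)
    singles = [ET.tostring(o, encoding="unicode")  # step 2: extract each one
               for o in orders]
    return total, singles                      # step 3: process/route each message
```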

Enterprise Integration Patterns Part 1 – Scatter and Gather using Self Correlating Ports

The Scatter-Gather integration pattern implies that a message is sent to the Scatter-Gather operation and is either debatched and sent to multiple recipients or sent as-is, depending on the requirements. The recipients process the message request, and when the processing is completed a response is sent back to the Scatter-Gather, where the responses from all the recipients are collected, processed and aggregated into a single message. You can read more about the Scatter-Gather enterprise pattern here.

There can be many approaches to building this pattern, but I would suggest following a loosely coupled design where you can add more systems. I have used self correlating ports, where the parent orchestration calls the partner orchestrations.

Self Correlating ports – Response message returned by Start Orchestration shape:

Self correlating ports are used where an orchestration calls another orchestration, passing a port instance as a parameter to the child orchestration. The child orchestration sends the message back to this port instance. In the parent orchestration, create a new one-way send port, use direct binding, and select "Self Correlating" in the port configuration wizard. In the child orchestration, drag a port shape, select "Use existing port type", and choose the self correlating port type you created in the parent orchestration, with one-way communication and the send direction in the port configuration wizard. The self correlating port is passed as a port parameter to the child orchestration, and the orchestration is called with the Start Orchestration shape. In this way a response message is returned from an orchestration started by the Start Orchestration shape.


The diagram below shows the design of the parent orchestration, where it calls the partner orchestrations using Start Orchestration shapes within a parallel shape. Remember that the Start Orchestration shape executes asynchronously and does not return a message, so we use self correlating ports to return the message back to the parent orchestration. The advantage of this approach is that it is an asynchronous operation: the child orchestrations are called concurrently and their responses are collected independently, irrespective of the order in which the response messages are received. With looping, the parent orchestration would have to wait for the first child orchestration to complete before calling the next one.
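The same fan-out/fan-in behaviour, partners called concurrently and all responses gathered regardless of completion order, can be sketched with a thread pool; the partner callables here are hypothetical stand-ins for the child orchestrations:

```python
from concurrent.futures import ThreadPoolExecutor

def scatter_gather(request, partners):
    """Send the request to every partner concurrently (scatter) and
    collect all responses, whatever order they complete in (gather)."""
    with ThreadPoolExecutor(max_workers=len(partners)) as pool:
        futures = [pool.submit(p, request) for p in partners]  # scatter
        responses = [f.result() for f in futures]              # gather
    return responses  # aggregate into one message afterwards
```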


At the end you must apply the business logic to aggregate all the responses into one consolidated response message. I have used a simple transformation map that outputs a response message. In the Partner 1 orchestration I placed a delay shape with a delay of 10 seconds, and the Partner 1 orchestration is started in the first branch of the parallel shape. Therefore its response is received after the execution and responses of the other two orchestrations.


When partner orchestrations cannot be invoked by the Start Orchestration shape

However, if the child/partner orchestrations cannot take a self-correlating port as a parameter, or you cannot start them (for example, if the orchestration has a receive shape with its Activate property set to True), then you have to place send shapes in place of the Start Orchestration shapes, while the rest of the design remains the same. When sending the message you have to ensure that the child or partner orchestration is invoked, and you also have to use correlation in this design: the send shape initializes the correlation set, while the receive shape follows the correlation.


Web Services as partners for Scatter-Gather

If the partner is a web service, then calling it and receiving the message is a synchronous operation, so the send and receive shapes will be in the parallel shape branches. Self correlating ports or correlation will not be required for this design.

You can download the Scatter and gather integration pattern sample from here.

Consuming Web Services without web reference in BizTalk

Last week I came across a scenario where I had to call a card verification service before updating the credit/debit card status. This was the first time I was consuming a web service in an orchestration, so I had a little trouble in the beginning. Quite innocently, I created request and response messages of the schema types I got from the WSDL, made a request-response port, and deployed the project. I configured the port to use the SOAP adapter and gave it the URI of the web service. I tested my orchestration expecting results, but got the following exception in the system Event Log.


“Failed to load “” type.                           

Please verify the fully-qualified type name is valid.
Details: “”.
The type must derive from System.Web.Services.Protocols.SoapHttpClientProtocol.
The type must have the attribute System.Web.Services.WebServiceBindingAttribute. “.


After a little search on the internet I found out that I could not call the web service in this manner; I had to add a web reference for the web service and configure web ports and web messages. I eventually succeeded in consuming the web service, but I wanted to do it without adding the web reference, because I figured that if the schemas of the web service changed in the future (which was quite likely) I would have to update the web reference, recompile and redeploy the project.



Adding the web reference to my project, I got the following items, which are used by BizTalk to consume the web service.


  • Uniform Resource Locator (URL)
  • WSDL, which contains the methods, ports and message type information of the web service.
  • Reference map (which contains the XSDs).





BizTalk then uses web port types and web messages from the items generated by the web reference. It captures the web ports from the WSDL and the web messages from the generated reference XSDs.


The trick is that if we can generate these items ourselves, we can surely call the web service without using a web reference. The workaround is to generate a proxy class with wsdl.exe. You can generate the proxy class from the Visual Studio command prompt; giving the proper switches will create the proxy class.

You can see the MSDN help for more switches of wsdl.


wsdl /out:[dir]proxy.cs http://localhost/[webservice URI].asmx?wsdl


After generating the proxy class, add it to a .NET library project, give the project a strong name key file, and build it. (Don't forget to GAC the generated assembly before deploying the BizTalk project.)


Technically, the SOAP.MethodName and SOAP.AssemblyName properties are promoted to the context of the message before it is published to the MessageBox, and these values are supplied automatically when we use a web port (they come from the web reference). We can instead supply the AssemblyName and MethodName from the proxy class we created. After it is GAC'd, we can configure the send port of the orchestration and supply the AssemblyName and MethodName properties for the message context. After the BizTalk project is deployed, configure the send port calling the web service: in the Web Service tab, select the assembly that was created by building the .NET library project containing the proxy class, select the type name and method name, and in the General tab specify the web service URI.





In this way you have more control over the proxy and can handle its versioning, and a small change in the web service won't make you rebuild and redeploy the project.


Creating Oracle Adapter Metadata from Visual Studio in BizTalk 2006 – Points to consider in Enterprise

1-     Build a separate project for the Oracle adapter metadata, which gives auto-generated schema types and port types, because if anyone else in the enterprise uses the same table it will be generated with the default target namespace "{ServiceName}/{TableName}#{Operation}", and if both projects get deployed on the same server you will get a routing failure. For example, the most used schema is NativeSQL, for generic SQL statements, and of course it is used for polling statement results.

"There was a failure executing the receive pipeline: "Microsoft.BizTalk.DefaultPipelines.XMLReceive, Microsoft.BizTalk.DefaultPipelines, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "XML disassembler" Receive Port: "ReceivePortOracleDemo" URI: "OracleDb://OLTPDEV_6d833f94-9fb8-423e-be29-bc7a75884bc0" Reason: Cannot locate document specification because multiple schemas matched the message type."

2-     When generating metadata, first create a static solicit-response send port in the default BizTalk application, which is BizTalk Application 1 in the Management Console. Create one port for each of your Oracle services. This send port will only be used for generating schemas through the dialogue that follows. To retrieve the Oracle adapter metadata, perform the following steps.

  • Right-click your project, go to Add and then select Add Generated Items.
  • In Categories, select Add Adapter Metadata.
  • In Templates, double-click Add Adapter Metadata.

You will get the following dialogue.


Select the Oracle Database adapter and then select the port you created in the BizTalk Application 1 application. Select your table or Native SQL type, and you will get an orchestration file with type Orchestration_1, and port types with Select, Insert, Remove and Update operations. Multi-part messages will also be created for all the operations' requests and responses. Each multi-part message's part will be of the type of the generated schema.

 3- There are a few catches, and you will face problems when generating metadata. The first is that every time an orchestration file is generated, the orchestration type is Orchestration_1 in the same project, which should not be the case (the filename, however, is Orchestration_(Index+1)). The second is that all the generated multi-part messages have the same names, Query, QueryResponse etc. So if you are adding metadata for two tables, you will run into trouble with conflicting message part types, as they will have the same names in the same namespace.


The workaround is, before you create metadata for the adapter, to change the default namespace of your schema project to, let's say, MyEnterprise.Oracle.Schemas.[TableName]. In this way the multi-part message types created will be in different namespaces, and of course the Oracle port created will point to the same message types, so there will be no conflict. Then change the names of your multi-part messages with a prefix of your table name so that the referencing project can identify the message names.


 4- At the end, open the orchestration file created and change its type from Orchestration_1 to a [TableName] type. As you will be using this project for schema types, don't forget to change the access modifier properties of the ports and multi-part messages from Internal to Public.
