BizTalk Server MVP 2012

So another year starts with great news from Microsoft. I would like to thank Microsoft, my fellow MVPs, my MVP Lead Ruari Plint, the community members, and my wife. Thank you all for your support; without you this would not have been possible.

I hope this year brings me the same success as the previous ones. Happy New Year to everyone.


I used to wait for the news around the new year, but this time I was really busy and my family reminded me of the award renewal date. So today I just checked my inbox for the email from Microsoft.

Passed BizTalk 2006 R2 exam (70-241)

I know I am late (probably the last) in taking the BizTalk 2006 R2 (70-241) exam, which was released in October 2009 and will be retired on June 30, 2011, along with BizTalk 2006 (70-235), which I passed in June 2008. So I am moving ahead with the BizTalk 2010 (70-595) exam, which was just released on March 30, 2011, and I will be taking it next week. Hope to pass that too!

My passion for BizTalk compelled me to register for the BizTalk 2006 R2 exam alongside BizTalk 2010. Even though it is retiring, taking the R2 exam was worthwhile, because no one is perfect, and it helped me discover my weaknesses in BizTalk's extended capabilities (RFID, AS2, EDI). While preparing for BizTalk 2010 I am focusing on EDI and RFID, which also helped me in the BizTalk 2006 R2 exam.

BizTalk Server MVP 2011

For two years now I have received the great news of being named a Microsoft Most Valuable Professional (MVP) at the start of the new year. I thank my peers, community members, fellow MVPs, my MVP Lead, and Microsoft for honoring me and recognizing my contributions to the BizTalk community.

Last year I gained knowledge, experience, and more insight into BizTalk, including the SWIFT accelerator and the ESB Toolkit; I worked with more adapters and developed some pipelines, lots of services, etc. At the beginning of the year I was working as a SharePoint developer/administrator, but BizTalk turned out to be my destiny, and I got the chance to join a very good BizTalk team at a financial institution. We delivered successful middleware projects and effectively used the combination of BizTalk, WCF, and Windows Server AppFabric technologies to follow the principles of SOA.

I hope this year is even more prosperous and that I continue to serve the BizTalk community effectively, to the best of my knowledge and experience.

Using Windows Server AppFabric Caching for Storing SSO & Configuration data

Storing SSO data in a cache can be very useful in low-latency scenarios. The performance of services can also be improved by caching configuration data, such as a status codes table (which holds business-level exception status codes and descriptions).

Using a cache in SOA and BPM solutions is not new, and neither is storing SSO data in the cache. You can read Using SSO Efficiently in the Service Oriented Solution and Business Process Management Solution.

Problems with Enterprise Library Caching:

Before AppFabric we could use Enterprise Library caching, which had scalability and synchronization problems: it was a single-server, in-memory cache, which is not an option with BizTalk running in a farm because it would cause inconsistency. Another problem is that the cache resides in-process inside the BizTalk host instances; if the host instances are restarted, the cache is lost and has to be populated again.

A further problem is that if the source data changed, the cache had to be refreshed either after the specified time interval or by force, by restarting the host instances. Restarting a host instance is a heavy operation for the running services and is not an option in a production environment. Refreshing the cache after a specific time interval is fine on a single server, but in a multi-server environment it can create inconsistent data for a period of time. Suppose the refresh interval is 5 minutes on both servers, and on Server A the cache will next refresh in 3 minutes while on Server B it will refresh in 15 seconds: Server A will then be running with the old data for almost 3 minutes. Windows Server AppFabric solves these issues, and now we can leverage the features of the technology and incorporate it with BizTalk.

Windows AppFabric Cache features and advantages:

The best approach is to go through this article to gain an understanding of the architecture and benefits of the caching features. Here is an excerpt of the features from that article.

  • Caches any serializable CLR object and provides access through simple cache APIs.
  • Supports enterprise scale: tens to hundreds of computers.
  • Configurable to run as a service accessed over the network
  • Supports dynamic scaling-out by adding new nodes.
  • Backup copy provides high availability.
  • Automatic load balancing.
  • Integration with administration and monitoring tools such as PowerShell, Event Tracing for Windows, System Center, etc.
  • Provides seamless integration with ASP.NET so that session data can be cached without having to write it to the source databases. It can also be used to cache application data across the entire Web farm.
  • Follows the cache-aside architecture (also known as Explicit Caching) for V1. That is, you must decide explicitly which objects to put/remove in your applications and the cache does not synchronize with any source database automatically.
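The cache-aside pattern mentioned in the last bullet can be sketched in C# as follows. This is a minimal illustration, not code from the sample; the cache name and the LoadStatusCodeFromDatabase helper are assumptions.

```csharp
// Cache-aside (explicit caching): the application checks the cache first,
// loads from the source on a miss, and explicitly puts the value back.
// AppFabric will not synchronize with the source database by itself.
using Microsoft.ApplicationServer.Caching;

public class StatusCodeProvider
{
    private readonly DataCache cache =
        new DataCacheFactory().GetCache("MWConfigurationCache"); // assumed cache name

    public string GetStatusDescription(string statusCode)
    {
        // 1. Try the cache first.
        string description = (string)cache.Get(statusCode);
        if (description == null)
        {
            // 2. Cache miss: load from the source (hypothetical helper).
            description = LoadStatusCodeFromDatabase(statusCode);

            // 3. Explicitly put it in the cache for the next caller.
            cache.Put(statusCode, description);
        }
        return description;
    }

    private string LoadStatusCodeFromDatabase(string statusCode)
    {
        // Query the Status Codes table here.
        return "description for " + statusCode;
    }
}
```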

Let's now see how the problems of previous caching techniques are solved by Windows Server AppFabric. First we need to understand our requirements, because there are many variations of AppFabric caching hosts and clients. You need to analyze which cache hosting option fits your scenario and what kind of client you will write for your caching. An overview of them is given below.

Windows AppFabric Host Configurations:

On one or more servers the AppFabric Caching Service runs as a Windows service. The servers should be clustered; when using it with BizTalk I would install and configure AppFabric on all of my BizTalk Server machines and configure the cache cluster. Of course, if you have a shortage of servers like us, you have to use the existing BizTalk servers, but if you can, dedicate separate cache servers for large amounts of cached data.

Since in BizTalk we will not be storing service data, only SSO data and some configuration data that we get from a SharePoint list, configuring the existing BizTalk servers as the AppFabric cache cluster is a good idea.

You have to follow the installation and configuration guide on how to create a cluster and where to store the cluster's configuration data. The configuration data can be stored in an XML file on a shared folder or in a SQL Server database; I have chosen the latter. Here is the physical architecture diagram of a cache cluster.


1- Partitioned Cache:

I will assume that you are familiar with the logical hierarchy of the AppFabric cache. If caching is configured on a cluster of servers and a named cache is defined across them, then the regions can be distributed among the servers, providing availability or scalability.

a) Scalability:

A cache item resides in one of the regions of the cache, and that region can reside on any one of the cache cluster nodes. A region is guaranteed to reside on a single server and cannot be further partitioned across the cluster; therefore all the items in one region reside on one cluster node. Defining a region is optional when you add an item to the cache, in which case the cache service itself load-balances and assigns keys to regions it creates internally on any server. There is a routing layer at the cache level which routes Put and Get operations to the cluster node holding the key.

b) Availability:

In the availability scenario one node is the primary node and the other cluster nodes act as secondaries, so every node has a copy of the cache items. If the primary node has failed when a Put or Get operation is called, one of the secondary nodes becomes the primary and applications continue accessing the cache.

It doesn't matter on which node the Get and Put operations are called; the routing layer of the cache determines the primary node and routes the request to it. The primary node is responsible for the synchronization of data, as data is updated on this node. When an item is added or updated, the primary updates itself and then sends the operation to all the secondary nodes so they can update themselves. It then waits for acknowledgments from the secondary nodes, and once an acknowledgment has been received from each node it sends the acknowledgment of success back to the client.

2- Local Cache:

If there is no need for availability or scalability, a local cache host can be configured; on my development machine, for example, I have a local cache. The cache lives on one server, so it is fast: there are no network hops and no deserialization of data. For this, when configuring AppFabric you create a new cluster without joining any other nodes to it. To build a multi-server cluster, you can install AppFabric on another machine and join it by selecting the Join Existing Cluster option in the configuration wizard and applying the appropriate settings. It still depends on the client which server's cache it accesses.

AppFabric Cache Clients:

There are two types of clients that can be configured in AppFabric.

1- Routing Client:

A routing client has its own mechanism to track and manage the cached objects: it knows on which server each region resides and which key is placed in which region. We will not be using this in the middleware, since we are just storing SSO and configuration data in the cache, but it can be used by services, or mainly by web applications, depending on the requirements.

2- Simple Client:

A simple client is not aware of the locations of regions in the cluster; it just tries to access the object in the cache in its respective region (if it uses regions). The cache's routing mechanism takes care of the routing, depending on whether the cache cluster is configured for scalability or availability. That routing mechanism is described above in the host configurations section.


Having covered the basic concepts of the architecture, usage, and advantages, I am using the AppFabric cache with the local cache host configuration and a simple client.
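As a sketch of what the simple client looks like in code, assuming the <dataCacheClient> configuration shown later in this post is in place and the named cache is called MWConfigurationCache:

```csharp
using Microsoft.ApplicationServer.Caching;

// With no arguments, DataCacheFactory reads the <dataCacheClient> section
// from the application configuration file (BTSNTSvc.exe.config for BizTalk hosts).
DataCacheFactory factory = new DataCacheFactory();

// Get a handle to the named cache created on the cluster.
DataCache configCache = factory.GetCache("MWConfigurationCache");
```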

For the host you have to install AppFabric and configure the AppFabric Caching Service. The first configuration step is to choose where to store the cluster configuration (a SQL Server database in my case; it can be a shared XML file). The second step is to join an existing cluster or create a new one. I will cover this in another blog post, but it is pretty simple, and you can follow the AppFabric installation and configuration guide.

Now it's time to write the client, which will be BizTalk. We have a Common project which is referenced by each service for common functionality, such as reading configuration data and getting status codes. I will therefore write the client code in that same Common project. I have provided the sample for download in the widget; it is free of our organization's helper functions, so it can be used on any BizTalk machine with SSO and caching configured.

The client can take its configuration from a configuration file or be configured programmatically. I will be using a configuration file, which does not require recompiling when changing hosting environments. These settings can be stored in machine.config or BTSNTSvc.exe.config. I will not explain the configuration file, as it is self-explanatory with comments. Feel free to copy and modify it according to your needs; it has all the configuration sections with all the parameters.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <!-- configSections must be the FIRST element -->
  <configSections>
    <!-- required to read the <dataCacheClient> element -->
    <!-- Cache Client Settings
         1- Client time-out (milliseconds)
            The requestTimeout attribute in the dataCacheClient element. We do not recommend
            specifying a value less than 10000 (10 seconds). Default value is 15000.
         2- Channel open time-out (milliseconds)
            The channelOpenTimeout attribute in the dataCacheClient element. This value can be
            set to 0 in order to immediately handle any network problems. For more information,
            see Configuring Cache Client Timeouts (Windows Server AppFabric Caching).
            The default value is 3000.
         3- Maximum number of connections to the server
            The maxConnectionsToServer attribute in the dataCacheClient element.
            The default value is 1. -->
    <section name="dataCacheClient"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             allowLocation="true"
             allowDefinition="Everywhere"/>
  </configSections>
  <dataCacheClient>
    <!-- (optional) specify local cache; remove in a multi-server farm -->
    <!-- Local Cache Settings
         1- Local cache enabled
            The isEnabled attribute in the localCache element. Values may be true or false.
            The localCache element may also be missing to indicate that it is disabled.
         2- Local cache invalidation method
            The sync attribute in the localCache element. Use the TimeoutBased value to indicate
            a time-out value should be used. Use NotificationBased to indicate cache notifications
            should also be used.
         3- Local cache time-out (seconds)
            The ttlValue attribute in the localCache element.
         4- Specific cache notifications poll interval (seconds) (optional)
            Specified by the pollInterval attribute of the clientNotification element.
            The clientNotification element is a child of the dataCacheClient element,
            and not a child of the localCache element.
            If not specified, a value of 300 seconds will be used.
         5- Maximum locally-cached object count (optional)
            Specified by the objectCount attribute in the localCache element. Triggers when
            eviction on the local cache should start; it will then attempt to remove 20 percent
            of the least recently used locally cached objects. If not specified, the default
            value of 10,000 objects is used. -->
    <localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="3000" />
    <!-- (optional) specify cache notifications poll interval -->
    <!-- Client Notification Settings
         1- Specific cache notifications poll interval (seconds)
            Specified by the pollInterval attribute of the clientNotification element.
            If not specified, a value of 300 seconds will be used.
         2- Maximum queue length
            The maxQueueLength attribute of the clientNotification element.
            If not specified, the default value is 10000. -->
    <!-- <clientNotification pollInterval="300" /> -->
    <hosts>
      <!-- Cache Host Settings
           1- Cache server name
              The name attribute of the host element.
           2- Cache port number
              The cachePort attribute of the host element. -->
      <host name="D001MWWS3" cachePort="22233"/>
      <!-- In a multi-server environment add a second server or more for caching
      <host name="CacheServer2" cachePort="22233"/>
      -->
      <!-- Security Settings
           1- Mode
              The mode attribute of the securityProperties element. Possible values include
              Transport and None. The default value is Transport.
           2- Protection level
              The protectionLevel attribute of the securityProperties element. Possible values
              include None, Sign, and EncryptAndSign. The default value is EncryptAndSign. -->
      <!-- <securityProperties mode="Transport" protectionLevel="EncryptAndSign" /> -->
      <!-- Transport Settings
           Connection buffer size (bytes): the connectionBufferSize attribute of the
           transportProperties element; the ConnectionBufferSize property of the
           DataCacheTransportProperties class. This is then assigned to the TransportProperties
           property of the DataCacheFactoryConfiguration class.
           Maximum buffer pool size (bytes): the maxBufferPoolSize attribute; the
           MaxBufferPoolSize property of the DataCacheTransportProperties class.
           Maximum buffer size (bytes): the maxBufferSize attribute; the MaxBufferSize property.
           Maximum output delay (milliseconds): the maxOutputDelay attribute; the MaxOutputDelay
           property.
           Channel initialization timeout (milliseconds): the channelInitializationTimeout
           attribute; the ChannelInitializationTimeout property.
           Receive timeout (milliseconds): the receiveTimeout attribute; the ReceiveTimeout
           property. -->
      <!-- <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
           maxBufferSize="8388608" maxOutputDelay="2" channelInitializationTimeout="60000"
           receiveTimeout="600000"/> -->
    </hosts>
  </dataCacheClient>
</configuration>

Get all the data from SSO:

The code snippet below shows the function that retrieves all the applications from SSO. I have used it in the CacheManager project, where you can find it in the SSOConfigManager class.

/// <summary>
/// Returns list of applications in SSO database.
/// </summary>
/// <returns>Dictionary of application name as key and description as value.</returns>
public static IDictionary<string, string> GetApplications()
{
    ISSOMapper ssoMapper = new ISSOMapper();
    AffiliateApplicationType appTypes = AffiliateApplicationType.ConfigStore;
    IPropertyBag propBag = (IPropertyBag)ssoMapper;

    uint appFilterFlagMask = SSOFlag.SSO_FLAG_APP_FILTER_BY_TYPE;
    uint appFilterFlags = (uint)appTypes;
    object appFilterFlagsObj = (object)appFilterFlags;
    object appFilterFlagMaskObj = (object)appFilterFlagMask;
    propBag.Write("AppFilterFlags", ref appFilterFlagsObj);
    propBag.Write("AppFilterFlagMask", ref appFilterFlagMaskObj);

    string[] apps = null;
    string[] descs = null;
    string[] contacts = null;
    ssoMapper.GetApplications(out apps, out descs, out contacts);

    Dictionary<string, string> dict1 = new Dictionary<string, string>(apps.Length);
    for (int i = 0; i < apps.Length; ++i)
    {
        if (!apps[i].StartsWith("{"))
            dict1.Add(apps[i], descs[i]);
    }
    return dict1;
}

Creating and managing cache:

Before we perform operations on the cache we have to make sure the cache we are going to use has been created. There are PowerShell commands for administering the AppFabric cache, and there is also a useful GUI-based tool for cache management. I would recommend downloading it in case you are not the administrator of the UAT and production servers. I will continue with both the PowerShell commands and the tool.

1- Create the cache:

There is always a Default cache, which you don't need to create. I am creating a cache named MWConfigurationCache for storing my middleware configuration data by running the New-Cache command. You can then run the Get-CacheClusterHealth command to see its health.
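For example, from the Caching Administration Windows PowerShell console (the time-to-live value here is only an illustration):

```powershell
# Load the AppFabric caching cmdlets and connect to the configured cluster
Use-CacheCluster

# Create the named cache for the middleware configuration data
New-Cache -CacheName MWConfigurationCache -TimeToLive 10

# Check the health of the cluster
Get-CacheClusterHealth
```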


2- Management:

Some commands will come in handy during development. For a full list, refer to the AppFabric Caching Deployment and Management Guide.

1- First is Get-CacheStatistics, which shows how many items and regions the cache has and how many requests are being made to it. You can also see the cache size in bytes.


2- The Get-CacheConfig command gives the following output.


Setting              | Description
CacheName            | The name of the cache.
TimeToLive           | The default time that items reside in the cache before expiring.
CacheType            | The type of cache. This is always Partitioned.
Secondaries          | A value of 1 indicates that the cache uses the high availability feature.
IsExpirable          | Indicates whether objects in the cache can expire.
EvictionType         | Specifies an eviction type of Least-Recently-Used (LRU) or None.
NotificationsEnabled | Indicates whether notifications are enabled for this cache.

3- You can see all the caches that exist on the cluster with the Get-Cache command.


4- Stop and start the cluster with the Stop-CacheCluster and Start-CacheCluster commands respectively.



Note: starting and stopping the cluster clears the cache. Here is the sequence of commands: first, the statistics show that the cache has 3 items; after stopping and starting, the cache has no items. This can be useful when your source data has been updated and you want to reflect this in your cache, but it does require some downtime.
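The sequence looks roughly like this (the item counts will of course vary):

```powershell
Get-CacheStatistics -CacheName MWConfigurationCache   # shows 3 items
Stop-CacheCluster
Start-CacheCluster
Get-CacheStatistics -CacheName MWConfigurationCache   # shows 0 items
```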


Inserting and retrieving Items:

There are many variations in the AppFabric caching API, and I recommend going through them here. In your middleware, if you wish to read/write shared data between services, do consider the concurrency models. You can also attach tags to keys; tags can be used to group items within your cache.

I am using the basic cache methods of Put and Get.

You can see the code below, where I get all the key/value pairs from all applications in SSO and add them to the cache. The Put method updates an item if it already exists in the cache, or adds it otherwise. There is also an Add method, which throws an exception if the item already exists.

public void PopulateCacheFromSSO()
{
    IDictionary<string, string> apps = SSOConfigManager.GetApplications();
    foreach (string appName in apps.Keys)
    {
        string appUserAcct, appAdminAcct, description, contactInfo;
        HybridDictionary properties = SSOConfigManager.GetConfigProperties(
            appName, out description, out contactInfo, out appUserAcct, out appAdminAcct);
        System.Diagnostics.EventLog.WriteEntry("SSO Application Name", "Name = " + appName);
        foreach (DictionaryEntry appProperties in properties)
        {
            System.Diagnostics.EventLog.WriteEntry("SSO Application entries",
                "Key = " + appProperties.Key.ToString() + " , " +
                "Value = " + appProperties.Value.ToString());
            PutInCache(appName, appProperties.Key.ToString(), appProperties.Value.ToString());
        }
    }
}

public void PutInCache(string category, string key, string value)
{
    DataCacheItemVersion itemVersion;
    if ((itemVersion = configCache.Put(category + "_" + key, value)) != null)
        System.Diagnostics.EventLog.WriteEntry("Cache Item Added", "Key = " + key);
    else
        throw new Exception("Cache Item not added");
}

After running the code you can run the Get-CacheStatistics command to see that the items were added to the cache. Now it's time to retrieve an item that you added. The code below gets items from the cache.

public string GetFromCache(string category, string key)
{
    string item;
    if ((item = (string)configCache.Get(category + "_" + key)) != null)
        System.Diagnostics.EventLog.WriteEntry("Cache Item Retrieved", "Key = " + key);
    else
        throw new Exception("Cache item could not be found");
    return item;
}

Try to retrieve values from the cache after the TTL configured in the configuration file has elapsed. You will find that the cached items have expired. Also run Get-CacheStatistics from PowerShell and see what you find.

Some problems I ran into, which would be common to any developer, are below.

ErrorCode<ERRCA0017>:SubStatus<ES0007>:There is a temporary failure. Please retry later. (The request failed because the server is in throttled state.)

If you get the error above, I couldn't figure out the cause, and neither could the people on MSDN; I just reset IIS and it goes away. You will notice in Task Manager that the w3wp process is consuming too much memory.

The type or namespace name ‘ApplicationServer’ does not exist in the namespace ‘Microsoft’ (are you missing an assembly reference?)

If you are getting the error above, maybe you have not set the target framework to 3.5/4. The second issue I had was that I was adding references to the Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core assemblies from the C:\Windows\SysNative\AppFabric path. It simply didn't work, and the error persisted. I then added the references from the GAC and it worked (I have no explanation for this). You can find the references in the sample.

Did the above solve my middleware problems?

I had to find a solution to the problems I had with the Enterprise Library.

1- Availability and scalability are solved by the architecture of the AppFabric cache.

2- If you want changes in the source to be reflected immediately in the cache, restart the cluster services without restarting the BizTalk host instances. I mentioned that there would be downtime; this means you don't need to stop the host instances, just stop the receive locations so that no request is entertained by BizTalk.

3- If you cannot afford the downtime, there is another trick: create a new cache with the same configuration but a different name. In the BTSNTSvc.exe.config or machine.config file (I assume you have kept the name of the cache in the appSettings section, which means you retrieve it at runtime), change the setting to the new cache you created.
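A minimal sketch of that pattern, assuming a hypothetical appSettings key named ConfigCacheName:

```csharp
using System.Configuration;
using Microsoft.ApplicationServer.Caching;

// The cache name is resolved at runtime from appSettings, so switching to a
// freshly populated cache only requires editing BTSNTSvc.exe.config
// (e.g. change ConfigCacheName from "MWConfigurationCache" to "MWConfigurationCache2").
string cacheName = ConfigurationManager.AppSettings["ConfigCacheName"];
DataCache configCache = new DataCacheFactory().GetCache(cacheName);
```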

4- If you can wait until the cache expires, that is the best option: the cache will fetch fresh data from the source, and in a multi-server BizTalk environment each node will have a consistent, identical copy of the cache. Great!

Security Considerations:

This article would be incomplete without security considerations, and a BizTalk person reading this cannot compromise on the security of SSO data. If security is not considered, the cached SSO data can be overwritten by any client that has access to the cache, and since the cache is clustered, clear-text data on the network can also be sniffed.

In an AppFabric cache cluster, the communication between the client and the server supports encryption and signing.

A Windows account that has access to the cache cluster must be added, and this account must be used by the client application to access the cache cluster. In the BizTalk scenario we would add users such as the BizTalk Application Users group (under which the host instances run) and the SSO Administrator/Affiliate Administrator accounts. This is done with the Grant-CacheAllowedClientAccount command in PowerShell.
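For example (the account name below is illustrative; use the groups and accounts configured in your environment):

```powershell
# Allow the BizTalk host instance account group to use the cache cluster
Grant-CacheAllowedClientAccount "MYDOMAIN\BizTalk Application Users"

# Verify the accounts that currently have access
Get-CacheAllowedClientAccounts
```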

Cluster Security options:

After granting access to the users, you have to configure the server and the client for security.

To enable the security options on the server, use the Set-CacheClusterSecurity command from PowerShell.
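For example (the cluster must be stopped while the security settings are changed; this sketch assumes the Transport/EncryptAndSign combination used in this post):

```powershell
Stop-CacheCluster
Set-CacheClusterSecurity -SecurityMode Transport -ProtectionLevel EncryptAndSign
Start-CacheCluster
```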


Client Security options:

For the client you can do it programmatically, or in the configuration file locate the securityProperties tag.

<securityProperties mode="Transport" protectionLevel="EncryptAndSign" />
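The programmatic equivalent, as a sketch, uses the DataCacheSecurity class on the factory configuration (the server name and cache name are the ones used earlier in this post):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
config.Servers = new List<DataCacheServerEndpoint>
{
    new DataCacheServerEndpoint("D001MWWS3", 22233)
};

// Equivalent of mode="Transport" protectionLevel="EncryptAndSign" in the config file
config.SecurityProperties = new DataCacheSecurity(
    DataCacheSecurityMode.Transport,
    DataCacheProtectionLevel.EncryptAndSign);

DataCacheFactory factory = new DataCacheFactory(config);
DataCache configCache = factory.GetCache("MWConfigurationCache");
```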

There is a table in Security Model (Windows Server AppFabric Caching) giving the matrix of combinations of cluster and client security options. Whether a given combination of client and cluster settings will work is shown in the following table.

Client Settings (Mode, ProtectionLevel) | Cluster: Mode=None, ProtectionLevel=Any | Cluster: Mode=Transport, ProtectionLevel=None | Cluster: Mode=Transport, ProtectionLevel=Sign | Cluster: Mode=Transport, ProtectionLevel=EncryptAndSign
None, Any                 | Pass | Fail | Fail | Fail
Transport, None           | Fail | Pass | Fail | Fail
Transport, Sign           | Fail | Pass | Pass | Fail
Transport, EncryptAndSign | Fail | Pass | Pass | Pass

BAM Portal Configuration Error

When configuring the BAM portal in an x64 environment I got this error. I had seen it before but had somehow forgotten the fix.

Start registering ASP.NET scriptmap (2.0.50727) at W3SVC/2/Root/BAM.
Error when validating the IIS path (W3SVC/2/Root/BAM). Error code = 0x80040154
The error indicates that IIS is in 64 bit mode, while this application is a 32 bit application and thus not compatible.

To run the 32-bit version of ASP.NET, run the following command:

cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1

You can find the commands in this KB article. After running the command, set the Enable 32-Bit Applications property to True on the application pool under which the BAM applications run.

Invoking Concurrent programs and working with BizTalk Oracle E-Business Adapter


In my current project I had to call a concurrent program in Oracle E-Business Suite which would generate a report of all the employees' payroll for the month. We were automating the payroll process in our organization; the whole solution involved getting and validating the employee data through BizTalk, putting the data into an Excel file, and initiating a payroll approval workflow built in SharePoint.

I was new to the Oracle E-Business Suite application and had a few hiccups and surprises while connecting to it through the WCF Oracle E-Business Suite adapter. The major challenge was to get the XML that is generated on the Oracle server by the concurrent program.

Generating Oracle EBS Adapter Metadata: 

The first step is to generate the adapter metadata in order to get the port types and schemas for the concurrent programs. Right-click on the project, select Add Generated Items, then select Add Adapter Metadata. You will get the list of LOB adapters; select the Oracle EBS adapter and click Next. You will see the window where the binding type will be oracleEBSBinding. Click Configure to configure the adapter URI and binding properties. In the URI Properties tab, as shown below, give the port number, the server name or IP, and the service name from the TNS entry.


Next, configure the binding properties. The first property in the window is ClientCredentialType, which can be Database or EBusiness. It is your choice which credential type to use; specify those credentials in the Security tab.


For generating the metadata you need to give the correct database credentials, Responsibility Key or Name, and Organization ID. You can ignore the other properties for now; we will come back to them when configuring the physical port in the BizTalk Administration Console. The Responsibility Key/Name, Organization ID, and credentials are given by the E-Business Suite people; ask them for the correct values if you are having problems connecting to EBS.

When you are done, click OK and then Connect. If you get any errors, troubleshoot by supplying the correct binding properties and credentials; if you are lucky, you will be able to see the Categories and Operations.


Getting the concurrent program Application Name and parameters from Oracle E-Business application: 

At first the categories and operations might be confusing. If you have a good EBS team in your organization, they will guide you through this; if not, you share my fate. Because you may have only executed the concurrent program from the EBS interface, the categories and operations can be hard to map. I will give a simple walkthrough of the Oracle EBS interface, because to get the parameters and status of the concurrent program you have to be familiar with it.


This is the page you will see after logging in; it lists the EBS sections. I went to the Processes and Reports group and selected Submit Processes and Reports. A popup appears and the Oracle interface opens, where you can submit a request for the concurrent program.


It depends on your request; mine was of the single request type, so I selected it and went to the second screen.


You can select the name of the concurrent program (the Oracle people will be more helpful to you on this). Observing the next screens will help you get the parameters and find the concurrent program in the categories and operations list when generating the metadata. In the screen below you can see the name of the concurrent program and the application to which it is associated. The application name will be the category, and the concurrent program will be the operation, when generating the metadata.


The next critical thing is the parameters that you will pass in the request message of the concurrent program in BizTalk. This is nearly a riddle, and I worked out the values of the parameters only after a long trial-and-error process. You can easily execute the concurrent program from the interface by selecting the parameters from the available lists, and one would generally assume that these must be the values to pass to the concurrent program from BizTalk. This is not the case.



Finally, after selecting all the parameters from the lists of available values, you will have populated all the parameters and be ready to execute the concurrent program, as shown in the screens below.


In the screen below I discovered, while copying the strings from the previous screens, that the actual parameter values are 61 and may-2010. You can see that the parameters passed to the concurrent program differ from the ones shown in the interface.

You can refresh the data to see that the execution of the concurrent program is complete and see the status from the interface. 


Developing the Solution to Invoke the Concurrent program: 

After getting the parameters, the concurrent program name and the application name, you can map these to the adapter metadata wizard and generate the metadata. Then design your orchestration and populate your request message with the correct values. My orchestration is below; it does the following.

1- Receive the request from the SOAP adapter and map the values to the request message of the concurrent program.
2- Call the concurrent program and get the response.
3- Get the request ID from the response message of the concurrent program.
4- Delay for 2 minutes.
5- Call the status concurrent program and repeatedly check the status until it is successful.
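The polling part of these steps can be sketched in orchestration expression syntax as follows. This is only a sketch: the message names, variable names and the status value tested in the loop are my own assumptions, not the actual orchestration's.

```
// Step 3: extract the request ID from the CP response (expression shape)
RequestID = xpath(PayrollRegisterRs.parameters,
    "string(//*[local-name()='XXMARPYRREGXMLResult']/text())");

// Steps 4-5, conceptually (delay + loop shapes around a send/receive pair):
// loop while (Status != "Completed")      <- status value is an assumption
//     delay: new System.TimeSpan(0, 2, 0) // 2-minute delay shape
//     send StatusRequest built from RequestID, receive StatusResponse
//     Status = xpath(StatusResponse.parameters,
//         "string(//*[local-name()='Status']/text())");
```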


This is the code in my expression shape in which I construct the request message for the concurrent program:

xDocRequest = new System.Xml.XmlDocument();

xDocRequest.LoadXml(@"<ns0:XXMARPYRREGXML xmlns:ns0=''>
<ns1:Implicit xmlns:ns1=''>ns3:Implicit_0</ns1:Implicit>
<ns1:Protected xmlns:ns1=''>ns3:Protected_0</ns1:Protected>
<ns1:Language xmlns:ns1=''>ns3:Language_0</ns1:Language>
<ns1:Territory xmlns:ns1=''>ns3:Territory_0</ns1:Territory>
<ns1:ContinueOnFail xmlns:ns1=''>true</ns1:ContinueOnFail>
<ns1:Printer xmlns:ns1=''>ns3:Printer_0</ns1:Printer>
<ns1:Style xmlns:ns1=''>ns3:Style_0</ns1:Style>
<ns1:Copies xmlns:ns1=''>10</ns1:Copies>
<ns1:SaveOutput xmlns:ns1=''>true</ns1:SaveOutput>
<ns1:PrintTogether xmlns:ns1=''>ns3:PrintTogether_0</ns1:PrintTogether>
<ns1:RepeatTime xmlns:ns1=''>ns3:RepeatTime_0</ns1:RepeatTime>
<ns1:RepeatInterval xmlns:ns1=''>10</ns1:RepeatInterval>
<ns1:RepeatUnit xmlns:ns1=''>ns3:RepeatUnit_0</ns1:RepeatUnit>
<ns1:RepeatType xmlns:ns1=''>ns3:RepeatType_0</ns1:RepeatType>
<ns1:RepeatEndTime xmlns:ns1=''>ns3:RepeatEndTime_0</ns1:RepeatEndTime>
<ns0:StartTime>19-APR-2010 14:24:50</ns0:StartTime>
<ns0:Payroll_x0020_Name>" + MAR.Payroll.Helper.GetConfigurations.ConcurrentProgramID + @"</ns0:Payroll_x0020_Name>
<ns0:Month>" + ClientRequestMessage.Message.PayrollRq.Month + @"</ns0:Month>
</ns0:XXMARPYRREGXML>");

This is the code in my expression shape in which I am getting the RequestID of the concurrent program from the response message and then creating the request for the Status Concurrent program.

RequestID = xpath(PayrollRegisterRs.parameters, "string(/*[local-name()='XXMARPYRREGXMLResponse']/*[local-name()='XXMARPYRREGXMLResult']/text())");
xDoc.LoadXml("<ns0:GetStatusForConcurrentProgram xmlns:ns0=\"\"><ns0:RequestId>" + RequestID + "</ns0:RequestId></ns0:GetStatusForConcurrentProgram>");

Getting the status of the Concurrent program: 

The next phase is to get the status of the concurrent program you executed from BizTalk. The concurrent program takes time to run, so after some delay you inquire for its status and hopefully get a completed status. You should have an estimate of the execution time of the concurrent program and set the delay in your orchestration accordingly before fetching the status. In my case the concurrent program took 90 seconds on average, so I used a delay of 2 minutes.

To fetch the status you execute another concurrent program, which returns the status of the program you executed based on its request ID. Using XPath we can fetch the request ID from the concurrent program's response message and pass it into the status request message.

When the status request is sent we get a response message from which we can read the status of the concurrent program. You will generate metadata for the status concurrent program as well. The idea is that each application has one generic "Get Status" concurrent program, which you can run for any concurrent program in that application to get its status based on the request ID. See above for how to generate the metadata for the status concurrent program.


Connecting to the Oracle E-Business Suite using BizTalk WCF LOB EBS Adapter: 

The major issue was establishing a connection with the Oracle E-Business application. We were given a URL with a username and password to log in to the EBS application. To connect BizTalk to Oracle EBS you need the Oracle client installed and a TNS entry in the Oracle TNS file. If you can log in to the Oracle EBS database, which has different credentials than the Oracle EBS application, then you can take the next step and configure the adapter to connect to Oracle EBS.

When you log in to the EBS application with the URL you were given, you will be prompted again for the same username and password of the application (not the database). You may wonder what is wrong with your credentials, because you land on the same login screen even after entering the right ones. Just enter them again and log in, and you will see a screen similar to the one below.


In EBS you can think of a responsibility like a role in SQL Server or other Microsoft products. Your user will be a member of one or more responsibilities, and those responsibilities have rights over the concurrent programs. So if you want to execute a concurrent program you have to make sure that the Application ID, Responsibility and Username combination is correct. If it is not, you have to contact the EBS application administrator to resolve it. At your end, to verify that you have the correct combination, you can execute the query below in an Oracle client tool (Toad/PL-SQL Developer) and see whether your user is in the responsibility and the application in which you want to execute the concurrent program.

SELECT FNDRESP.*
FROM   apps.fnd_user fnduser,
       apps.fnd_user_resp_groups FNDRESPGROUP,
       apps.fnd_responsibility_TL FNDRESP
WHERE  fnduser.user_id = FNDRESPGROUP.user_id
AND    FNDRESP.responsibility_id = FNDRESPGROUP.responsibility_id
AND    upper(fnduser.user_name) LIKE upper('USERNAME')

Even if you are connected to EBS, if you execute the concurrent program without ensuring the correct combination of User ID, Responsibility ID and Application ID, BizTalk will fail to set the application context and you will get the exception details below in the event log.

The adapter failed to transmit message going to send port "" with URL "oracleebs://Servername/TNS/Dedicated". It will be retransmitted after the retry interval specified for this Send Port. Details:"Microsoft.ServiceModel.Channels.Common.ConnectionException: Could not retrieve User ID, Responsibility ID, and Application ID. These values are required to set the application context. Make sure that you have specified correct values in the binding properties or the message context properties for setting the application context. 

You will find a good explanation and a tool to resolve this error here, but I was still getting the same error at runtime, and I discovered that the username had to be in uppercase. I was using the username in lowercase: it worked for generating the schemas, but produced this error at runtime.

Configuring the WCF Oracle E-Business Adapter: 

To configure the send port, select the WCF-Custom adapter and click Configure. In the General tab you specify the endpoint address of the adapter service. It will be in the format oracleebs://[serverip]:[port]/[Service], as shown below.


In the SOAP action header section you specify the action and operation name. You can get the action value from the generated schemas in the BizTalk solution; in the schema XML you will find a value similar to the one in the screenshot above.
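For reference, the SOAP action header of a WCF-Custom send port takes a BtsActionMapping XML blob. A sketch of what it might look like for this operation is below; the Action string is illustrative only, so copy the real one from your generated schema rather than from here.

```xml
<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- Operation name must match the orchestration port operation;
       the Action value comes from the generated schema annotations -->
  <Operation Name="XXMARPYRREGXML" Action="ConcurrentPrograms/MAR/XXMARPYRREGXML" />
</BtsActionMapping>
```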

The main configuration is in the Binding tab. First select oracleEBSBinding, and provide correct values for oracleEBSOrganizationId, oracleEBSResponsibilityKey, oracleUserName and oraclePassword. The username and password here are the database credentials. You can see my configuration below.


If you select Ebusiness as the clientCredentialType, then you need to enter the E-Business credentials in the Security tab.

MVP title activated again

My title was suspended this week and now it is activated again. I was missing my MVP tag on the forums and am happy to see it back. I thank Microsoft again for resolving my NDA issue. I would like to thank my nominator, fellow MVPs, my MVP leads and the program manager for the Global MVP program, as they resolved it very quickly.

SQL Server Query Notification with BizTalk Server 2009 WCF SQL Adapter

SQL Server 2005 introduced query notifications, which allow applications to subscribe to the database and receive notifications based on changes in the result set of the query to which the application is subscribed. This can change the behavior and performance of an application, as it no longer has to query the database to detect changes. For example, if a service/application has cached data, it can refresh its cache whenever the underlying results change. In this way the data can be refreshed efficiently and in near real time.

With regard to BizTalk, previously we only used polling (with the SQL Server adapter) to get results from the database. Polling can be a heavy operation, depending on the polling interval and the results returned, and can affect overall BizTalk Server performance. By utilizing the query notification feature of SQL Server 2005/2008, BizTalk Server can instead receive notifications whenever the result set of the query changes. For further reading, see Using Query Notification on MSDN. Before planning to use SQL query notifications with the WCF adapter, please go through Considerations for Receiving Query Notifications Using the Adapter on MSDN.

Generating schemas from Consume adapter service Wizard

To use query notifications in BizTalk, the first step is to generate the schemas with the Consume Adapter Service wizard. Start the wizard by right-clicking your project -> Add Generated Items -> Consume Adapter Service. To use the SQL WCF service, select sqlBinding in the wizard and supply the SQL URI; please refer to SQL Server Connection URI on MSDN. Click the Configure button and configure the mssql URI.

1- In the Security tab, choose the credential type (Windows/username) and supply the username/password if using SQL authentication.

2- In the URI tab, supply the URI properties: the database name in the InitialCatalog property, the SQL Server instance name, and the SQL Server name/IP in the Server property. The inbound ID is used for typed polling and makes the URI unique.

3- In the binding properties, go to the Inbound property group and set the InboundOperationType property to Notification. For the complete binding properties, read Working with BizTalk Adapter for SQL Server Binding Properties on MSDN.

4- Since you have set the inbound operation type to Notification, you will set the Notification properties, in which the notification statement is specified. The notification statement is the query on which notifications are based: whenever the result set returned by the query changes, a SQL notification is sent. For a complete reference on creating a query for notification, read Creating a Query for Notification on MSDN. In my case the query was "Select [columns] from MAR_SP_INFO_V2".

In the end two files are generated: one is a simple schema with three fields, as shown below, and the other is a binding XML file.
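The notification-related part of that binding file looks roughly like the fragment below. This is a trimmed sketch: most attributes are omitted, the binding name is the wizard's default, and the notification statement is the elided query from my example.

```xml
<bindings>
  <sqlBinding>
    <!-- only the notification-related properties are shown here -->
    <binding name="SqlAdapterBinding"
             inboundOperationType="Notification"
             notificationStatement="Select [columns] from MAR_SP_INFO_V2"
             notifyOnListenerStart="true" />
  </sqlBinding>
</bindings>
```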

Setting up the Orchestration for Query Notification and processing the results

When a notification is received, the orchestration has to determine the type of notification, as the WCF adapter returns two types:

1- Notifications based on changes in the result set.

2- Notifications when the receive location is enabled after a failure.

The adapter sends a notification whenever the receive location comes back up, if NotifyOnListenerStart is set to true in the binding properties. But beware that the adapter does not track any activity while the receive location is down and there are changes in the database; it only starts notifying again after the receive location is back up. For example, if a few records were inserted and updated while the receive location was down, the adapter will not notify you of what happened when it comes back up. The orchestration must have its own implementation to determine those changes. For this you can read Receiving Query Notifications After a Receive Location Breakdown on MSDN.

The schema generated by the wizard has three fields: Info, Source and Type. In the orchestration, the first step is a decide shape that determines which type of notification was received. I promoted all three fields as distinguished fields so I could use them directly in my orchestration; if you do not, you can use XPath to extract the field values.

In the decide shape, first check the Info and Source fields. If Info is "ListenerStarted", Source is "SqlBinding" and Type is "Startup", you can proceed to the logic that detects changes made to the database while the receive location was down. For an Insert/Update or Delete operation, Info is the operation name, Source is "Data" and Type is "Change".
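If you do not promote the fields as distinguished, the same checks can be written with XPath in an expression shape, along these lines. The message name and the exact XPath expressions here are assumptions based on the generated notification schema, not copied from my orchestration.

```
info   = xpath(NotificationMsg, "string(//*[local-name()='Info']/text())");
source = xpath(NotificationMsg, "string(//*[local-name()='Source']/text())");

// branch to the breakdown-recovery logic when a startup notification arrives
startupNotification = (info == "ListenerStarted" && source == "SqlBinding");
```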

Field    On data changes in the database    On listener start (receive location enabled)
Info     Insert/Update or Delete            ListenerStarted
Source   Data                               SqlBinding
Type     Change                             Startup


In my orchestration I do nothing for listener-start notifications or for updates and deletes; I am only interested in acting on insert operations in the table, so that is what I check in my decide shape. I will later devise a mechanism to determine what to do when my receive location comes back up after going down.

For now I need to get the newly inserted records and process them. I am using the WCF SQL adapter to select all records whose StatusRecord field is set to NEW; NEW is the default value for a newly inserted record, which lets me identify them. I will write in detail in my next post about how I use the WCF SQL adapter to select the records. When I have selected the records and finished processing them, I update the StatusRecord column to READ.
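In SQL terms, the select-then-mark pattern described above amounts to something like the statements below. The table and column names are from my example; the statements themselves are only a sketch, since the actual work goes through the WCF SQL adapter's generated Select and Update operations rather than raw SQL.

```sql
-- fetch the unprocessed records
SELECT * FROM MAR_SP_INFO_V2 WHERE StatusRecord = 'NEW';

-- after successful processing, mark them as read
UPDATE MAR_SP_INFO_V2 SET StatusRecord = 'READ' WHERE StatusRecord = 'NEW';
```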

I pass the whole select response message to my helper class, where all the processing is done, and if the operation is successful I log the results.

Configuring the WCF Custom Adapter properties for Notification

There are again two ways to configure the adapter properties: you can define the properties manually, or you can directly import them from the binding file generated by the Consume Adapter Service wizard. For the latter, refer to Configuring a Physical Port Binding Using a Port Binding File on MSDN. I haven't looked into it, but will use it when needed.

I will configure the bindings for the WCF-Custom adapter manually. When finished with the orchestration, build and deploy the BizTalk application. Then, from the BizTalk Administration console, open the Receive Ports node and create a new receive port, then create a new receive location. Select WCF-Custom as the type and use the default XML receive pipeline. Click the Configure button to configure the adapter properties.

In the General tab, specify the address URI. You can copy-paste it from the binding file generated by the Consume Adapter Service wizard.

In the Other tab, specify the username and password for the database; otherwise you will get a user credential error.


Now for the binding properties: go to the Binding tab and select sqlBinding as the binding type. You will see all the binding properties below; we are interested only in the notification properties. Set inboundOperationType to Notification, and set the notificationStatement property to the SQL query; notifications will be sent to the orchestration based on this query's result set. Set notifyOnListenerStart to True if you want to receive a notification when the receive location is enabled; in my case it is false.
