BizTalk : How To : Rename BizTalk Server Machine Walkthrough

Tuesday, March 19, 2013

Renaming your BizTalk machine

Anyone who has been working with BizTalk Server long enough knows that renaming a BizTalk machine is not a trivial task. It usually involves quite a bit of manual work, such as:
  • Un-configuring the existing settings, either through the UI or with the ConfigFramework /U switch
  • Deleting the databases manually
  • Re-configuring everything, which raises tons of errors.
It has never been a smooth procedure. Recently I had to go through this painful process because, due to corporate policy changes, all of our developer workstations were renamed.
Here I'll explain how we used the Disaster Recovery scripts that ship with the BizTalk installation to rename our BizTalk machines smoothly (mainly without the pain of un-configuring and re-configuring). If everything goes well, you should be done in less than 10 minutes.

Disclaimer: Do this at your own risk, it worked for me.
Edit SampleUpdateInfo.xml file:
  • Navigate to the folder C:\Program Files\Microsoft BizTalk Server 2006\Schema\Restore and open SampleUpdateInfo.xml in your favourite editor.
  • Find and replace every "SourceServer" value with your original server name.
  • Find and replace every "DestinationServer" value with your new server name.
  • The entries related to Analysis Services, BAM, Rule Engine, HWS, and EDI are commented out by default; if you use any of these in your environment, un-comment the ones you need.
  • Save and Close the file.
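After the find-and-replace, each database entry in the file should carry the new machine name. A purely illustrative fragment (the element and attribute names follow the pattern of the shipped sample file and may differ in your BizTalk version; OLDMACHINE/NEWMACHINE are placeholders):

```xml
<!-- Illustrative only: check the element names in your own SampleUpdateInfo.xml -->
<Database Name="BizTalkMgmtDb"
          oldDatabaseName="BizTalkMgmtDb" oldServerName="OLDMACHINE"
          newDatabaseName="BizTalkMgmtDb" newServerName="NEWMACHINE" />
```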
Run the scripts to update database and registry:

Run the following command in the command prompt
cscript UpdateDatabase.vbs SampleUpdateInfo.xml

This script updates all the BizTalk databases with the new server name. You need to run it only once, even in a multi-server BizTalk environment.
Run the following command in the command prompt

cscript UpdateRegistry.vbs SampleUpdateInfo.xml

This script updates the local registry with the new server name. You need to run it on every BizTalk server in the group.

Restart WMI Service:
Open the Service Control Manager (services.msc) and restart the "Windows Management Instrumentation" service. This step is required because most of the administration tasks you perform from the admin console depend on WMI.

Promote the new server as master secret server:
Follow the steps outlined at http://msdn.microsoft.com/en-us/library/aa559842.aspx to promote your new server as the master secret server.

Configuring the BizTalk Administration Console:
Open the BizTalk Administration console, right-click the existing group node (the one pointing to the original server), and select Remove.
Right-click the "BizTalk Server 2006 Administration" node and select "Connect to Existing Group…". Provide the new server name, select the BizTalkMgmtDb database, and click OK.

Update your Visual Studio project properties to point to new Server:
Most of your Visual Studio BizTalk projects will still be pointing to the old management database server. Trying to deploy such a project results in an error message like: "Login failed for the user ''. The user is not associated with a trusted SQL Server connection." Update the Server value on the Deployment tab of each project's properties to point to the new server.

Couple of Caveats:
At this point you should be able to see a working BizTalk group. I experienced a couple of problems:
1. When you try to expand "Platform Settings\Adapters" and select any adapter from the list, you'll see an error message like: "Unable to load adapter handler for XXXX adapter. (Microsoft.BizTalk.Administration.SnapIn) Access Denied (System.Management)"


The machine name in the error message was my old server name. I wrote a small console application using the BizTalk WMI classes, and everything worked fine there, which showed clearly that the wrong server name was stored somewhere and that the admin console was trying to connect to the old server. BTW, there won't be anything in the event viewer to assist you :-)
I opened SQL Server Management Studio and made the following changes in the BizTalk Management Database (BizTalkMgmtDb):
1. adm_Group table, SSOServerName column: this held the original server name; I entered the new server name.
2. adm_Server table, Name column: again, this held the original server name; I entered the new server name.
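For reference, the two fixes above amount to something like the following T-SQL (a sketch; 'OLDSERVER'/'NEWSERVER' are placeholders, and editing BizTalkMgmtDb directly is unsupported, so take a backup first):

```sql
USE BizTalkMgmtDb;

-- 1. Point the group's SSO server reference at the renamed machine
UPDATE adm_Group SET SSOServerName = 'NEWSERVER'
WHERE SSOServerName = 'OLDSERVER';

-- 2. Fix the server entry itself
UPDATE adm_Server SET Name = 'NEWSERVER'
WHERE Name = 'OLDSERVER';
```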
Restart all the BizTalk/SSO services.

NOTE: These procedures were performed on a Windows XP workstation (single BizTalk installation) with SQL Server installed on the local machine and no named SQL instance. The procedure should also work for Windows Server 2003 with a remote SQL Server and multiple BizTalk servers. I'm not sure about named SQL instance setups (for example, SQL Express, where the instance name would be something like ServerName\EXPRESS).

BizTalk: Best Practices Analyzer

Saturday, March 16, 2013

I found this great tool via a blog post by an employee of Microsoft Israel.
The tool overview (from the MS site):
The BizTalk Server 2006 Best Practices Analyzer performs configuration-level verification by reading and reporting only. The Best Practices Analyzer gathers data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries. The Best Practices Analyzer uses the data to evaluate the deployment configuration. The Best Practices Analyzer does not modify any system settings, and is not a self-tuning tool.

Connecting to Oracle via SQL Adapter

Friday, March 15, 2013


In a specific scenario, we needed two receive locations bound to an Oracle database.
One was simple: we needed table rows to start a process. This was easy using the BizTalk Oracle Adapter: install the Oracle client, define the TNS entry, and the connection is established.

The other piece of data we needed was a bit more complex: the employee courses, including the subjects learned in each course.

The Oracle adapter cannot return XML query results, so we preferred to define a linked server from SQL Server to Oracle and write the query in SQL against the linked server, like this:

-- Note: OPENQUERY takes its query as a single-quoted string literal
SELECT *
FROM
  OPENQUERY(Ora, 'SELECT empid, courseid, x1, ..., xn FROM courses') course,
  OPENQUERY(Ora, 'SELECT empid, courseid, y1, ..., ym FROM subjects') subject
WHERE
  course.empid = subject.empid AND
  course.courseid = subject.courseid
FOR XML AUTO

The query worked fine in SQL Server Management Studio, but the receive location failed: BizTalk tried to enlist the Oracle database, via the linked server, into the distributed transaction belonging to the SQL Server.

While researching this issue, I came across a blog post, Joe unfiltered: BizTalk, SQL linked servers and DTC, describing the same attempt.

Following the post's solution, we created a new linked server, adding the string "DistribTX=0" to the provider string (and selecting the Oracle OLE DB Provider instead of MSDAORA), which solved the issue.
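For illustration, creating such a linked server might look like the following T-SQL (a sketch; the linked server name 'Ora' and the TNS alias are placeholders — the key parts are the Oracle OLE DB provider and the DistribTX=0 provider string):

```sql
EXEC sp_addlinkedserver
    @server     = N'Ora',
    @srvproduct = N'Oracle',
    @provider   = N'OraOLEDB.Oracle',  -- Oracle OLE DB provider, not MSDAORA
    @datasrc    = N'MyTnsAlias',       -- TNS alias from tnsnames.ora (placeholder)
    @provstr    = N'DistribTX=0';      -- keep the linked server out of the DTC transaction
```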

BizTalk - Testing Pipeline Components Approaches

Friday, March 15, 2013

We will begin this post by discussing some of the traditional ways I have seen pipeline components tested, then move on to two more recent techniques which I believe offer significant advantages. To begin with, the traditional techniques are:
Traditional Approach 1 - Testing as part of a larger process
In this technique the pipeline component is developed and then deployed along with a BizTalk solution. Tests are then conducted against the overall process, and it is assumed that if the end-to-end test is successful then the pipeline component has been adequately tested.
The key points about this approach are:
  • Often problems with the pipeline component are not detected during development because the end to end test does not cover all cases in the component
  • It is difficult to obtain code coverage information for the pipeline component
  • It is time consuming as it requires a deployment to BizTalk to be able to test
  • It is error prone because you often forget to restart host processes and think you haven't fixed something that you really have
  • The component has limited reusability as it is only tested within the context of this process
Traditional Approach 2 - Using abstraction to make the component more testable
In this technique the developer has abstracted the logic, using the facade pattern, into classes which the pipeline component then uses. This means the code in the pipeline component is as simple as possible. The more complex code lives in other classes which do not depend on the BizTalk classes and interfaces such as IBaseMessage, which in turn makes those classes easier to test outside BizTalk.
I don't think this pattern is a bad thing in general, but I often see it used in conjunction with technique 1. So we end up with a situation where the underlying classes are tested with unit tests, and the pipeline component itself is assumed to be tested as part of the larger process. The key points of this technique are:
  • It is better than technique 1 as we are performing some unit tests which validates most of the functionality of the component before BizTalk becomes involved
  • We still can't test the pipeline component interface without having to deploy to BizTalk
  • Most of the other points from technique 1 still apply
Traditional Approach 3 - Using Pipeline.exe
I have sometimes seen a developer create the pipeline component and then some pipelines, and then use Pipeline.exe to execute the test cases.
The key points of this approach are as follows:
  • This does not require the artifacts to be deployed to BizTalk
  • It requires additional BizTalk pipelines to be defined to test the pipeline component
  • This tool needs to be used from the command line (although you could use the process object to call it from a C# test)
  • When using this approach you would probably want to validate the output document from the pipeline.exe call to ensure the message is as expected
  • You can't really interact with the message context before or after the test, so this might limit your testing capability or require additional components to be added to the pipeline.
The challenge of the traditional approaches
The main challenge which limited how you could test pipeline components was the difficulty of creating and setting up the IBaseMessage and IPipelineContext objects to test against. This is what made the above three approaches (in my opinion) the most popular ways of testing pipeline components.
As a result, developers were often making their best effort at testing pipeline components while always knowing that they could only effectively test so much, and that there was a reasonable chance the component would have problems when used.
Newer Approaches
As with previous posts in this series I'm trying to encourage the following desired practices when testing:
  • We want to make testing the component relatively simple
  • We want to test the component as much as possible before we start using it in BizTalk
  • We want the tests to be automated and part of a continuous integration process
To implement this approach to testing pipeline components, I would recommend either of the two techniques outlined below. Before I discuss them, some background on the sample (available for download at the bottom of the article):
The sample contains a simple pipeline component which reads the input message using the XPathMutatorStream. When it finds an element matching the desired XPath query, the value of that element is promoted to the File.ReceivedFileName promoted property.
The sample pipeline component is intended to be a fairly straightforward component which can be used to demonstrate how to test a component. The following picture shows the main part of the pipeline component:
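The screenshot is not reproduced here; the heart of such a component might look roughly like this C# sketch (all names are assumed, the ValueMutator signature is from memory, and it requires the Microsoft.BizTalk.Pipeline/Streaming/XPathReader assemblies, so treat it as a sketch rather than the actual sample code):

```csharp
public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    // XPath locating the element whose value becomes the file name (assumed query)
    XPathCollection queries = new XPathCollection();
    queries.Add("/*[local-name()='Order']/*[local-name()='FileName']");

    string fileName = null;
    Stream wrapped = new XPathMutatorStream(
        message.BodyPart.GetOriginalDataStream(), queries,
        delegate(int matchIdx, XPathExpression expr, string origVal, ref string finalVal)
        {
            fileName = origVal; // capture the matched element's value as the stream is read
        });

    // Drain the stream so the callback fires, then hand downstream components
    // a seekable copy rewound to the start.
    Stream seekable = new ReadOnlySeekableStream(wrapped);
    new StreamReader(seekable).ReadToEnd();
    seekable.Position = 0;
    message.BodyPart.Data = seekable;

    // Promote to the FILE adapter's ReceivedFileName property
    message.Context.Promote("ReceivedFileName",
        "http://schemas.microsoft.com/BizTalk/2003/file-properties", fileName);
    return message;
}
```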
In the tests project there are 2 test classes each one demonstrating each technique.
Approach 1 - Testing with the Pipeline Component Test Library
The Pipeline Component Test Library has been around for a little while, but I don't think it's used as much as it should be. The library basically provides a simpler API over the PipelineObjects.dll which comes with the Pipeline.exe tool in the SDK.
This means you can use Pipeline.exe-style facilities directly from your C# test, and you can access the message and its context much more easily than you could with Pipeline.exe. The following picture shows the code snippet which forms the test of the pipeline component using the pipeline component test library:
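The original snippet is an image; a test along these lines, written against the Winterdom PipelineTesting API (the component name and message shape are assumptions), might look like:

```csharp
[TestMethod]
public void PromotesFileNameFromXPathMatch()
{
    // Build an empty receive pipeline in code and drop the component into it
    ReceivePipelineWrapper pipeline = PipelineFactory.CreateEmptyReceivePipeline();
    pipeline.AddComponent(new FileNamePromoterComponent(), PipelineStage.Decode);

    // The library's helper creates a proper IBaseMessage from a string
    IBaseMessage input = MessageHelper.CreateFromString(
        "<Order><FileName>invoice.xml</FileName></Order>");

    MessageCollection output = pipeline.Execute(input);

    // Message and context are fully accessible after execution
    object promoted = output[0].Context.Read(
        "ReceivedFileName",
        "http://schemas.microsoft.com/BizTalk/2003/file-properties");
    Assert.AreEqual("invoice.xml", promoted);
}
```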
 
In the test you can see that the Pipeline Library helps tackle the two key challenges, IBaseMessage and IPipelineContext. For the message, you use the library's helpers to create it from an input document. The IPipelineContext is handled by the library internally, because you are creating a pipeline in code to execute the component in.
The advantages of this technique are:
  • You have full access to the proper IBaseMessage before the test. This lets you remove the dependency on things like adapters or components earlier in the pipeline, because you can set properties yourself.
  • The technique uses objects that a BizTalk person will be familiar with so the learning curve is not that steep
  • The tests can be developed very quickly
  • You control the pipeline so you can add additional components as required
This technique lets you treat the pipeline component like a black box: you put a message in and check the message and context that come out.
Useful Resources:
Some useful resources on this technique are:
Tomas Restrepo  - Creator of the Pipeline Component Test Library
Nick Heppleson - Has an article on how he tests his Message Archive component using this technique
Approach 2 - Testing with Rhino Mocks
In approach 2 I'm going to demonstrate how you can use a mocking framework to help you test the pipeline component; in this example I am using Rhino Mocks. With this technique you define dynamic mocks for the objects which will be used by the pipeline component, set expectations on those mocks for what should happen each time a method is called, then execute the pipeline component and verify that all of the expectations were met.
The code sample below shows the equivalent test implemented using Rhino Mocks.
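The original sample is an image; an equivalent test using Rhino Mocks' record/replay style might look roughly like this (the component name, message shape, and exact expectations are assumptions based on what the component is described as doing):

```csharp
[TestMethod]
public void PromotesFileName_WithMockedMessage()
{
    MockRepository mocks = new MockRepository();
    IPipelineContext pipelineContext = mocks.DynamicMock<IPipelineContext>();
    IBaseMessage message = mocks.DynamicMock<IBaseMessage>();
    IBaseMessagePart bodyPart = mocks.DynamicMock<IBaseMessagePart>();
    IBaseMessageContext context = mocks.DynamicMock<IBaseMessageContext>();

    Stream body = new MemoryStream(Encoding.UTF8.GetBytes(
        "<Order><FileName>invoice.xml</FileName></Order>"));

    // Expectations: the component reads the body and promotes the matched value
    Expect.Call(message.BodyPart).Return(bodyPart).Repeat.Any();
    Expect.Call(bodyPart.GetOriginalDataStream()).Return(body);
    Expect.Call(message.Context).Return(context).Repeat.Any();
    context.Promote("ReceivedFileName",
        "http://schemas.microsoft.com/BizTalk/2003/file-properties", "invoice.xml");

    mocks.ReplayAll();
    new FileNamePromoterComponent().Execute(pipelineContext, message);
    mocks.VerifyAll(); // fails if an expected call never happened
}
```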
 

The advantages of this technique are:
  • It is a very powerful technique which gives you full control over pretty much all of the objects
  • It is a technique which is common to C# developers
  • Encourages the developer to think more about the component
  • Again this does not require the code to be deployed to BizTalk
This technique is much more white box, requiring the developer to have a much more intimate knowledge of what the component is doing when creating the test; or, since we are all test-driven developers, it makes you think a little harder about what the component does internally.
Useful resources:
For more info on BizTalk and Rhino Mocks check out the following (click here)
Summary
I think the key differences between the pipeline component test library and the Rhino Mocks technique are as follows (I will refer to the pipeline component test library as PCTL):
  • The PCTL offers a technique which has a shallower learning curve and will be familiar to most BizTalk developers
  • Rhino Mocks probably offers more control over things for very complicated tests
  • The PCTL is a much quicker way of developing tests; I find that with Rhino Mocks, working out all of the expectations is quite time consuming (especially when you are new to the technique)
  • It would be easier to refactor PCTL tests when there are changes to your component
  • In my opinion the PCTL just gives me a little more confidence than Rhino Mocks. This is mainly because the technique gives me the gut feeling that it is performing as it will in BizTalk, whereas with Rhino Mocks it sometimes feels there is a bit of a gap between the mocking and what will happen in BizTalk. I don't really have any hard evidence to back this up, but the tests themselves are that bit more complicated to write, to the point where they almost need testing in their own right.
So based on this article I would make the following recommendations for your approach to testing pipeline components:
  1. Use the traditional abstraction technique anyway as this is a pattern that can make your component simpler to understand and test
  2. As a default technique use the Pipeline Component Test Library
  3. When you have a special case or unusual component with advanced testing requirements, complement the Pipeline Component Test Library tests with Rhino Mocks tests to help you do those more advanced things
  4. Use a code coverage tool to ensure you don't miss any tests
  5. Remember to test more than just the core interface such as IComponent as the rest of the code needs testing too!

BizTalk How To: Configure Dynamic Send Ports Using WCF Adapters

Wednesday, January 2, 2013

You can configure dynamic send ports for WCF adapters. The URI, action, and binding might be determined from a property on an incoming message, and then specified in the Expression shape, as shown in the following WCF-NetTcp adapter:
MessageOut=MessageIn;
MessageOut(WCF.Action)="http://tempuri.org/IReceiveMessage/ReceiveMessage";
MessageOut(WCF.SecurityMode)="Transport";
MessageOut(WCF.TransportClientCredentialType)="Windows";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address)="net.tcp://localhost:8001/netTcp";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType)="WCF-NetTcp";
The following code shows an example of how to specify the WCF context properties in the Expression shape for a WCF-Custom adapter:
MessageOut=MessageIn;
MessageOut(WCF.BindingType)="customBinding";
MessageOut(WCF.Action)="http://tempuri.org/IReceiveMessage/ReceiveMessage";
MessageOut(WCF.BindingConfiguration)=@"<binding name=""customBinding""><binaryMessageEncoding /><tcpTransport /></binding>";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address)="net.tcp://localhost:8001/customNetTcp";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType)="WCF-Custom";
Considerations when specifying WCF context properties are as follows:
  • Some addresses can be mapped to multiple adapters in BizTalk Server 2006 R2. For example, an address that starts with http:// or https:// can be handled by the HTTP adapter as well as by the WCF-BasicHttp, WCF-WsHttp, or WCF-Custom adapters. As another example, in the sample code above, both samples use an address starting with net.tcp://, yet because the second sample uses a custom binding, the WCF-Custom adapter must handle the address. Therefore, to identify the correct adapter, you must set the Microsoft.XLANGs.BaseTypes.TransportType field in an Expression shape to the adapter that you want to use.

    Note
    If the address starts with http:// or https://, and if you do not specify the Microsoft.XLANGs.BaseTypes.TransportType field, by default, the BizTalk engine will use the HTTP adapter.
  • The WCF.BindingType identifies the binding by name. It can be one of the following:

    • basicHttpBinding
    • customBinding
    • netMsmqBinding
    • netNamedPipeBinding
    • netTcpBinding
    • wsFederationHttpBinding
    • wsHttpBinding
    The above list can be extended. For example, you can add your own binding to it such as FtpBinding.
  • The WCF.BindingConfiguration specifies the binding configuration for the binding type. It accepts any binding that is registered in the machine configuration file, and it also accepts XML configuration in the same format as used for a binding configuration in a WCF configuration file.
  • You may need to specify additional WCF properties. You can type WCF in the Expression Editor, and the IntelliSense feature should list all the available context properties. For more information about WCF context properties, see WCF Adapters Property Schema and Properties.
The preceding examples show how to configure WCF.Action with a single action. The WCF adapters do not support multiple-action mapping with dynamic send ports; you simply set the actual action in the WCF.Action context property, as shown above.