Using Test X509 Certificates with BizTalk Web Services

Friday, December 9, 2011

Post by: Paul Petrov

If you’re planning to use X509 digital certificates to secure communication between services, you will need to test all security features at some point. The MakeCert.exe tool can help generate self-signed certificates for development and integration testing. Here’s a quick overview of the process. We will need one Certification Authority root certificate and one or more (depending on application design and configuration) client and server certificates.
First, create the root certification authority certificate. This certificate will be used to sign the server and client certificates:
makecert -r -pe -cy authority -n "CN=Test Root Authority" -sk "Test Root Authority" -sr CurrentUser -ss My -a sha1 -sky signature TestRootAuthority.cer
This creates a self-signed certificate and installs it in the CurrentUser/Personal store. Copy this certificate into LocalMachine/Trusted Root Certification Authorities.
We also need to import this certificate into the LocalMachine/Trusted Root Certification Authorities store of all client and server boxes so they can validate the certification path. This can be done through the MMC Certificates snap-in or using certmgr.exe:
certmgr -add -all -c "TestRootAuthority.cer" -s -r LocalMachine Root
Create the server certificate (here -eku is given the server authentication OID, which the original command omitted):
makecert -pe -cy end -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -sy 12 -sp "Microsoft RSA SChannel Cryptographic Provider" -n "CN=HostName" -sr CurrentUser -ss My -in "Test Root Authority" -is My -ir CurrentUser TestServer.cer
Using MMC, export the certificate you just created to a .pfx file with the private key, and install it on the servers. Note: when installing certificates, make sure the account used has access to the certificate’s private key. In the case of a BizTalk web service, this is the account of the host instance that runs the BizTalk SOAP receive adapter.
Use MMC or certmgr.exe to import the server’s public key (TestServer.cer) into the LocalMachine/Other People store on client machines.
Create the client certificate (here -eku is given the client authentication OID, which the original command omitted):
makecert -pe -cy end -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.2 -sy 12 -sp "Microsoft RSA SChannel Cryptographic Provider" -n "CN=Test Client" -sr LocalMachine -ss My -in "Test Root Authority" -is My -ir CurrentUser TestClient.cer
Install this certificate on the client in the Personal store of the service account that will be sending requests to the BizTalk server. Make sure the service account has rights to access the certificate’s private key. If you’re getting an HTTP 403 Forbidden error when calling the service, this may be the reason. For example, when the client makes a web service request in the context of an ASP.NET web application, it uses the ASP.NET process identity (by default, if impersonation is disabled, Network Service or ASPNET). To grant certificate access to a specific account, use this command:
winhttpcertcfg -g -c LOCAL_MACHINE\My -s "Test Client" -a Domain\Account
The command above grants the specified account (for example, Network Service) access to the private key of the “Test Client” certificate located in the LocalMachine/Personal store. The winhttpcertcfg.exe command-line tool is part of the Windows SDK and can be found here.
Finally, import the client’s public key into the LocalMachine\Other People store on the servers. You can then enable certificates in IIS, apply authorization rules, map BizTalk parties to certificates, etc. This pretty much allows you to reproduce and test real-world scenarios, with authentication between services, operation authorization, and party resolution, in a staging environment.


Microsoft BizTalk Dynamics CRM Adapter by Brajendra Singh

Thursday, December 8, 2011

Post by: Brajendra Singh, MSDN

Recently, I have been working on an integration project where a few existing systems were being integrated with Microsoft CRM 3.0. We chose the BizTalk CRM adapter to synchronize and integrate data with the CRM system. We could not locate good documentation about adapter usage, so we had to do a lot of research and trial-and-error POCs to understand it. That encouraged me to write a set of articles to help developers understand and use the adapter quickly and easily.
I am planning to cover some basic operations with the BizTalk CRM adapter, such as querying, creating, updating, deleting, and retrieving data, and executing operations. I will also cover dealing with complex data types (Pick List, Lookup) and custom entities/attributes.
I expect the reader to have a basic understanding of schemas, maps, orchestrations, ports, and adapters.
How to Install BizTalk CRM Adapter
Adapter installation is simple. You can download installation bits and usage guide from
Here are the key points from the installation and usage guide:
1.    Installation is simple and setup based.

2.    The CRM system uses an AD-based security schema. BizTalk uses send or solicit-response ports with the CRM adapter to talk to the CRM system, and BizTalk ports run under a host. If your BizTalk host service account does not have adequate access to the CRM system, you need to create another host with an account that has sufficient access permissions, construct a new CRM send handler that runs under the new host, and then configure the CRM-related ports to use the new send handler.

3.    The installation guide also talks about creating ports and generating schemas, which I will cover in the coming sections.

4.    I recommend readers install the CRM SDK, which holds a good quantity of code samples and information. You can find the SDK @

5.    It is useful to take a look at the overall installation and usage document, especially the known issues section.

Basics of CRM System
Before we talk about using the CRM adapter, we will cover some nuts and bolts that are essential for any BizTalk developer working with CRM. The CRM application can be accessed using the following URL (though this depends on the configuration of the CRM site): http://<CRM-Server>/loader.aspx.
You will find that the CRM system is divided into multiple sub-systems, such as Sales, Marketing, and Service. With the CRM GP extension, accounting sub-systems also come into the picture. Each of these sub-systems maintains a range of information; the “Service” sub-system, for example, maintains Cases (customer escalations), Accounts, Contacts, Products, etc.
In technical vocabulary, information is held by various entities; there is one entity each for case, account, contact, and so on. These entities generally hold associations/relationships among themselves. For example, one account can have multiple contacts, one contact can have more than one case, and there is a many-to-many relationship between accounts and products.
Each of these entities follows a schema. The schema defines the attributes held by the entity and their type information. For example, a contact has Last Name (string), First Name (string), and Gender (Pick List) as attributes. If required, you can also create new custom entities or new custom attributes. When working with the CRM system, we generally deal with these entities and their attributes: we create, update, or delete instances of them.
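To make the entity/attribute model concrete, here is a small Python sketch (the article's code is C#; the attribute schema names and types below are illustrative of what the Customize Entities screen shows, not pulled from a live CRM system):

```python
# Illustrative sketch: a CRM entity schema pairs attribute schema names
# with their types (attribute names here are illustrative only).
contact_schema = {
    "lastname": "string",
    "firstname": "string",
    "gendercode": "picklist",
}

def validate(record, schema):
    """Reject attributes that are not part of the entity schema."""
    unknown = set(record) - set(schema)
    if unknown:
        raise ValueError(f"unknown attributes: {sorted(unknown)}")
    return True

print(validate({"lastname": "Singh", "firstname": "Brajendra"}, contact_schema))
```

The point of the sketch is simply that the adapter works off schema names and types, so knowing them before mapping saves debugging time later.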
Before we start integration with the CRM system, we need to understand the entities, their schemas, their relationships with other entities, etc. You can go to “Settings >> Customization >> Customize Entities” to see the list of entities available in the CRM system. The list shows name, schema name, type, and description. Here you will also find options to create a new custom entity or modify an existing one.
If you open (double-click) any existing entity, you will see its details: attributes, the forms/views used to populate the entity in the CRM system, relationships with other entities, etc. When you check the “Attributes” of an entity, you will see each attribute’s schema name, display name, type, and description. The attribute schema name and type are very important, because the BizTalk CRM adapter generates schemas based on the schema name and type information inside the CRM system. You will also find provision to create new custom attributes.
CRM sub-systems, forms, entities, schemas, and customization are fairly self-explanatory. I suggest readers spend some time with them; play around and get comfortable.
It is time to get into coding now.
How to perform Query Data with CRM Adapter
Dynamics CRM provides a web service that developers can use to perform operations against the CRM system. The CRM adapter internally uses this web service. Generally, you will find the web service URL as –
Without documentation, it was hard for me to perform POCs with the BizTalk CRM adapter. To make my job simpler, I played around with the Dynamics CRM web service to manipulate entities, and this gave me an adequate idea of how to deal with the generated CRM schemas in BizTalk. I am planning to follow the same approach in these articles: .NET code first, and then the BizTalk implementation.
We are going to see how a query can be performed against the CRM system. I am going to query case details using the case GUID. A case is a complaint or service request registered by a customer, and it comes under the “Service” sub-system.
1.    Perform the query using .NET code (calling the CRM web service)
I added the CRM web service as a web reference and named it “CRMServer”. The following code queries a case using the case GUID.
 CRMServer.CrmService service = new CRMWSTest.CRMServer.CrmService();
 service.Credentials = System.Net.CredentialCache.DefaultCredentials;
 string fetchXml = @"<fetch mapping=""logical"">
                       <entity name=""incident"">
                          <attribute name=""ticketnumber""/>
                          <attribute name=""title""/>
                          <filter type=""and"">
                               <condition attribute=""incidentid"" operator=""eq"" value=""7c89ec34-6828-410b-9718-185b5a39a8ba""/>
                          </filter>
                       </entity>
                    </fetch>";
 string result = service.Fetch(fetchXml);
The code is easy to understand. I first create an instance of the CRM web service client and then build the query (fetch) XML. The fetch XML itself is straightforward:
  <entity name=""incident""> 
means the entity I wish to query is “incident”. Case is the display name; the schema name for a case is “incident”. You can find this in the customization section, as mentioned previously.
<attribute name=""ticketnumber""/>
 <attribute name=""title""/>
Here I am querying the “Case Number” and “Title” attributes of the case. The schema names of these two attributes are “ticketnumber” and “title”. You can add more attributes as required.

 <filter type=""and"">
       <condition attribute=""incidentid"" operator=""eq"" value=""7c89ec34-6828-410b-9718-185b5a39a8ba""/>
This is the filter condition for the query. The type is “and” as the logical operator; it can also accept “or”. “attribute” is the entity attribute we are filtering on; again, this is the schema name, not the display name. “operator” is the comparison operator, and it can accept values such as “lt” (less than), “gt” (greater than), “le” (less than or equal), “ge” (greater than or equal), “eq” (equal), and “ne” (not equal). Finally, “value” holds the value of the attribute to be queried. If required, you can combine more conditions with “and”/“or” options. I suggest readers refer to the CRM SDK for more details about fetch XML.
Finally, we call the “Fetch” method, passing “fetchXml” as the parameter.
The query result comes back in the following format:
<resultset morerecords="0" paging-cookie="$1$$incident$incidentid$1$0$38${7C89EC34-6828-410B-9718-185B5A39A8BA}$!$incident$incidentid$1$0$38${7C89EC34-6828-410B-9718-185B5A39A8BA}$">
  <result>
    <ticketnumber>CAS-01006-ZBEJ0W</ticketnumber>
    <title>This is test case1</title>
  </result>
</resultset>
The result is self-explanatory; you can see the ticket number and title returned for the query.
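The fetch XML above can also be built programmatically rather than as a string literal. A minimal Python sketch (Python is used here just to illustrate the structure; the article's own code is C#, and the entity, attributes, and GUID are the ones from the sample above):

```python
import xml.etree.ElementTree as ET

def build_fetch_xml(entity, attributes, filters, filter_type="and"):
    """Build CRM fetch XML; filters is a list of (attribute, operator, value)."""
    fetch = ET.Element("fetch", mapping="logical")
    ent = ET.SubElement(fetch, "entity", name=entity)
    for attr in attributes:
        ET.SubElement(ent, "attribute", name=attr)
    flt = ET.SubElement(ent, "filter", type=filter_type)
    for attr, op, value in filters:
        ET.SubElement(flt, "condition", attribute=attr, operator=op, value=value)
    return ET.tostring(fetch, encoding="unicode")

fetch_xml = build_fetch_xml(
    "incident",
    ["ticketnumber", "title"],
    [("incidentid", "eq", "7c89ec34-6828-410b-9718-185b5a39a8ba")],
)
print(fetch_xml)
```

Building the document with an XML library instead of string concatenation avoids escaping mistakes when attribute values contain special characters.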

2.    Perform the query in BizTalk (using the Dynamics CRM adapter)
The above was the query implementation in .NET code. Now we are going to implement the same logic in a BizTalk orchestration. The following are the steps to perform the same query operation in an orchestration using the BizTalk Dynamics CRM adapter.
·         For a query, we send fetch XML and get a result back, so we need a solicit-response send port. Create a new static solicit-response send port in the BizTalk Administration Console. Give it a name (say “CRMSendPort”) and select the type “Microsoft Dynamics CRM”. Go to “Configure” and set the “Web Service URL” to http://<CRM-Server>/mscrmservices/2006.
For the send handler, select the handler created for the CRM-specific host in the section above. Configure the “Send Pipeline” as “Xml Transmit” and the “Receive Pipeline” as “Xml Receive”. Save and close.

Note: One common mistake developers make is to put the complete web service URL (http://<CRM-Server>/MSCRMServices/2006/CrmService.asmx) in the configuration. This generates an error during schema generation. The reason is that the CRM adapter uses “Metadata.asmx” to generate the schema, so it just needs the URL of the CRM web service virtual directory.

·         Create a BizTalk project and assign a strong name key to it.

·         We have to generate the schema for the query operation. Right-click the project name >> select the Add menu >> select Add Generated Items >> select the Add Adapter Metadata option >> press the Add button >> select the Microsoft Dynamics CRM option >> select the port created in the steps above (“CRMSendPort”) >> press the Next button. The “Microsoft Dynamics CRM User Credentials” screen opens. Enter the user name and password for the CRM system; remember, this account must have adequate access to fetch entities and their schema information from the CRM system. Press OK after entering the credentials.

·         The “Microsoft Dynamics CRM Actions and Entities” screen opens. As we already discussed, CRM sub-systems are made of entities. At this point we select the entity (account, contact, case, etc.) we want to deal with and the action (create, update, delete, execute, etc.) we want to perform on it. Since we want to query cases (incidents), select “ExecuteFetch” as the action and “Incident” as the entity. If required, you can also select multiple actions and multiple entities. When the selection is done, press Next. It takes a few seconds to generate all the schemas related to the fetch operation in the BizTalk project.

·         Among the generated artifacts are a lot of schemas and one BizTalk orchestration file. Open the orchestration file and delete all the port types and multi-part message types generated by default. They are useful, but I am asking you to clean them out so that we can create and understand things from scratch.

·         It generates some ten schemas in the project. These schemas contain type definitions and include one another. We are interested in only two of them: a request schema to carry our query/fetch XML, and a response schema to pull the result back.

·         “ExecuteFetch_ExecuteFetchRequest.xsd” serves as the request schema. Expand it to see the details: it has two important elements, “crm_action” and “FetchXml”. We will look at these in detail shortly.

·         The response is the tricky part. Developers assume that “ExecuteFetch_ExecuteFetchResponse.xsd” is the response schema, but it is NOT. Every response through the CRM adapter follows one fixed, common schema, and this schema is not generated. You can find it at “C:\Program Files\BizTalkAdapter\Schemas\Response.xsd” (depending on the adapter installation location). You have to include this Response.xsd in the project.

·         We are now all set to implement the logic. Create two messages of the request and response schema types. Hook these messages to a static solicit-response logical send port using send and receive shapes. Use a map (with a transform shape) or message assignment shapes to create the request XML and send it via the solicit-response port. When the response comes back, consume it as required. I am leaving this part of the development to readers; if you face any issue, please refer to the sample application attached to this article, or feel free to message me.

·         When creating the request XML (using a map or message assignment), take care of the following. Set the “crm_action” attribute value to “execute”; we do this because we are executing a fetch operation against the CRM system. Other possible values are “create”, “update”, and “delete”, which we will see in coming articles. Set the “FetchXml” element value to - "<fetch mapping='logical'><entity name='incident'><attribute name='ticketnumber'/><attribute name='title'/><filter type='and'><condition attribute='incidentid' operator='eq' value='7c89ec34-6828-410b-9718-185b5a39a8ba'/></filter></entity></fetch>"
Remember, this is the same format we used in the earlier .NET code sample. In the attached sample, I used scripting functoids in the map to prepare the request XML. I implemented it in a crude way to keep things simple, but you can do a better and smarter job.
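The shape of that request (a crm_action value plus the fetch XML carried as escaped text) can be sketched in a few lines of Python; the element names here only loosely follow the generated request schema, and namespaces are omitted, so treat this purely as an illustration of how the payload ends up escaped inside the message:

```python
import xml.etree.ElementTree as ET

# Hypothetical request body: a crm_action value plus the FetchXml payload.
req = ET.Element("ExecuteFetch", crm_action="execute")
fetch_el = ET.SubElement(req, "FetchXml")
fetch_el.text = ("<fetch mapping='logical'><entity name='incident'>"
                 "<attribute name='ticketnumber'/><attribute name='title'/>"
                 "<filter type='and'><condition attribute='incidentid' operator='eq' "
                 "value='7c89ec34-6828-410b-9718-185b5a39a8ba'/></filter>"
                 "</entity></fetch>")

# Serializing escapes the inner fetch XML (< becomes &lt;), which mirrors how
# the adapter carries fetch XML as text content rather than nested elements.
print(ET.tostring(req, encoding="unicode"))
```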

·         When you are done coding, compile the project and deploy it. After deployment, bind the orchestration’s solicit-response ports to the physical solicit-response port (“CRMSendPort”) created in the steps above. Enlist and start the orchestration, then trigger it to test the query operation.

·         You will find that the response message comes in the following format:

  <ErrorCode />
  <ErrorString />
  <Retryable />
  <Message><prefix:ExecuteFetchResponse xmlns:prefix="http://localhost/"><FetchXmlResult>&lt;resultset morerecords='0' paging-cookie='$1$$incident$incidentid$1$0$38${7C89EC34-6828-410B-9718-185B5A39A8BA}$!$incident$incidentid$1$0$38${7C89EC34-6828-410B-9718-185B5A39A8BA}$'&gt;&lt;result&gt;&lt;ticketnumber&gt;CAS-01006-ZBEJ0W&lt;/ticketnumber&gt;&lt;title&gt;This is test case1&lt;/title&gt;&lt;/result&gt;&lt;/resultset&gt;</FetchXmlResult></prefix:ExecuteFetchResponse></Message>
The “Message” element contains the response (in XML format), and it follows the schema type ExecuteFetch_ExecuteFetchResponse.xsd. You need to employ some parsing technique to get the information out of the response XML.
The rest of the return message contains the error-handling fields, which can be used as required.
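One possible parsing technique, shown here as a Python sketch using the Message content from the sample response above (the article's own code would be C#; the parsing idea is the same): parse the outer response, then parse the XML-escaped FetchXmlResult text a second time.

```python
import xml.etree.ElementTree as ET

# The Message element content from the response above; the FetchXmlResult
# payload arrives XML-escaped inside it.
message = ("<prefix:ExecuteFetchResponse xmlns:prefix=\"http://localhost/\">"
           "<FetchXmlResult>&lt;resultset morerecords='0'&gt;&lt;result&gt;"
           "&lt;ticketnumber&gt;CAS-01006-ZBEJ0W&lt;/ticketnumber&gt;"
           "&lt;title&gt;This is test case1&lt;/title&gt;"
           "&lt;/result&gt;&lt;/resultset&gt;</FetchXmlResult>"
           "</prefix:ExecuteFetchResponse>")

outer = ET.fromstring(message)            # the parser unescapes the entities
inner = ET.fromstring(outer.find("FetchXmlResult").text)
rows = [(r.findtext("ticketnumber"), r.findtext("title"))
        for r in inner.findall("result")]
print(rows)
```

The double parse is the key point: the result set is carried as escaped text, not as nested XML, so one pass over the outer document is not enough.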
·         That’s it. Make tweaks to the request message and orchestration logic to explore the query/fetch option thoroughly.
This was a high-level walkthrough and a sample for querying the CRM system for entities. If required, please download the sample project attached to the article for further reference. If you face any issue, feel free to ping me on this article. I hope the article was useful to you; your comments are always welcome.
Please wait for some time before we meet again with the next part.


BizTalk Integration with CRM: How-to Videos by Peter Kelcey

Wednesday, December 7, 2011

Post by: Peter Kelcey, MSDN
I recently participated with our MCS Canada team on a large BizTalk 2009 and CRM 4.0 integration project. I gained a ton of experience about CRM integration from that project, and I thought I’d speak about it at our semi-annual TechReady internal conference. Not long before TechReady, though, we released CRM 2011, and I realized that speaking about BizTalk 2009 and CRM 4.0 was about as relevant as speaking about Windows 98 and Trumpet Winsock implementations (I was reading a Slashdot article this week about the Trumpet Winsock creator… so that’s where the reference came from…). I hastily decided to refocus the presentation on CRM 2011 and BizTalk 2010. That meant I had to quickly figure out exactly what the integration story was between these two products. After a bunch of trial and error and some helpful conversations with folks from the CRM 2011 team, I figured out the options. It turns out that you have TWO options for integrating BizTalk 2010 and CRM 2011. Yes indeed, you have two different methods to choose from when integrating these two products! I ended up having a fairly relevant and well-received presentation to deliver. From that internal session, I’ve repurposed my content into a publicly ready format and created a shorter walkthrough video that explains the two options you have. Instead of trying to write a lengthy blog post about how to do it, I’ve just filmed a walkthrough of the process. (I found that these video walkthroughs were highly successful when I did them for the BizTalk ESB, so I thought I’d do them again here for CRM.) Also, I’ve included the source code project that I use in the video so that you can get a jumpstart on your process.
All in all, I found the process to be a very simple one once I figured out a few basic concepts. Hopefully, this blog post will help you figure out those concepts more quickly than I did. Once you’ve got a grip on them, I believe you’ll find this integration process to be a very easy one. As with most of these blog videos that I do, I created this after midnight when the house was quiet, so if you find some issues or bloopers, let me know and I’ll correct them.
Below is an overview of the components, tools, and services at play in the process. In the video, I’ll walk you through this diagram in more depth before showing you the actual project I built. In this diagram, you can see that there is data flowing from BizTalk out to the CRM 2011 cloud service, as well as data flowing from CRM 2011 Online back through the firewall to BizTalk on-premise. In this first video, I focus on the BizTalk to CRM 2011 Online option. In a day or two, I’ll post a second video that shows the CRM 2011 Online event notification being sent back to an on-premise BizTalk installation. (It’s a cool one…)
This is quite a long video (>45 minutes), so I had to split it into a multipart zip file in order to fit it into my SkyDrive folder.
You can get:
  1. Part one at
  2. Part two at
  3. The source project and test file here
You’ll need to download both parts of the zip file and then use WinZip to extract them.

Using the Table Looping Functoid in BizTalk

Tuesday, December 6, 2011

When dealing with existing systems, sometimes a challenge presents itself in the form of a flat file. Structure can be imposed on a flat file, however, using the Table Looping and Table Extractor functoids. Consider the following schemas:
Source schema:
Target schema:
One’s first attempt at a map to transform the source to destination might look something like this:
We’ll use the following input file to test the map.
The output isn’t quite what we were hoping for.
The Table Looping functoid is the key to what we’re trying to achieve.  Below is the map that uses the Table Looping and Table Extractor functoids to create the desired output.
The Borrower fields are used as inputs into the Table Looping functoid as well as some definitions about how many rows and columns there will be.
By opening up the Table Looping Grid, we’re able to define what fields will go into certain columns/rows:
The Table Extractor functoids are used to define which columns from the table map to use as inputs.  Each Table Extractor functoid corresponds to a column within the Table Looping Grid.
And finally the output from Table Looping functoid to the Borrower node dictates that a Borrower node be created for each row within the Table Looping Grid.  With that said, here’s the output from testing the second map:
 Source code for this example can be found here.
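For readers who think in code, the Table Looping / Table Extractor behavior can be sketched in a few lines of Python (field names are illustrative; the real work happens in the BizTalk mapper): flattened per-borrower fields become rows in a table, per-column extractors pull values out, and one output node is emitted per row.

```python
# A flat record holds two borrowers' fields side by side.
flat = {
    "Borrower1Name": "Ann", "Borrower1SSN": "111",
    "Borrower2Name": "Bob", "Borrower2SSN": "222",
}

# Table Looping: build one row per borrower (rows x columns, as configured
# in the Table Looping Grid).
table = [
    [flat["Borrower1Name"], flat["Borrower1SSN"]],
    [flat["Borrower2Name"], flat["Borrower2SSN"]],
]

# Table Extractor per column: column 0 feeds Name, column 1 feeds SSN,
# and a Borrower node is created for each row.
borrowers = [{"Name": row[0], "SSN": row[1]} for row in table]
print(borrowers)
```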

Developing Integration Solutions Using BizTalk Server 2009 and Team Foundation Server

Friday, December 2, 2011

Authors: Henk van de Crommert, Dennis Mulder, Microsoft TechNet

Applies to: BizTalk Server 2009

Summary: This document provides developers with techniques for designing, developing, and deploying solutions within Microsoft® BizTalk® Server 2009 in conjunction with Team Foundation Server 2008. This paper is based on "Developing Integration Solutions with BizTalk Server" by Angus Foreman and Andy Nash, updated for BizTalk Server 2006 R2, 2009, and Team Foundation Server. The BizTalk Server 2006 R2 version of this paper can be found here.



Note: To download a copy of the  document and sample files, go to  .
This guide provides developers with techniques for designing, developing, and deploying solutions within Microsoft® BizTalk® Server 2009. It contains examples of team approaches for developing with BizTalk Server in conjunction with Visual Studio® Team Foundation Server 2008, and illustrates techniques that can increase development efficiency.
The guide also provides hints and tips that can decrease the time and effort spent debugging and testing. It emphasizes the team development process and team development setup.
This guide focuses on projects that use the core messaging and orchestration functionality of BizTalk Server. It does not cover in detail additional areas such as Business Activity Services (BAS), Business Activity Monitoring (BAM), Line of Business Adapters, RFID, the WCF Adapter, XREF, or the Business Rules Engine (BRE).

Who Should Read This Guide?

The following users should read this guide:
  • Solution designers will gain an understanding of the constituent parts of a BizTalk Server solution and the possible project structures used to develop BizTalk Server solutions.
  • Developers will gain an understanding of the process of developing a BizTalk Server solution in a team and learn techniques that can improve developer efficiency by getting the most out of Team Foundation Server.
  • Project managers will gain an overview of the main project phases and understand typical tasks when planning a BizTalk Server team development.
This guide assumes that the reader is familiar with BizTalk Server (2004 or higher) and Team Foundation Server and understands the core concepts of the products.

Document Structure

The document is based upon a subset of the project phases that are described in the Microsoft Solutions Framework, namely:
  • Planning
  • Building and Developing
  • Stabilizing
  • Deploying
The Microsoft Solutions Framework (MSF) contains proven practices for planning, building, and deploying small to large information technology (IT) solutions. MSF is made up of principles, models, and disciplines for managing the people, process, and technology elements that are encountered on most projects. For more information about MSF, see the Process Templates and Tools section on the Team System Developer Center.


Planning

This section describes a lightweight technique for gathering integration requirements and translating them into a list of BizTalk Server artifacts to be delivered. It discusses a typical BizTalk Server integration scenario; readers can compare this typical scenario with their own requirements and create plans and documents accordingly.
The planning phase also includes setting up the development environment and the development processes to provide a stable and efficient development environment. Making changes to the development environment or processes during the development phase can be very disruptive, so time spent planning them is likely to save time later in the project.
Planning for the integration scenario includes the following:
  • Gathering information
  • Defining naming conventions
  • Planning team development
  • Setting up and working with source control

Gathering Information

Like most software development problems, the successful delivery of an integration solution depends upon the collection and collation of large amounts of information prior to the design process. The following sections suggest a lightweight approach to gathering the information that is typically useful to the design and development of an integration solution. This information includes the business processes and the messages that the solution requires.

Business Processes 

An important function of the planning process is to draw up the master list of business processes that you will deliver with the integration solution. This task ensures that a single, comprehensive list of all the business processes exists, along with a list of all the business rules that apply to those processes. You can use this list to plan and design the development of the BizTalk Server artifacts that are required to carry out those business processes.
One possible approach to developing the list of business processes is to devise logical categories of related business processes and then list all the processes within each category. For example, categories like "User Management," "Supplier Transactions," "Customer Transactions," "Catalogue Management," or "Credit Control" are often meaningful to the business and typically consist of several related business processes (with related system and messaging requirements).
In the example scenario, the business process list is documented in the following tabular format.

Table 1 - Example processes and categories

  Business process category     Process name
  Account Management
  Order Fulfillment
The team now has a master list of business processes that need to be developed. The next step is to capture the process details that are significant to the integration solution. Detailed information that is typically captured about each process falls into two categories—logical process information and implementation information. You may also want to diagram the high-level processes so that developers will have a starting point for creating BizTalk orchestrations.

Logical Process Information

This information describes the process in the abstract, without being concerned about the actual implementation technologies. It is typically gathered by interviewing the owners of the business processes rather than IT or system specialists. To gather logical process information, you might perform actions or answer questions like the following: 
  • Produce a "flow chart"-style description of process logic.
  • Define core business entities (not systems) that are involved in the process, for example, customers, orders, quotations, parts, invoices.
  • Identify the events or actions that initiate a process (both human and system-based).
  • Determine where exceptions occur in the current business processes and how those exceptions are handled (that is, is a process restarted, is human intervention required).
  • Determine where business data validation takes places in a process.
  • Understand the business rules that form part of the process.
  • Determine whether there are long-running processes (for example, days, weeks).
  • Is any human intervention required to complete a process?
  • What is the nature of the communication between processes and other systems? For example, is the process real-time or batch-based?
  • Are the other systems always available?
  • Do messages need to be translated from one format to another, or do mappings need to be transformed?
  • Are manual processes being automated?
  • Are new processes being created to achieve the solution?
  • Are any of these processes currently residing in other systems? Can these be documented?
  • What business metrics, milestones, or key business data in the process must be reported to management?
  • How are humans notified or how do they interact in the process? 

Implementation Information

Information related to the implementation is often dictated by the design and constraints of the existing systems being integrated. It is important to keep in mind the constraints placed around processes by the environment and other systems. These constraints often significantly influence the final design. To gather this information you might perform actions like the following:
  • List the transport-specific requirements for each process.
  • List the security models and protocols required to interact with other systems.
  • List the messages to be consumed and produced by a given process.
  • Understand the message format constraints that exist (that is, which messages exist, which are standards, and which need to be created).
  • Detail the requirements for handling exceptions and errors including reprocessing and resubmission.
  • Understand any transactional data requirements of a process, including any compensation logic required for long-running transactions.
  • List any message correlation requirements.
  • List any requirements about the order of processing of messages.
  • List any requirements for cross-referencing data between the systems being integrated to determine the exact meaning of data in each system being integrated. (For example, is it necessary to maintain a link between the two separate customers? Do IDs used by different systems describe the same customer?)
  • List the auditing or management information system requirements.
  • List the applications or interfaces used to allow human intervention into the processes.
  • Understand the operational requirements of the processes, that is, what data is required in order to manage, troubleshoot and operate the system. For example, what logging requirements are there? Which monitoring software is being used and how can the system be designed for operations?

Defining Messages

The process of examining all business processes produces a list of the messages that are required to deliver the solution. When defining messages consider the following:
  • Determine the complete list of entities (i.e., Customer, Order, Invoice, etc.) that are meaningful to the organization to allow the processes to be designed.
    Typically you start this list when looking at the processes and you refine it to produce the "business-centric" list of all the required entities. 
    Determine which message schema will define the "standard" messages. These messages are not constrained by any specific integration requirement but instead define a business entity, concept, or operation. Standard messages can be defined as the application internal representation of business entities. 
    Every back-end system that the application integrates with has a different representation of any entities. This representation consists of a set of properties. 
    A standard message representing a given business entity can therefore be defined as the superset of the properties used by any upstream or downstream system to represent that particular entity. In other words, a standard message is the aggregation of all the entity data properties and constraints defined and used by any upstream or downstream application to represent a particular entity. 
    This information could then be used to define the Customer Message Standard for the organization and will be driven by the organization's long term goals. For example, you can define the customer message so that it can contain data that is not currently required, but will be needed to deliver new services in the future.
  • When considering standards for messages, consider the cost of creating a standard and the cost of mapping to and from the standard for each of the upstream and downstream systems. In some cases the standard provides benefits (especially when multiple systems exist which all use a variation on the standard). In other cases mapping from one system to another system (without mapping to the intervening standard) can be a more practical approach.
  • Determine the ownership of standards (i.e., naming, data, etc.). It is important for an organization to realize that messaging standards are a representation of business entities and concepts and should be owned jointly by the business and IT. This includes ownership of namespaces. For more examples of standards, see Messaging Standards (
  • Are there existing messaging standards that you can use? Industry-specific standards may exist for exchanging data between organizations (for example, health care (HL7) and finance (Swift, FIX, Apcor, etc.)). It is worth determining whether such a standard exists, and if it does, whether using the standard rather than defining a new one can provide benefits (both current and future).
  • Some vendors may publish standard messages for interacting with their systems. Using these schemas may save considerable development effort and remove specific integration challenges from the project team to a vendor's supported solution.
  • When defining messages, your result should be a comprehensive list of the messages you require to implement the solution. By stabilizing and gaining agreement to this list early in the project, you can avoid the introduction of new messages late in your development phases.
Note As part of BizTalk Server 2009, Microsoft ships Solution Accelerators that contain artifacts that can be used in certain specific industry scenarios, such as Health (HL7, HIPAA), Supply Chain (RosettaNet), and Finance (SWIFT). For more information see Accelerators (
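The standard-versus-point-to-point trade-off described above can be made concrete with simple arithmetic: N systems mapping directly to every other system need on the order of N×(N-1) directed maps, while a shared standard needs only two per system (one in, one out). A quick illustrative sketch, not part of any BizTalk tooling:

```python
def map_counts(n_systems: int) -> dict:
    """Compare the number of directed maps needed for point-to-point
    integration versus mapping through a shared standard message.

    Illustrative arithmetic only; real projects rarely need every
    ordered pair, which is why the standard is not always cheaper.
    """
    return {
        "point_to_point": n_systems * (n_systems - 1),  # every ordered pair
        "via_standard": 2 * n_systems,                   # in and out per system
    }

for n in (3, 5, 10):
    print(n, map_counts(n))
# With 3 systems the two approaches cost the same (6 maps each);
# with 10 systems the standard needs 20 maps versus 90 point-to-point.
```

The break-even point around three or four systems is why the guidance above suggests that direct mapping can be more practical for small integrations.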

Defining Naming Conventions

When describing the complexities of an integration solution, it is important that you use a strong set of terms to ensure clarity throughout the planning, designing, and development phases. You begin this process in the planning phase by identifying the naming conventions you use to describe systems and their operations. The naming conventions in the following table can be applied to the integration solution.
This document does not present a comprehensive list, but instead provides terms found useful on a variety of projects. For a comprehensive approach to integration terms, see the patterns & practices Integration Patterns whitepaper  (
Table 2 – Integration Terms 
Integration layer: The technology that provides the actual integration functionality. In this document the integration layer is provided by BizTalk Server.
Upstream system: A system that participates earlier in the business process than the point currently being discussed or documented. Typically upstream systems transmit messages to the integration layer.
Downstream system: A system that participates later in the business process than the point currently being discussed or documented. Typically downstream systems accept messages from the integration layer.
Source system: The system from which a specific message originates.
Destination system: The system to which a specific message is targeted.
Process initiator: A system or event that starts (initiates) a process in which the integration layer is involved.
Activated process or activated system: A process or system that is activated by an initiating process.
Master data: Business information that is key to a certain process (i.e., customers, orders, invoices, etc.).
Adapter: A component that handles the protocols by which messages are sent to and received from a system. An adapter abstracts the system from the protocols used to communicate.
Duplicated data: Data that is duplicated from the master for a given process. Sometimes known as slave data.
 When describing data or messages between systems and the integration layer, it can also be useful to use a standard notation to avoid confusion over the direction of message flow. This is particularly useful when multiple request-response operations exist between systems.
One possible approach is to describe message instances between systems using the notation of "incoming" and "outgoing" where "incoming" and "outgoing" are understood from the point of view of the integration layer. The following messages are examples:
Messages targeting the integration layer:
  • CRM_Add_Customer_Incoming_Request
  • Order_ProcessOrder_Incoming_Response
Messages originating from the integration layer:
  • Order_ProcessOrder_Outgoing_Request
  • CRM_Add_Customer_Outgoing_Response
Figure 1 shows the messages targeting the integration layer. 
 Figure 1 - Message Direction and Naming
While it can be helpful to name message instances in an orchestration, naming the message schema with a name describing direction may cause confusion in other areas of the solution, especially if the schemas are reused or exchanged with other parts of the system.
Many line-of-business messaging applications, especially financial systems, use a numeric system for identifying the messages (that is, the message type, or schema, and the destination system/direction), for example, 9820000-1104 in ISO 8583, MT 512 in SWIFT. In such cases you can either maintain the numeric notation or use a combination of numbers and text to identify messages (such as MT_512_Securities_Trade_Confirmation).
It is not uncommon for a complex process to use 10 to 20 related incoming and outgoing request and response messages. A naming convention applied to orchestration messages and variables can aid clarity. A standardized approach can reduce possible errors and confusion and make documentation more effective.
In some cases, naming conventions can produce very long names that are not easy to use in Visual Studio and the BizTalk management tools, given that the dialog boxes and panes are narrow. In these cases consider using a coding scheme such as CRM_Add_Customer_IReq and CRM_Add_Customer_OResp, where IReq is an incoming request and OResp is an outgoing response.
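A convention like this can be made mechanical so that every developer produces identical names. The following sketch is illustrative only (the helper and its parameters are not part of BizTalk Server); it builds both the long and the abbreviated forms from the parts described above:

```python
def message_name(system: str, operation: str, direction: str, kind: str,
                 abbreviated: bool = False) -> str:
    """Build a message instance name such as CRM_Add_Customer_Incoming_Request.

    direction: "Incoming" or "Outgoing", understood from the point of
    view of the integration layer. kind: "Request" or "Response".
    With abbreviated=True the short form (IReq, OResp, etc.) is produced
    for use where long names are unwieldy in the design tools.
    """
    if abbreviated:
        suffix = direction[0] + ("Req" if kind == "Request" else "Resp")
    else:
        suffix = f"{direction}_{kind}"
    return f"{system}_{operation}_{suffix}"

print(message_name("CRM", "Add_Customer", "Incoming", "Request"))
# CRM_Add_Customer_Incoming_Request
print(message_name("CRM", "Add_Customer", "Outgoing", "Response", abbreviated=True))
# CRM_Add_Customer_OResp
```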

Describing Systems

When describing an integration solution, there is typically a large amount of detail to be captured about each of the systems and processes. It is useful to start by listing some common properties of each of the systems involved. It is typical to collect this information for each system participating in the integration solution. The following table shows some common system attributes.
Table 3 - Common attributes of systems 
System purpose: The general business and technical purposes of the system being documented. Often there are system specialists involved in an integration solution who have no previous knowledge of the other systems involved. An overview can be valuable to ensure effective communication across the team.
Interfaces and access methods: A description of how the integration layer accesses the system (and vice versa). This is usually described in terms of the protocols and standards used by the system for both incoming and outgoing operations.
Message and data formats: Details on the likely format of incoming and outgoing data and messages.
Data master: When analyzing data and process it is important to understand which system holds the master copy of each type of data. For example, a CRM system may be the master for all customer address data, but may also hold replicated accounts data.
Volume and frequency of data operations: The volume and frequency of data operations can have an impact on the design of an integration solution. During planning, it is typical to capture as much information about the expected volumes as possible. Ensure that these figures include peak volume requirements because this information is critical to determine how the system is going to perform and scale.
Security models used: When designing an integration solution, the security requirements are often one of the most complex areas to model and implement. Typically this information consists of:
  • A description of the authentication and authorization models imposed upon the integration layer when calling into a system
  • A description of the authentication and authorization models used when calling out of a system into the integration layer
  • Any requirements the system places on secure communications (for example, encryption required to communicate with the system)

 Example Integration Information Gathering Scenario

In our example scenario the planning process starts by documenting the systems, with more detail added as the planning and design processes uncover more information. The following table lists examples of the type of data that is typically recorded. The systems described here are a customer relationship management system and an account management and billing system.
Table 4 – Example System Information 
System purpose
  CRM: Provides pre-sales, sales, and customer management functions to customer services representatives. The application provides the logic, storage, and front end for all interaction between customers and the organization (for example, the screen used in call centers when a customer calls up). Used to manage prospects and to manage the status of a customer.
  Accounts: Maintains the databases and logic used to manage customer accounts and the logic to run account billing and management processes.
Interfaces and access methods
  CRM (incoming and outgoing): Access to and from this system is via custom XML documents posted over HTTP Web interfaces (not Web services). The XML documents are defined by the customer management system. That is, other systems wishing to communicate with the customer management system must conform to the specifications laid down by the CRM system. Typically used in an interactive manner, where a change needs to be made to a customer or product within a matter of seconds or minutes (synchronous).
  Accounts (incoming): Unidirectional HTTP access to the system, with the system accepting XML messages on an HTTP listener, with no response message (other than a transport success message).
  Accounts (outgoing): Batch-based file outputs typically running nightly, weekly, or quarterly processes. Files written to a UNIX file share location.
Message and data formats
  CRM (incoming): CRM-specific XML schema. Definition available as XSD file. Actual schema used is dependent upon message type (of which six exist).
  CRM (outgoing): CRM-specific XML schema. Definition available as XSD file. Actual schema used is dependent upon message type (of which six exist).
  Accounts (incoming): Custom XML message as defined by the accounts system. Sample XML messages available.
  Accounts (outgoing): Positional ASCII flat file with headers and footers. Definition available as a Word document.
Data master
  CRM: The master source for pre-sales prospect information. While the system provides access to customer information, it is not the master for customer information (which is held in Invoicing).
  Accounts: The master for customer and account information.
Volumes and frequency of data operations
  CRM: Updates from the CRM system are regular during the day, covering the entire 24-hour period (due to a global roll out). Volumes of operations are 1-2000 per day. The resulting 1-2000 messages mostly hold customer data. Due to the internal definition of a customer used by the CRM system, the messages to and from the CRM system can consist of very large XML documents, with sizes of 500 KB not uncommon. Message size and frequency are significant parameters in sizing the integration servers and may be significant during the design phase.
  Accounts (incoming): Incoming requests to the Accounts system are infrequent and small in size.
  Accounts (outgoing): Due to its batch nature, under certain circumstances the volume of data passed from the Accounts system can be very large. Typically these large volumes occur during processes like quarterly or year-end billing runs. These large volumes are significant because there is often a relatively short duration in which all this information must be processed. (For example, there might be 1000 files of 3 MB each.)
Service windows
  CRM: Available 24 x 7 as a global application.
  Accounts: Available GMT 8 AM – 8 PM (with peak usage at month ends).
Security models used
  CRM (incoming): The CRM system expects a user identity and password to be passed as a part of all incoming messages for validation against an internal user store.
  CRM (outgoing): By default the CRM system's outgoing HTTP requests do not pass credentials to the target system.
  Accounts (incoming): HTTP requests accepted from a set of known IP addresses only.
  Accounts (outgoing): Configurable credentials are used to write outgoing data files to a specific location.
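Volume figures such as the 1000 files of 3 MB each feed directly into sizing calculations. The sketch below assumes a hypothetical 4-hour processing window (the scenario does not state one) to show the kind of back-of-the-envelope throughput estimate that is useful during planning:

```python
def required_throughput(files: int, mb_per_file: float, window_hours: float) -> dict:
    """Back-of-the-envelope sizing for a batch processing window.

    The 1000 x 3 MB figure comes from the scenario above; the
    window length is an assumed example, not a stated requirement.
    """
    total_mb = files * mb_per_file
    seconds = window_hours * 3600
    return {
        "total_mb": total_mb,
        "mb_per_second": total_mb / seconds,          # sustained rate needed
        "files_per_minute": files / (window_hours * 60),
    }

print(required_throughput(1000, 3.0, 4.0))
# 3000 MB total; roughly 0.21 MB/s sustained, about 4.2 files per minute
```

Even a crude estimate like this helps decide early whether the integration servers and the receiving adapters can absorb a quarter-end run within the available window.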

Categorizing Interfaces

Having previously defined the systems and business processes in scope, the next step is to document the interfaces that are required to enable the integration layer to connect to the upstream and downstream systems. One way to capture this information is to list the processes and then detail each of the interfaces required by each process. If you are designing a system to replace an existing integration solution it can be helpful to obtain actual copies of all the types of messages that are currently going through the various systems.
The following list describes typical information that you should gather while you are categorizing interfaces:
  • Interface name
  • Business name
  • Interface description
  • Source system name
  • Destination system name
  • Interface address (URI/URL)
  • Interface direction—examples include:
    • Unidirectional/Incoming (to the integration layer)
    • Unidirectional/Outgoing (from the integration layer)
    • Bidirectional/Incoming
    • Bidirectional/Outgoing
  • Protocol
  • Message/data format used
  • Message schema name
  • Expected number of messages per minute, average and peak
  • Expected average message size
  • Message schema owner (the person responsible for versioning and maintaining the structure of the message)
Later in the planning process this list of interfaces can be further extended to help produce a detailed list of functional requirements that can be used as the basis of a BizTalk Server project.
A helpful way to document those functional requirements is in the form of test cases with specific input files and expected output files. This typically means collecting a lot of data early in the project, but in all likelihood this data will be required when development or testing starts, and by gathering it early in the development process any unanticipated challenges can be spotted early on.
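The interface attributes listed above lend themselves to a simple structured record, which also makes it easy to generate test-case skeletons from the catalog later. A sketch; the field names mirror the bullets, the four direction values are the ones from the list, and the sample values are illustrative, not prescribed:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    # Directions are relative to the integration layer, as in the list above.
    UNI_INCOMING = "Unidirectional/Incoming"
    UNI_OUTGOING = "Unidirectional/Outgoing"
    BI_INCOMING = "Bidirectional/Incoming"
    BI_OUTGOING = "Bidirectional/Outgoing"

@dataclass
class InterfaceRecord:
    """One row of the interface catalog; fields mirror the bullet list."""
    interface_name: str
    business_name: str
    description: str
    source_system: str
    destination_system: str
    address: str              # interface URI/URL
    direction: Direction
    protocol: str
    message_format: str
    schema_name: str
    msgs_per_minute_avg: float
    msgs_per_minute_peak: float
    avg_message_size_kb: float
    schema_owner: str         # responsible for versioning the structure

# Illustrative entry only; names and figures are invented for the example.
iface = InterfaceRecord(
    "CRM_Add_Customer", "Add customer to CRM", "Creates a customer record",
    "Integration layer", "CRM", "http://crm.example.com/listener",
    Direction.BI_OUTGOING, "HTTP", "CRM-specific XML", "CRM_Customer.xsd",
    10, 120, 500, "CRM schema owner")
print(iface.interface_name, iface.direction.value)
```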

Documenting Interfaces as Contracts

When developing an integration solution, the definition of interfaces is a significant part of the design process. The interface is typically considered a "contract" between each of the parties involved in the development of the integration solution. The contract gives the parties a verifiable way to ensure they are developing compatible systems.
When documenting interfaces it is useful to be able to produce detailed descriptions of the interfaces as part of the contract. The best way to document the contract is with a domain-specific language like Web Services Description Language (WSDL), or some other consumable form of metadata. All parties can use these descriptions to validate that their development meets the interfaces expected by upstream and downstream systems. When working with XML-based solutions like Web services and BizTalk Server, standards exist to produce these detailed contract descriptions. The following table shows the contract formats for various interface types.
Table 5 - Interface types and contract formats
Interface message type / Contract format
XML message: XSD schema files.
Note If no XSD schema file exists, but sample XML messages exist, then BizTalk Server can generate an XSD based upon a sample XML file. For more details on this functionality, see XML Schemas ( in the BizTalk Server documentation.
XML Web service: WSDL definition files. The xsd.exe and the WCF svcutil.exe programs can be used to create schemas and proxies to consume WSDL definition files.
Flat files: BizTalk schema, the flat file schema wizard, and the flat file validation tools.
Note The flat file schema wizard can be used to simplify the creation of flat file schemas. For more information, see Creating Schemas Using BizTalk Flat File Schema Wizard ( The flat file command line tools can be used to aid the validation of flat file instances against a BizTalk Server schema and can be found in the SDK under <Installation Path>\SDK\Utilities\PipelineTools. For more information about pipeline tools, see Pipeline Tools (
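The note above mentions generating an XSD from a sample XML instance. The following sketch shows the idea in miniature; it is a naive stand-in for the BizTalk feature, typing every element as xs:string and ignoring namespaces, attributes, repeated elements, and cardinality:

```python
import xml.etree.ElementTree as ET

def naive_xsd_from_sample(sample_xml: str) -> str:
    """Generate a simplistic XSD from a sample XML instance.

    A sketch of the concept behind generating a schema from an instance;
    real generators infer types, occurrence constraints, and namespaces.
    Here nesting is preserved and every leaf is typed as xs:string.
    """
    root = ET.fromstring(sample_xml)

    def element_decl(el: ET.Element, indent: int) -> str:
        pad = "  " * indent
        children = list(el)
        if not children:
            return f'{pad}<xs:element name="{el.tag}" type="xs:string"/>'
        inner = "\n".join(element_decl(c, indent + 2) for c in children)
        return (f'{pad}<xs:element name="{el.tag}">\n'
                f'{pad}  <xs:complexType><xs:sequence>\n'
                f'{inner}\n'
                f'{pad}  </xs:sequence></xs:complexType>\n'
                f'{pad}</xs:element>')

    return ('<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">\n'
            + element_decl(root, 1)
            + "\n</xs:schema>")

sample = "<Customer><Name>Contoso</Name><Id>42</Id></Customer>"
print(naive_xsd_from_sample(sample))
```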


At the end of the information-gathering process, the high-level requirements of the integration solution are captured in a series of related documents that list unique business processes, systems, and interfaces. As the design process progresses the level of detail captured increases. Prior to development it is typical to produce a list of BizTalk Server artifacts to be developed during the build phase.
Additionally, the planning phase may well identify the number and type of test harnesses or stub systems that will be required to meaningfully develop and test the business processes. The planning phase may also identify the data requirements needed to allow development to proceed. This may include migration requirements and cross-reference data.
You use this list of requirements to aid in designating teams, planning schedules, and identifying required development resources.


This section describes tips that you can use during the development of a BizTalk Server integration scenario. Readers can use this information to apply best practices gathered from BizTalk Server integration solutions developed around the world.

Planning Team Development

The approaches described in this section relate to the general approaches needed when developing within a team. Many of the concepts in this document are covered for general .NET development in the patterns & practices Guide Team Development with Visual Studio Team Foundation Server (

Dividing Tasks and Allocating Ownership

When developing a solution within a team it can be very helpful to define owners and development teams for distinct parts of the solution early in the development process. Typically named individuals (or groups of individuals) are given ownership of logical parts of the solution. Useful divisions of work can include allocating ownership of:
  • Individual business processes
  • Process-specific messages, schemas, transports, and protocols
  • Shared message standards
  • Map development
  • Error messages and error-handling standards
  • Upstream and downstream system interfaces
  • Helper classes and utilities
  • Test data and test tools
  • Deployment and build processes
Typically the owner is responsible for understanding their part of the solution and acts as the "gateway" to any requested modifications. In this way when a modification to a design is proposed, the individual can assess the effect of the proposed changes and then allow or disallow that change. This is particularly important when working with shared parts of the solution, like shared message standards where a change can have a potentially significant impact upon the development team and upstream or downstream systems.

Solution Structures and Team Development Models

The following sections discuss solution structures and development models for a BizTalk Server project with multiple developers.
The term "solution structure" refers here to the logical and physical structure used to store all the related BizTalk Server files and folders required to produce, develop, test, and deploy a production solution.
The term "team development model" refers to the relationship between Visual Studio 2008 projects and solutions, Microsoft Team Foundation Server source control configuration, and the setup and processes used for a team of developers using common resources across multiple workstations.
It is important to enforce a solution structure. A BizTalk Server solution that is built to meet business requirements typically consists of many separate related artifacts. To avoid rework and build failures, these artifacts need to be managed throughout the project from development, test, acceptance/staging up into the deployment on production. A typical BizTalk Server solution will contain the artifacts in the following table.
Note This list does not include artifacts related to Business Activity Monitoring (BAM) and the Business Rules Engine (BRE).
Table 6 - Typical Visual Studio 2008 and BizTalk Server Solution Entities 
Visual Studio 2008 solution files: The files produced by Visual Studio 2008 when creating and editing a BizTalk Server solution.
Strong name keys: Strong name key files used to assign strong (unique) names to the assemblies produced during the build process.
Binding files: The files that BizTalk Server uses to bind a BizTalk Server assembly to physical resources (HTTP addresses or file locations).
WCF Web service files: WCF files, .NET DLLs, and configuration files produced by the BizTalk Server Web service publishing wizards that provide a Web service interface to BizTalk Server.
Output DLLs: The assembly files produced by the build process.
Referenced DLLs: Files containing functionality used by the BizTalk Server solutions using file references.
Cross-reference data: Data used to prime the BizTalk Server cross-reference databases.
Test data: Data files used to initiate or participate in the testing of processes under development.
Test tools: Tools used to initiate or participate in the testing of processes under development. These might include "stub" Web services that act like a Web service not available during testing, or tools that act like an external application submitting a message into BizTalk Server.
The solution structure is important to a team of BizTalk Server developers for the following reasons:
  • Any integration solution by its nature consists of many independently developed assemblies that need to fit together, like pieces of a jigsaw puzzle, to deliver the complete solution. If these pieces are not closely managed throughout the development, there is a strong chance the pieces will not fit together when the complete solution is tested. A predefined solution structure assists in enabling the management of these parts.
  • In a project of medium to high complexity, BizTalk Server projects are themselves often linked and dependent upon each other. A solution structure helps ensure that the relationships between BizTalk Server projects are enforced and maintained throughout the development process.
  • Deploying and testing a BizTalk Server project requires more than just the output of the compilation process. A typical process may require binding files, test data, test harnesses, and stub systems as well as the BizTalk Server assemblies to perform a functional test. A well-defined solution structure allows testing and deployment to be automated.
A typical BizTalk Server solution structure is composed of the following logical elements:
  • A Visual Studio 2008 solution structure, which describes the relationship between BizTalk Server solutions, project files, and dependencies.
  • A file system folder structure in which to store BizTalk Server project files and the other associated entities required to develop and test a solution.
  • A development model that describes how a team of developers share the server resources required to develop a solution.
  • An approach to using Visual Studio Team Foundation Server that enables the BizTalk Server projects and artifacts to be successfully source controlled and shared between a team of developers.
  • An approach to using Visual Studio Team Foundation Server Source Control to effectively manage branching of multiple releases and potentially disparate features.
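Because BizTalk Server projects reference one another, the build and deployment order must respect the dependency graph, with shared schemas built first. A topological sort makes that order explicit; the project names and dependencies below are purely illustrative:

```python
from graphlib import TopologicalSorter

# Hypothetical project dependency map: each project lists the projects
# it references. Names are illustrative, not from a real solution.
deps = {
    "SharedSchemas": [],
    "SharedOrchestrations": ["SharedSchemas"],
    "BillingOrchestrations": ["SharedSchemas", "SharedOrchestrations"],
    "BillingMaps": ["SharedSchemas"],
}

# static_order yields each project only after everything it references.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)  # SharedSchemas always comes first
```

The same ordering governs undeployment in reverse: a shared schemas assembly cannot be redeployed until every assembly that references it has been removed, which is one reason to stabilize shared projects early.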

Dividing BizTalk Server Artifacts between Projects

There are many ways to create a BizTalk Server application composed of maps, schemas, orchestrations, pipelines, and so on. The most intuitive way is to place all of the BizTalk artifacts in one project, compiled and then deployed as a single unit. This method has the benefit of simplicity but is less effective when you take into account required modifications and project upgrades (new versions).
Using this model, when modifications or upgrades are needed, you must first undeploy the complete application. This means stopping the orchestrations and potentially affecting related entities. For example, if you use custom pipelines, the undeploy operation causes the pipeline bindings to default back to the pass-through pipeline. After you deploy a change, you must reconfigure any location that uses custom pipelines. The BizTalk Documenter (see Resources) can help out here in determining what the original pipeline settings were. Other implications you might encounter are versioning requirements subsequent to your initial release. When updating your BizTalk project you will end up having to redeploy your entire application as well. Working in a single-project approach also has implications when working in teams. Chances are team members have to touch the same files often which can cause merging challenges.
As an alternative to using the single-project approach, consider how the application is structured and look for ways to split the artifacts into logical groups. A typical way to manage this is to place orchestrations, pipelines, schemas, and maps in their own projects, often on a per use-case basis. If a pipeline changes, just a single assembly can be undeployed, and there is no need to undeploy maps or schemas. In the same way, if a map changes there is no need to undeploy the schemas. This approach favors component reuse and at the same time minimizes cross-reference dependencies.
There are implications every time you version a map: you may have to update and redeploy any orchestration that references it. Additionally, consider dividing suborchestrations (that is, orchestrations that are called by other orchestrations) into their own projects.
Remember that a project can only be deployed as a single unit. If the task of redeploying a single project begins to involve the rebinding of unrelated resources, then you should consider splitting the project into smaller units that allow the deployment and binding of related items only.
Note The preceding approach is a "horizontal" division of work. You may want to consider a vertical division as well (that is, by project or business process functionality). For example, if a pipeline or schema is updated it should only affect the business project that it is related to. This is relevant if you have the requirement to avoid shutting down a CRM integrator when the ordering system updates a schema.
The following table shows the structure of a typical project.
Table 7 - Typical Project Structure 
Standard schemas project ("Shared Schema"): This project contains all the schemas that are shared across the whole solution. This project is referenced by other BizTalk Server projects. It is separate because it needs to be shared and should be stabilized early in the project.
Functionally grouped orchestrations ("Billing Project"): These projects contain logically related orchestrations, such as "Billing project" and "Invoicing project." These are usually separated out because these related orchestrations are usually under the control of a single individual and they often contain schemas local to their own processes.
Shared orchestrations projects ("Email Handler"): These are orchestrations that are shared across the whole solution. These are kept separate to allow other projects to reference them. Typical examples include "error handling project" or "email send project." They contain the orchestrations and schemas necessary to complete a discrete set of tasks. Where possible, you should stabilize these functions early on in the project because changes can affect all projects using them.

Team Development Models

Team development with BizTalk Server has close parallels with team development of Web applications. Both typically involve dependent elements that can either be shared between team members or used by a developer in isolation.
The development model for BizTalk Server proposed in this paper is based upon the "isolated" model of development. The models available for BizTalk Server development are "isolated," "semi-isolated," and "non-isolated." In the isolated model no elements of the developer environment are shared (with the exception of access to a common Visual Studio Team Foundation Server environment), so every developer has his or her own instance of BizTalk Server, the BizTalk Server databases, SQL Server, and so on. In the semi-isolated and non-isolated models, increasingly large areas of the developer environment are shared.
The nature of the BizTalk Server runtime means that isolated development typically provides the most effective approach. BizTalk Server developments are typically composed of several distinct environments, as shown in the following table.
Table 8 - BizTalk Server Environments
Developer tools — Tools used to produce BizTalk Server solutions. Composed of Visual Studio 2008, the BizTalk Server design-time tools, and test tools.
Developer runtime — Environment used to test a developer's output. Composed of the BizTalk Server runtime, deployment tools, and test tools.
Integration test environment/shared test runtime — Environment used to test multiple developer deliverables in a single environment (often logically mirroring the production environment). Composed of the BizTalk Server runtime and deployment tools.
Pre-production environment — QA environment that is as close to the physical production environment as possible. It is used to test the deployment processes and runtime of the completed integration solution, and can also be used for scalability and performance testing. Composed of the BizTalk Server runtime and deployment tools.
Production environment — Environment used to deploy the completed integration solution. Composed of the BizTalk Server runtime and deployment tools.
The team development model described below details an approach to configuring the development tools, developer runtime, and shared runtime to provide an isolated developer environment.

What Is the Isolated Development Model?

Using an isolated model, a developer edits, debugs, and runs functional tests completely isolated from other developers, on a self-contained development workstation with a local BizTalk Server group. Access to the master source files is controlled via Team Foundation Server. Figure 2 illustrates an isolated development model.
Figure 2 - Isolated BizTalk Server Development Model
The isolated model of BizTalk Server development provides the following benefits:
  • No chance that shared BizTalk Server configuration information will interfere with another developer's work (for example, XML target namespace clashes arising from multiple installed versions of a shared schema)
  • No opportunity for any one individual's BizTalk Server deployment and test processes to interrupt other users (for example, the starting and stopping of BizTalk Server host instances, receive locations, or dependent services like IIS, SQL Server, and SSO)
  • Ability to clear resources like event logs and tracking databases without disrupting other users
  • Ability for developers to use different versions of shared resources, for example, helper classes and error-handling processes
  • Ability for developers to attach to and debug the BTSNTSvc.exe process without halting other developers' processes
A shared or non-isolated model can be used for developing with BizTalk Server if it is absolutely required, but developer efficiency may be reduced by the lack of the benefits listed above.

Using virtualization to host BizTalk Server development environments

In our experience of developing BizTalk Server solutions, virtualization is an efficient way to create an isolated development environment, and within Microsoft Services it is the standard approach used on nearly all BizTalk Server developments. Using virtualization typically saves development teams a considerable number of hours of effort per developer in creating the development environment, and it is an extremely efficient way to enforce a standard development environment. It has additional advantages, including the ability to efficiently "package up" a development environment and to easily refresh it when the environment is no longer functioning properly or when new versions of products and tools become available. This technique is often used for scenarios like sharing best practices, handing over issues between teams, and testing possible server configurations.
For additional information on virtualization, see the following resources:

Using virtualization to Host the BizTalk Server Developer Environment

Setting up a BizTalk Server development environment typically requires the following:
  • Installation of prerequisites
  • Application install process (SQL Server, Visual Studio 2008, and BizTalk Server)
  • BizTalk Server group configuration
  • Installation of tools and utilities
An alternative to performing these steps on every developer's workstation is to use virtualization. The typical environment on client operating systems such as Windows Vista® or Windows 7 is Microsoft Virtual PC, which allows users to run multiple PC-based guest operating systems simultaneously on the host operating system. For example, a user with Virtual PC installed on their Windows computer can run Windows Server 2003 or Windows Server 2008 within Virtual PC. The benefit to a BizTalk Server development team lies in the fact that Virtual PC uses Virtual Hard Disk (VHD) files that can be copied between computers. Although there are certain caveats with this approach, described later in this document, it is a very practical one. In essence, a BizTalk Server development team can invest time in creating a standard BizTalk Server developer virtual hard disk and then push that image to every member of the development team, each of whom gains a preconfigured BizTalk Server development environment. The VHD format allows you to interchange VHD files between all Microsoft virtualization products, including Hyper-V, the virtualization platform released with Windows Server 2008.
Follow these steps to create a Virtual PC virtual hard disk containing a BizTalk Server developer environment.
  1. Ensure that the developer workstations are powerful enough to run the guest Windows Server 2008 operating system. Typically this will require a fast Windows workstation with a minimum of 1 GB of physical RAM (2 GB optimal) and approximately 15 GB of local disk storage available.
  2. Create a new Virtual PC Virtual Hard Disk. By default the Virtual Hard Disk type is set to "Dynamically Expanding". If fixed size is used then ensure that a minimum of 6 GB is selected.
  3. Install Windows Server 2008 and SQL Server 2005 or SQL Server 2008 (2008 is the preferred option) on the Virtual Hard Disk, following the instructions in the BizTalk Server installation documentation.
  4. Install all recommended security updates.
  5. Using the Virtual PC settings, enable the Virtual PC to access the network (network access will be required to contact Visual Studio Team Foundation Server, shared drives, and so on).
  6. Install BizTalk Server, but do not configure it yet; configuration is performed after the VHD has been duplicated and the machine renamed.
  7. Install Visual Studio 2008 Professional Edition or higher (Team Suite preferred) with Team Foundation Team Explorer.
  8. Install any additional tools or utilities that the development team should have access to. Installing common tools on the VHD can save considerable time and effort later. See later in this section for a list of useful tools.
  9. Sysprep the machine. This will generalize the machine, making it suitable for reuse by duplicating the VHD. Sysprep is installed with Windows Server 2008 and can be found under %systemroot%\system32\sysprep. 
  10. Shut down the virtual PC instance and make a secure read-only copy of the Virtual Hard Disk file. This is now the master VHD for the development team.
  11. Copy the .VHD file to developers' workstations as required.
  12. Give the machine a new unique name.
  13. Configure BizTalk Server.
Note Skipping the Sysprep step is not recommended, because using multiple machines with the same name will cause issues with Team Foundation Server working folders.
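The Sysprep step (step 9) can be sketched as the following commands, run from an elevated command prompt inside the guest. The switches shown are the standard Windows Server 2008 Sysprep options for generalizing an image:

```bat
REM Generalize the guest OS so the VHD can be safely duplicated.
REM /generalize removes machine-specific state (SID, computer name),
REM /oobe runs the mini-setup on next boot, and /shutdown powers the
REM VM off so the VHD file can be copied as the master image.
cd /d %systemroot%\system32\sysprep
sysprep /generalize /oobe /shutdown
```

After the next boot of a duplicated image, the machine can be given its unique name (step 12) before BizTalk Server is configured.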
When developing on a virtual computer, a useful approach is to create a second hard drive (.VHD file) that holds the source code for projects. If this approach is used across the whole team, with the same drive letter assigned to the drive, it provides a consistent set of paths for the solution structures. A major advantage of this approach is that the separate VHD can be significantly smaller and therefore easier to copy between remote locations.

Developing BizTalk Server Applications Using Remote Desktop

It is possible to develop and debug BizTalk Server applications while running the BizTalk Server developer environment on a remote computer accessed through Remote Desktop or Terminal Services. Doing this on a multi-user platform is not recommended, because users have to share the same BizTalk Server installation. Teams using this approach should be aware of the following limitations:
  • Attaching to BTSNTSvc.exe (for example, for debugging custom functoids or adapters) may cause issues because there is only a single instance of BTSNTSvc.exe.
  • Starting and stopping hosts will cause issues for concurrent users unless separate hosts are configured for separate users.
  • Team Foundation Server working folders will have to be different for every user, which can cause issues with file references as described in "Creating the Source Control Structure."
If the BizTalk Server environment is running on Windows Server 2003 it is possible to connect to the console session rather than starting a new user session. This can be useful in scenarios where debugging information or the like is being written to the console.
To connect to the Windows Server 2003 console using Remote Desktop, use the "/console" (or, on Windows Vista, Windows 7, and Windows Server 2008, the "/admin") command-line option, for example:
mstsc.exe <RDP connection file name> /console
In Windows Vista and Windows Server 2008 the /console option of mstsc.exe is replaced by the /admin option, because these operating systems no longer have a console session "0":
mstsc.exe <RDP connection file name> /admin
Note Only a single user can connect to the console session at the same time.

Partitioning Visual Studio 2008 Projects and Solutions

Visual Studio solution files (with the .sln file extension) are used to group related projects together and are primarily used to control the build process. You can use solutions to control build dependency issues and control the precise order in which contained projects are built. Partitioning describes the processes of creating Visual Studio 2008 solution files that divide up the multiple projects that compose a complete BizTalk Server solution. The approach used to partition BizTalk Server solutions and projects has a significant impact on the development and build processes in a team environment. There are three main approaches to consider when partitioning solutions and projects, as shown in the following table.
Table 9 – Solution Partitioning Approaches
Single solution — The team creates a single Visual Studio 2008 solution (the master solution) and uses it as a container for all of the BizTalk Server projects required by the BizTalk Server solution. Applicability: viable for simpler BizTalk Server projects with only two or three developers.
Partitioned single solution — Used in more complex developments where related sets of projects are grouped together within separate sub-solution files. A master solution still exists but is used for building the entire solution on the build server. Applicability: should be considered for more complex solutions with several developers.
Multi-solution — In this model there is no master solution file, and file references are used between projects in separate solutions (although project references are still used between projects within an individual solution). Applicability: not recommended, because it relies on non-recommended file-based references.
The partitioned single solution approach is often the most applicable, but it is also the most complex to set up. For this reason the approach is documented in the appendix of this document. Unless you have very good reasons to use a multi-solution model, you should avoid that approach.

Attributes of a Partitioned Single Solution

When developing more complex BizTalk Server solutions, it can be advisable to reduce the number of projects and source files required on each development workstation by grouping related sets of projects together within separate sub-solution files. This approach can be particularly beneficial if the complexity of the BizTalk Server solution means that loading, navigating, and rebuilding projects takes a significant amount of time.
This partitioned single solution allows developers to work on separate, smaller subsystems within the inner-system boundary, as illustrated in Figure 3. Note how separate solution files are used to allow you to work on smaller subsystems, and how this results in some projects being contained within more than one solution file. For example, in Figure 3, Projects D and H each appear in a total of three solution files, including the master solution.
In the partitioned single solution model:
  • All projects are contained within a master solution file. This is used by the system build process to rebuild the entire system. If you need to work on the top-level project file, you also work with the master solution.
  • Project references are used between individual projects. File references are not used; instead, the project containing the dependency is added to the solution and a project reference is created.
  • Separate solution files are introduced for selected project files. If you want, you can introduce a solution file for each project within your system. Each solution file contains the main project file, together with any downstream projects it depends on, any further projects those depend on, and so on down the dependency chain.
  • Separate solution files allow you to work on smaller subsystems within your overall system but retain the key benefits of project references. Within each sub-solution file, project references are used between constituent projects.

Figure 3 - Partitioned Single Solution
For more information about partitioning Visual Studio solutions, see Chapter 3, "Structuring Solutions and Projects," in the Team Development with Visual Studio Team Foundation Server document referenced at the beginning of this section.

Using Project References

As described in the previous section, single solutions and partitioned single solutions use project references to create references between dependent BizTalk Server projects. This matters because project references provide significant advantages over file-based (DLL) references. Specific benefits include:
  • A build process using project references automatically calculates the build order dictated by the dependencies. This significantly simplifies the build process for projects with dependencies.
  • A solution using project references can determine whether a referenced project has been updated and will manage the rebuilding of the references. This is important in an environment where several users have dependencies on a shared project (such as a helper project) that changes during the development phase.

Referencing External Dependencies

  • Use DLL (file) references when referencing an external DLL that is outside of the BizTalk Server solution. For example, suppose that an external organization supplies schemas that are outside the control of the BizTalk Server development project; these DLLs would be referenced through file references. Other examples are .NET Framework assemblies and third-party assemblies.

Copy Local Attribute

When working with project or file references, do not change the default "Copy Local" attributes for a referenced project or DLL.
For more information about managing dependencies, see Chapter 6, "Managing Source Control Dependencies in Visual Studio Team System," in the Team Development with Visual Studio Team Foundation Server document referenced at the beginning of this section.

Creating the Source Control Structure

The file system folder structure used to store the BizTalk Server integration solution must accommodate not only the BizTalk Server solutions and projects, but also all the additional artifacts that are required to develop, build, test, and deploy a BizTalk Server solution. The folder structure must also be compatible with the structure used within the team's Visual Studio Team Foundation Server repository to ensure efficient integration with the common source control tasks.
The folder structure described here is based upon typical BizTalk Server requirements and is designed to work with common Team Foundation Server tasks. It can be modified to meet the requirements of a given development team.
When creating the source control structure, keep in mind the following points:
  • Use a consistent folder structure for solutions and projects. Development in a team development environment is easier to manage if a common structure for storing Visual Studio solutions and projects is used across the whole team. This is especially true when each BizTalk Server project will be generating and using files such as binding files that will be required by build, test, and deployment teams. 
    It is often easy to start development without taking the time to standardize on the folder structures. This will inevitably cost more time later on as project links and test harnesses need to be "patched" as the solution develops.
  • Keep source control and file system structures identical. To simplify the management of multiple developers' environments, set up a folder structure within Source Control that matches your local file system structure. This helps ensure that a developer can simply get a latest version from Team Foundation Server and know that the structure on the disk is compliant to the team's standards.
  • Define and use a common root folder across the team. It is recommended to keep all project code and deliverables under a single root folder, which can be created on all developers' computers. For example, create a root folder of $/BizTalkDev/[AppName]/Main within Source Control, and all developers can create C:\BizTalkDev\[AppName]\Main on their local file systems. This acts as a container for all of your development systems. The [AppName] folder carries the name of the project or application that is commonly used to refer to the project inside the company. The "Main" folder is created to enable the team to later branch the code to support concurrent releases and multiple environments.
    The drive need not be the C drive, but if a drive other than C is used, make sure that all developers have a similarly named drive available. If you want to use drive letters that are not available on all computers, the SUBST shell command can be used to create drive letters mapped to physical folders, for example:
    SUBST m: c:\BiztalkDev
    When the preceding command is executed at the command prompt, it creates a drive letter m: mapped to physical folder c:\BiztalkDev. Avoid using a network share to store the project files (to avoid the security warning that arises from developing a .NET application on a non-local drive).  
  • Create a master solution that will hold all projects. As described in the previous section, the single partitioned model for BizTalk Server development is generally recommended for medium to high complexity BizTalk Server projects. For this reason a master solution should be created that will hold all the subprojects. This master solution will typically be used as the main build solution. For notes on creating a master solution for a partitioned single solution in Visual Studio Team Foundation Server, see "Appendix: Step-by-Step Guide to Setting Up a Partitioned Single Solution." 
  • Store all Visual Studio 2008 BizTalk Server projects in a folder under the master solution. If Team Foundation Server is being used as the source control environment, ensure that all subprojects are created in a folder underneath the folder holding the master solution that contains them. For notes on adding subprojects to a master solution for a partitioned single solution in Team Foundation Server, see "Appendix: Step-by-Step Guide to Setting Up a Partitioned Single Solution." For more information, see How To: Create Your Source Tree in Visual Studio Team Foundation Server, referenced in chapter 4 of the Team Development with Visual Studio Team Foundation Server document referenced at the beginning of this section.
  • Divide the folder structure into shared and process-specific sections. It is common practice to separate the shared entities from the business-process-specific entities. The shared entities are common to multiple projects and may include helper classes. For example, the first three folders in the following list are organized by shared entity and the last two are organized by business process:
  • C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\Shared\
  • Pipeline projects. During development of pipeline components, it is a common requirement to modify and retest a pipeline independent of the rest of the solution. For this reason it is often helpful to keep pipeline solutions as a separate BizTalk Server project, resulting in a separate pipeline assembly that can be removed and redeployed without the need to perform rebinding of the rest of the solution. Additionally it is common practice to keep the code that implements the actual pipeline interfaces and pipeline processing logic in a separate project with a reference from the BizTalk Server project (containing the .btp files) to the Microsoft Visual C#® or Microsoft Visual Basic® .NET project. For notes on debugging pipelines see "Debugging Pipelines" later in this document.
  • Creating deployment and test scripts. When developing with BizTalk Server it is common to automate complex or often-repeated tasks by using scripts. Nearly all BizTalk Server deployment and runtime tasks are exposed as command-line tools or Windows Management Instrumentation (WMI) interfaces to enable the development of scripts that automate these tasks. The samples in the BizTalk Server SDK often use .vbs or .cmd files for this purpose. We recommend using either MSBuild or PowerShell instead: MSBuild integrates much better with Visual Studio than the older VBScript or batch-file platforms, and PowerShell is the future of management tooling on the Microsoft platform. To enable a unified build and deployment process, build and install scripts should be written by the individual developers but viewed as reusable parts of the complete solution. If the scripts are written to be reusable, then tasks like the deployment of a complete solution package can reuse them. For example, if the deployment and undeployment scripts accept parameters specifying the paths of the DLL and binding-file folders, the scripts can be reused when the compiled BizTalk Server assemblies are in a different location. Using this approach, each developer adds the installation tasks for their specific processes to the scripts; a single process can then easily be written to perform a full deployment, or the scripts can be combined to deploy the entire solution. For more information about deployment scripts, see Automating Developer Deployment Tasks. For more information about build scripts, see Automating Build Processes. Scripts are typically stored along with the solution file of the process they reference, using standard script names as in the following example:
    Any files the build script references (MSBuild .properties files and assemblies with custom tasks) are installed in a subfolder:
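As a sketch of what such a reusable per-process deployment script might look like, the following uses the BTSTask.exe command-line tool. The application name, file names, and parameter layout are illustrative assumptions, and the exact switches should be verified against your BizTalk Server version's documentation:

```bat
REM deploy.cmd - hypothetical reusable deployment script sketch.
REM %1 = folder containing the compiled assemblies
REM %2 = folder containing the binding files
SET APP=AccountRequest

REM Add the compiled BizTalk assembly to the application and place it
REM in the Global Assembly Cache on add.
BTSTask AddResource /ApplicationName:%APP% /Type:System.BizTalk:BizTalkAssembly ^
    /Source:"%1\AccountRequest.dll" /Options:GacOnAdd

REM Import the environment-specific binding file from the parameterized folder.
BTSTask ImportBindings /ApplicationName:%APP% ^
    /Source:"%2\bind_AccountRequest_DevEnv.xml"
```

Because the DLL and binding-file locations arrive as parameters, the same script can be invoked by a developer's local deployment and by an "all processes" deployment that points at different folders.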
  • Strong name keys. All BizTalk Server projects result in assemblies that are deployed to the Global Assembly Cache, and as such they must be signed with a strong name key during compilation. Typically a single Visual Studio 2008 solution uses a single key file, because the solution is treated and managed as a single entity. If the business solution being developed is actually composed of two or more distinct parts, consider whether two key files should be used. Multiple keys in this scenario allow the parts to be treated as independent entities in the future, for example with differing upgrade paths or managed by different teams. The same considerations apply to helper projects: if a helper project is (or will be) a separate entity, it should be built using a separate strong name key. In an organization that has a closely guarded key that developers do not have access to, consider using delayed signing with the key. For more details on delayed signing, see Delay Signing an Assembly.
    The process of creating and including a key file for a single project is simple, but using the same key file for multiple projects requires some additional steps. After creating the key file (see How to: Create a Public/Private Key Pair for more detailed steps), other projects must add a link to the created key file.

    Figure 4 - Adding a link to an existing key file
    After this step, the key file will be available in the project properties signing panel.  

    Figure 5 - Project signing key configuration

    It is often helpful to build a project directory structure that can be replicated upon every workstation with a specific path to the key in the structure, for example:
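For example, the key pair could be created, and delayed signing handled, with the Strong Name tool (sn.exe) from the Windows SDK. The path below is an illustrative convention following the folder structure used in this document, not a requirement:

```bat
REM Create a strong-name key pair to be shared (via project links)
REM by the related projects. The path is an illustrative convention.
sn -k C:\BizTalkDev\[AppName]\Main\Src\Keys\SharedSolution.snk

REM For delayed signing against a closely guarded key: extract the
REM public key for developers to build against...
sn -p SharedSolution.snk SharedSolutionPublic.snk

REM ...and, on development machines only, skip strong-name
REM verification for assemblies with that public key token.
sn -Vr *,<publicKeyToken>
```

The `<publicKeyToken>` placeholder stands for the token of the shared public key; the final signing with the private key is then performed by the team that controls it.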
  • Test harnesses and stubs. It is generally recommended that test harnesses and test stubs are developed early in the project because they are useful resources and they also help develop a deeper understanding of the actual interactions between the integration layer and the other systems. If test harnesses are kept under the master solution then they can be included in the source control of a specific process or under a more general shared project. For example:


    For more details on test harnesses, see Test Harnesses, Test Stubs, and Mocking.
  • Cross-reference data. If the BizTalk Server cross-reference databases are used as part of the mapping of messages in the business processes under development, it is often necessary to load the databases with the data required for lookup operations in the cross-referencing functoids to succeed. Cross-reference data can be imported by using the BizTalk Server Cross Reference Import Tool (btsxrefimport.exe) and XML "seed files."

    When the import tool is run it empties the cross-reference databases before importing new data from the seed file. This means that prior to the development of the "all processes" deployment script all seed data must be consolidated into one seed file. For example:

    C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\Shared\CrossRefData\MasterSeedData.XML

    Early in the development cycle, individual developers may need to seed their local database with process-specific data only. In this case the seed data should be held in individual files, named according to the process they relate to. For example:

    C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\Shared\CrossRefData\AccountRequestSeedData.XML
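Seeding a local database from a process-specific file could then be scripted as in the following sketch. The switch syntax shown is an assumption and should be verified against the Cross Reference Import Tool documentation for your installation:

```bat
REM Load process-specific seed data into the local cross-reference
REM databases. Note: the tool empties the databases before importing,
REM so only one seed file is "live" at a time.
BTSXRefImport.exe -file=C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\Shared\CrossRefData\AccountRequestSeedData.XML
```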
  • Storing BizTalk binding files. When developing BizTalk Server projects, binding files are produced that define the binding of a logical process to the physical transports and hosts. This binding information is initially created by using the GUI tools but is then persisted in binding files to ensure that a process can easily be deployed in a repeatable, scripted manner.

    Complexity arises from the fact that the physical binding information for the development environment may be different from that in the test, pre-production, and final production environments. Additionally, the binding files contain references to the specific version numbers for the DLLs being deployed and consequently these files need to be managed to be kept in sync with any DLL version number changes. 

    To help manage this complexity the binding files should always be located within a specific location for a given process and named following a naming convention. This makes it easier to perform modifications to the binding files (either manually or automatically). 

    Typically binding files should be kept in a folder called "bind" underneath the project folder and named according to a naming standard. For example, the binding files for the "AccountRequest" process in the development environment could be kept in a file as in the following example:

    C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\AccountRequest\bind\bind_AccountRequest_Stage1_DevEnv.xml
    Note how this convention allows binding files from multiple processes to be moved to a single folder location and still be identified. In some solutions multiple binding files may be required. Ensure that the naming convention used can take this into account.
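As a sketch of how the naming convention can be exploited, an environment name could be passed as a parameter to a binding-import step. The application and file names here are illustrative, and the BTSTask switches should be verified against your BizTalk Server version:

```bat
REM import-bindings.cmd - sketch of an environment-parameterized import.
REM %1 = environment suffix from the naming convention, e.g. DevEnv.
SET ENV=%1
BTSTask ImportBindings /ApplicationName:AccountRequest ^
    /Source:"C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\AccountRequest\bind\bind_AccountRequest_Stage1_%ENV%.xml"
```

Because the environment is encoded in the file name, the same step serves development, test, and production binding files without edits to the script itself.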
  • Location for file dependency DLLs. When creating the folder structure it is helpful to create a common folder to hold DLLs that are referenced as file dependencies. By ensuring that all developers follow the same folder structure, file references will not become broken when solutions are moved between developers or workstations. For example:

  • Test messages. When developing schemas and using messages it is common to spend considerable effort testing many different sample messages against XML schema and orchestration processes. 

    In a real-world solution, sample messages are a valuable asset because they contain data that is meaningful to the business solution. Random or dummy values in the same message will typically cause the process to fail. For this reason sample messages should be treated with as much care as the code of the solution. 

    To assist in managing the message files, keep test messages under a specific folder called "msgs". In cases where there are both "generated instances" and "example messages" (for example, messages from an existing system) it is useful to keep these messages separate to allow comparison between the developed schema and actual data. For example:




    It is common in an integration project to have multiple versions of schemas and, consequently, multiple versions of message files. In the same way that code has a version number applied, it is worth ensuring that version numbers are used when naming message files. 

    It may also be worth considering storing message instances within Source Control. This has the added advantage of allowing comments to be added to versions and previous versions to be examined. 

    Note:  Instances of a message can be generated from a schema (and can also be validated) by using the "Generate Instance" function in the BizTalk Schema editor. This function is extremely useful when testing a newly created schema against a system to be integrated. The generation capability also allows you to start development when the schema of a message is known but you have not yet been provided with instance messages.
  • Example folder structure. The example folder structure described above typically looks like the following for a sample project:

















Working with BizTalk Server and Team Foundation Server

This document assumes that the team developing the BizTalk Server solution consists of multiple developers who will be using Visual Studio Team Foundation Server as their source control tool.
For step-by-step notes on creating a master solution for a partitioned single solution in Team Foundation Server Source Control, see Appendix: Step-by-Step Guide to Setting Up a Partitioned Single Solution.

BizTalk File Types and Merging

Most BizTalk file types are XML based. A typical Source Control feature is the capability to merge files when changes have been made by multiple developers at once. By default, Team Foundation Server considers BizTalk files to be text based and therefore valid for merging. While the merging engine of Visual Studio does a good job of merging XML files, it can still produce invalid XML files that cannot be opened in the BizTalk Orchestration Designer, BizTalk Mapper, or BizTalk Pipeline editor. In projects with inexperienced developers this can cause work to be overwritten. There are three approaches you can consider, in order of preference:
  1. Partition the work across small teams, so that people don't interfere with each other's work.
  2. Disable multiple-checkout and have senior people do the merging across branches.
  3. Disable merging altogether and require people to apply changes in multiple places in a branching scenario.
Approaches 2 and 3 are described below.
The following steps mark the BizTalk file types as disabled for merging:
  1. Select Team Foundation Server, and then click Source Control File Types as shown in Figure 6.

    Figure 6 - Team Foundation Server Source Control File Types
  2. In the Add File Type dialog, enter the Name "BizTalk File Types" with the following File Extensions: "*.btm;*.btp;*.odx" and clear the option Enable File Merging and Multiple Check Out as shown in Figure 7.

    Figure 7 - Adding Team Foundation Server File Types
  3. Click OK twice.

    Note:  This process prevents merging of BizTalk Server file types across the entire Team Foundation Server instance, so BizTalk Server files cannot be merged at all, including in branching scenarios.
 The following steps disable multiple check-out for a BizTalk Team Project in Team Foundation Server:
  1. Open the Source Control option of the Team Project Settings.
  2. Clear the option to Enable multiple check out as shown in Figure 8.

    Figure 8 - Team Project Check-Out Settings
  3. Click OK.

When to Check In BizTalk Server Projects

The recommended approach to using Visual Studio Team Foundation Server is to only check in code in the Main branch when it has successfully passed functional tests and the developer is confident that the code will successfully build without breaking any related code. Applying this model to BizTalk Server results in the following guidelines:
  • BizTalk Server projects that contain only message schemas should not be checked in to the Main branch until the schemas have been successfully tested against a variety of sample messages.
  • BizTalk Server projects that contain a business process should not be checked in until the solution has been successfully tested with the appropriate input and output messages over the correct send and receive ports.
  • WCF and Web service projects should not be checked in until the Web service code has been tested against the initiating system or by using a test harness.
If this model is followed, the Main branch in Visual Studio Team Foundation Server will always hold code that can be successfully built and tested. This principle is important if the approach of "nightly builds" is to be adhered to. If you are working with multiple branches, the reverse integration of code into the main branch should always be a well-thought-out process executed by experienced people. Again, the test harnesses and quality gates determine the way forward.
Visual Studio Team System has the notion of check-in policies that can be used to determine whether the prerequisites for check-in have been met. The product doesn't come with out-of-the-box check-in policies that support the described process, but it is certainly possible to develop one that does.

Checking In Intermediate Versions

An alternative approach to check-in is that of checking in "intermediate" versions. Team Foundation Server has excellent features that support this. In this approach an intermediate version will not yet have successfully passed functional tests and can be thought of as "between builds."
It is necessary to distinguish between intermediate versions and build versions. Using Visual Studio Team System Team Foundation Server this can be done in a variety of ways, either automated or process-based. For example:
  • Shelvesets
  • Branching
Refer to the topics in the remainder of this paper, or alternatively to the "Source Control Guidelines" in the Team Development with Visual Studio Team Foundation Server document referenced at the beginning of this section.

Version Controlling Non-BizTalk Server Project Files

A BizTalk Server solution uses additional files that can beneficially be versioned and stored in Source Control. The following files are examples:
  • Binding files (both development and test)
  • Custom Pipeline Components
  • Test data (for example, test messages)
  • Test harnesses (which may change over the project lifetime)
  • Build, deployment, and start-and-stop scripts that may need to be shared between development and build teams
If these files are related to a specific Visual Studio BizTalk Server project then these files can be included within the BizTalk Server project and managed by using the Visual Studio integrated source control functions. To include a file or folder into an existing Visual Studio project, do the following:
  1. In Solution Explorer, click Show All Files.
  2. Select the folder or file to include in the solution.
  3. Right-click the folder or file, and then select Include In Project.

    Figure 9 - Add an existing item to your project
If the non-BizTalk Server project files are not part of a Visual Studio project, then you can manage them by using the Source Control Explorer.

Creating an Example Solution Structure and Integrating with Visual Studio Team Foundation Server

Appendix: Step-by-Step Guide to Setting Up a Partitioned Single Solution contains a step-by-step guide to creating a partitioned single solution structure using Visual Studio 2008 and Visual Studio Team Foundation Server that follows the guidance listed earlier. This step-by-step guide is intended as a working example, and individual projects should modify the steps listed to meet their requirements.

Working with BizTalk Server and Assembly Version Numbers

When developing with the .NET Framework, versioning is governed by a standard set of rules that ensure when a version number changes, the impact of that change is typically minimal. Due to certain design and implementation choices made during the development of BizTalk Server, version numbers with BizTalk Server do not always follow the standard .NET Framework rules. The following sections describe these conditions.

Overview of .NET Versioning

When a BizTalk Server project is compiled, the resulting DLL is a .NET assembly and its behavior follows the standard .NET versioning behavior. If multiple versions of the same assembly are installed with the same assembly version number, the system will produce unexpected results. For this reason it is important to ensure that BizTalk Server assembly versioning is planned and managed.
Each DLL containing a .NET assembly has two distinct ways of expressing version information. The following table shows the difference between the assembly version and the file version.
Table 10 - DLL Version numbers
Assembly version
The assembly version number, together with the assembly name and culture information, is part of the assembly's identity. The assembly version number is used by the runtime to enforce version policy and plays a key part in the type resolution process at run time. This version number is physically represented as a four-part string with the following format:
<major version>.<minor version>.<build number>.<revision>
File version
The file version is a string that is displayed by Microsoft Windows® Explorer. This number can be used to manually determine which version of a file is installed.
The following figure shows where these two version numbers can be viewed for a DLL.
Figure 10 - DLL Version Numbers
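Because the four-part assembly version is numeric, it should be compared field by field rather than as a plain string. A minimal sketch (the version values are illustrative):

```python
# Minimal sketch: parse a four-part .NET version string
# (<major>.<minor>.<build>.<revision>) into a tuple so that
# versions compare numerically rather than as plain strings.
def parse_version(version: str) -> tuple:
    major, minor, build, revision = (int(p) for p in version.split("."))
    return (major, minor, build, revision)

# "1.0.10.0" is a later build than "1.0.9.0", even though it sorts
# earlier as a plain string.
assert parse_version("1.0.10.0") > parse_version("1.0.9.0")
assert "1.0.10.0" < "1.0.9.0"  # a plain string comparison gets it wrong
```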

Implications of Changing Version Numbers

In .NET development it is typical to update the assembly version number to the current build number when a build takes place. However when developing a BizTalk Server solution, changing the assembly version number can break the relationship between an assembly and the dependent items that reference the DLL by its assembly version number. The following table lists items that refer to a BizTalk Server assembly by using its version number and the effect of changing an assembly version number.
Table 11 - Entities affected by Assembly Version Number Changes
Entity Effect of changing assembly version number
Binding files 

Changing the assembly version number will cause any existing binding files that reference the assembly to fail, because the binding file references the assembly by attributes that include its version number. To reuse existing files they will need to be modified or regenerated. Binding files are XML, so modification can be undertaken using tools such as Visual Studio 2008, Microsoft Office InfoPath®, or Notepad, or alternatively can be scripted or automated.
For more information on reusing existing binding files using tools or an alternative to binding files, see Patching Binding Files.
Maps
If you change the assembly version of a class library that is used inside maps, the maps will build fine but will throw exceptions at run time because the old version of the DLL can no longer be found. After updating the assembly version of a class library, every map should be opened in text mode and all references to the class library updated.
BAM tracking profile definition file (.btt) files
Changing the assembly version number will cause any existing BAM tracking profile definition files to fail. The BAM tracking files are a binary file format so they cannot be edited and instead must be regenerated. If BAM tracking profiles are required it may be necessary to do either of the following:
  • Avoid frequently updating version numbers during the build process.
  • Delay building BAM tracking profiles until version numbers are stable.
Web services published by using the WCF Service Publishing Wizard
When the WCF Service Publishing Wizard is used to produce a service interface, the assembly version of the BizTalk Server DLL is included in the source code. If the receive location uses the default XML receive pipeline, the subscription relies on the document type; it will use the embedded assembly information and fail if the assembly does not exist. If you use the pass-through pipeline, this embedded assembly information is ignored.
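The scripted binding-file modification mentioned in the table can be sketched as a simple text rewrite. Binding files reference assemblies by fully qualified name, which includes a Version= component; the assembly name and version numbers below are hypothetical examples.

```python
import re

# Sketch: rewrite the Version= component of one assembly's fully
# qualified name wherever it appears in a binding file's XML text.
# The assembly name and versions are hypothetical examples.
def patch_binding(xml_text: str, assembly: str, old: str, new: str) -> str:
    pattern = re.escape(f"{assembly}, Version={old}")
    return re.sub(pattern, f"{assembly}, Version={new}", xml_text)

binding = '<Ref>Contoso.AccountRequest, Version=1.0.0.0, Culture=neutral</Ref>'
patched = patch_binding(binding, "Contoso.AccountRequest", "1.0.0.0", "1.1.0.0")
print(patched)  # <Ref>Contoso.AccountRequest, Version=1.1.0.0, Culture=neutral</Ref>
```

In practice the same replacement would be applied to the file on disk before importing the binding into BizTalk Server.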

Approaches to Incrementing BizTalk Server Assembly Version Numbers

During a project you have a choice between the following:
  • Choose a fixed assembly version for a given deliverable and increment only the file version number.
  • Increment both the assembly version and the file version as the development progresses.
The following table compares these two possible approaches to updating the version numbers.
Table 12 - Comparing Approaches to Update Version Numbers
Approach 1: Increment the informational file version only and keep a fixed assembly version
  • Assembly version number = fixed number; file version number = build number
  • The BizTalk Server runtime may pick up the wrong version of an assembly if multiple assemblies are installed
  • Only one version of the solution can be deployed at any time
  • The BizTalk host needs to be restarted to force the loading of updated assemblies
  • Less work to create a new deployment, because files that reference the assembly version number (for example, binding files and tracking profiles) do not need to be edited

Approach 2: Increment both the assembly version and the informational file version
  • Assembly version number = build number; file version number = build number
  • BizTalk Server will always run the latest version of an assembly, even if multiple assemblies are installed
  • Different versions of the solution can be deployed side by side (although other aspects of the solution may clash, for example, schema definitions)
  • Forces BizTalk to load new assemblies
  • More work for deployment, because files that reference the assembly version number need to be kept updated with the new version

When to Change File and Assembly Versions

Creating a policy on when to increment a BizTalk Server assembly version number depends on several variables including build processes, testing processes, number of assemblies and binding files, and the type of build being delivered.
The following table compares shipping and non-shipping build types.
Table 13 - Comparing Build Types
Non-shipping
Non-shipping builds are builds that are not intended to ship to a production environment. They are the builds typically produced during the development and test cycle and are used only by the development and test teams. Non-shipping builds are typically unsupported.
Shipping
Shipping builds are intended to ship to a production environment and become deployed on the end user's systems. Shipping builds are supported.

Approach 1: Fix assembly versions for non-shipping builds

Following this approach, assembly versions are kept fixed and file version numbers are updated with each build, as shown in the following table.
Table 14 - Build Types for Approach 1 
Non-shipping: Increment the informational file version only and keep a fixed assembly version.
Shipping: Increment both the assembly version and the informational file version.
Following this approach means that binding files do not need to change; however, the following additional requirements should be noted:
  • Ensure that the file version is always modified to reflect the build number. If the file version is not modified then it will become much more difficult to distinguish between different assembly versions. Relying on other attributes like file size and date stamps is less accurate.
  • Note that previously deployed BizTalk Server assemblies will be overwritten by the next deployment, so parallel deployments of versions will not be possible.
  • Because the assembly version number is not changing, BizTalk Server will not detect that the assembly has changed and so it will be necessary to force the BizTalk Server runtime to load the new version into the host's memory. This can be achieved by stopping the BizTalk Server runtime. For more details on how to achieve this, see the Restarting BizTalk Server Hosts and Services section.
  • It can be advantageous to have a process output its current build number as debug information. This ensures that when processes run, it is possible to confirm that all the build versions are as expected. For more details on how to view version number information at run time, see the section Debugging and Tracing.

Approach 2: Increment assembly versions for non-shipping builds

Following this approach, assembly version numbers are incremented for every build, including non-shipping, as shown in the following table.
Table 15 - Build Types for Approach 2
Non-shipping: Increment both assembly version and file version.
Shipping: Increment both assembly version and file version (the same as for non-shipping builds).
This approach incurs the added effort of modifying the assembly build numbers and modifying the associated dependencies that use the build numbers (like binding files).
If the development team follows the approach of incrementing assembly version numbers for every build then there are several possible ways of automating the required changes in version numbers and binding files. These options are discussed in the section Automating Build and Deployment Version Numbering.
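One common automation step is rewriting the AssemblyVersion attribute in each project's AssemblyInfo.cs with the current build number. A minimal sketch, assuming the conventional attribute syntax (the version values are illustrative):

```python
import re

# Sketch: substitute the current build number into the AssemblyVersion
# attribute of an AssemblyInfo.cs file's text. Values are illustrative.
def set_assembly_version(source: str, new_version: str) -> str:
    return re.sub(
        r'\[assembly: AssemblyVersion\("[^"]*"\)\]',
        f'[assembly: AssemblyVersion("{new_version}")]',
        source,
    )

text = '[assembly: AssemblyVersion("1.0.0.0")]'
print(set_assembly_version(text, "1.0.42.0"))
# [assembly: AssemblyVersion("1.0.42.0")]
```

A build script would run this over every AssemblyInfo.cs before compiling, and then patch the dependent binding files with the same number.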
Which versioning approach should you use? For the majority of developments, Approach 1 (fixed assembly versions for non-shipping builds) is typically suitable. It requires fewer resources to test and, as long as informational file versions are kept up to date, issues remain detectable and resolvable using the informational version number.
Versioning in BizTalk should be a deliberate act. Especially when artifacts are divided into multiple assemblies, how and which artifacts are versioned matters: because BizTalk artifacts depend on each other, a careless version change can easily break the production environment.

Map Function Version Numbering

.NET assemblies can be invoked from within a map (using the scripting functoid, found under the advanced functoids palette) and this functionality provides a great deal of flexibility for delivering custom map functionality. However it is important to understand that the internal representation of the map files references not only the assembly type name but the full assembly version number.
This is significant because it means that if the version number of the assembly called by the map changes, then any links that reference the assembly will break. In a complex map this can cause considerable rework.
To avoid this issue, we recommend that if assemblies are required to be called from a map, you create a specific assembly to hold only map functionality and the assembly version number of this assembly is fixed. In this way, other helper functions can have the assembly version updated without breaking the maps.
If an assembly referenced from a map is changed after map development, then consider updating the map file in a text based editor to reflect the updated version numbers.
To achieve this use the following steps:
  • Do not open the map by double-clicking it because this will cause the links to break and incur rework.
  • Open the map using the Visual Studio XML editor: right-click the map in Solution Explorer, click Open With, and then choose the HTML/XML editor.
  • After the instances of the "map only" assembly reference number have been located in the XML structure, replace them with the updated version number by using the editor's search-and-replace function.
  • The map can then be saved, closed, and opened using the standard BizTalk Server map editor.
The version number of helper classes referenced by project references or file reference and used by orchestrations (for example, from within expressions) can be changed without encountering the issue described above.
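The text-based fix-up described above can also be scripted instead of done by hand in the XML editor. A sketch under the assumption that the old version string appears only in the intended assembly references (the file name and version values are hypothetical):

```python
import re
from pathlib import Path

# Sketch: replace the old "map only" assembly version with the new one
# throughout a .btm file without opening it in the Mapper.
# The path and version values are hypothetical examples.
def patch_map_file(path: Path, old_version: str, new_version: str) -> int:
    text = path.read_text(encoding="utf-8")
    patched, count = re.subn(re.escape(old_version), new_version, text)
    path.write_text(patched, encoding="utf-8")
    return count  # number of references updated

# Usage (illustrative):
# patch_map_file(Path("LoanRequest_To_CreditRequest.btm"), "1.0.0.0", "1.0.1.0")
```

Returning the replacement count gives a quick sanity check that the expected number of references was actually touched.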

Developing BizTalk Server Orchestration Designs

After the completion of the information-gathering phase, described in the first section of this document, it is usual for the team to begin to produce design documents for the BizTalk Server orchestrations. These documents are typically then used to validate the proposed designs with system and process owners prior to commencing the development phase.
One of the primary benefits of the BizTalk Orchestration model is the great clarity you can achieve when a software implementation is pictorial. Regardless of how well a developer comments code, there will always be a need to maintain a separate set of artifacts including process diagrams, Visio® diagrams (with varying shape usage), documents, and whiteboard discussions that are used to convey what the code is actually doing. This is especially true when working with a business audience.
When working with a BizTalk Orchestration, the diagram is the implementation of a piece of functionality (at least at a specific level). This provides opportunities to use an orchestration diagram in several helpful ways within a project lifecycle, as follows:
  • To capture an initial high-level design, using all the orchestration shapes as intended but without fully implemented maps and schemas. These are often called skeleton designs. All of the external system interactions, communication patterns, decision points, parallel versus joined flows, and so on, can be represented at this point in a skeleton orchestration.
  • To gain consensus with the development team and business sponsor about the functionality. An efficient route to discussion can be to place the high-level skeleton design orchestrations on a wall with project stakeholders and then walk through the processes.
  • To estimate the number of tasks required and the level of effort needed to develop the solution. The various messages and shapes in your initial orchestration can often provide a reasonably granular approach for time estimating. If the complexity of messages is taken into account it is possible to construct a draft project plan that includes orchestration entities. If this draft plan is tested early on in the project, the approach can be refined to provide a reliable model for estimation.

Skeleton Orchestration Design

One approach to developing BizTalk Server orchestration designs is to use Visual Studio to produce "skeleton" projects. The skeleton projects provide a way of validating the assumptions about the number and type of messages and the operations taking place upon the messages to complete a process.
These skeleton projects exhibit the following attributes:
  • Projects should be divided up in accordance with the planned schema and process ownership.
  • Projects contain skeleton schema with correct schema names. These schema contain correctly named root nodes and correct target namespaces, but need not contain any further schema detail.
  • Projects contain skeleton maps that have a source and destination schema but need not have any links.
  • Orchestrations reference the appropriate skeleton schema.
  • Orchestrations contain messages and variables created from the skeleton schema. This can be useful to understand which orchestrations share messages.
  • Orchestrations contain the receive and send operations dictated by the design.
  • Orchestrations contain the process logic associated with the process, including send, receive, mapping, and logic operations.
  • Expression shapes contain //comments that describe the functionality of the expression. The expression shapes also need to contain at least one valid line of expression to allow the skeleton solution to compile without errors (for example, System.Diagnostics.Trace.WriteLine("In expression 1");).
As the designs are progressed, the skeleton projects can be used as the starting point for the BizTalk Server development phase. To add documentation to a group of related workflow shapes, use a Group shape. These display as much text as you care to associate with the shape. Many of the other orchestration shapes can hold descriptions and it can be useful to use this to express planned implementation detail.
To make best use of the BizTalk Server skeleton designs, it is helpful to use a documentation tool to export the skeleton design. For more information, see the Documenting BizTalk Server Systems section.
The BizTalk Software Factory referenced in the references section can help in generating a standardized solution and project structure. It can be customized or extended to adhere to specific company standards.

BizTalk Server Naming Conventions

This section provides guidelines on naming projects, assemblies, orchestrations, messages, and other artifacts within BizTalk Server.

XML Namespace Conventions

Prior to starting BizTalk Server development, it is good practice to ensure that a standard is created for new XML target namespaces. The actual standard is often less important than the uniformity it confers across the project. An example target namespace naming policy might be:
  • <organizationname> is the name of the organization as used in the organization's domain name.
  • <system> is the business system that the message relates to.
  • <direction> and <operation> are optional; you can leave them out for canonical schemas.
  • <operation> is the operation on the interface, for example "ProcessOrder". In a canonical schema, typically the name "Order" is used.
  • <direction> is the direction of the message from the point of view of the system referred to by <system>.
  • <direction> is typically "incoming" or "outgoing" for unidirectional messages.
  • <direction> is typically "IncomingRequest", "OutgoingResponse" for bidirectional and synchronous systems.
  • <version> is optional; a year, date, or version number can be used as part of the message standard. Be aware, however, that changing these will result in rework for the BizTalk Server developers, because the maps and orchestrations will need to be updated to point to the "new" schemas; this can be a significant cost in a large solution. It can be partly overcome by using multipart message instances inside an orchestration, because the multipart message will be referenced instead of the message type.
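The exact namespace template is project-specific and is not reproduced here, so the following sketch assumes an illustrative layout of http://<organizationname>/<system>/<direction>/<operation>/<version>; all names and values are hypothetical.

```python
# Illustrative sketch only: the exact target namespace template is
# project-specific. This assumes the layout
# http://<organizationname>/<system>/<direction>/<operation>/<version>,
# with <direction>, <operation>, and <version> optional (omitted for
# canonical schemas). All names below are hypothetical examples.
def target_namespace(org, system, operation=None, direction=None, version=None):
    parts = [f"http://{org}", system]
    for part in (direction, operation, version):
        if part:
            parts.append(part)
    return "/".join(parts)

print(target_namespace("contoso.com", "OrderSystem",
                       operation="ProcessOrder",
                       direction="IncomingRequest", version="2008"))
# http://contoso.com/OrderSystem/IncomingRequest/ProcessOrder/2008
```

A canonical schema would omit the direction and operation segments, yielding, for example, http://contoso.com/OrderSystem.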

BizTalk Artifact Namespaces

Many artifacts within a BizTalk solution have a standard .NET namespace associated with them. The guidance on .NET namespace naming should be adhered to for BizTalk artifacts.
Note that these namespaces are Pascal-cased, and nested namespaces will have dependencies on types in the containing namespace.
For more information about naming guidelines, see Namespace Naming Guidelines.

BizTalk Projects, Assemblies and Applications

BizTalk project names and assembly names should often match the name of the associated namespace, such as the following name:
A division into assemblies such as the following will often be quite suitable for a BizTalk project:
BizTalk Server provides a means to logically deploy BizTalk artifacts in containers called Applications. Applications can be the means of deployment and group related BizTalk artifacts together. In terms of naming, often the names of the (sub)system or overall solution can be used.

Orchestration Naming Conventions

When deployed, an orchestration "type name" is the name that is used and displayed by the BizTalk Server administration tools to refer to the orchestration. When creating a new orchestration, this property defaults to "orchestration_1"; change it to a descriptive name to help distinguish the orchestration.

Messaging Artifact Naming

All artifacts should be named using Pascal case unless mentioned otherwise for a specific artifact; underscores are used to separate logical entities.
For schemas, maps, orchestrations, and pipelines, ensure that the .NET type name matches the file name (without file extension).
The following table shows an example messaging artifact naming convention. Note that the .NET type name of the artifacts listed above should match (without the file extension) the file name. The .NET namespace will likely match the assembly name.
Table 16 - Example Messaging Artifact Naming Convention
Artifact | Standard | Notes and Examples

Schema file
Standards include XML, X12, FlatFile (FF), and other custom formats. If the root node does not distinguish the schema, use a descriptive name. Example: PurchaseOrderAcknowledge_FF.xsd or

Property schema file
<PropSchema>_<Standard>.xsd
Should be named to reflect its common usage, across multiple schemas if appropriate. Prefixed with "Prop_" to help distinguish it from standard message schemas. Example:

Map file
<SourceSchema>_To_
If an XSLT file is specified for the map, the XSLT file should be named identically with the .xsl extension. If an orchestration contains multiple instances of messages derived from the SourceSchema, use the message instance names instead. If you have multiple input or output schemas, separate them with an underscore: <SourceSchema1>_<SourceSchema2>_To

Orchestration file
A meaningful name that represents the underlying process.

Send/Receive pipeline file
Rcv_<SchemaName> or Snd_<SchemaName> or
A pipeline might be used to ensure reception of particular schema(s), or to perform some other function. Example: Rcv_PurchaseOrderAck_FF

Receive ports
Use a functional description if the input schema (and potentially the output schema, if request/response) does not adequately describe the port. (There is no need to add Snd/Rcv/Port, and so on, because ports are grouped accordingly in the administration tools.)
(for request/response port)
PurchaseOrder_XML or
(for one-way port)

Receive locations
<ReceivePortName>_<Transport>

Send port groups
<FunctionalDescription>

Parties
A meaningful name for a trading partner. If dealing with multiple entities within a trading partner organization, the organization name could be used as a prefix.

Roles
A meaningful name for the role that a trading partner plays.

Orchestration Shape Naming

Establishing and following naming conventions are good practices for designating variables, messages, multipart types, and so on, but they become even more important for the workflow shapes contained within an orchestration. The goal is to ensure that the intent of each shape is clear, and that the text associated with the shape conveys as much as possible given the space constraints. In this way, a non-technical audience will be able to use the orchestration as documentation.
Note  To add documentation to a group of related workflow shapes, use a Group shape. These display as much text as you care to associate with them, and can add quite a bit of documentation value to the diagram. However, if you intend to instrument the application using BAM, be aware that a group shape does not exist at runtime and as such cannot be part of a BAM tracking profile. If this is the case, you could use a non-transactional scope shape instead.
The following table shows an example orchestration shape naming convention. Shape types are not used in this naming convention to save space (allowing more detailed functional descriptions); use the tooltip in the administration console when that information is required.
Table 17 - Example orchestration shape naming conventions
  • Scope: <DescriptionOfContainedWork>. Including brief information about the transaction type may be appropriate.
  • Receive: <MessageName>. Typically, MessageName will be the same (though Pascal-cased) as the name of the message variable being received into (the message variable itself is camel-cased).
  • Send: <MessageName>. Typically, MessageName will be the same (though Pascal-cased) as the name of the message variable being sent (the message variable itself is camel-cased).
  • Expression: <DescriptionOfEffect>. Expression shapes should be named with the Pascal convention (no prefix) to simply describe the net effect of the expression, similar to naming a method.
  • Decide: <DescriptionOfDecision>. Decide shapes should carry a full description of what will be decided in the "if" branch.
  • If-Branch: <DescriptionOfDecision>. If-branch shapes should carry a (perhaps abbreviated) description of what is being decided.
  • Else-Branch: Else. Else-branch shapes should always be named "Else".
  • Construct Message (Assign): <Message> for the Construct shape, <ExpressionDescription> for the contained assignment. If a Construct shape contains a message assignment, it should carry an abbreviated name of the message being assigned; the contained Message Assignment shape should be named to describe the expression it contains.
  • Construct Message (Transform): <SourceSchema>To<DestSchema> for the Construct shape, X_<SourceSchema>To<DestSchema> for the contained transform. If a Construct shape contains a message transform, it should carry an abbreviated description of the transform (that is, source schema to destination schema); the contained Transform shape should generally be named the same as the containing shape, with an "X_" prefix (for example, "X_LoanRequestToCreditRequest").
  • Construct Message (containing multiple shapes): If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.
  • Call/Start Orchestration: <OrchestrationName>.
  • Throw: <ExceptionType>. The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased; for example, RuleException, which references the "ruleException" variable.
  • Parallel: <DescriptionOfParallelWork>. Parallel shapes should be named with a description of what work will be done in parallel.
  • Delay: <DescriptionOfWhatWaitingFor>. Delay shapes should be named with an abbreviated description of what is being waited for.
  • Listen: <DescriptionOfOutcomes>. Listen shapes should be named with an abbreviated description that captures (to the degree possible) all the branches of the Listen shape.
  • Loop: <ExitCondition>. Loop shapes should be named with an abbreviated description of the exit condition.
  • Role Link: See "Roles" in the messaging naming conventions above.
  • Suspend: <ReasonDescription>. Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property and should include what the administrator should do before resuming the orchestration.
  • Terminate: <ReasonDescription>. Describe why the orchestration terminated. More detail can be passed to the error property.
  • Call Rules: <PolicyName>. The policy name may need to be abbreviated.
  • Compensate: Compensate or Compensate<NestedTxName>. If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise the shape should simply be named Compensate.
For documentation purposes, developers should add descriptive text to each shape's Description property. The BizTalk Server Documenter tool on CodePlex (see Documenting BizTalk Server Systems) can generate Microsoft Word documents or a compiled help file containing these shape descriptions. These documents can be used as the basis of system documentation.

Orchestration Type Naming

The following table shows an example orchestration type naming convention.
Table 18 - Example Orchestration Type Naming
  • Multi-part message types: <LogicalDocumentType>. Multi-part types encapsulate multiple parts. The WSDL specification notes that "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, that is, what the sum of the parts describes; for example, a type that encapsulates an invoice acknowledgment and a payment voucher.
  • Multi-part message parts: <SchemaNameOfPart>. Each part should be named simply for the schema (or simple type) associated with it.
  • Messages: See the Message Instance Naming section.
  • Port types: PascalCased. Should be named to suggest the nature of the endpoint. If the orchestration is exposed as a Web service, the port type name is exposed in the generated WSDL; for this reason port types are not suffixed with "PortType", to avoid "PortType" being visible in the external interface. If there will be more than one port for a port type, the port type should be named according to the abstract service supplied.
  • Ports: PascalCased, suffixed with "Port". Should be named to suggest a grouping of functionality. Typically the port name should reflect the likely name of the physical port (created in the binding files); this will aid configuration.
  • Correlation types: PascalCased, based on the logical name of what is being used to correlate.
  • Correlation sets: camelCased, based on the corresponding correlation type. If there is more than one, each should be named to reflect its specific purpose within the orchestration.
  • Orchestration parameters: camelCased; should match the caller's names for the corresponding variables where appropriate.

Message Instance Naming

When naming messages within orchestrations, you should use a standard naming convention to avoid the confusion that can arise when messages are traveling between multiple systems in both directions.
Consider naming messages from the point of view of the integration layer (that is, incoming means incoming to the integration layer, outgoing means leaving the integration layer). It may help to include underscores (_) in the names.
The following table shows an example message naming convention.
Table 19 - Example Message Naming Convention
  • <SystemName><OperationName>Incoming: a message received from an asynchronous system by the integration layer.
  • <SystemName><OperationName>Outgoing: a message being sent to an asynchronous system by the integration layer.
  • <SystemName><OperationName>OutRequest: the outgoing request message sent to a synchronous system by the integration layer.
  • <SystemName><OperationName>OutResponse: the associated incoming response message received from a synchronous system by the integration layer.
  • <SystemName><OperationName>InRequest: the incoming request message received from a synchronous system by the integration layer.
  • <SystemName><OperationName>InResponse: the associated outgoing response message sent to a synchronous system by the integration layer.
If the messages being described travel between multiple possible systems, then a naming convention based upon system names may not be meaningful. An alternative approach is to name the messages after their schema, together with a description of each message's usage in the orchestration.

Documenting BizTalk Server Systems

The configuration of a deployed BizTalk Server system is stored within the configuration database of the BizTalk Server group. Tools have been written to query the configuration database and produce formatted output listing this configuration information, helping you document BizTalk Server systems.
These tools can be useful for:
  • Documenting a skeleton design early in the design process
  • Producing complete system documentation
  • Validating a newly deployed system against an expected system configuration
You can find a BizTalk Server reporting tool that generates a compiled help file detailing the BizTalk Server configuration. For more information, see the BizTalk Server 2006 Documenter project on the CodePlex Web site.
This tool creates compiled help (*.chm) to quickly document the many artifacts (hosts, ports, orchestration diagrams, schemas, maps, pipelines, adapters, rule engine vocabularies and policies, and more) and their dependencies within a BizTalk Server environment.

Testing BizTalk Server Solutions

By the nature of integration development, testing BizTalk Server solutions involves exchanging data with other systems. To functionally test BizTalk Server processes, you must obtain suitable test data and have access to the appropriate method of interacting with this data. In some cases you can interact, during development, with the actual systems with which the integration layer exchanges data. In many cases, however, this will not be possible, and it is typical to use test harnesses and test stubs to allow development to proceed in the absence of the actual systems.

Test Data

Test data is a very significant resource when developing an integration solution. Without valid test data you often cannot test a business process fully. When planning an integration development, you must ensure that the project includes sufficient resources and time to obtain valid data for testing. This is particularly true when the test data needs to be produced from systems that have never previously been integrated.
If data flows already exist, capturing example messages for all existing business operations can be extremely beneficial to the design and testing of the integration solution.

Test Harnesses, Test Stubs, and Mocking

For integration environments, test harnesses, test stubs and mocking can be defined as the following:
  • Test harness. Code or utilities used to allow the testing of an integration process. Typically a test harness is used to submit data to the integration layer to initiate a process.
  • Test stub. Code or utilities used to simulate a system that is not available, but from which a response is required to allow the testing of a process. Typically a test stub is used to accept data as if it was the unavailable system and respond with the appropriate response, thereby allowing the process to continue despite the absence of a system.
  • Mocking. A mock is a dynamically injected dependency in a unit test that simulates the behavior of the dependency in a controlled way.
Example test harnesses include:
  • A Visual Studio 2008 project simulating an unavailable system by making a Web service request into the integration layer using the WSDL "contract" file that describes the final SOAP calls between the system and the integration layer.
  • An ASP.NET application simulating an unavailable system by performing an HTTP POST to the integration layer, passing a sample message and displaying the responding XML output and HTTP response code.
  • Custom code that uses the System.Xml.XmlDocument classes to produce a specified number of test input XML files with varying values and sizes to allow the volume testing of a batch business process.
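As an illustration of the last harness type, the following sketch uses System.Xml.XmlDocument to emit numbered test instances with varying values. The element names, namespace-free placeholder schema, instance count, and file naming are illustrative assumptions only, not part of any real interface:

```csharp
// Sketch only: generates ten test XML files with varying values for
// volume-testing a batch process. Adjust the template, count, and
// output path to match the real input schema.
using System.Xml;

class TestDataGenerator
{
    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            XmlDocument doc = new XmlDocument();
            doc.LoadXml("<Order><Id/><Amount/></Order>");    // placeholder schema
            doc.DocumentElement["Id"].InnerText = i.ToString();
            doc.DocumentElement["Amount"].InnerText = (100 + i * 10).ToString();
            doc.Save(string.Format("TestOrder_{0}.xml", i)); // one file per instance
        }
    }
}
```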
Example test stubs include:
  • A Visual Studio 2008 project that simulates an unavailable system by accepting incoming Web service requests from the integration layer and returning a response. The project will potentially use some simple logic to vary the response values for testing purposes. The Web services WSDL may be provided from a system that already exists or that is to be built using the WSDL as the agreed interface.
  • An ASP.NET application that accepts HTTP POSTS from the integration layer and returns one of several possible XML messages based upon some simple logic.
Example mocks include:
  • A unit test that sets expectations regarding the behavior of an orchestration's outgoing port to a Web service. The mock can simulate correct or incorrect responses.
  • Mocks can stand in for several kinds of artifacts, such as receive locations, one-way or solicit-response ports, message instances, databases, the event log, rules, and pipelines.
It is generally recommended that test harnesses, test stubs, and mocks are developed early in the project because they are a useful resource and they also help develop a deeper understanding of the actual interactions between the integration layer and the other systems.
For more information about example test stubs and harnesses, see the Additional Resources section.

Enable Runtime Schema Validation

By default BizTalk Server does not perform schema validation against incoming XML messages entering the BizTalk Server orchestration runtime. In this configuration BizTalk Server only examines the root node and target namespace of the incoming document. Comparing every incoming XML instance against its schema is a resource-intensive process and this default setting allows messages to enter BizTalk Server with the maximum throughput. Typically the default "no validation" setting is used for production environments.
During the development period, we recommend that runtime schema validation is switched on. By validating messages during development potential errors relating to badly formed messages are caught early in the development process rather than later, when they are typically more difficult to fix.
In some cases it may be applicable to switch on message validation in the production system (especially if messages may be coming from systems that may not have been through a rigorous integration testing process). If runtime validation is switched on, then be aware of the impact on throughput and capacity. Consider using the BizTalk Server performance counters to understand the impact.
To turn on runtime schema validation:
  1. Make a backup copy of the existing BTSNTSvc.exe.config file (found under C:\WINDOWS\Microsoft.NET\Framework\<FrameworkVersion>\config).
  2. Modify the BTSNTSvc.exe.config file by adding the configuration section shown below.
<?xml version="1.0" ?>
<configuration>
  <xlangs>
    <Configuration>
      <Debugging ValidateSchemas="true" />
    </Configuration>
  </xlangs>
</configuration>
  3. Restart the BizTalk Server service to pick up the changes.
For more details on the runtime configuration, see Runtime Validation for the Orchestration Engine in the BizTalk Server documentation.

Switching On Runtime Schema Validation in Non-Orchestration Solutions

The preceding approach switches on validation of messages entering orchestrations. Enabling validation of incoming messages in a "messaging only" solution requires setting the configuration options on the XML and flat file disassemblers and using the XML Validator component in pipelines. For more information, see XML Disassembler Pipeline Component and Flat File Disassembler Pipeline Component in the BizTalk Server documentation.


When putting in place the BizTalk Server team development and test environments, it is important to make certain the platform and environment are stable. The following sections detail techniques and approaches that will help deliver a stable environment and a stable development process.

Backing up development data

Your Team Foundation Server environment provides a central repository that contains all of the source code. Having a backup strategy for Team Foundation Server is crucial in order to return to a consistent state during disaster recovery. For more information about setting up a backup strategy, see the Administration Guide for Microsoft Visual Studio Team System 2008 Team Foundation Server.
The recommended approach to using source control is to check code into the main branch only when it has passed functional testing. This means that a considerable period of time can pass between check-in operations, during which time the code is not in the backed-up source control environment. To mitigate the failure of a developer workstation, ensure that the source code is backed up: use the features described in the Checking In Intermediate Versions section, and make use of the Shelving and Branching features of Team Foundation Server.

Debugging and Tracing

When developing orchestrations it is important to be able to understand the status of an orchestration both during and after execution. The BizTalk Management Console provides the capability to view the flow of an orchestration after it has completed, through the Tracked Message Events and Tracked Service Instances queries. To place a breakpoint within an orchestration, see Debugging an Orchestration in the BizTalk Server documentation. In BizTalk Server 2009, the BizTalk compiler also creates a .cs file for each orchestration in your solution. The .cs file can be found in the obj\Debug\BizTalk\XLang folder. You can open the .cs file in Visual Studio 2008 and leverage the Visual Studio debugger to attach to the BTSNTSvc.exe process and debug the code. Although tracking is effective, it can also be useful to develop orchestrations that write out debugging information as they run.
One of the techniques most commonly used when debugging traditional code is to write out useful data during execution, allowing observation of the data and processes actually executing. The same functionality can be achieved in BizTalk Server orchestrations by using the following technique.
Within the .NET class library, the System.Diagnostics.Trace class allows the outputting of trace and debug information to listeners, which can capture and log the information. It is possible to write listeners using .NET code (see "Debug Class" in the .NET documentation), but it is also possible to use readily available utilities.
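For example, a listener can be attached in ordinary .NET code. The sketch below (the log file name is an arbitrary choice) routes Trace output to a file in addition to any attached debugger or DebugView session:

```csharp
// Sketch: attach a file-based trace listener so Trace.WriteLine output
// is captured to disk as well as by the debugger/DebugView.
using System.Diagnostics;

class TraceListenerDemo
{
    static void Main()
    {
        Trace.Listeners.Add(new TextWriterTraceListener("orchestration.log"));
        Trace.AutoFlush = true;                     // flush after every write
        Trace.WriteLine("Entering Orchestration");  // written to the log file
    }
}
```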
When DebugView is running, it will catch and log the output of any debug statements, allowing the developer to view progress and actual data within the process. Note that debug statements also exist within other Microsoft applications (including BizTalk Server), so expect to see debug output from other applications too.
The following steps describe how to output a hard-coded string containing a process name and version number:
  1. In Visual Studio 2008, open an existing orchestration, which can successfully be deployed and executed.
  2. Create a new expression shape below the start shape of the orchestration.
  3. Edit the expression as follows:

    System.Diagnostics.Trace.WriteLine("Entering Orchestration");
  4. Compile, deploy, and start the orchestration as normal.
  5. Start DebugView or an alternative debug listener.
  6. Activate the BizTalk Server process and observe the output in DebugView as the orchestration is started by the BizTalk Server runtime.

Debugging Message Content from an Orchestration

The Trace.WriteLine technique described above can also be used to write out the contents of messages and variables at runtime. This can be useful when working with complex orchestrations, for example, multi-map transformations where it would normally not be possible to observe the intermediate messages produced by maps inside the orchestration.
The following procedure describes how to write out the contents of a populated orchestration message (named myMsg) to a debug listener:
  1. Within an orchestration add the following variables:

    myString, declared as an orchestration variable of type System.String
    xmlDoc, declared as an orchestration variable of type System.Xml.XmlDocument (found under .NET classes, System.Xml)
    myMsg, the orchestration message whose contents are to be output
  2. Convert the message into a string and output the string in an expression block by adding the following expression to an expression shape:

    xmlDoc = myMsg; //create XML doc from the BTS message
    System.Diagnostics.Trace.WriteLine(xmlDoc.OuterXml); // output the xml as a string
  3. Run the orchestration with a debug listener running. The XML representation of the message will be output. 

    It is also worth noting that it is possible to perform the reverse of the "Message to string" operation and create a BizTalk Server message with an XML string. 

    To convert the string back to a document in a message construct, use the following code:

    myMsgFromString = xmlDoc;
To achieve this, the following requirements are necessary:
  • The orchestration message myMsgFromString must be defined as having a type derived from a valid schema.
  • The XML being assigned to the message should conform to the schema of the message.
  • The expression shape needs to be enclosed within a message assignment shape, which is explicitly creating message myMsgFromString.
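Putting these requirements together, the Message Assignment shape that constructs myMsgFromString might contain an expression along the lines of this sketch (variable names follow the earlier steps; myString is assumed to hold XML that conforms to the message's schema):

```
// Inside the Message Assignment shape constructing myMsgFromString:
xmlDoc = new System.Xml.XmlDocument();
xmlDoc.LoadXml(myString);        // myString must conform to the message schema
myMsgFromString = xmlDoc;        // assign the document to the constructed message
```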

Debugging Maps

Viewing the map output is an important step when testing maps. The resulting XML produced by the map is saved to disk, even if the output is not valid according to the destination schema. Examining this output can be valuable in understanding why the output of the map is not valid.
Compare the resulting output XML against the schema definition to determine why the error is occurring. The map output document can be seen by looking in the temporary location for the output of a map, for example:
  • C:\Users\<username>\AppData\Local\Temp\2\_MapData
The output document is also available by CTRL-clicking the file name in the output window after performing the test map operation.
When developing maps, if map links or functoids don't seem to be producing the expected output or are producing errors, then consider examining the XSLT to work out what node data is being processed and where the results are being written. The XSLT is produced when validating a map and it represents the actual mapping that the BizTalk Server runtime will perform. By reading the generated XSLT it is sometimes possible to observe the reason that a map is producing unexpected output.
The XSLT is usually quite simple to follow, typically consisting of many simple XPath queries. By reading the XPath statements it is often possible to clearly see the input values and any operators, followed by the nodes to which the output is written.
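As a purely illustrative example of what the generated XSLT can look like (the schema and node names are invented here; real map output also carries namespace declarations and functoid support script):

```xml
<!-- Hypothetical fragment of map-generated XSLT -->
<xsl:template match="/s0:LoanRequest">
  <ns0:CreditRequest>
    <Name><xsl:value-of select="Applicant/Name/text()" /></Name>
    <Amount><xsl:value-of select="Loan/Amount/text()" /></Amount>
  </ns0:CreditRequest>
</xsl:template>
```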
The XSLT is produced during a map validation and is located at:
  • On Windows Vista: C:\Users\<username>\AppData\Local\Temp\_MapData
The Visual Studio debugger supports debugging XSLT, including setting breakpoints, viewing XSLT execution state, and so on. When working with maps in Visual Studio 2008, the XSLT debugger can be started from Solution Explorer by right-clicking a map and selecting Debug Map.
For more information on debugging XSLT in Visual Studio, see Debugging XSLT.

Debugging Custom .NET Methods within the Scripting Functoid on a Map

BizTalk Server supports the creation of maps that use the scripting functoid. Users can create their own .NET methods and use them from within the scripting functoid to allow better reuse of custom code and functionality. Debugging these functions when testing a map can be very useful to determine if your custom map functions are working properly, but the steps to debug these custom assemblies are not obvious. Here are the steps you need to take in order to directly debug your scripting assembly while testing your map:
  1. Open your assembly class in its own Visual Studio 2008 development environment. Build the assembly normally, making sure that the build is in debug mode and a symbols file is created.
  2. Deploy your custom assembly to the global assembly cache normally – make sure you deploy the version from your bin\debug folder.
  3. Open up your BizTalk project in a second Visual Studio 2008 development environment, reference your custom assembly DLL in the same directory above, and configure your scripting functoid as normal (you should see it as a choice in the list of available assemblies).
  4. Return to your custom assembly environment and, from the Debug menu, choose Attach to Process and attach to the devenv.exe process hosting your BizTalk project. Initially, when you set a breakpoint, the environment will report that symbols are not loaded. This is normal, because the assembly is not loaded until it is actually invoked by the mapper.
  5. Return to your BizTalk project and test the map (using right-click Test Map). If you watch your assembly environment, the symbols get loaded and the "?" goes away, at which time your breakpoint should hit.
  6. Debug in the normal fashion. Your breakpoint will be hit every time the functoid is called.
Error: "Map contains a reference to a schema node that is not valid"
This design-time error can occur when the map being opened uses a project dependency for one of its schema. If this error occurs ensure that the dependency has been recompiled to reflect any recent changes.
To understand why this error arises, consider the following scenario:
  • Project Y references project X. Project Y uses schemas that are contained in project X.
  • The schema information used by the map editor in project Y is actually contained within the compiled DLL produced by project X.
  • This means that if the DLL from project X is deleted, the maps in project Y will not compile until project X is rebuilt to produce its DLL again.
This is significant when a source control system is being used to store projects. By default, Visual Studio 2008 does not store DLLs in source control, so after retrieving a BizTalk Server project containing schemas, the project will need to be recompiled to produce the DLLs that dependent projects need.
Error: "Element cannot contain text or white spaces. Content model is empty"
This runtime error can occur when using complex content elements (like <ANY> or <SEQUENCE>). Complex content elements are not allowed to contain text unless their Mixed property is set to True.
For example, if a schema uses complex content elements, the XML <Root> </Root> (note the white space between the tags) will produce the above error, while <Root/> will not, because no white space exists. To avoid this error, set the Mixed property of the root node of the schema to True, as shown in Figure 11.
Figure 11 - Schema Root Node Mixed Property
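In the underlying XSD, the Mixed property corresponds to the mixed attribute on the complex type. An illustrative fragment (element name invented):

```xml
<!-- mixed="true" permits text, including white space, inside Root -->
<xs:element name="Root">
  <xs:complexType mixed="true">
    <xs:sequence />
  </xs:complexType>
</xs:element>
```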

Debugging Adapters and Pipelines

You can debug adapters by using a similar approach to that outlined in the Debugging Custom .NET Methods within the Scripting Functoid on a Map section. Be aware that although adapters do not need to be in the global assembly cache for the runtime to find them, they do need to be in the global assembly cache for Visual Studio 2008 to find them when debugging. Make sure that adapter assemblies are in the global assembly cache before attaching to the BTSNTSvc.exe to debug them.

Debugging Web Services Called by BizTalk Server

A common integration requirement is to call a Web service from an orchestration, and then debug the Web service to determine what takes place within it when called from BizTalk Server. The following steps describe how to debug this scenario. The orchestration is shown in the following figure. It receives a message from the MessageBox, transforms it into the message expected by the Web service, sends it to the Web service, and receives a response.
Figure 12 - Orchestration Calling a Web Service
To debug the Web service, perform the following steps:
  1. Open Visual Studio.
  2. Open the solution (or project) that contains your Web service.
  3. Set the breakpoint.
  4. On the Debug menu, select Attach to Process. The Attach to Process dialog box appears.

    Figure 13 - Process Dialog Box
  5. Select w3wp.exe

    The Web service is hosted in a worker process called w3wp.exe. This process is the executable you need to attach to in order to debug ASP.NET applications. On a system hosting multiple Web applications, multiple w3wp.exe processes are likely present. In that case, the appcmd.exe tool, found under %windir%\system32\inetsrv, can prove helpful: running "appcmd.exe list wp" lists all worker processes on the machine.
  6. Select Attach to attach the debugger to the selected process. The Attach to Process dialog box in Visual Studio 2008 automatically determines the available program types for debugging, but this list of program types can be changed by clicking the Select button.

    Figure 14 - Attach to Process Dialog Box

    In this dialog box, choose Debug these code types, select the Managed type, and then click OK.
    The Attach to Process dialog box closes and the debugger is attached.
The Web service is now ready to be debugged. Send the message to the orchestration and Visual Studio will stop at the breakpoint.

Debugging Pipelines

Pipeline assemblies are often used in BizTalk Server solutions to provide special handling of files before they are sent into or out of the BizTalk MessageBox. Debugging pipeline assemblies in Visual Studio 2008 is critical in order to quickly troubleshoot and fix problems.

Copying and Debugging Assemblies

After you build the pipeline assembly successfully, you will need to copy the files to the "Pipeline Assemblies" directory in the BizTalk Server installation directory (C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components) or register them in the Global Assembly Cache. If you are updating an assembly that has already been installed and you cannot copy over the existing assembly there are two possible causes:
  • The BizTalk Server process may have the assembly loaded. Stop the BizTalk service, copy over the existing assembly, and then restart the service.
  • A BizTalk project containing a pipeline that uses your assembly may have it loaded: the Visual Studio 2008 IDE may be using the design-time aspects of the pipeline assembly. To release it, close Visual Studio 2008 (it may be necessary to close all instances, not just the pipeline project).
After the file has been copied, you attach to the BizTalk process:
  1. With the pipeline assembly project open, on the Debug menu, choose Attach to Process.
  2. From the Available Processes list, select the BizTalk Server process (BTSNTSvc.exe), and then click Attach.
For information about the dialogs displayed when attaching to a process, see Debugging Adapters and Pipelines.
Note  After changing a pipeline, the BizTalk Server runtime will not immediately pick up the change. The changes will not be visible until the runtime refreshes its configuration. This can be forced by enabling the Restart Host Instances option on the project properties as shown in Figure 15, or by restarting the hosts manually through BizTalk Server Administration or via a script.
Figure 15 - Restart Host Instances option
Note  If the Redeploy option is set to True, a modified pipeline can be redeployed without undeploying the previous version. However, be aware that this redeploy action will reset any send or receive ports that used the pipeline to "PassThrough". After the redeploy, you need to reassign those ports to the custom pipeline.
It is also possible to debug custom pipelines using Pipeline.exe, as described in Pipeline Tools in the BizTalk Server help. This tool, which can be found in the <InstallationFolder>\SDK\Utilities\PipelineTools directory, allows the developer to debug a pipeline without deploying it to an actual BizTalk Server. To use pipeline.exe, perform the following steps:
  1. Load the custom pipeline project in Visual Studio.
  2. In the project properties, change the project build output path for the solution to the <InstallationFolder>\Pipeline components directory.
  3. In the project properties, under Debug, configure pipeline.exe as the program to start when debugging, with command-line arguments specifying the pipeline and an input document.

  4. Configure breakpoints, and press F5 to start debugging the pipeline.

Helper Classes

It can be useful to include helper classes within a BizTalk Server project to perform business logic tasks or functions that aid the development process. A typical task of a developer helper class would be to write messages out to the file system to aid debugging and diagnostics.
Included in the samples accompanying this whitepaper is a template for a generic BizTalk Server helper class that manipulates messages. It provides limited functionality to aid the debugging of messages, including a DumpMessageToFile function that writes a BizTalk Server message to the file system.
This sample can be modified to handle more complex functionality. To use the sample within a BizTalk Server project complete the following steps:
  1. Compile the GeneralHelper project.
  2. Copy GeneralHelper.dll to a permanent location on the local file system.
  3. GAC the DLL so that it is visible to the BizTalk Server runtime using: 

    GACUTIL /I GeneralHelper.dll

    Note: We recommend automating this step with a Visual Studio 2008 post-build event.
  4. Add a reference to the GeneralHelper.dll in the orchestration project.
  5. Within an Expression shape in an orchestration, call the helper function with the following parameters:

<MessageName>: String, Name of the BizTalk Server message in the orchestration to be written out
<path>: String, Path for file to be written to, with backslashes denoted with "\\"
<filename>: String, Filename for the resulting file
<timestamp>: Boolean, prepends a timestamp to the filename to create quasi-unique filenaming for messages
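A sketch of how such a helper might be implemented follows. The namespace, class, and member names here are illustrative assumptions (the accompanying sample may differ), but the XLANGMessage/RetrieveAs pattern is the standard way to read a message body from a helper class.

```csharp
using System;
using System.IO;
using Microsoft.XLANGs.BaseTypes;

namespace GeneralHelper
{
    // Marked Serializable so orchestration state holding this class
    // can be dehydrated to the MessageBox (see the note below).
    [Serializable]
    public class MessageHelper
    {
        // Writes the body part (part 0) of a BizTalk message to disk.
        public static void DumpMessageToFile(
            XLANGMessage message, string path, string fileName, bool timestamp)
        {
            string name = timestamp
                ? DateTime.Now.ToString("yyyyMMdd_HHmmssfff") + "_" + fileName
                : fileName;

            using (Stream body = (Stream)message[0].RetrieveAs(typeof(Stream)))
            using (FileStream target = File.Create(Path.Combine(path, name)))
            {
                // Stream.CopyTo requires .NET 4.0; on .NET 3.5 copy
                // with a manual byte[] buffer loop instead.
                body.CopyTo(target);
            }
        }
    }
}
```

From an Expression shape, the call would then look like, for example: `GeneralHelper.MessageHelper.DumpMessageToFile(MyMessage, "C:\\Temp", "order.xml", true);`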
Note  It is a best practice to always mark BizTalk helper classes with the "Serializable" attribute, so that the state of the class can be serialized when a BizTalk orchestration is dehydrated and is properly stored in the MessageBox.

Useful Resources

The following tools and documents are useful to have available within the BizTalk Server developer's environment:
  • BizTalk Server Documenter.  This tool runs against the BizTalk management database of a BizTalk Server group and documents the configuration of all the deployed BizTalk Server solutions within the group. The BizTalk Server 2006 Documenter can be extremely useful in documenting and validating deployments, as well as providing a basis for solution documentation.
  • DebugView.  As described in the debugging sections, DebugView captures and displays debug trace output. You can find the DebugView tool (DebugView for Windows v4.76) on the TechNet Web site.
  • TCPTrace.  TCPTrace provides the ability to view and log the data flowing between TCP ports. When using BizTalk Server and the HTTP or SOAP protocols, it can be extremely useful to view the actual data flowing between systems and the integration layer. By configuring TCPTrace to act as a proxy between the integration layer and an application, it is possible to capture HTTP and SOAP messages and then examine the schema and content of these messages. You can download the TCPTrace tool from the Pocket Soap Web site.
  • Windows Explorer BizTalk Server Extension.  The BizTalk Server Extension tool for Windows Explorer is installed as part of the BizTalk Server installation process, but by default it is not registered. To register the tool, close all instances of Internet Explorer and run the following command from the command prompt:

    regsvr32 "c:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\BtsAsmExt.dll"
This tool adds a BizTalk Server search pane to the standard Windows Explorer, which can be accessed from the Folders bar or via View -> Explorer Bar -> BizTalk Server Search.
This tool also allows you to search across the deployed assemblies for any of the BizTalk artifacts types and to view the detailed information about the artifacts found.
The following figure shows the location of the tool.

Figure 16 - BizTalk Server Windows Explorer Extension


This section provides information, toolkits, templates, and tools useful to teams looking to develop efficient build and deployment phases.

Automating Developer Deployment Tasks

When a BizTalk Server developer is developing a solution, there is a common requirement to perform the following steps: build, deploy, test, and undeploy. The Visual Studio 2008 BizTalk Server tools assist in carrying out these tasks. At the project level there are two settings:
  • Redeploy
  • Restart Host instances
With these settings set to True as shown in Figure 17, developers can just right-click on a project and select Deploy to redeploy the already deployed project incorporating changes made.
Note  Be careful with projects that depend on other projects. If you redeploy a project that is referenced by another project, the dependent project won't always be automatically redeployed.
Note  Unfortunately, this doesn't provide a developer experience on the local sandbox that is consistent with the way BizTalk solutions are deployed to other environments such as integration, test, and eventually production. While this feature can be beneficial in the early stages of a project, we recommend you work on consistent deployment tooling based on MSBuild or PowerShell.
Figure 17 - Redeployment Settings

Redeployment Scripts

BizTalk Server provides command-line and WMI interfaces to the BizTalk Server deployment functionality. This allows the creation of command and script files. These files can enable a developer to redeploy simple or complex solutions with a single operation.
These scripts are also beneficial when working with orchestrations that have dependencies upon other orchestrations. BizTalk Server enforces the dependencies when starting/stopping and deploying/undeploying orchestrations, and a script is an efficient way to ensure that the dependency order is always followed.
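As an illustration, a minimal redeployment command file built on the BTSTask command-line tool might look like the following. The application, assembly, and binding-file names are placeholders; check the BTSTask documentation for the full option list before using it.

```shell
REM Remove and re-create the application, then add the rebuilt
REM assembly (GAC it on add) and import the port bindings.
BTSTask RemoveApp /ApplicationName:MyApp
BTSTask AddApp /ApplicationName:MyApp
BTSTask AddResource /ApplicationName:MyApp /Type:System.BizTalk:BizTalkAssembly ^
    /Overwrite /Source:MyApp.Orchestrations.dll /Options:GacOnAdd
BTSTask ImportBindings /ApplicationName:MyApp /Source:MyApp.BindingInfo.xml
```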
To aid the creation of simple redeployment scripts, the code that comes with this whitepaper contains MSBuild scripts that use the MSBuild SDC Tasks (available on CodePlex) and the MSBuild Community Tasks.
To aid the reusability of the above scripts, a team may wish to parameterize the deploy and undeployment scripts to accept paths for the DLL and binding file folder locations. In this way, the same scripts can be reused from a deployment package that contains the final compiled DLLs and production binding files in alternative locations.
The scripts rely on the SDC Tasks MSBuild task assembly being available in the accompanying sample; these tasks are used to control and deploy the BizTalk artifacts. The scripts are provided as samples to demonstrate the steps required, and you can use them as a basis for your own solution(s). We recommend that the same scripts are used as part of the MSIs that deploy the solution to other (non-development) environments, by creating an MSI with a custom deployment task that calls the MSBuild scripts.

Local Developer Workstation - Quick Redeploy

When developing BizTalk Server solutions, there is no way to test a modified orchestration without actually deploying and binding the resulting assembly. This means that changing just one line of code requires you to stop, unbind, rebuild, redeploy, rebind, and restart an assembly. The redeployment scripts listed above provide an automated "one-step" operation to achieve the necessary functionality; however, you can use an alternative approach in some circumstances.
Note  The following approach is not officially supported, and should be restricted to local development workstations.

Assumptions and constraints:

  • This approach assumes that an orchestration has been developed and has already been deployed to the global assembly cache and configuration database, and the orchestrations have been bound.
  • This approach will only work when the changes to an orchestration do not affect the preconfigured binding information. For example, the modification cannot include adding a new port that would require additional binding information.
Steps to set up quick redeploy:
  1. Go to Control Panel -> System -> Advanced -> Environment Variables, and set the system environment variable DEVPATH to the project binaries folder. For example: 


    Note The path must include the trailing backslash.
  2. Edit the machine.config file (found under C:\WINDOWS\Microsoft.NET\Framework\<FrameworkVersion>\config) and set <developmentMode developerInstallation="true"/> under /configuration/runtime. 

    Note You may need to add the <runtime> and <developmentMode> elements to the machine.config file.
  3. Restart the developer workstation to ensure all changes take effect.
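Putting steps 1 and 2 together, the configuration might look like the following. The DEVPATH value shown in the comment is a hypothetical project output path; point it at your own bin\Development folder and keep the trailing backslash.

```xml
<!-- machine.config: enable DEVPATH-based assembly loading.
     Assumes, for example, DEVPATH=C:\BizTalkDev\MyApp\bin\Development\ -->
<configuration>
  <runtime>
    <developmentMode developerInstallation="true"/>
  </runtime>
</configuration>
```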
From this point on, the assembly is loaded from the DEVPATH location, not the global assembly cache. To pick up a change, simply recompile the updated solution and restart the BizTalk Server service or the BizTalk Server host hosting the orchestration. 

For more information about the DEVPATH environment variable and machine.config settings, see Locating Assemblies Using DEVPATH in the .NET Framework documentation.

Note This procedure doesn't work all the time. Once the boundaries of the solution are touched (incoming or outgoing message schemas, port names, pipelines, maps, and so on), a redeployment is usually necessary. It can be hard to determine why a solution isn't working if you only update the assembly and the information in the BizTalk configuration is out of sync. While this feature can be beneficial in the early stages of a project, we recommend you work on consistent deployment tooling based on MSBuild or PowerShell.

Alternative Quick Redeploy

You can also use the following approach when making internal changes to an orchestration that does not require port binding changes. As in the quick redeploy option above, this approach is purely a development shortcut and is not officially supported.
  1. Make changes to project.
  2. Rebuild project.
  3. Shut down host instance.
  4. Open Assembly Explorer (that is, c:\Windows\assembly using Windows Explorer).
  5. Locate previous version of assembly and delete.
  6. Drag and drop new version of assembly from project bin\development folder into assembly explorer.
  7. Restart host instance.
  8. Test new version.
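Steps 3 through 7 above can also be scripted. The following sketch uses gacutil to swap the assembly; the host, assembly, and path names are placeholders for your own solution.

```shell
REM Rebuild the project in Visual Studio first, then swap the
REM assembly in the GAC while the host instance is stopped.
net stop "BizTalk Service BizTalk Group : BizTalkServerApplication"
gacutil /u MyApp.Orchestrations
gacutil /i bin\Development\MyApp.Orchestrations.dll
net start "BizTalk Service BizTalk Group : BizTalkServerApplication"
```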

Restarting BizTalk Server Hosts and Services

When developing BizTalk Server solutions on a local workstation, it is common to make a change and then redeploy the updated assembly to retest the functionality. Because the BizTalk Server runtime caches assemblies in memory, it does not load the new version until it reloads its assemblies, unless the updated assembly carries a new assembly version. To ensure that the BizTalk Server runtime picks up the updated version immediately, stop and restart the BizTalk Server runtime hosts.
Restart the hosts manually through BizTalk Server Administration or via a script. Alternatively, you can start and stop the BizTalk Server runtime by creating a command file that contains the following two lines of code:
net stop "BizTalk Service BizTalk Group : BizTalkServerApplication"
net start "BizTalk Service BizTalk Group : BizTalkServerApplication"
Note  If you have multiple hosts (with different names), you have to add and update the command lines accordingly. Recycling only the affected hosts is quicker than stopping and starting every BizTalk Server service, and is recommended unless you have modifications, such as BizTalk Server runtime configuration changes, that need to be picked up quickly.
Alternatively, you can restart the Enterprise Single Sign-On service. The BizTalk Server Host service has dependencies on this service, so a restart of the SSO Service will cause the host services to restart as well.
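Host instances can also be recycled through the BizTalk WMI provider. The following PowerShell sketch assumes the MSBTS_HostInstance class (HostType 1 = in-process, ServiceState 4 = running); verify the property values against the BizTalk WMI documentation for your version.

```powershell
# Restart every running in-process host instance in the group.
$instances = Get-WmiObject MSBTS_HostInstance -Namespace 'root\MicrosoftBizTalkServer' |
    Where-Object { $_.HostType -eq 1 -and $_.ServiceState -eq 4 }
foreach ($hi in $instances) {
    [void]$hi.Stop()
    [void]$hi.Start()
}
```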

Automating Build and Deployment Version Numbering

As the Planning Team Development section of this document discussed, a development team may wish to automate the updating of project version numbers and binding file version numbers. This section provides some tools to assist in these processes.

Patching Binding Files

When a development team is producing frequent build and deployment operations of multiple BizTalk projects with the version information on the DLLs changing frequently, it can be helpful to automate the modification of the version numbers or physical addresses in binding files.


The SDC Tasks provide an MSBuild task that can replace a single element or attribute in any XML file, using an XPath expression to identify the element or attribute to replace. This can be very useful as part of your build process to update binding files, version attributes, BizTalk project files, and so on with values specific to the environments that you are targeting.
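A sketch of such a target follows. The XmlFile.SetValue task and attribute names are written in the style of the SDC Tasks but should be verified against the Microsoft.Sdc.Common.tasks file shipped with your version; the file name and version value are placeholders.

```xml
<!-- Patch the assembly version recorded in a binding file. -->
<Target Name="PatchBindings">
  <XmlFile.SetValue
      Path="PortBindings.xml"
      XPath="//ModuleRef/@Version"
      Value="1.0.2.0" />
</Target>
```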
For more information about XPath expressions, see the XPath Examples topic in the MSDN Library.

Automated Build Processes

An automated build process is designed to enable a team with a large number of team members, or a large number of code assemblies under development, to efficiently perform a unified build process. The unified build process produces a set of assemblies that can be used for testing the whole integration solution (as opposed to the isolated testing of individual processes on developers' workstations).
An automated build process is heavily dependent upon flexible access to developers' source code. Typically, an automated build process integrates with a source management system such as Visual Studio Team Foundation Server Source Control and a build environment based on a Team Foundation Server Team Build server.

Automated Build Example

The automated build example described here is a simplified version of a typical build process used in a customer project. It is provided as an example from which a team could develop a fully automated nightly build process with related individual build processes for developers.
The process is described here as an example of the type of steps typically required, because an individual team's requirements will vary. All scripts mentioned in this section are included in the "BtsAutomatedBuildSample" folder of the accompanying samples.
The example assumes that a workstation or server is available as a Team Foundation Server build server and that the assembly file version number is incremented with each build. Assembly versions are assumed to be updated manually, based on a stringent versioning strategy.

About the Automated Build Process

The following list gives you additional information and hints about the automated build process:
  • The structure of the build server has to reflect the structure under which the developers have developed the solution.
  • The hints contained within the BTPROJ file for DLL references are relative paths for DLLs on the same drive and absolute paths for DLLs on another drive. This can cause issues if you try to compile the solution at a deeper level of nested folders than it was originally developed in. For example, if you develop the solution three folders deep, the relative path to the BizTalk DLLs would be "..\..\..\Program Files\Microsoft BizTalk Server 2009\." If you then compile the solution in folders nested four deep, it will not find the path to the Program Files directory.
There are two versions of the build process:
  • A nightly build process that is intended to run on the build server used by TeamBuild.
  • A local build intended for developers to do a local compile on their workstations by using MSBuild.
The main difference between the two processes is that the developer build leaves no trace in Source Control and (re)deploys the solution on the local developer's sandbox, whereas the actual build process increments the build number and other version numbers and possibly (re)deploys the solution on the build server or a different test environment.

Automated Build Process Details

The command file that drives the build process in TeamBuild is called TFSBuild.proj, which defines the Build Type created in Team Explorer. The scripts can be found in the “BtsAutomatedBuildSample\BtsAutomatedBuildSample\TeamBuildTypes\BizTalk Build” folder.
The following steps outline the process:
  1. Clear out previous binaries.
  2. Get the latest version from Team Foundation Server source control.
  3. Compile the solution.
  4. Move assemblies to a common bin directory.
  5. Deploy Pipeline components.
  6. Deploy BizTalk Server resources.
  7. Deploy Bindings.
  8. Export MSIs.
TeamBuild provides an optional parameter called IsDesktopBuild to indicate it should run a local (developer) build.
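Within the .targets file, this parameter can be used to guard the server-only steps, such as source-control labelling and version-number increments. The target name below is illustrative:

```xml
<!-- Skip labelling and build-number increments on a desktop build. -->
<Target Name="LabelBuild"
        Condition=" '$(IsDesktopBuild)' != 'true' ">
  <!-- TFS labelling / version-increment tasks go here -->
</Target>
```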
The following table describes the automated build files.
Table 20 - Description of Automated Build Files
File name Description
TFSBuild.proj The file that defines the Team Foundation Server Build Type. This file calls the BtsAutomatedBuildSample.targets.
TFSBuild.rsp A TeamBuild Response File. Can be used to add custom MSBuild command line options.
BtsAutomatedBuildSample.targets Includes all the custom work for the build and the MSBuild specific overrides of certain functions.
Microsoft.Sdc.Common.tasks  Defines the SDC Tasks.
MSBuild.Community.Tasks.Targets Defines the MSBuild Community Tasks.
WorkspaceMapping.xml Defines the TFS Workspace mappings on the build server.

Automated Deployment with BizTalk Server

This section covers how a team developing with BizTalk Server can achieve an efficient process to collectively automate the deployment of the resulting solution. It focuses on automating the deploying process to allow nightly builds and test processes to be run on a set of test servers. It does not focus on the topic of deployment to a production environment, although some of the techniques may be applicable to this task.
Deploying a "real-world" BizTalk Server solution typically consists of several related tasks, including:
  • Installing BizTalk Server assemblies to all servers in the BizTalk Server group.
  • Installing and configuring the required adapters.
  • Generating and installing related BizTalk Server binding files.
  • Installing shared assemblies (like helper classes) to all servers in the BizTalk Server group.
  • Installing WCF Web services to all servers in the BizTalk Server group acting as receive servers.
  • Installing any required test harnesses and test stubs.
  • Installing any required cross-reference seed data.
  • Tuning the environment.
Typically a deployment also performs operational tasks like enlisting and starting processes (in the right order) and clearing down logs and tracking databases.
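Several of these tasks can be scripted. For example, packaging the application as an MSI on the build server and installing it on each target server can be sketched with BTSTask and msiexec; the application and package names are placeholders, and the options should be checked against the BTSTask documentation.

```shell
REM On the build server: export the application to an MSI package.
BTSTask ExportApp /ApplicationName:MyApp /Package:MyApp.msi

REM On each server in the BizTalk Server group: import the package
REM into the group, then install it locally.
BTSTask ImportApp /Package:MyApp.msi /ApplicationName:MyApp /Overwrite
msiexec /i MyApp.msi /quiet
```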

Appendix: Step-by-Step Guide to Setting Up a Partitioned Single Solution

The following steps provide a sample structure for creating a partitioned single solution that integrates with Visual Studio Team Foundation Server and holds the non-BizTalk Server project entities that are required (for example, test data or scripts).
This example assumes that you are following the partitioned single solution model for larger or more complex solutions. You can create the single solution model by following the same steps and excluding the "Create a partitioned solution for customer operations only" section.
Use the following steps as a sample and not as a definitive guide. They are presented here to assist developers new to team development with BizTalk Server and Visual Studio Team Foundation Server with a "quick start" approach. This approach allows the team to familiarize themselves with the partitioned single solution approach and evaluate which Source Control approach will best suit the team needs.

Creating the BizTalk Server Master Solution

This section provides the steps and information necessary to create an example Visual Studio 2008 solution structure that integrates with Team Foundation Server Source Control.
  • Create the Team Foundation Server Team Project. First, you create the Team Project under which the solution will be built, by using the Team Explorer project wizard as shown in Figure 18 and Figure 19. On the next screens in the wizard, choose your process template and accept the defaults; finally, click Finish.

    Figure 18 - Team Foundation Server New Team Project option.
  • Create the local root folder. First, create the top-level folder under which all source is stored on developer workstations. In this example, the folder is called "BizTalkDev." Create the folder c:\BizTalkDev on the workstations that you are using to create the initial project structure.
  • Open the Source Control Explorer. Open the Source Control Explorer by double-clicking Source Control in Team Explorer for the Team Project you created (see Figure 19). 

    Figure 19 - Team Explorer Source Control option
  • Create a new local source control workspace. In Source Control Explorer, select the Workspace option (see Figure 20), and create a new workspace mapped to the folder you just created, c:\BizTalkDev (see Figure 21).

    Figure 20 - Workspace option in the Source Control Explorer

    Figure 21 - Add Workspace
  • Create a new main folder for branching. In order to support branching the project later, the recommendation is to create a new folder called "Main" directly under the project root folder. You can do this with the New Folder option in the Source Control Explorer.

    Figure 22 - Creating subfolder in Source Control Explorer
  • Check in the new folder. Next you can check in the new folder by right-clicking the folder, and then clicking Check In Pending Changes (see Figure 23).

    Figure 23 - Check In Pending Changes
  • Master solution and Source Control restrictions. It is important to note that all Visual Studio 2008 projects included within a master Visual Studio solution and additionally integrated with Source Control must be created in the file system folder that contains the master solution file. The “_MasterSolution” folder created below serves this purpose.
  • Create the new master solution from within Visual Studio 2008. To create the empty master solution that you will use as the container for all projects in the build process and Source Control, complete the following steps: 

    1. Open Visual Studio 2008.
    2. Go to File -> New Project, create the new project based on the Blank Solution template (under Other Project Types -> Visual Studio Solutions), and create the new master solution in a folder under C:\BizTalkDev\[AppName]\Main\, for example:

    3. Make sure that the Create directory for solution check box is selected. This option is available on the More tab.

      Note  The leading underscore naming convention is used here to denote a folder containing a solution as opposed to a folder containing an actual project. This notation can be helpful if you are creating many partitioned solution folders alongside of project folders.
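Given these conventions, the resulting master solution path might look like the following; this is a hypothetical example, with [AppName] standing in for your application name.

```shell
C:\BizTalkDev\[AppName]\Main\_MasterSolution\_MasterSolution.sln
```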
  • Add the solution to source control. After you create the master solution, you must add it to Team Foundation Server Source Control.
    1. In Visual Studio 2008, right-click the solution and then click Add Solution to Source Control.
    2. Source Control now has pending changes that it has not yet applied to the system. Right-click the solution again, and then click Check In Pending Changes.
    3. In the Check In dialog box, optionally provide a comment, and then click Check In.
  • Create a new BizTalk Server project within the master solution. Perform the following steps to add a new project under the master solution. You can use this project for both partitioned and single solution models.
    1. In Visual Studio 2008 Solution Explorer, right-click the solution, point to Add, and then click New Project to add a new BizTalk Server project.
      Note that the new project directory is located underneath the master solution (for example, C:\BizTalkDev\[AppName]\Main\_MasterSolution\MonthlyBillingRun).
    2. The master solution is checked out automatically.
    3. Add additional BizTalk Server projects using steps 1 and 2.
    4. Check in the master solution. This ensures that all the files are written to the Source Control server before any further operations take place.
  • Add shared projects to the master solution. You add shared Visual Studio 2008 projects like C# or Visual Basic .NET helper classes that are included in the build process of the project to the master solution in the same way. 

    To add a shared project to the master solution: 
    1. In Visual Studio 2008, open the master solution.
    2. In Solution Explorer, right-click the solution, point to Add, and then click New Project to add the new helper project.
    3. Select an appropriate project template (for example, the Visual C# Class Library project).
    4. Click Browse to create a shared folder structure under the master solution.
    5. In the Project Location dialog box, create a folder structure to hold the shared project (for example, c:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\shared\test).

      The project is added under the newly created shared section of the solution.
    6. Repeat this process for other shared projects.

      Figure 24 shows the Add New Project dialog box.

      Figure 24 - Adding helper projects to the solution
  • Add non-project files to the structure. When developing a complete solution, you might create additional files not directly related to the Visual Studio 2008 project, for example, test data and deployment or testing scripts. It can be beneficial to the project to manage these files by using Visual Studio 2008 because this allows the following:
    • You can use Visual Studio to edit, view, and manipulate the contents of the files. This can be useful, for example, when copying and pasting test data between sample files.
    • All source-control operations are managed by using Visual Studio and the Team Foundation Server Source Control Explorer.
    • Managing project-specific files. If the files are related to a specific Visual Studio BizTalk Server project, then you can include these files in the BizTalk Server project and manage them in the same way as all other project files by performing the following steps.

      To include a file or folder into an existing Visual Studio project:
      1. In Solution Explorer, click Show All Files.
      2. Right-click the folder or file to include in the solution, and then click Include In Project.

        Figure 25 shows the dialog box.

        Figure 25 - Solution Explorer showing Include In Project
  • Managing shared or generic files. If the non-project files are not associated with a specific process (for example, deployment scripts for the whole solution), they should not be associated with a process-specific project. To help manage these entities, create an empty C# Visual Studio 2008 project to act as a container for these files (Empty Project is available as a C# template in the New Project dialog box under the Windows folder). Add this project to the master solution (under the shared path as detailed earlier in "Add shared projects to the master solution"). Having created the project use the "Include In Project" functionality to add non-project files to the container project. 

    When the master solution is used to configure the build process, projects containing non-project files should be excluded from the build process to avoid build errors relating to the fact that the project is not a valid C# project. 

    See the Version Controlling Non-BizTalk Server Project Files section for more details on how to add non-project files to a project.
  • Create a common location for file dependencies. It is useful to create a subfolder to hold the pre-compiled binaries for the project, that is, DLLs that are not produced by Visual Studio projects under the team's control. For example:
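A hypothetical layout for such a folder, following the path conventions used elsewhere in this example:

```shell
C:\BizTalkDev\[AppName]\Main\SharedBinaries
```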

  • Create a partitioned solution for customer operations only. As discussed in this document, the partitioned single solution provides benefits when working with a complex or large solution. The following steps show how to partition the master solution to create a partitioned solution that holds just the two customer projects in our scenario: 
    1. Ensure that the newly created master solution has been checked in (this is to ensure that Team Foundation Server Source Control has copies of the projects about to be included in the new partitioned solution).
    2. Open Visual Studio 2008.
    3. Go to File ->  New Project. Create the new project based on the Blank Solution template (under Other Project Types -> Visual Studio Solutions) in a folder under BizTalkDev\[AppName]\Main (for example, c:\BizTalkDev\[AppName]\Main\_Customer_Solution).
      Again the leading underscore "_" is used to denote the fact that this project contains a solution rather than a project.
    4. Add the solution to source control by right-clicking the solution, and then click Add Solution to Source Control.
    5. Check the solution in to source control by right-clicking the solution, and then clicking Check In.
    6. In the dialog box, click Check In.
  • Add projects to the partitioned solution. Now that you have created the new solution, the next step is to add the projects that belong under the customer solution. In our sample this means adding the customer projects. 

    To add the first customer project (MonthlyBillingRun) to _Customer_Solution, on the File menu, point to Add, click Existing Project, and then navigate to the C:\BizTalkDev\[AppName]\Main\Src\_MasterSolution\MonthlyBillingRun folder.
    Follow the same procedure to add additional projects from the main solution to the new partitioned solution.

    This partitioned solution can now be used by the developer who is responsible for developing the customer areas of the solution, without requiring that the full master solution is retrieved and stored on their local workstation.
  • Re-create the solution structure on a new workstation. The following steps show how the structures created in the previous steps allow a new developer to the team (or a developer with a new workstation) to quickly retrieve the necessary solution structure.

    These steps assume a different Team Foundation Server user from the one used in the previous steps and a clean workstation file system.    
    1. Create the root folder (C:\BizTalkDev).
    2. In the Source Control Explorer, create a new Workspace mapped to the C:\BizTalkDev folder.
    3. Retrieve the code by selecting Get Latest Version on the solutions required by the developer.
    4. The developer can now open the specific solution files (for example the c:\BizTalkDev\[AppName]\Main\_Customer_Solution\_Customer_Solution.sln file). Upon opening the solution, Visual Studio will also retrieve any linked project files from other solutions.

      Figure 26 - Retrieving specific solutions

Additional Resources

 The following resources may be of use to BizTalk Server designers, developers, and project managers:


It is our hope that through this guide, solution designers have gained an understanding of the constituent parts of a BizTalk solution and the possible project structures used to develop BizTalk solutions.
We also trust that developers have gained a better understanding of the process of developing a BizTalk solution in a team by using Team Foundation Server. 
Project managers will also have gained an overview of the main project phases and gained a better understanding of the typical tasks necessary when planning a BizTalk Server team development.


This guide was created by the following team members:
  • Dennis Mulder, Senior Consultant, Microsoft Services
  • Henk van de Crommert, Associate Consultant, Microsoft Services
This whitepaper was based on Developing Integration Solutions with BizTalk Server 2004 by Angus Foreman and Andy Nash. The BizTalk Server 2006 R2 version of this paper is available as Developing Integration Solutions using BizTalk Server 2006 and Team Foundation Server.

Contributors and Reviewers

Many thanks to the following contributors and reviewers:
  • Microsoft Contributors / Reviewers: Stephen Kaufmann, Todd van Nurden, Shah Khan, Romualdas Stonkus, Bertil Syamken, Paolo Salvatori, Trace Young
  • External Reviewers: Sander Schutten (Avanade), Jean-Paul Smit (Didago IT Consultancy), Penni Johnson (Linda Werner & Associates Inc.)