In-Depth
Connect Apps with WCF
Learn when and how to utilize Windows Communication Foundation to develop and maintain your
communications layer when creating a loosely coupled, scalable, interoperable services-oriented application.
- By Brian Noyes
- February 1, 2008
Technology Toolbox: C#, Other: Windows Communication Foundation
Windows Communication Foundation (WCF) is a powerful new technology for building services-oriented architecture (SOA)-based applications. The usefulness of WCF goes well beyond large-scale enterprise SOAs. WCF can be used even for simple scenarios where all you need is connectivity between two apps on the same machine or across processes on different machines, even if you haven't adopted the full architectural approach of SOA.
In this article, I'll discuss some best practices and things to keep in mind when applying WCF in the real world. I'll start with a quick review of the basics of connecting applications with WCF, and then focus on several areas where you have to make hard choices between creating an easy-to-develop-and-maintain communications layer or creating a loosely coupled, scalable, interoperable SOA-based application. I'll also emphasize a collection of best practices, describing the context of where those practices would apply and why they're important.
You can use WCF to get two different chunks of code talking to each other across a wide variety of connectivity scenarios. With WCF, you can create a full-blown SOA-based application that communicates across the open Internet with another service written in a completely different technology. You can also use it to get two classes in the same assembly in the same process talking to one another. In general, you should consider using WCF for any new code where you need to cross a process boundary, and even in some scenarios for connecting decoupled objects (as services) within the same process. Basically, you should forget that .NET Remoting, ASP.NET Web Services, and Enterprise Services exist (except for maintaining legacy code written in those technologies), and focus on WCF for all your connectivity needs.
Connecting two pieces of code with WCF requires that you implement five elements on the server side.
First, you need a service contract. This defines what operations you expose at the service boundary, as well as the data that is passed through those operations.
Second, you need data contracts for complex types passed through the service contract operations. These contracts define the shape of the sent data so that it can be consumed or produced by the client.
Third, you need a service implementation. This provides the functional code that answers the incoming service messages and decides what to do with them.
The fourth required element is a service configuration. This specifies how the service is exposed, in terms of its address, binding, and contract. Wrapped up in the binding are all the gory details of what network protocol you're using, the encoding of the message, the security mechanisms being used, and whether you're using reliability, transactions, or several other features that WCF supports.
Finally, you must have a service host. This is the process where the service runs. The host can be any .NET process (if you self-host), IIS, or Windows Activation Service (WAS) on Windows Vista and Windows Server 2008.
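The service configuration usually lives in an app.config or web.config file, but the same address/binding/contract triad can also be set up in code. Here's a minimal, illustrative sketch of a self-hosted console host doing exactly that; ProductService and IProductService are the sample types defined later in this article, and the address and binding are arbitrary choices:
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(ProductService));

        // Address, binding, and contract: the three pieces every endpoint needs.
        host.AddServiceEndpoint(
            typeof(IProductService),
            new BasicHttpBinding(),
            "http://localhost:8000/ProductService");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}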
On the client side, you need three elements to make service calls with WCF (whether the service is a WCF service or one implemented in some other technology): a service contract definition that matches what the server uses, a data contract definition that matches what the server is using, and a proxy that can form the messages to send to the service and process the returned messages.
There are many different ways that you can use WCF, depending on the scenario and requirements; however, there are a number of best practices you should keep in mind as you design your WCF back-end services.
Use Service Boundary Best Practices
Layering is a good idea in any application. You should already be familiar with the benefits of separating functionality into a presentation layer, business layer, and data access layer. Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split layers out into separate physical tiers for scalability. In the same way that data access code should be separated into its own layer that focuses on translating between the world of the database and the application domain, services should be placed in a separate service layer that focuses on translating between the services-oriented external world and the application domain (see Figure 1).
Having a service layer implies that you've put your service definitions into a separate class library, and host that library in your service host environment. The service layer dispatches calls into the business layer to get the work of the service operation done.
WCF supports putting your service contract and operation contract attributes directly on the implementation class, but you should avoid doing so. It's preferable to have an interface definition that clearly defines what the service boundary looks like, separate from the implementation of that service.
For example, you might implement a simple service contract definition like this:
[ServiceContract()]
public interface IProductService
{
    [OperationContract()]
    List<ConsumerProduct> GetProducts();
}
An important aspect of SOA design is hiding all the details of the implementation behind the service boundary. This includes not revealing or dictating what particular technology was used to implement the service. It also means you shouldn't assume the consuming application supports a complex object model. Part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values.
For maximum interoperability and alignment with SOA principles, you should not pass .NET-specific types, such as DataSets or Exceptions, across the service boundary. You should stick to fairly simple data structure objects such as classes with properties and backing member fields. You can pass objects that have nested complex types such as a Customer with an Order collection, but you shouldn't make any assumptions about the consumer being able to support object-oriented constructs like interfaces or base classes for interoperable Web services.
However, if you're using WCF only as a new remoting technology to get two different pieces of code to talk to each other across processes, with no expectation or requirement for others to write consuming applications, then you can pass whatever you want as a data contract. You just have to make sure that those types are marked appropriately as data contracts or are serializable types. Generally speaking, you face various challenges when passing DataSets through WCF services, so you should avoid doing so except for simple scenarios. If you do want to pursue using DataSets with WCF, you should definitely use typed DataSets and try to stick to individually typed DataTables as parameters or return types.
Note that the simple service contract described previously is defined in terms of List<ConsumerProduct>. WCF flattens enumerable collections into arrays at the service boundary, so this doesn't limit interoperability, and it makes your life easier when populating and working with those collections in your service and business layers.
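For illustration only: if a client generates its proxy from the service's metadata with default settings, that same operation typically surfaces on the client side as shown below (proxy-generation tools also let you choose a different collection type if you prefer):
// How a metadata-generated client-side contract typically exposes the operation:
// the List<ConsumerProduct> is flattened to an array at the boundary.
[OperationContract()]
ConsumerProduct[] GetProducts();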
For example, consider this data contract definition for the ConsumerProduct type:
[DataContract()]
public class ConsumerProduct
{
    private int m_ProductID;
    private string m_ProductName;
    private double m_UnitPrice;

    [DataMember()]
    public int ProductID
    {
        get { return m_ProductID; }
        set { m_ProductID = value; }
    }

    [DataMember()]
    public string ProductName
    {
        get { return m_ProductName; }
        set { m_ProductName = value; }
    }

    [DataMember()]
    public double UnitPrice
    {
        get { return m_UnitPrice; }
        set { m_UnitPrice = value; }
    }
}
Use Per-Call Instancing
Another important best practice to adhere to: Services should use per-call instancing as a default. WCF supports three instancing modes for services: Per-Call, Per-Session, and Single. Per-Call creates a new instance of the service implementation class for each operation call that comes into the service and disposes of it when the service operation is complete. This is the most scalable and robust option, so it's unfortunate that the WCF product team decided to change the default away from this mode shortly before the product shipped.
Per-Session allows a single client to keep a service instance alive on the server as long as the client keeps making calls into that instance. This allows you to store state in member variables of the service instance between calls and have an ongoing, stateful conversation between a client application and a server object. However, it has several messy side effects, including the fact that holding memory on the server for instances that aren't actively in use is bad for scalability, and things get messier still when transactions are involved.
Single(ton) routes all calls from all clients to a single service instance on the back end, which allows you to use that single point of entry as a gatekeeper if your requirements dictate the need for such a thing. The Singleton mode is even worse for scalability because all calls into the singleton instance are serialized (one caller at a time) by default, and it shares some of the same side effects as sessionful services. That said, there are specific scenarios where using Per-Session or Single makes sense.
You should try to design your services as Per-Call initially because it's the cleanest, most scalable, and safest option, and only talk yourself into using Per-Session or Singleton if you understand the implications of using those modes.
This code declares a Per-Call service for the service contract illustrated previously:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ProductService : IProductService
{
    List<ConsumerProduct> IProductService.GetProducts()
    {
        ...
    }
}
I recommend checking out "Programming WCF Services" by Juval Löwy (O'Reilly, 2007) for a better understanding of the differences between Per-Session and Single instancing modes and the implications of using them (see Additional Resources).
Deal with Exceptions
If an unhandled exception reaches the service boundary, WCF will catch that exception and return a SOAP fault to the caller. By default, that fault is opaque and doesn't reveal any details about what the real problem was on the back end. This is good and aligns with SOA design principles: You should reveal only the information you choose to expose to the client, and avoid leaking details such as stack traces that would accompany a normal exception.
However, with most WCF bindings, an unhandled exception that reaches the service boundary also faults the channel, which usually means you cannot make subsequent calls through the same proxy on the client side and have to establish a new connection to the server. As a result, one of your responsibilities in designing a good service layer is to catch all exceptions and throw a FaultException<T> to WCF in cases where the service can recover from the exception and answer subsequent calls without causing follow-on problems.
FaultException<T> is a special type that still propagates as an Exception on the .NET call stack, but is interpreted differently by the WCF layer that performs the messaging. Think of it as a handled exception that you're throwing to WCF, as opposed to an unhandled exception that propagates into WCF on its own without your service intervening. You can pass non-detailed information to the client about what the problem was by using the Reason property on FaultException<T>. If a FaultException<T> is caught by WCF, it will still package it as a SOAP fault message, but it will not fault the channel. That means you can instruct the client to handle the resulting error and keep calling the service without needing to open a new connection.
There are many different scenarios for dealing with exceptions in WCF, as well as several ways to hook up your exception handling in the service layer.
Some basic rules you should follow include: Catch unhandled exceptions and throw a FaultException<T> if your service is able to recover and answer subsequent calls without causing further problems (which it should be able to do if you designed it as a per-call service); don't send exception details to the client except for debugging purposes during development; and pass non-detailed exception information to the client using the Reason property on FaultException<T>.
This service method captures a particular exception type and returns non-detailed information about the problem to the client through FaultException<T>:
List<ConsumerProduct> IProductService.GetProducts()
{
    try
    {
        ...
    }
    catch (SqlException)
    {
        throw new FaultException<string>(
            "Unknown error",                                     // detail argument
            "The service could not connect to the data store");  // Reason
    }
}
The T type parameter can be any type you want, but for interoperability, you will probably want to stick to a simple data structure that is marked as a data contract or types that are serializable.
For small-scale services where you will be the only one writing the client applications, you can choose to wrap the caught exception type in a FaultException<T> (for example, FaultException<InvalidOperationException>) so that the client gets the full details of the exception, but you don't fault the channel.
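As a sketch of that approach (assuming you've also added a [FaultContract(typeof(InvalidOperationException))] attribute to the GetProducts operation on the service contract), the implementation might look like this:
List<ConsumerProduct> IProductService.GetProducts()
{
    try
    {
        ...
    }
    catch (InvalidOperationException ex)
    {
        // Wrap the original exception as the fault detail so a client you
        // control gets the full details, without faulting the channel.
        throw new FaultException<InvalidOperationException>(ex, ex.Message);
    }
}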
Michele Leroux Bustamante's book, "Learning WCF" (O'Reilly, 2007), provides good coverage of exception handling in WCF (see Additional Resources).
Choose an Appropriate Service Host
Windows Communication Foundation has three options for hosting your services. You can be self-hosted (run your services in any .NET application that you design), IIS-hosted, or Windows Activation Service (WAS)-hosted.
Self-hosting gives you the most flexibility because you set up the hosting environment yourself. This means you can access and configure your service host programmatically and do other things like hook up to service host events for operations monitoring and control. However, self-hosting puts the responsibility for process management and other configuration options squarely on your shoulders.
This code describes a simple Windows Service self-hosting setup:
public partial class MyNTServiceHost : ServiceBase
{
    ServiceHost m_ProductServiceHost = null;

    public MyNTServiceHost()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        m_ProductServiceHost = new ServiceHost(typeof(ProductService));
        m_ProductServiceHost.Open();
    }

    protected override void OnStop()
    {
        if (m_ProductServiceHost != null)
            m_ProductServiceHost.Close();
    }
}
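If you want the monitoring and control hooks mentioned earlier, you can also subscribe to the host's lifecycle events before opening it. Here's a sketch of an alternative OnStart; the event log messages are placeholders:
protected override void OnStart(string[] args)
{
    m_ProductServiceHost = new ServiceHost(typeof(ProductService));

    // ServiceHost derives from CommunicationObject, which exposes
    // lifecycle events you can use for operations monitoring.
    m_ProductServiceHost.Opened += delegate
    {
        EventLog.WriteEntry("ProductService host opened.");
    };
    m_ProductServiceHost.Faulted += delegate
    {
        EventLog.WriteEntry("ProductService host faulted.");
    };

    m_ProductServiceHost.Open();
}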
IIS-hosting allows you to deploy your services to IIS by dropping the DLLs into the \Bin directory and adding .SVC files as the addressable service endpoints. You gain the kernel-level request routing of IIS, the IIS management consoles for configuring the hosting environment, IIS's ability to start and recycle worker processes, and more. The big downside to IIS-hosting is that you're limited to HTTP-based bindings.
WAS is a part of IIS 7 (Windows Vista and Windows Server 2008) and gives you the hosting model of IIS. However, it also allows you to expose services using protocols other than HTTP, such as TCP, Named Pipes, and MSMQ.
WAS is almost always the best choice if you're targeting newer platforms. If you're exposing services outside your network, you will probably be using HTTP protocols anyway, so IIS-hosting is usually best for externally exposed services.
If WAS-hosting isn't an option for services running inside the intranet, you should plan on self-hosting to take advantage of other (faster and more capable) protocols inside the firewall, as well as to give you more flexibility when configuring your environment programmatically.
Use Callbacks Properly
WCF includes a capability to call back a client to return data to it asynchronously or as a form of event notification. This is handy when you want to signal the client that a long-running operation has completed on the back-end, or to notify a client of changing data that affects it. To do this, you must define a callback contract that is paired with your service contract. The client doesn't have to expose that callback contract publicly as a service to the outside world, but the service can use it to call back to the client after an initial call has been made into the service by the client. This code creates a service contract definition with a paired callback contract:
[ServiceContract(CallbackContract = typeof(IProductServiceCallback))]
public interface IProductService
{
    [OperationContract()]
    List<ConsumerProduct> GetProducts();

    [OperationContract()]
    void SubscribeProductChanges();

    [OperationContract()]
    void UnsubscribeProductChanges();
}

public interface IProductServiceCallback
{
    [OperationContract()]
    void ProductChanged(ConsumerProduct product);
}
If you intend to use callbacks, it's a good idea to expose a subscribe/unsubscribe API as part of the service contract. To perform callbacks, the service must capture a callback context from an incoming call, and then hold that context in memory until the point when a call is made back to the client. The client also needs to create an object that implements the callback interface to receive the incoming calls from the service, and must keep that object and the same proxy alive as long as callbacks are expected. This sets up a fairly tightly coupled communications pattern between the client and the service (including object lifetime dependencies), so it's a good idea to let the client control exactly when that tight coupling begins and ends through explicit service calls to start and end the conversation.
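On the client side, a minimal sketch of that conversation might look like the following; the NetTcpBinding and address are illustrative assumptions and must match the service's duplex endpoint:
// Client-side sketch: implement the callback contract, then open a duplex
// channel that carries both the outgoing calls and the incoming callbacks.
public class ProductCallbackHandler : IProductServiceCallback
{
    public void ProductChanged(ConsumerProduct product)
    {
        Console.WriteLine("Product changed: " + product.ProductName);
    }
}

...

ProductCallbackHandler handler = new ProductCallbackHandler();
DuplexChannelFactory<IProductService> factory =
    new DuplexChannelFactory<IProductService>(
        new InstanceContext(handler),
        new NetTcpBinding(),
        new EndpointAddress("net.tcp://localhost:8001/ProductService"));

IProductService proxy = factory.CreateChannel();
proxy.SubscribeProductChanges();

// Keep 'handler' and 'proxy' alive for as long as callbacks are expected;
// call UnsubscribeProductChanges() and close the channel when finished.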
The biggest limitations of callbacks are that they don't scale and might not work in interop scenarios. The scaling problem is related to the fact that the service must hold a reference to the client in memory and perform the callbacks on that reference. Listing 1 contains code that illustrates how a service captures and stores that callback reference and then sends change notifications to the client through it.
You also face an interop problem: The callback mechanism in WCF is based on a proprietary technology with no ratified standard. It is at least expressed through elements of the SOAP message that other technologies could, in principle, consume and use properly. Even so, if interop is part of your requirements, you should avoid callbacks.
The alternatives are to set up a polling API, where a client asks for changes at appropriate intervals, or to set up a publish-subscribe middleman service that acts as a broker for subscriptions and publications, avoiding coupling between the client and the service and keeping both scalable. You can find an example of how to do this in the appendix of "Programming WCF Services" (see Additional Resources).
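As a simple illustration of the polling alternative, the contract might expose nothing more than an operation the client calls on its own schedule (this contract is illustrative and not part of the article's sample code):
// Illustrative polling contract: the client asks for changes when it wants them,
// so the service holds no callback references and can stay per-call and stateless.
[ServiceContract()]
public interface IProductChangeService
{
    [OperationContract()]
    List<ConsumerProduct> GetProductChangesSince(DateTime lastChecked);
}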
Use Maintainable Proxy Code
One of the tenets of service orientation is that you should share schema and contract, not types. So, to keep a client as decoupled as possible from a service definition, you shouldn't share any types between the service and the client that aren't part of the service boundary. However, if you're writing both the service and the client, you don't want to have to maintain two type definitions for the same thing.
That said, there's no crime in referencing an assembly on the client side that's also used by the service, so that the client gets the .NET type definitions of the service contract and data contracts without regenerating them from metadata. You just have to make sure that your service remains usable by clients that don't have access to the shared assembly. To make this work cleanly, define the contracts in a separate assembly from the service implementation so that you can reference them from both sides without introducing additional coupling. If you do this, keep in mind that you're trading a little more coupling between service and client for productivity and speed of development and maintenance.
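If you do share the contract assembly, the client can build its proxy directly over the shared IProductService interface with ChannelFactory<T> instead of generating code from metadata. Here's a minimal sketch; the endpoint configuration name is an assumption, and you could also supply a binding and address in code:
// Sketch: creating a proxy from the shared contract assembly.
// "ProductServiceEndpoint" is assumed to be a client endpoint defined in config.
ChannelFactory<IProductService> factory =
    new ChannelFactory<IProductService>("ProductServiceEndpoint");
IProductService proxy = factory.CreateChannel();
List<ConsumerProduct> products = proxy.GetProducts();

((ICommunicationObject)proxy).Close();
factory.Close();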
As mentioned in the earlier section on faults, an unhandled exception delivered as a fault will also fault the communications channel with most of the built-in bindings. When the fault is received in a WCF client, it's raised as a FaultException if it wasn't specifically thrown as a FaultException<T> on the service side. Because the channel-faulting behavior isn't consistent across all bindings, and because your service code and client code should be decoupled from whatever particular binding you use, the only safe thing to do on the client side if a service call throws an exception is to assume the worst and avoid reusing the proxy. In fact, even disposing of or closing the proxy can result in a subsequent exception. This means you should wrap calls to a service in try/catch blocks and replace the proxy instance with a new one in the catch block:
public class MyClient
{
    ProductServiceClient m_Proxy = new ProductServiceClient();

    private void OnGetProducts(object sender, RoutedEventArgs e)
    {
        try
        {
            DataContext = m_Proxy.GetProducts();
        }
        catch (Exception)
        {
            // Assume the channel is faulted: don't reuse (or even close) the
            // old proxy; just replace it with a fresh one.
            m_Proxy = new ProductServiceClient();
        }
    }
}
That's about it for taking advantage of WCF in the real world. I've covered a lot of ground, including many of the constructs you will need to define WCF services and clients. I've also discussed how best practices and real-world considerations affect the choices you make for those constructs. Of course, this is just a start; the real way to absorb these lessons is to sit down at a keyboard and give them a go. To that end, the sample code for this article includes a full implementation of a service and client that calls that service with all the basic code constructs. Feel free to give the code a whirl and experiment with it to get a better feel for how the technologies in this article fit together.