Observability has become a cornerstone of modern application development, and OpenTelemetry has emerged as the industry standard for collecting telemetry data. While the CoreWCF project didn't have built-in OpenTelemetry support, the .NET Framework-based WCF had an existing implementation that served as the foundation for this integration.
The community request for OpenTelemetry support was first raised three years ago in GitHub issue #790. Now, after that long wait, CoreWCF includes native OpenTelemetry support, bringing modern observability capabilities to CoreWCF services.
Let's walk through setting up a CoreWCF service with OpenTelemetry integration. This example uses .NET Aspire to simplify the setup and provide excellent tooling for visualizing traces, but the integration works with any OpenTelemetry-compatible setup.
Here's the Program.cs for our WCF calculator service:
var builder = WebApplication.CreateBuilder();
// Register CoreWCF services
builder.Services.AddServiceModelServices();
builder.Services.AddServiceModelMetadata();
// This call sets up OpenTelemetry, including CoreWCF instrumentation
builder.AddServiceDefaults();
var app = builder.Build();
app.MapDefaultEndpoints();
// Configure the WCF service
app.UseServiceModel(serviceBuilder =>
{
serviceBuilder.AddService<ICalculator.Calculator>();
serviceBuilder.AddServiceEndpoint<ICalculator.Calculator, ICalculator>(
new BasicHttpBinding(BasicHttpSecurityMode.Transport),
"/CalculatorService.svc");
});
app.Run();
The key to enabling OpenTelemetry in CoreWCF is the builder.AddServiceDefaults() call, which internally calls ConfigureOpenTelemetry. Here's the implementation:
public static TBuilder ConfigureOpenTelemetry<TBuilder>(this TBuilder builder)
where TBuilder : IHostApplicationBuilder
{
// Configure OpenTelemetry logging
builder.Logging.AddOpenTelemetry(logging =>
{
logging.IncludeFormattedMessage = true;
logging.IncludeScopes = true;
});
builder.Services.AddOpenTelemetry()
.WithMetrics(metrics =>
{
metrics.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddRuntimeInstrumentation();
})
.WithTracing(tracing =>
{
tracing.AddSource(builder.Environment.ApplicationName)
.AddSource("CoreWCF.Primitives") // This enables CoreWCF tracing
.AddHttpClientInstrumentation();
});
builder.AddOpenTelemetryExporters();
return builder;
}
The most important line for CoreWCF OpenTelemetry integration is:
.AddSource("CoreWCF.Primitives")
This tells OpenTelemetry to listen for traces emitted by the CoreWCF instrumentation. The "CoreWCF.Primitives" source name is the identifier used by CoreWCF's internal activity source to emit telemetry data.
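If you're not using .NET Aspire, the same hookup can be done directly with the OpenTelemetry SDK. Here's a minimal sketch; the service name and the choice of OTLP exporter are illustrative, not part of the CoreWCF integration itself:

```csharp
// Minimal non-Aspire setup; assumes the OpenTelemetry.Extensions.Hosting and
// OpenTelemetry.Exporter.OpenTelemetryProtocol packages are referenced.
var builder = WebApplication.CreateBuilder();

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("CalculatorService")) // illustrative name
    .WithTracing(tracing =>
    {
        tracing.AddSource("CoreWCF.Primitives") // subscribe to CoreWCF's activity source
               .AddOtlpExporter();              // send traces to an OTLP endpoint
    });
```

The only CoreWCF-specific piece is the AddSource call; everything else is standard OpenTelemetry SDK configuration.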
Once configured, your CoreWCF service will automatically start emitting detailed traces for all service operations. Here's what the traces look like for our calculator service:

For comparison, here are the same service calls, traced with only the standard ASP.NET Core instrumentation (using AddAspNetCoreInstrumentation() instead of AddSource("CoreWCF.Primitives")):

As you can see, the CoreWCF-specific instrumentation provides much richer details about WCF operation execution, including operation names, binding information, and service-specific context that's not available with the standard instrumentation.
The source code for this example can be found under: Example
To start using OpenTelemetry with your CoreWCF services, add the "CoreWCF.Primitives" source to your tracing configuration.
This marks the beginning of OpenTelemetry support in CoreWCF. It is only the most basic implementation. If you discover areas that could benefit from additional telemetry or have suggestions for improvements, please create an issue on the CoreWCF GitHub repository.
The CoreWCF project released a security fix in March of 2024 related to connection initialization timeouts. The problem was caused by the ChannelInitializationTimeout not being used consistently. This setting can be found on the connection pool settings of the relevant transport binding element (e.g. TcpTransportBindingElement). This timeout limits how long a client is allowed to take to complete the initial connection handshake. This includes the client informing the server which endpoint it's connecting to, and completing any security upgrade of the connection, e.g. wrapping the communication with SslStream if using certificate authentication. The vulnerability was that early in the connection handshake, a read was made from the incoming connection without applying this timeout. This allowed a client to connect to a service and cause a socket to be indefinitely allocated, never needing to send any data.
The security fix that was released was the minimum change required to prevent a service from being vulnerable, but in creating the minimal fix it became clear that there was a mismatch between the api to configure this timeout and the implementation. If you have multiple endpoints listening on the same port with different timeouts configured, CoreWCF doesn't know which endpoint a client is connecting to until part way through the handshake. Each endpoint could have a different timeout configured. The security patch released uses the first NetTcp binding configured in the service for the timeout. This isn't too dissimilar to how WCF on .NET Framework configures the timeout. The difference with WCF is that after the first endpoint is configured to listen, subsequent endpoints using the same port are compared to see if their binding is compatible, and if it isn't, will throw an exception and fault the entire ServiceHost, whereas CoreWCF will keep using the configuration from the first endpoint.
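As described above, the timeout lives on the connection pool settings of the transport binding element. A hedged sketch of tightening it on an existing NetTcp configuration (the 5-second value is illustrative):

```csharp
// Pull the TCP transport binding element out of a NetTcpBinding via a
// CustomBinding and lower the handshake timeout; 5 seconds is an
// illustrative value, not a recommendation.
var binding = new NetTcpBinding();
var custom = new CustomBinding(binding);
var tcpTransport = custom.Elements.Find<TcpTransportBindingElement>();
tcpTransport.ConnectionPoolSettings.ChannelInitializationTimeout = TimeSpan.FromSeconds(5);
```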
The hosting model for CoreWCF is significantly different due to being built on top of ASP.NET Core. Setting up the listening port is done at the start of the host startup before CoreWCF begins initialization and is configured independently from the service endpoint which contains the binding and listen Uri. This enables a cleaner solution as there's a single place to configure any common properties such as the connection initialization timeout.
NetNamedPipe had already introduced a newer configuration API shape which is closer in design to the ASP.NET Core Kestrel configuration APIs. As NetNamedPipe, NetTcp, and UnixDomainSocket all share common code in NetFramingBase, having a shared base implementation will also result in simpler code.
The result of this is a new configuration API for using NetTcp. The existing APIs will continue to be available, as they've been reimplemented using the new API. All your existing code will continue to work, so there's no code change needed unless you wish to modify any of the connection properties exposed by the newer API.
Let's start with an existing CoreWCF service which uses NetTcp. Using ASP.NET Core's Minimal API, the bare-bones setup code would look like this:
using System.Net;
var builder = WebApplication.CreateBuilder();
builder.WebHost.UseNetTcp(IPAddress.IPv6Any, 8808);
builder.Services.AddServiceModelServices();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
serviceBuilder.AddService<Service>();
serviceBuilder.AddServiceEndpoint<Service, IService>(
new NetTcpBinding(), "/Service.svc");
});
app.Run();
You can keep using this code if you wish, or you can replace the call to UseNetTcp with the new API overload, which would look like this:
builder.WebHost.UseNetTcp((NetTcpOptions options) =>
{
options.Listen("net.tcp://localhost:8808");
});
Providing a URI instead of an IP address and port number is how a base address is passed to ServiceHost in WCF. The IP address used to listen for connections now also matches WCF behavior. If the hostname portion of the URI is an IP address, it listens on that IP. If you specify a DNS or machine name, then if the OS supports IPv6, it listens on IPAddress.IPv6Any, which listens on all IPv6 and IPv4 addresses. If the OS doesn't support IPv6, it listens on IPAddress.Any, which listens on all IPv4 addresses. The hostname "localhost" has no special treatment and will result in listening on all addresses. This again matches the behavior of WCF and hopefully minimizes any confusion when porting services from WCF to CoreWCF. If you wish to only accept connections on the loopback address, specify the loopback IP address as the hostname, i.e. options.Listen("net.tcp://127.0.0.1:8808").
If you wish to modify one of the properties of the listening port, then you can provide a delegate to the NetTcpOptions.Listen method to modify those properties. Here's an example:
builder.WebHost.UseNetTcp((NetTcpOptions options) =>
{
options.Listen("net.tcp://localhost:8808", (TcpListenOptions listenOptions) =>
{
listenOptions.ConnectionPoolSettings.ChannelInitializationTimeout =
TimeSpan.FromSeconds(5);
});
});
Both the NetTcpOptions and TcpListenOptions classes define the property IServiceProvider ApplicationServices { get; } to make it more convenient to retrieve values from DI. This can make scenarios such as storing your listen address in configuration easier to implement, especially when using a non-anonymous delegate for your configuration, where you might not be able to access anything outside of the passed-in parameters. There are also overloads of NetTcpOptions.Listen that take a Uri instead of a string.
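For example, the listen address could come from configuration resolved through ApplicationServices; the configuration key below is hypothetical:

```csharp
builder.WebHost.UseNetTcp((NetTcpOptions options) =>
{
    // Resolve IConfiguration from the host's DI container. "NetTcp:ListenUri"
    // is an illustrative configuration key, not a built-in one.
    var config = options.ApplicationServices.GetRequiredService<IConfiguration>();
    options.Listen(config["NetTcp:ListenUri"]);
});
```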
If you wish to have a second service listening on a different port, there are two changes you need to make. The first is a second call to NetTcpOptions.Listen to specify the second port you wish to listen on. The second is to use an overload of AddService&lt;TService&gt; which takes a delegate to configure ServiceOptions. In this configuration delegate you would configure the specific base address with the port number you are using for that service. The complete solution would look like this:
var builder = WebApplication.CreateBuilder();
builder.WebHost.UseNetTcp((NetTcpOptions options) =>
{
options.Listen("net.tcp://localhost:8808");
options.Listen("net.tcp://localhost:8809");
});
builder.Services.AddServiceModelServices();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
serviceBuilder.AddService<Service>((ServiceOptions serviceOptions) =>
{
serviceOptions.BaseAddresses.Clear();
serviceOptions.BaseAddresses.Add(new Uri("net.tcp://localhost:8808/"));
});
serviceBuilder.AddServiceEndpoint<Service, IService>(new NetTcpBinding(), "/Service.svc");
serviceBuilder.AddService<Service2>((ServiceOptions serviceOptions) =>
{
serviceOptions.BaseAddresses.Clear();
serviceOptions.BaseAddresses.Add(new Uri("net.tcp://localhost:8809/"));
});
serviceBuilder.AddServiceEndpoint<Service2, IService2>(new NetTcpBinding(), "/Service.svc");
});
app.Run();
The existing binding APIs for settings that now appear on TcpListenOptions have the ObsoleteAttribute applied, with a message pointing to the new API.
The CoreWCF 1.6.0 release introduced a new feature that allows applying a ServiceBehavior registered in DI to only one service when hosting multiple services in a single host.
This feature uses the IKeyedServiceProvider capabilities introduced in .NET 8 and requires the ServiceBehavior type to be registered using the IServiceCollection.AddKeyedSingleton&lt;TService, TImplementation&gt;(object? serviceKey) extension method.
In the Startup class below, the call to AddKeyedSingleton&lt;IServiceBehavior, MyServiceBehavior&gt;(typeof(ReverseEchoService)) applies MyServiceBehavior to ReverseEchoService only.
private class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<EchoService>();
services.AddSingleton<ReverseEchoService>();
services.AddKeyedSingleton<IServiceBehavior, MyServiceBehavior>(typeof(ReverseEchoService));
services.AddServiceModelServices();
}
public void Configure(IApplicationBuilder app)
{
app.UseServiceModel(builder =>
{
builder.AddService<EchoService>();
builder.AddServiceEndpoint<EchoService, IEchoService>(new BasicHttpBinding(), "/EchoService.svc");
builder.AddService<ReverseEchoService>();
builder.AddServiceEndpoint<ReverseEchoService, IReverseEchoService>(new BasicHttpBinding(), "/ReverseEchoService.svc");
});
}
}
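MyServiceBehavior itself is just an ordinary IServiceBehavior implementation; the release doesn't prescribe its contents, so the body below is purely illustrative:

```csharp
// A minimal sketch of a keyed service behavior; the class body is
// illustrative, not taken from the CoreWCF release.
public class MyServiceBehavior : IServiceBehavior
{
    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Because the behavior was registered with typeof(ReverseEchoService)
        // as its key, this runs for ReverseEchoService only.
    }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }
}
```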
The CoreWCF 1.6.0 release introduced a new feature that enables injecting incoming message properties as operation contract parameters.
This feature is implemented in the OperationContractParameterGenerator, which already supports injecting services registered in the DI container, or HttpContext, into operation contract parameters.
The InjectedAttribute.PropertyName is exposed to specify the message property to be injected.
namespace CoreWCF
{
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class InjectedAttribute : Attribute
{
public string PropertyName { get; set; }
}
}
A service exposing a NetTcpBinding can inject the RemoteEndpointMessageProperty.
public interface IHelloWorldService
{
string SayHello();
}
public class HelloWorldService : IHelloWorldService
{
public string SayHello([Injected(PropertyName = RemoteEndpointMessageProperty.Name)] RemoteEndpointMessageProperty remoteEndpointMessageProperty)
{
return $"Hello from {remoteEndpointMessageProperty.Address}:{remoteEndpointMessageProperty.Port}";
}
}
A service exposing a BasicHttpBinding can inject the HttpRequestMessageProperty.
public interface IHelloWorldService
{
string SayHello();
}
public class HelloWorldService : IHelloWorldService
{
public string SayHello([Injected(PropertyName = HttpRequestMessageProperty.Name)] HttpRequestMessageProperty httpRequestMessageProperty)
{
return $"Hello via HTTP {httpRequestMessageProperty.Method}";
}
}
When PropertyName is null or an empty string, a COREWCF_103 build error is triggered.
In addition, a KafkaMessageProperty has been added to the Kafka transport at both the client and service ends to provide control over the partition key and the headers attached to the message and transported through the Kafka topic.
namespace CoreWCF.ServiceModel.Channels;
public sealed class KafkaMessageProperty
{
public static readonly string Name = "CoreWCF.ServiceModel.Channels.KafkaMessageProperty";
public IList<KafkaMessageHeader> Headers { get; } = new List<KafkaMessageHeader>();
public byte[] PartitionKey { get; set; }
}
namespace CoreWCF.Channels;
public sealed class KafkaMessageProperty
{
private readonly IList<KafkaMessageHeader> _headers = new List<KafkaMessageHeader>();
public const string Name = "CoreWCF.Channels.KafkaMessageProperty";
internal KafkaMessageProperty(ConsumeResult<byte[], byte[]> consumeResult)
{
foreach (IHeader messageHeader in consumeResult.Message.Headers)
{
_headers.Add(new KafkaMessageHeader(messageHeader.Key, messageHeader.GetValueBytes()));
}
PartitionKey = consumeResult.Message.Key;
Topic = consumeResult.Topic;
}
public IReadOnlyCollection<KafkaMessageHeader> Headers => _headers as IReadOnlyCollection<KafkaMessageHeader>;
public ReadOnlyMemory<byte> PartitionKey { get; }
public string Topic { get; }
}
On the client side, the partition key and the headers can be provided using an OperationContextScope.
using (var scope = new System.ServiceModel.OperationContextScope((System.ServiceModel.IContextChannel)channel))
{
ServiceModel.Channels.KafkaMessageProperty outgoingProperty = new();
outgoingProperty.Headers.Add(new ServiceModel.Channels.KafkaMessageHeader("header1", Encoding.UTF8.GetBytes("header1Value")));
outgoingProperty.PartitionKey = Encoding.UTF8.GetBytes("key");
System.ServiceModel.OperationContext.Current.OutgoingMessageProperties[ServiceModel.Channels.KafkaMessageProperty.Name] =
outgoingProperty;
channel.DoSomething();
}
On the service side, the implementer can get these values back by injecting the KafkaMessageProperty.
public void DoSomething([Injected(PropertyName = KafkaMessageProperty.Name)] KafkaMessageProperty kafkaMessageProperty)
{
IReadOnlyCollection<KafkaMessageHeader> headers = kafkaMessageProperty.Headers;
ReadOnlyMemory<byte> partitionKey = kafkaMessageProperty.PartitionKey;
string topic = kafkaMessageProperty.Topic;
}
The CoreWCF 1.4 release introduced Apache Kafka transport support through the publication of two new NuGet packages, CoreWCF.Kafka and CoreWCF.Kafka.Client. The Kafka protocol implementation is provided by taking a dependency on the Confluent.Kafka NuGet package and the underlying librdkafka C/C++ library.
Both server and client packages expose a KafkaBinding that should be sufficient to configure security and transport to the broker.
However, in certain scenarios it can be useful to fine-tune properties of Confluent.Kafka / librdkafka. This can be achieved by creating a CustomBinding and pulling out the KafkaTransportBindingElement. This element exposes all the properties exposed by ConsumerConfig and ProducerConfig from Confluent.Kafka.
var binding = new KafkaBinding(KafkaDeliverySemantics.AtMostOnce)
{
AutoOffsetReset = AutoOffsetReset.Earliest,
GroupId = "my-group"
};
var customBinding = new CustomBinding(binding);
KafkaTransportBindingElement transport = customBinding.Elements.Find<KafkaTransportBindingElement>();
transport.Debug = "all";
The table below summarizes the mapping between Confluent.Kafka and KafkaBinding security modes.
| SecurityProtocol | SaslMechanism | ClientCertificate | CoreWCF KafkaBinding configuration |
|---|---|---|---|
| Plaintext | N/A | N/A | KafkaSecurityMode.None + KafkaCredentialType.None |
| Ssl | N/A | No | KafkaSecurityMode.Transport + KafkaCredentialType.None + requires configuring CaPem |
| Ssl | N/A | Yes | KafkaSecurityMode.Transport + KafkaCredentialType.SslKeyPairCertificate + requires configuring CaPem + providing a SslKeyPairCredential instance |
| SaslPlaintext | Gssapi | N/A | supported through custom binding |
| SaslPlaintext | Plain | N/A | KafkaSecurityMode.TransportCredentialOnly + KafkaCredentialType.SaslPlain + providing a SaslUsernamePasswordCredential instance |
| SaslPlaintext | ScramSha256 | N/A | supported through custom binding |
| SaslPlaintext | ScramSha512 | N/A | supported through custom binding |
| SaslPlaintext | OAuthBearer | N/A | supported through custom binding |
| SaslSsl | Gssapi | N/A | supported through custom binding |
| SaslSsl | Plain | N/A | KafkaSecurityMode.Transport + KafkaCredentialType.SaslPlain + requires configuring CaPem + providing a SaslUsernamePassword instance |
| SaslSsl | ScramSha256 | N/A | supported through custom binding |
| SaslSsl | ScramSha512 | N/A | supported through custom binding |
| SaslSsl | OAuthBearer | N/A | supported through custom binding |
First, configure CoreWCF to consume the topic my-topic with consumer group id my-consumer-group. To specify from which offset the consumer wants to start consuming messages, the AutoOffsetReset property should be provided.
var builder = WebApplication.CreateBuilder();
builder.Services.AddServiceModelServices().AddQueueTransport();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
serviceBuilder.AddService<Service>();
serviceBuilder.AddServiceEndpoint<Service, IService>(new CoreWCF.Kafka.KafkaBinding
{
AutoOffsetReset = AutoOffsetReset.Earliest,
DeliverySemantics = KafkaDeliverySemantics.AtMostOnce,
GroupId = "my-consumer-group"
}, $"net.kafka://localhost:9092/my-topic");
});
Then, configure a client to produce messages to topic my-topic.
CoreWCF.ServiceModel.Channels.KafkaBinding kafkaBinding = new();
var factory = new System.ServiceModel.ChannelFactory<IService>(kafkaBinding,
new System.ServiceModel.EndpointAddress(new Uri($"net.kafka://localhost:9092/my-topic")));
IService channel = factory.CreateChannel();
await channel.CallServiceAsync(name);
The delivery semantic can be configured at the binding level to AtLeastOnce or AtMostOnce.
var kafkaBinding = new CoreWCF.Kafka.KafkaBinding
{
DeliverySemantics = KafkaDeliverySemantics.AtLeastOnce
};
The error handling strategy can be configured at the binding level to Ignore or DeadLetterQueue.
When specifying DeadLetterQueue, DeadLetterQueueTopic should also be provided.
var kafkaBinding = new CoreWCF.Kafka.KafkaBinding
{
ErrorHandlingStrategy = KafkaErrorHandlingStrategy.DeadLetterQueue,
DeadLetterQueueTopic = "my-topic-DLQ",
};
AWS has published two NuGet packages, AWS.CoreWCF.Extensions (server side) and AWS.WCF.Extensions (client side), to enable WCF clients and CoreWCF services to communicate via AWS SQS Queue Transport. This is primarily to enable users to migrate services using MSMQ binding from on-premises to cloud. With AWS SQS transport binding, customers can send SOAP messages to AWS SQS via the WCF client and run CoreWCF services to receive and process those messages without changing any contract or service implementations.
Along with moving SOAP services to the AWS cloud, this transport provides the extensibility to attach callback methods that can trigger SNS notifications, AWS lambda invocations, etc. once a message is processed. By using this transport, customers can get all AWS SQS metrics out of the box. SQS metrics can also be used to cost-effectively scale your services utilizing the EC2 Autoscaling functionality. [ Scaling based on Amazon SQS ]
On the WCF client side, you add the AWS.WCF.Extensions package from NuGet. Once you have added the package, you need to instantiate a ChannelFactory for your contract using the AWS SQS binding.
var queueName = "your-aws-sqs-queue";
var sqsClient = new AmazonSQSClient("Your AWS AccessKey", "your aws secret key");
var sqsBinding = new AWS.WCF.Extensions.SQS.AwsSqsBinding(sqsClient, queueName);
var endpointAddress = new EndpointAddress(new Uri(sqsBinding.QueueUrl));
var factory = new ChannelFactory<YourFooService>(sqsBinding, endpointAddress);
var channel = factory.CreateChannel();
((System.ServiceModel.Channels.IChannel)channel).Open();
channel.InvokeFoo("Hello there");
To transmit messages, you must first identify the SQS queue and the credentials you will need. [ Setting up Amazon SQS ]
var sqsClient = new AmazonSQSClient("Your AWS AccessKey", "your aws secret key");
AmazonSQSClient can be initialized by providing your AWS credentials directly. If you prefer a configuration-based approach, here is the documentation. [ Using the IConfiguration interface ]
On the CoreWCF server side, you need to add the AWS.CoreWCF.Extensions package from NuGet. Then you can initialize the service as shown below.
public class Program
{
static void Main(String[] args)
{
var host = WebHost.CreateDefaultBuilder(Array.Empty<string>())
.UseStartup<Startup>()
.Build();
host.Run();
}
public class Startup
{
private static readonly string _queueName = "your-aws-sqs-queue";
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<LoggingService>();
services.AddServiceModelServices();
services.AddQueueTransport();
// AWS Configuration
AWSOptions option = new AWSOptions();
option.Credentials = new BasicAWSCredentials("your access key", "your secret key");
services.AddDefaultAWSOptions(option);
services.AddSQSClient(_queueName);
//end of AWS Configuration
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
var queueUrl = app.EnsureSqsQueue(_queueName);
app.UseServiceModel(services =>
{
services.AddService<LoggingService>();
services.AddServiceEndpoint<LoggingService, ILoggingService>(
new AwsSqsBinding(), queueUrl);
});
}
}
There are a few things to note here. In the ConfigureServices method, you are passing AWS credentials for the SQS queue via the AddDefaultAWSOptions method and calling the AddSQSClient extension method to initialize the SQS client.
In the Configure method, EnsureSqsQueue is called to ensure that the queue exists. If the queue doesn't already exist, it is created and the URL for the new queue is returned. If the queue already exists, the URL for the existing queue is returned. Optionally, you can pass several parameters to create the queue if needed via CreateQueueRequestBuilder.
We've added a property named DispatchCallbacksCollection to the AwsSqsBinding class. This property is of type IDispatchCallbacksCollection which is defined as follows.
public interface IDispatchCallbacksCollection
{
NotificationDelegate NotificationDelegateForSuccessfulDispatch { get; set; }
NotificationDelegate NotificationDelegateForFailedDispatch { get; set; }
}
public delegate Task NotificationDelegate(IServiceProvider services, QueueMessageContext context);
You can provide an implementation of IDispatchCallbacksCollection to customize the service behavior when a message has completed being processed. If a message is successfully dispatched with no exceptions thrown, the delegate NotificationDelegateForSuccessfulDispatch will be called. If there is a problem while dispatching a message, either in deserializing the message and selecting the operation, or because the service implementation throws, the delegate NotificationDelegateForFailedDispatch will be called. Some examples of what an implementation could do are notifying other consumers, triggering AWS Lambda functions, notifying SNS subscribers, etc.
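As a sketch, a simple implementation might just log the outcome of each dispatch; the class name and the use of ILogger here are illustrative assumptions, not part of the package's API:

```csharp
// Illustrative IDispatchCallbacksCollection implementation that logs
// dispatch outcomes; an SNS publish or Lambda invocation could be
// substituted in either delegate.
public class LoggingDispatchCallbacks : IDispatchCallbacksCollection
{
    public NotificationDelegate NotificationDelegateForSuccessfulDispatch { get; set; } =
        (services, context) =>
        {
            services.GetRequiredService<ILogger<LoggingDispatchCallbacks>>()
                .LogInformation("Message dispatched successfully");
            return Task.CompletedTask;
        };

    public NotificationDelegate NotificationDelegateForFailedDispatch { get; set; } =
        (services, context) =>
        {
            services.GetRequiredService<ILogger<LoggingDispatchCallbacks>>()
                .LogError("Message dispatch failed");
            return Task.CompletedTask;
        };
}
```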
By default, AwsSqsBinding uses a default concurrency of 1, meaning the client will fetch 10 messages in a batch and one thread will process them one at a time. If you increase the concurrency level to more than 1, your messages will be pulled from SQS in batches by a single thread, and processed concurrently on multiple threads.
There is a sample available here. Please try and provide feedback here.
Thanks to AWS .NET open source team for working on this project.
Note: the security mode enum was later renamed to UnixDomainSocketSecurityMode, with its value TransportCredentialOnly, and the credential enum value IdentityOnly was renamed to PosixIdentity. This change was made to reflect that PosixIdentity doesn't provide any transport encryption or integrity.

With the new v1.5.0-preview1 release, CoreWCF will have an additional binding using Unix Domain Sockets (UDS). We're providing a UDS-based transport for CoreWCF and the WCF Client as an alternative to NetNamedPipe that works on Linux. NetNamedPipe only works on Windows, while UDS is cross-platform and is supported on Linux and Windows.
We have added a new extension method for IHostBuilder called UseUnixDomainSocket. This extension method adds all the types to DI that are necessary for CoreWCF to configure ASP.NET Core for the UDS transport. We implement the UDS transport using a hosted service implementing the IHostedService interface. This hosted service creates its own instance of KestrelServer and configures it to use Unix domain sockets. We implemented it this way as a KestrelServer instance can only use a single transport type. This enables using Kestrel for TCP based communication for handling HTTP requests via the regular ASP.NET Core configuration mechanisms without conflicting with the need for CoreWCF to use Unix domain sockets.
In the initial release, UDS supports two security modes, None and Transport. This matches the capabilities of the NetNamedPipe binding as they are intended for use in the same scenarios. When using Transport security, we support four client credential types: Default, Windows, Certificate, and PosixIdentity.
The client credential type of PosixIdentity was specifically introduced for the Linux OS. As the name suggests, it provides the service with the Posix identity of the calling client, but it does not encrypt or sign the data that flows back and forth between the client and the service. As UDS is used for communication between processes on the same host, the communication can't be observed by a 3rd party host. If you wish to keep the communication private from other processes on the same machine, we recommend you use the Certificate client credential type which will secure the communication using TLS.
The server gets the user information for the process that owns the client end of the socket and populates the Claims information. This allows authorizing clients with a custom authorization manager without the need to manage any additional infrastructure such as certificate distribution. When a client makes a service call, the username is populated in a GenericIdentity wrapped in a ClaimsIdentity and is available from OperationContext.ServiceSecurityContext.PrimaryIdentity. The ClaimsIdentity instance will have the following claims added.
| Claim Type | Purpose/Value |
|---|---|
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/posixgroupid | The id of the group the client process belongs to |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/posixgroupname | The name of the group the client process belongs to |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/processid | The process id of the client process |
The client credential type Default uses a different credential type based on the OS. When running on Windows, it's equivalent to Windows and will authenticate the client the same way as NetNamedPipe authenticates. When running on Linux, it's equivalent to PosixIdentity.
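Those claims can then be used for authorization; here's a hedged sketch using a custom ServiceAuthorizationManager, where the subclass name and the "trusted-clients" group are illustrative:

```csharp
// Illustrative authorization manager that admits only callers whose process
// belongs to a specific POSIX group; the group name is a made-up example.
public class PosixGroupAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        var identity = operationContext.ServiceSecurityContext.PrimaryIdentity as ClaimsIdentity;
        return identity?.HasClaim(
            "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/posixgroupname",
            "trusted-clients") ?? false;
    }
}
```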
Here is how you initialize a service in your start up class:
var hostBuilder = Host.CreateDefaultBuilder(Array.Empty<string>());
hostBuilder.UseUnixDomainSocket(options =>
{
options.Listen(new Uri("net.uds://" + "yoursocketfilepath"));
});
hostBuilder.ConfigureServices(services => services.AddServiceModelServices());
IHost host = hostBuilder.Build();
CoreWCF.UnixDomainSocketBinding serverBinding = new CoreWCF.UnixDomainSocketBinding(UnixDomainSocketSecurityMode.TransportCredentialOnly);
host.UseServiceModel(builder =>
{
builder.AddService<Services.TestService>();
builder.AddServiceEndpoint<Services.TestService, ServiceContract.ITestService>(serverBinding, "net.uds://" + "yoursocketfilepath");
});
await host.RunAsync();
Once the service has been started, the client can be used as shown below:
var binding = new System.ServiceModel.UnixDomainSocketBinding(System.ServiceModel.UnixDomainSocketSecurityMode.TransportCredentialOnly);
binding.Security.Transport.ClientCredentialType = System.ServiceModel.UnixDomainSocketClientCredentialType.PosixIdentity;
var factory = new System.ServiceModel.ChannelFactory<ClientContract.ITestService>(binding, new System.ServiceModel.EndpointAddress(new Uri("net.uds://" + yoursocketfilepath)));
var channel = factory.CreateChannel();
((IChannel)channel).Open();
string result = channel.EchoString(testString);
The UDS client packages have been released as part of the WCF Client project at https://github.com/dotnet/wcf as a part of the 6.2 release. The client NuGet package is called System.ServiceModel.UnixDomainSocket.
Thanks to AWS (https://aws.amazon.com/blogs/opensource/category/programing-language/dot-net/) for supporting this project for the last 3 years.
We've just released the preview1 release of CoreWCF 1.4.0, and it comes with some new transports. We're adding named pipe support along with multiple queue based transports. We're initially releasing with MSMQ and RabbitMQ support, with Apache Kafka support coming in a future release. There is a lot of new code with this release, which is why we're releasing it as a preview. Different components will come out of preview at different times, and I'll talk about the milestones needed for a GA release of each of them.
The CoreWCF.NetNamedPipe package provides the NetNamedPipe binding that many will be familiar with from WCF. This package only works on Windows as it uses an indirect connection protocol involving named shared memory that isn't transferable to Linux. The named pipe transport shares most of its code with the NetTcp transport, the only real difference being how to create a connection and send/receive bytes. The connection open handshake and message framing are identical for these two transports. That common code has been moved from the NetTcp package into the NetFramingBase package and is a dependency for both transports. This makes it a lot easier to add more connection-based transports in the future. We plan to add Unix domain socket support to provide an equivalent to named pipe support that can be used on Linux. Recent releases of Windows also support Unix domain sockets, so we'll be able to release Unix domain socket support cross-platform.
Adding NetNamedPipe support to an app is similar to how you add NetTcp support. The API pattern has evolved a bit to make dealing with base paths a bit easier.
var builder = WebApplication.CreateBuilder();
builder.WebHost.UseNetNamedPipe(options =>
{
    options.Listen("net.pipe://localhost/MyService.svc");
});
builder.Services.AddServiceModelServices();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
    serviceBuilder.AddService<Service>();
    serviceBuilder.AddServiceEndpoint<Service, IService>(new NetNamedPipeBinding(), "/netpipe");
});
app.Run();
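For completeness, here's a minimal sketch of what a client for this service might look like using the WCF client's NetNamedPipeBinding. This is an illustration, not part of the release notes: it assumes the IService contract is shared with the service project, and that the full endpoint address is the base path plus the relative endpoint path.

```csharp
// Client-side sketch; assumes IService is shared with the service project
// and the WCF client named pipe support is referenced.
using System.ServiceModel;

var binding = new NetNamedPipeBinding();
// Assumed address: listen base path + "/netpipe" relative endpoint path
var address = new EndpointAddress("net.pipe://localhost/MyService.svc/netpipe");
var factory = new ChannelFactory<IService>(binding, address);
var channel = factory.CreateChannel();
// Call operations on channel here, then clean up.
((IClientChannel)channel).Close();
factory.Close();
```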
There are overloads of the Listen method which take another delegate to configure settings that used to exist on NetNamedPipeBinding or NamedPipeTransportBindingElement, but make more sense as listen options when configuring the WebHost. This API pattern matches how you configure Kestrel. An application which uses that might look like this.
var builder = WebApplication.CreateBuilder();
builder.WebHost.UseNetNamedPipe(options =>
{
    options.Listen(new Uri("net.pipe://localhost/service.svc/"), listenOptions =>
    {
        // Set connection buffer size to 64K
        listenOptions.ConnectionBufferSize = 64 * 1024;
    });
});
There are a couple of items that need to be completed before named pipe support comes out of preview. The biggest is that there's currently no WSDL support. The second is that when stopping the WebApplication, the channel close doesn't happen cleanly: instead of informing the client that the channel is closing, the underlying named pipe connection is abruptly closed.
When discussing bringing MSMQ support in a CoreWCF issue, I mentioned I'd like to make something generic so that CoreWCF could support multiple queue transport protocols. There was a lot of positive response to this idea, so I wrote up a brief description of how that could look. Biroj Nayak (@birojnayak) from Amazon AWS took this description and, after a few discussions, wrote a detailed design document. Dmitry Maslov (@Ximik87) submitted a PR with an initial implementation of this base queue support along with an MSMQ implementation built on top of it. Jon Louie (@jonlouie) from Amazon AWS implemented the RabbitMQ transport on top of this. Biroj helped iterate the base queue implementation, which needed some changes to accommodate the different API patterns of the RabbitMQ client library. There are two other queue transports coming soon. The first will be for Apache Kafka and is being contributed by Guillaume Delahaye (@g7ed6e). This will be part of and released by the CoreWCF project. The second will be for Azure Queue Storage, which is being developed by Microsoft and will be released by them.
To use any of the queue transports, you need to add the generic queue support to your application services.
var builder = WebApplication.CreateBuilder();
builder.Services.AddServiceModelServices()
.AddQueueTransport();
The CoreWCF project has a requirement to only depend on third-party packages which are backed by an open source foundation with a charter that ensures continued support of libraries if the maintainers step away from the project. A well known example in the .NET ecosystem is the .NET Foundation. This is important to ensure that any potential security issues can be fixed in a timely manner. The current MSMQ implementation depends on a community-owned release of a fork of the .NET Framework System.Messaging libraries, which doesn't meet this requirement. The CoreWCF.MSMQ package will remain in a pre-release state until we have resolved this. We would encourage you to use the CoreWCF MSMQ transport as a stepping stone to moving to a more modern queue transport.
To use the MSMQ transport, in addition to adding the queue transport support, you also need to add MSMQ support.
var builder = WebApplication.CreateBuilder();
builder.Services.AddServiceModelServices()
    .AddQueueTransport()
    .AddServiceModelMsmqSupport();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
    serviceBuilder.AddService<Service>();
    serviceBuilder.AddServiceEndpoint<Service, IService>(new NetMsmqBinding(), "net.msmq://localhost/private/myqueue");
});
app.Run();
The RabbitMQ support has separate service and client packages, CoreWCF.RabbitMQ and CoreWCF.RabbitMQ.Client respectively. While the CoreWCF.RabbitMQ service package supports netstandard2.0, the client-only CoreWCF.RabbitMQ.Client package supports .NET 6.0 and later. This is due to requiring the latest (as of writing, still in pre-release) WCF client packages, which support .NET 6.0 or later. These packages will come out of preview once we've had enough feedback/usage to have confidence that we have the correct APIs for configuration and that developers are able to use these packages successfully.
Here's the code you need to use the service binding.
var builder = WebApplication.CreateBuilder();
builder.Services.AddServiceModelServices()
    .AddQueueTransport();
var app = builder.Build();
app.UseServiceModel(serviceBuilder =>
{
    var uri = new Uri("net.amqps://localhost:5672/amq.direct/corewcf-classic-queue#corewcf-classic-key");
    var sslOption = new SslOption
    {
        ServerName = uri.Host,
        Enabled = true
    };
    // Replace with actual credentials for connecting to RabbitMQ host
    var credentials = new NetworkCredential(ConnectionFactory.DefaultUser, ConnectionFactory.DefaultPass);
    serviceBuilder.AddService<Service>();
    serviceBuilder.AddServiceEndpoint<Service, IService>(
        new RabbitMqBinding
        {
            SslOption = sslOption,
            Credentials = credentials,
            QueueConfiguration = new ClassicQueueConfiguration()
        },
        uri);
});
app.Run();
For the queue configuration, we have two types: ClassicQueueConfiguration and QuorumQueueConfiguration. These configure whether the queue is a classic mirrored queue or the newer quorum queue. More information can be found in the RabbitMQ documentation. There are properties on these classes to configure various aspects of the queue, such as whether it is durable.
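As a sketch, switching the earlier service endpoint to a quorum queue only requires swapping the queue configuration type (other binding settings omitted for brevity):

```csharp
// Use the newer quorum queue type instead of a classic mirrored queue.
var binding = new RabbitMqBinding
{
    QueueConfiguration = new QuorumQueueConfiguration()
};
```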
There is a mapping of the Uri passed to the endpoint and the queue options. The format of the Uri is SCHEME://HOSTNAME:PORT/EXCHANGE/QUEUE_NAME#ROUTING_KEY. The scheme can be one of two values, net.amqp or net.amqps, with the latter using a TLS secured connection to the AMQP server. The exchange and routing key are optional. If you aren't using an exchange and only have a queue name, ensure the Uri does NOT end in a slash. For example, the Uri net.amqp://server/queuename will connect to the queue called queuename and won't use an exchange. You can provide an optional routing key as a Uri fragment.
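To make the mapping concrete, here are a few illustrative URIs (hosts, queue names, and routing keys are placeholders):

```
net.amqp://server/queuename
    -> unencrypted AMQP, no exchange, queue "queuename", no routing key
net.amqps://server/amq.direct/orders
    -> TLS connection, exchange "amq.direct", queue "orders", no routing key
net.amqps://server:5672/amq.direct/orders#orders-key
    -> TLS connection on port 5672, exchange "amq.direct", queue "orders",
       routing key "orders-key"
```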
For the code example above, we're connecting to an AMQP server running on port 5672, using TLS. The exchange is amq.direct, and the queue name is corewcf-classic-queue. It's using the routing key corewcf-classic-key.
This is how you would use the client binding with the WCF Client.
var uri = new Uri("net.amqps://localhost:5672/amq.direct/corewcf-classic-queue#corewcf-classic-key");
var sslOption = new SslOption
{
    ServerName = uri.Host,
    Enabled = true
};
var endpointAddress = new System.ServiceModel.EndpointAddress(uri);
var rabbitMqBinding = new ServiceModel.Channels.RabbitMqBinding
{
    SslOption = sslOption
};
var factory = new ChannelFactory<IService>(rabbitMqBinding, endpointAddress);
factory.Credentials.UserName.UserName = ConnectionFactory.DefaultUser;
factory.Credentials.UserName.Password = ConnectionFactory.DefaultPass;
var channel = factory.CreateChannel();
((System.ServiceModel.Channels.IChannel)channel).Open();
await channel.CallServiceAsync();
There's one last minor feature that's shipping in 1.4.0 that many have been asking for. With this release you can now host a NetTcp service in the same WebHost that's serving HTTP requests with an IServer other than Kestrel. This means you can host a NetTcp service in IIS without having to run a second WebHost instance in the same process. This doesn't bring support for NetTcp activation or port sharing yet.
Please give this release a try and leave any feedback in the discussion linked from the release notes.
The latest release of CoreWCF brings support for ASP.NET Core Authorization, allowing developers to use ASP.NET Core's built-in authentication middleware, such as Microsoft.AspNetCore.Authentication.JwtBearer, and apply appropriate authorization policies.
When working with ASP.NET Core MVC, developers typically use [Authorize] and [AllowAnonymous] to decorate actions that require specific authorizations.
To enable a seamless developer experience, we brought the ability to decorate an OperationContract implementation with the ASP.NET Core Authorize attribute. However, we introduced the limitations below to encourage developers to embrace the flexible policy-based model built on IAuthorizationRequirement.
- The AuthenticationSchemes property is not supported and will trigger a build warning COREWCF_0201.
- The Roles property is not supported and will trigger a build warning COREWCF_0202.
- We did not bring support for the [AllowAnonymous] attribute, as we believe a strong interface segregation between anonymous and secured operations should be set. Moreover, supporting this attribute would imply delaying the authentication step in the pipeline, leading to potential DoS vulnerabilities. Decorating an OperationContract implementation with [AllowAnonymous] will have no effect and will trigger a build warning COREWCF_0200.
To set up this feature in your CoreWCF application, follow the steps below. I'm assuming that we want to require clients to authenticate using a JWT Bearer token issued by an authorization server at https://authorization-server-uri, that the service should be protected by the audience my-audience, and that two policies should be defined: one requiring a scope read and another requiring a scope write.
<PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="6.0.12" />
Note: Due to this issue, you have to explicitly reference the latest version of Microsoft.IdentityModel.Protocols.OpenIdConnect after installing Microsoft.AspNetCore.Authentication.JwtBearer.
<PackageReference Include="Microsoft.IdentityModel.Protocols.OpenIdConnect" Version="6.25.1" />
Register the JWT Bearer authentication handler and make it the default AuthenticationScheme. (Internally, CoreWCF calls HttpContext.AuthenticateAsync() with the default registered authentication scheme.)
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(JwtBearerDefaults.AuthenticationScheme, options =>
    {
        options.Authority = "https://authorization-server-uri";
        options.Audience = "my-audience";
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            RequireSignedTokens = true,
        };
    });
services.AddAuthorization(options =>
{
    options.DefaultPolicy = new AuthorizationPolicyBuilder(JwtBearerDefaults.AuthenticationScheme)
        .RequireAuthenticatedUser()
        .RequireClaim("scope", "read")
        .Build();
    options.AddPolicy("WritePolicy", new AuthorizationPolicyBuilder(JwtBearerDefaults.AuthenticationScheme)
        .RequireAuthenticatedUser()
        .RequireClaim("scope", "write")
        .Build());
});
Set the binding's ClientCredentialType to HttpClientCredentialType.InheritedFromHost.
app.UseServiceModel(builder =>
{
    builder.AddService<SecuredService>();
    builder.AddServiceEndpoint<SecuredService, ISecuredService>(new BasicHttpBinding
    {
        Security = new BasicHttpSecurity
        {
            Mode = BasicHttpSecurityMode.Transport,
            Transport = new HttpTransportSecurity
            {
                ClientCredentialType = HttpClientCredentialType.InheritedFromHost
            }
        }
    }, "/BasicWcfService/basichttp.svc");
});
[ServiceContract]
public interface ISecuredService
{
    [OperationContract]
    string ReadOperation();

    [OperationContract]
    void WriteOperation(string value);
}

public class SecuredService : ISecuredService
{
    [Authorize]
    public string ReadOperation() => "Hello world";

    [Authorize(Policy = "WritePolicy")]
    public void WriteOperation(string value) { }
}
ASP.NET Core Authorization policy support is implemented in the HTTP-based bindings:
- BasicHttpBinding
- WSHttpBinding
- WebHttpBinding

ASP.NET Core 3.0 introduced a FallbackPolicy. This authorization policy is executed when no policy is configured for a given endpoint. As CoreWCF does not expose its endpoints to the endpoint routing infrastructure, this policy may be executed depending on the configured request pipeline. To avoid the FallbackPolicy being executed, the call to the CoreWCF middleware (i.e. UseServiceModel(...)) should occur before the call to the authorization middleware (i.e. UseAuthorization(...)).
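A minimal sketch of this middleware ordering (service and endpoint registration omitted for brevity):

```csharp
var app = builder.Build();
app.UseAuthentication();
// Register CoreWCF before UseAuthorization so the FallbackPolicy
// is not applied to CoreWCF endpoints.
app.UseServiceModel(serviceBuilder =>
{
    // Endpoint configuration as shown earlier.
});
app.UseAuthorization();
app.Run();
```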
There's an important difference regarding when authorization evaluation occurs between ServiceAuthorizationManager usage and ASP.NET Core Authorization usage.
When using ASP.NET Core Authorization, the steps below will be executed before authorization, which was not the case when using ServiceAuthorizationManager.
- IDispatchMessageInspector.AfterReceiveRequest

Another impact is that authorization will now run on a captured SynchronizationContext. This can affect CoreWCF services hosted on a UI thread (WPF or WinForms apps).
ServiceAuthorizationManager

Having ClientCredentialType set to InheritedFromHost disables the execution of any authorization logic implemented in ServiceAuthorizationManager.
These two classes now have async versions of the virtual methods which you can override. The existing synchronous methods have been deprecated using the Obsolete attribute and will cause a build warning if you override them. If you are overriding one of the existing synchronous virtual methods, your code will continue to function as it always has, and will continue to do so for all future 1.x releases. The synchronous variants of the methods will likely be removed in a future 2.x release. You can safely suppress the build warning until you have migrated your implementation to the async methods.
The CoreWCF\Samples repo provides samples.
CoreWCF provides flexibility around authentication and authorization, allowing implementation of more up-to-date security standards and programming patterns well known to developers.
I have a personal passion for WCF as it solves many difficult problems in interesting and often complicated ways and I enjoy solving interesting and complicated problems. I was asked if I wanted to personally own the project. I was hesitant at first as I was worried that I would be personally committing to porting most of the code base on my own. Shortly after the project began development in the open I was contacted by Biroj Nayak from Amazon AWS asking how they could help contribute to Core WCF. They had their own customers asking what could be done to enable porting their WCF services to the cloud. This started a multi year collaboration with Amazon where they ported some very large and significant functionality from WCF to Core WCF. Rebuilding the channel layer on top of ASP.NET Core requires a significant refactoring of much of the code base and some features involved a lot of code that needed to be committed in one large piece. Biroj took on the multi-month task of porting some of the larger missing features to CoreWCF.
After a while, we started getting some smaller contributions from the community. Adding support for narrow scenarios which hadn't been included, or fixing an edge case which the new code didn't handle. As time has gone on, the size and number of community contributions has gradually and continuously increased. We've seen more companies contribute developer resources to porting significant features. My worry about being the only person working on porting WCF to .NET Core has been completely dispelled. We recently hit a milestone that I have contributed less than half the commits to the Core WCF repo. I now spend a large part of the time I have available for Core WCF reviewing others code and taking more of an architect role to enable others to contribute. We'd like to express a big thank you to all those who have contributed to this project to make it a success.
Besides naming your variables, one of the toughest questions in software development is when is it ready for release? If we waited for feature parity with WCF, we might never get to v1 as some features have missing dependencies. We decided that we would be willing to apply the v1 label when Core WCF is "useful" for use in production by a large number of WCF customers. Being useful is a very vague and blurry bar so we had to decide what that meant. What we came up with is being able to use SOAP with the HTTP transports, having a sessionful transport, and being able to generate the WSDL for a service. I had already implemented NetTcp on top of the connection handler feature of ASP.NET Core so supporting a sessionful transport was covered. The major thing left to implement was WSDL support. Along the way the community decided to contribute support for TransportWithMessageCredentials, WS-Federation, Configuration, WebHttpBinding for RESTful services, and many other smaller features including some which don't even exist on WCF. With the recent completion of WSDL generation, we're now at a point where we believe Core WCF should be useful to many developers using WCF.
There are still some notable features missing. For example, we don't have tracing support yet, and you need to configure HTTP authentication in ASP.NET Core and not via the binding. If this is your first time looking at using CoreWCF, I recommend reading the prior blog posts as they contain many answers on how to port your service to Core WCF.
Missing features fall into two categories.
When the implementation is there but not public, it's because we don't have tests for it yet. Making an API public without having tested that there are no problems with any changes made in the port will lead to a lot of noise and bad experiences. If you discover there's an extensibility point that you need which is internal, the fastest way to get it supported is to submit a PR making it public along with some tests verifying that the extensibility point is working as expected.
If the feature you need is completely absent, you have two options:
Another alternative might be to modify your service to use a different feature which provides the same capabilities. For example, switching to NetTcpBinding if you are currently using NetNamedPipeBinding.
The following new features have been added since Core WCF 0.4.0 was released:
There are 3 new blog posts talking about some of these new features:
-WebHttpBinding support
-WSDL support
-WS-Federation support
With the v1.0.0 release of Core WCF, Microsoft is providing support. The current support lifecycle can be found at http://aka.ms/corewcf/support. Microsoft has published a blog post explaining the support policy for Core WCF.