ManagedCode.Orleans.SignalR.Server 9.0.0
                    dotnet add package ManagedCode.Orleans.SignalR.Server --version 9.0.0
NuGet\Install-Package ManagedCode.Orleans.SignalR.Server -Version 9.0.0
<PackageReference Include="ManagedCode.Orleans.SignalR.Server" Version="9.0.0" />
<PackageVersion Include="ManagedCode.Orleans.SignalR.Server" Version="9.0.0" />
<PackageReference Include="ManagedCode.Orleans.SignalR.Server" />
paket add ManagedCode.Orleans.SignalR.Server --version 9.0.0
#r "nuget: ManagedCode.Orleans.SignalR.Server, 9.0.0"
#:package ManagedCode.Orleans.SignalR.Server@9.0.0
#addin nuget:?package=ManagedCode.Orleans.SignalR.Server&version=9.0.0
#tool nuget:?package=ManagedCode.Orleans.SignalR.Server&version=9.0.0
Orleans.SignalR
Cloud-native SignalR backplane powered by Microsoft Orleans virtual actors. Orleans.SignalR lets you scale ASP.NET Core SignalR hubs horizontally without surrendering real-time guarantees. Connections, groups, and invocations are coordinated and fanned out through Orleans grains, giving you deterministic delivery, automatic resilience, and pluggable persistence.
Highlights
- Orleans-first SignalR lifetime manager with transparent multi-silo fan-out.
- Connection and group partitioning built on consistent hashing and dynamic scaling hints.
- Full parity with SignalR primitives (Clients.All, Groups.AddToGroupAsync, user targeting, client invocations, etc.).
- Works with any Orleans persistence provider; ships with memory storage defaults for quick starts.
- Tested under heavy load with automated stress and partitioning suites.
Packages
| Package | Description |
|---|---|
| ManagedCode.Orleans.SignalR.Core | Core abstractions, options, helper utilities, hub lifetime manager. |
| ManagedCode.Orleans.SignalR.Server | Orleans grains (coordinators, partitions, groups, users, invocations) for silo hosts. |
| ManagedCode.Orleans.SignalR.Client | Client extensions to plug Orleans into SignalR with no ceremony. |
Quick Start
1. Install NuGet packages
Install-Package ManagedCode.Orleans.SignalR.Server
Install-Package ManagedCode.Orleans.SignalR.Client
2. Configure your Orleans silo
using ManagedCode.Orleans.SignalR.Core.Config;
var builder = WebApplication.CreateBuilder(args);
builder.Host.UseOrleans(silo =>
{
    silo.ConfigureOrleansSignalR();
    silo.AddMemoryGrainStorage(OrleansSignalROptions.OrleansSignalRStorage);
});
builder.Services
    .AddSignalR()
    .AddOrleans(options =>
    {
        options.ConnectionPartitionCount = 4;
        options.GroupPartitionCount = 4;
    });
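With the web application builder above, the hub endpoint is mapped as usual. The hub class and route below are illustrative placeholders, not types shipped by the package:
var app = builder.Build();
app.MapHub<WeatherHub>("/weather"); // standard ASP.NET Core SignalR endpoint routing
app.Run();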
3. Configure your Orleans client
var clientHost = Host.CreateDefaultBuilder(args)
    .UseOrleansClient(client => client.UseLocalhostClustering())
    .ConfigureServices(services =>
    {
        services
            .AddSignalR()
            .AddOrleans();
    })
    .Build();
4. Use typed hub context inside grains
public class WeatherGrain : Grain, IWeatherGrain
{
    private readonly IOrleansHubContext<WeatherHub, IWeatherClient> _hub;
    public WeatherGrain(IOrleansHubContext<WeatherHub, IWeatherClient> hub) => _hub = hub;
    public Task BroadcastAsync(string forecast)
    {
        return _hub.Clients.All.ReceiveForecast(forecast);
    }
}
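For reference, a matching typed hub and client contract for the example above could look like this. The names follow the sample and are not part of the package:
// Hub<T> and the client contract come from Microsoft.AspNetCore.SignalR.
public interface IWeatherClient
{
    Task ReceiveForecast(string forecast);
}

public class WeatherHub : Hub<IWeatherClient>
{
}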
Architecture Overview
At the heart of Orleans.SignalR sits OrleansHubLifetimeManager<THub>. It replaces the default SignalR lifetime manager and orchestrates fan-out through Orleans grains when hubs interact with connections, groups, and users.
High-Level Flow
flowchart LR
    hub[ASP.NET Core SignalR Hub]
    manager[OrleansHubLifetimeManager<T>]
    subgraph Orleans
        grains["Orleans grain topology<br/>(coordinators & partitions)"]
    end
    clients[Connected clients]
    hub --> manager --> grains --> clients
Connection Fan-Out Pipeline
- Connection observed — when a client connects, the lifetime manager creates a hub subscription (ISignalRObserver).
- Coordinator assignment — SignalRConnectionCoordinatorGrain maps the connection to a partition via consistent hashing.
- Partition grain — SignalRConnectionPartitionGrain stores the observer key and relays messages to the client.
- Dynamic scaling — partition counts expand to powers of two when tracked connections exceed ConnectionsPerPartitionHint. When the load drops to zero, the count resets to the configured base (a sketch of this heuristic follows this list).
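The scaling rule can be sketched in a few lines of C#. This is only an illustration of the documented behaviour; the library's actual logic lives in PartitionHelper and the coordinator grains:
// Illustrative sketch: next partition count for a given connection load.
static int NextPartitionCount(int trackedConnections, int basePartitions, int connectionsPerPartitionHint)
{
    // Reset semantics: an idle hub falls back to the configured base.
    if (trackedConnections == 0)
        return basePartitions;

    // Partitions demanded by the hint, never below the configured base.
    var needed = Math.Max(basePartitions,
        (int)Math.Ceiling(trackedConnections / (double)connectionsPerPartitionHint));

    // Round up to the next power of two so the hash ring stays evenly balanced.
    var powerOfTwo = 1;
    while (powerOfTwo < needed)
        powerOfTwo <<= 1;

    return powerOfTwo;
}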
flowchart TD
    connect([Client connect / disconnect])
    coordinator{SignalRConnectionCoordinator<br/>consistent hashing}
    partitions[["SignalRConnectionPartition(s)"]]
    observers[[Observer notifications]]
    clients[[Connected clients]]
    scaling[(Adjust partition count<br/>via hints)]
    connect --> coordinator --> partitions --> observers --> clients
    coordinator -. dynamic scaling .-> scaling -.-> partitions
Group Fan-Out Pipeline
- Group coordinator — SignalRGroupCoordinatorGrain tracks group names and membership counts.
- Group partition assignment — groups are consistently hashed to SignalRGroupPartitionGrain instances using the same power-of-two heuristic (GroupPartitionCount + GroupsPerPartitionHint).
- Partition state — each partition stores bidirectional maps of connections-to-groups and group-to-observer links, enabling efficient SendToGroup, SendToGroups, and exclusions (the hub-side view is shown after this list).
- Automatic cleanup — when a group empties, the coordinator is notified so partitions can release unused entries and (if idle) shrink back to the base partition count.
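On the hub side these remain the ordinary SignalR group APIs; Orleans.SignalR routes them through the group coordinator and partition grains. The ChatHub below is a hypothetical example, not a type shipped by the package:
public class ChatHub : Hub
{
    public async Task JoinRoom(string room)
    {
        // Membership is recorded by the group coordinator/partition grains.
        await Groups.AddToGroupAsync(Context.ConnectionId, room);
        await Clients.Group(room).SendAsync("UserJoined", Context.ConnectionId);
    }

    public Task LeaveRoom(string room) =>
        Groups.RemoveFromGroupAsync(Context.ConnectionId, room);
}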
 
flowchart TD
    action([Group operation])
    groupCoord{SignalRGroupCoordinator<br/>assign hash partition}
    groupPartition[["SignalRGroupPartition<br/>(stateful fan-out)"]]
    membership[("Membership maps<br/>(connection <-> group)")]
    cleanup([Notify coordinator when empty])
    action --> groupCoord --> groupPartition
    groupPartition --> membership --> groupPartition
    membership --> cleanup -.-> groupCoord
Connection, Group, and User Grains
- SignalRConnectionHolderGrain and SignalRGroupGrain remain as non-partitioned fallbacks when partitioning is disabled (ConnectionPartitionCount = 1 or GroupPartitionCount = 1).
- SignalRUserGrain aggregates all connections for a given user identifier and issues fan-out when you target Clients.User (see the example after this list).
- SignalRInvocationGrain handles client-to-server invocation plumbing (Clients.Client(connectionId).InvokeCoreAsync(...)), ensuring tasks run off the activation thread.
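Targeting a user from a grain is the standard SignalR call; SignalRUserGrain resolves every connection for that identifier behind the scenes. The grain and hub names here are placeholders:
public class NotificationGrain : Grain, INotificationGrain
{
    private readonly IHubContext<NotificationHub> _hub;

    public NotificationGrain(IHubContext<NotificationHub> hub) => _hub = hub;

    public Task NotifyUserAsync(string userId, string message) =>
        // Fans out to every connection registered for this user identifier.
        _hub.Clients.User(userId).SendAsync("Notify", message);
}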
Partitioning Strategy
- Consistent hashing — connection IDs and group names are hashed onto a ring with virtual nodes (PartitionHelper). This keeps existing connections stable when the partition set expands (a simplified ring is sketched after this list).
- Dynamic sizing — coordinators compute the optimal partition count as the next power of two above expected / hint, ensuring evenly balanced partitions for millions of connections or groups.
- Reset semantics — when no entries remain, coordinators revert to the configured ConnectionPartitionCount/GroupPartitionCount base, so idle hubs do not hold unnecessary grains.
- Observer fan-out — partition grains rely on Orleans ObserverManager to multiplex message delivery to every connected client within that partition.
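The ring can be illustrated with a small stand-in class. The hash function, key layout, and structure here are assumptions made for clarity; PartitionHelper's real implementation may differ:
// Simplified consistent-hash ring (a stand-in for PartitionHelper, not its real code).
// Each physical partition contributes many virtual nodes so keys spread evenly and
// existing keys mostly keep their partition when the ring grows.
public sealed class HashRing
{
    private readonly ulong[] _points;          // sorted hash points on the ring
    private readonly int[] _partitionByPoint;  // partition owning each point

    public HashRing(int partitionCount, int virtualNodesPerPartition = 150)
    {
        var map = new SortedDictionary<ulong, int>();
        for (var partition = 0; partition < partitionCount; partition++)
            for (var vnode = 0; vnode < virtualNodesPerPartition; vnode++)
                map[Hash($"{partition}:{vnode}")] = partition;

        _points = map.Keys.ToArray();
        _partitionByPoint = map.Values.ToArray();
    }

    public int GetPartition(string key)
    {
        var hash = Hash(key);
        // First virtual node clockwise from the key's hash; wrap around past the end.
        var index = Array.BinarySearch(_points, hash);
        if (index < 0) index = ~index;
        if (index == _points.Length) index = 0;
        return _partitionByPoint[index];
    }

    private static ulong Hash(string value) =>
        System.IO.Hashing.XxHash64.HashToUInt64(System.Text.Encoding.UTF8.GetBytes(value));
}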
How Connection Partitioning Works
- Hub lifetime manager routing — when a client connects, OrleansHubLifetimeManager<THub> asks the SignalRConnectionCoordinatorGrain for a partition id and registers the observer with the corresponding SignalRConnectionPartitionGrain. When the client disconnects, the lifetime manager removes the observer and notifies the coordinator so the mapping can be cleaned up.
- Coordinator bookkeeping — SignalRConnectionCoordinatorGrain keeps an in-memory dictionary of connection ids to partition ids. It calls PartitionHelper.GetPartitionId to pick a slot, and EnsurePartitionCapacity grows the partition ring to the next power of two when tracked connections exceed ConnectionsPerPartitionHint. If all connections vanish, it resets to the configured ConnectionPartitionCount.
- Consistent hash ring — PartitionHelper caches hash rings with 150 virtual nodes per physical partition to spread connections evenly. GetOptimalPartitionCount and GetOptimalGroupPartitionCount implement the "power of two" heuristic used by both coordinators.
- Partition grain fan-out — each SignalRConnectionPartitionGrain persists the connection → observer mapping and uses Orleans ObserverManager to broadcast to subscribers, including SendToPartition, SendToPartitionExcept, and per-connection delivery. On deactivation it clears or writes state based on whether any observers remain.
Connection Partitions in Depth
- What they are — a connection partition is just a regular Orleans grain (SignalRConnectionPartitionGrain) whose primary key composes the hub identity with a partition number. NameHelperGenerator.GetConnectionPartitionGrain hashes the hub name with XxHash64 and folds in the partition id to produce a long key, so every hub keeps a deterministic set of partition activations (a rough sketch of this derivation follows this list).
- Where they live — all connection-level grains (coordinator + partitions) are placed in the ManagedCode.Orleans.SignalR.Server assembly. The coordinator grain is keyed by the hub name (typeof(THub).FullName cleaned to be storage-safe). Partition grains use the same hub key plus the partition number; Orleans activates them on demand and persists the ConnectionState record in the storage provider registered under OrleansSignalROptions.OrleansSignalRStorage.
- How connections land there — when a new client connects, the lifetime manager creates an ISignalRObserver subscription and calls AddConnection on the chosen partition. The partition stores connectionId -> observerKey in persistent state and subscribes the observer with ObserverManager, so later broadcasts simply loop through observers and push HubMessage payloads.
- Scaling behaviour — the coordinator maintains a dictionary of active connections. Before assigning a partition, it calls EnsurePartitionCapacity, which compares the current count against the hint and grows the partition ring to the next power of two if necessary. Existing connections keep their partition id thanks to the dictionary; only newly seen connection ids are distributed across the expanded ring. When the number of tracked connections drops to zero, _currentPartitionCount shrinks back to the configured base, so idle hubs stop consuming extra partition activations.
- Sending messages — hub calls such as Clients.All or Clients.Client(connectionId) are routed back through the coordinator. It looks up the partition, resolves the grain key via NameHelperGenerator, and invokes SendToPartition, SendToPartitionExcept, or SendToConnection. Each partition grain executes the fan-out on the Orleans scheduler using ObserverManager.Notify, ensuring delivery stays responsive even when thousands of clients share a partition.
- Fallback path — if you set ConnectionPartitionCount = 1, the system bypasses the coordinator entirely and relies on SignalRConnectionHolderGrain, which keeps the single connection list without the hash ring. This is useful for small deployments or debugging but sacrifices the horizontal scaling afforded by partitions.
- Keep-alive orchestration — when KeepEachConnectionAlive = true, SignalRConnectionHeartbeatGrain runs an Orleans RegisterTimer per connection to call Ping on the owning partition/holder. This keeps observer subscriptions warm even if the web host is busy, while KeepEachConnectionAlive = false relies purely on application traffic and the configured timeout.
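A rough illustration of that key derivation, assuming the hub-name hash and the fold shown below; the actual NameHelperGenerator may combine the values differently:
using System.IO.Hashing;
using System.Text;

// Illustrative only: derive a deterministic long grain key for a hub + partition pair.
public static class PartitionKeySketch
{
    public static long GetConnectionPartitionKey(string hubName, int partitionId)
    {
        var hubHash = XxHash64.HashToUInt64(Encoding.UTF8.GetBytes(hubName));

        // Fold the partition id into the hub hash (the real fold may differ).
        return unchecked((long)(hubHash ^ (ulong)partitionId));
    }
}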
Configuration
Configure OrleansSignalROptions to tune throughput and lifecycle characteristics:
| Option | Default | Description |
|---|---|---|
| ClientTimeoutInterval | 00:00:30 | How long a client can remain silent before the server times out the connection. Mirrors SignalR keep-alive. |
| KeepEachConnectionAlive | true | When enabled, the subscription timer pings partition grains so observers never expire. Disable to reduce ping traffic; connections still register with partitions, but idle observers can be trimmed once they exceed ClientTimeoutInterval. |
| KeepMessageInterval | 00:01:06 | Persistence window for offline message delivery (grains store messages briefly so reconnecting clients do not miss data). |
| ConnectionPartitionCount | 4 | Base number of connection partitions (set to 1 to disable partitioning). |
| ConnectionsPerPartitionHint | 10_000 | Target connections per partition; coordinators add partitions when this hint is exceeded. |
| GroupPartitionCount | 4 | Base number of group partitions (set to 1 to disable partitioning). |
| GroupsPerPartitionHint | 1_000 | Target groups per partition; controls dynamic scaling for group fan-out. |
Example: custom scaling profile
services.AddSignalR()
    .AddOrleans(options =>
    {
        options.ConnectionPartitionCount = 8;      // start with 8 partitions
        options.ConnectionsPerPartitionHint = 5_000;
        options.GroupPartitionCount = 4;
        options.GroupsPerPartitionHint = 500;
        options.ClientTimeoutInterval = TimeSpan.FromMinutes(2);
        options.KeepMessageInterval = TimeSpan.FromMinutes(5);
    });
Working with Hub Context inside Orleans
- Request the IOrleansHubContext<THub> or IOrleansHubContext<THub, TClient> via DI in any grain.
- You can still inject the classic IHubContext<THub> if you prefer manual access to Clients, Groups, etc.
- Client invocations (Clients.Client(connectionId).InvokeAsync(...)) are supported. Run them via Task.Run (or another scheduler hop) so the Orleans scheduler is never blocked; see the second example below.
public class LiveScoreGrain : Grain, ILiveScoreGrain
{
    private readonly IHubContext<LiveScoreHub> _hub;
    public LiveScoreGrain(IHubContext<LiveScoreHub> hub) => _hub = hub;
    public Task PushScoreAsync(string matchId, ScoreDto score) =>
        _hub.Clients.Group(matchId).SendAsync("ScoreUpdated", score);
}
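Client invocations follow the same pattern, with a Task.Run hop so the activation is not blocked while awaiting the client's response. The grain name, client method, and result type below are placeholders:
public class DiagnosticsGrain : Grain, IDiagnosticsGrain
{
    private readonly IHubContext<LiveScoreHub> _hub;

    public DiagnosticsGrain(IHubContext<LiveScoreHub> hub) => _hub = hub;

    public Task<string> QueryClientAsync(string connectionId, CancellationToken ct) =>
        // Hop off the Orleans scheduler so the grain is not blocked while the client replies.
        Task.Run(() => _hub.Clients.Client(connectionId).InvokeAsync<string>("GetStatus", ct), ct);
}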
Running Locally
- Restore and build: dotnet restore, then dotnet build -c Debug.
- Execute the full test suite (including partition scaling tests): dotnet test -c Debug.
- The ManagedCode.Orleans.SignalR.Tests/TestApp folder contains a minimal test host you can use as a reference for spinning up a local cluster with SignalR hubs.
Troubleshooting Tips
- Stuck messages — ensure both client and silo share the same OrleansSignalROptions setup; partition counts must match or messages cannot reach the correct grain (see the snippet after this list).
- Massive fan-out — when broadcasting to thousands of groups at once, the group coordinator uses fire-and-forget tasks. Monitor logs for any "Failed to send to groups" messages to catch slow partitions.
- Long-lived idle connections — consider disabling KeepEachConnectionAlive or tweaking ClientTimeoutInterval if you run huge numbers of clients that rarely send data.
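One way to keep the two sides aligned is to share a single options delegate between hosts. hubHostServices and clientHostServices below are placeholder IServiceCollection instances for whichever hosts you run:
// Shared delegate so every host registers identical partitioning options.
Action<OrleansSignalROptions> configureSignalR = options =>
{
    options.ConnectionPartitionCount = 8;
    options.ConnectionsPerPartitionHint = 5_000;
    options.GroupPartitionCount = 4;
};

hubHostServices.AddSignalR().AddOrleans(configureSignalR);     // silo / hub host
clientHostServices.AddSignalR().AddOrleans(configureSignalR);  // Orleans client host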
Contributing
Bug reports and feature ideas are welcome—open an issue or submit a PR. Before pushing code:
- Run dotnet build and dotnet test -c Debug.
- Ensure dotnet format leaves no diffs.
- Follow the repo conventions outlined in Directory.Build.props (nullable enabled, analyzers, C# 13 style).
License
Orleans.SignalR is released under the MIT License.
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net9.0 is compatible. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. | 
Dependencies
net9.0
- ManagedCode.Orleans.SignalR.Core (>= 9.0.0)
- Microsoft.Extensions.DependencyInjection (>= 9.0.10)
- Microsoft.Orleans.Serialization (>= 9.2.1)
- Microsoft.Orleans.Server (>= 9.2.1)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated | 
|---|---|---|
| 9.0.0 | 44 | 11/1/2025 | 
| 8.1.1 | 1,431 | 6/19/2024 | 
| 8.1.0 | 350 | 5/13/2024 | 
| 7.2.1 | 5,109 | 9/12/2023 | 
| 7.1.6 | 278 | 6/6/2023 | 
| 7.1.5 | 231 | 6/2/2023 | 
| 7.1.4 | 218 | 6/2/2023 | 
| 7.1.3 | 227 | 6/1/2023 | 
| 7.1.2 | 247 | 5/31/2023 | 
| 7.1.1 | 232 | 5/31/2023 | 
| 7.1.0 | 213 | 5/25/2023 | 
| 7.0.1 | 231 | 5/23/2023 | 
| 7.0.0 | 278 | 4/26/2023 | 
| 0.0.4 | 272 | 4/25/2023 | 
| 0.0.3 | 277 | 4/24/2023 | 
| 0.0.2 | 274 | 4/23/2023 | 
| 0.0.1 | 285 | 4/4/2023 |