The content in this markdown file has moved to a new, happier home where it can serve more people. Please check it out: https://docs.microsoft.com/azure/azure-cache-for-redis/cache-best-practices.
- Don't use the same Jedis connection instance from multiple threads at the same time.
- Using the same Jedis instance from multiple threads at the same time will result in socket connection errors/resets or strange error messages like "expected '$' but got ' '".
- Use JedisPool instead. This allows you to talk to Redis from multiple threads while still getting the benefits of reused connections.
- The JedisPool object is thread-safe and can be used from multiple threads at the same time.
- This pool should be configured once and reused.
- Make sure to return the Jedis instance back to the pool when done, otherwise you will leak the connection (see the sketch after this list).
- We have seen a few cases where connections in the pool get into a bad state. As a failsafe, you may want to re-create the JedisPool if you see connection errors that continue for longer than 30 seconds.
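Below is a minimal sketch of that borrow/use/return pattern. It assumes a JedisPool that was configured once at startup (the host, port, and key here are placeholders); using try-with-resources returns the connection to the pool automatically.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class PoolUsageExample {
    // Configure once at startup and reuse; see the full sample later in this gist
    // for an Azure-ready configuration (SSL, password, timeouts).
    private static final JedisPool pool = new JedisPool("localhost", 6379);

    public static String readValue(String key) {
        // try-with-resources calls jedis.close(), which returns the instance
        // to the pool instead of leaking the connection.
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}
```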
Some important settings to consider:
Setting | Description |
---|---|
connectTimeout | How long to allow for new connections to be established (in milliseconds). In general, this should be at least 5000ms. If your client application tends to have high spikes in CPU usage, setting this to 15000ms or 20000ms would be a good idea. |
soTimeout | This configures the socket timeout (in milliseconds) for the underlying connection. You can basically think of this as the operation timeout (how long you are willing to wait for a response from Redis). Think about this one in terms of the worst case, not the best case. Setting this too low can cause you to get timeout errors due to unexpected bursts in load. I typically recommend 1000ms as a good value for most customers. |
port | In Azure, 6379 is non-SSL and 6380 is SSL/TLS. Important note: 6379 is disabled by default - you have to explicitly enable this insecure port if you wish to use it. |
JedisPoolConfig setting | Description |
---|---|
maxTotal | This setting controls the max number of connections that can be created at a given time. Given that Jedis connections cannot be shared across threads, this setting affects the amount of concurrency your application can have when talking to Redis. Note that each connection does have some memory and CPU overhead, so setting this to a very high value may have negative side effects. If not set, the default value is 8, which is probably too low for most applications. When choosing a value, consider how many concurrent calls into Redis you think you will have under load. |
maxIdle | This is the max number of connections that can be idle in the pool without being immediately evicted (closed). If not set, the default value is 8. I would recommend that this setting be configured the same as maxTotal to help avoid connection ramp-up costs when your application has many bursts of load in a short period of time. Note that a connection that is idle for a long time can still be evicted until the idle connection count drops to minIdle (described below). |
minIdle | This is the number of "warm" connections (e.g. ready for immediate use) that remain in the pool even when load has reduced. If not set, the default is 0. When choosing a value, consider your steady-state concurrent requests to Redis. For instance, if your application is calling into Redis from 10 threads simultaneously, then you should set this to at least 10 (probably a bit higher to give yourself some room). |
blockWhenExhausted | This controls behavior when a thread asks for a connection, but there aren't any that are free and the pool can't create more (due to maxTotal). If set to true, the calling thread will block for maxWaitMillis before throwing an exception. The default is true and I recommend true for production environments. You could set it to false in testing environments to help you more easily discover what value to use for maxTotal. |
maxWaitMillis | How long (in milliseconds) a call to JedisPool.getResource() will block before throwing when the pool is exhausted. The default is -1, which means block indefinitely. I would set this to the same value as soTimeout. Related to blockWhenExhausted. |
testOnBorrow | Controls whether or not the connection is tested before it is returned from the pool. The default is false. Setting it to true may increase resilience to connection blips but may also have a performance cost when taking connections from the pool. In my quick testing, I saw a noticeable increase in the 50th percentile latencies, but no significant increase in 98th percentile latencies. |
minEvictableIdleTimeMillis | This specifies the minimum amount of time a connection may sit idle in the pool before it is eligible for eviction due to idle time. The default value is 60 seconds for JedisPoolConfig, and 30 minutes if you construct a configuration with GenericObjectPoolConfig. Azure Redis currently has a 10-minute idle timeout for connections, so this should be set to less than 10 minutes. |
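A minimal sketch of how these settings might be applied, assuming Jedis 3.x with Apache Commons Pool 2. The numeric values, client name, and null SSL defaults are illustrative rather than universal recommendations; the full sample later in this gist shows the same construction with an explicit hostname verifier.

```java
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocketFactory;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolConfigExample {
    public static JedisPool createPool(String host, String password) {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(200);            // max concurrent connections under load
        poolConfig.setMaxIdle(200);             // match maxTotal to avoid ramp-up costs after bursts
        poolConfig.setMinIdle(50);              // "warm" connections kept ready for steady-state load
        poolConfig.setBlockWhenExhausted(true); // block up to maxWaitMillis instead of failing immediately
        poolConfig.setMaxWaitMillis(1000);      // roughly the same as soTimeout
        poolConfig.setMinEvictableIdleTimeMillis(5 * 60 * 1000); // evict idle connections well before Azure's 10-minute idle timeout

        int connectTimeout = 5000;  // milliseconds
        int soTimeout = 1000;       // milliseconds
        int port = 6380;            // SSL/TLS port in Azure
        boolean useSsl = true;
        SSLSocketFactory sslSocketFactory = null; // null means use default
        SSLParameters sslParameters = null;       // null means use default
        HostnameVerifier hostnameVerifier = null; // null means use default
        return new JedisPool(poolConfig, host, port, connectTimeout, soTimeout, password,
                0 /* db */, "MyClientName", useSsl, sslSocketFactory, sslParameters, hostnameVerifier);
    }
}
```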
- Use pipelining where possible. This will improve the throughput of the application. Read more about Redis pipelining here: https://redis.io/topics/pipelining.
- Jedis does not do pipelining automatically for you. You have to call different APIs in order to get the significant performance benefits that can come from using pipelining.
- Examples can be found here; a brief sketch also follows below.
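A minimal sketch of explicit pipelining with Jedis, assuming a pool configured as above (the key names and batch size are illustrative):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class PipelineExample {
    public static void writeBatch(JedisPool pool) {
        try (Jedis jedis = pool.getResource()) {
            Pipeline pipeline = jedis.pipelined();

            // Queue many commands without waiting for individual round trips.
            for (int i = 0; i < 1000; i++) {
                pipeline.set("key:" + i, "value:" + i);
            }
            Response<String> sample = pipeline.get("key:42");

            // Flush the queued commands and read all the replies.
            pipeline.sync();
            System.out.println("key:42 = " + sample.get());
        }
    }
}
```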
- Debugging performance problems due to JedisPool contention issues will be easier if you log the pool usage regularly.
- If you ever get an error when trying to get a connection from the pool, you should definitely log usage stats. The getPoolCurrentUsage method in the Java sample later in this gist shows which values to log.
- Sample code demonstrating these best practices is included later in this gist.
Azure Redis currently has a 10-minute idle timeout for connections, which will cause short network blips if your connection has long periods of inactivity. The most common Node.js libraries should automatically reconnect.
However, you can avoid this brief connectivity loss if your application sends a PING command on connections to prevent them from being idle. Some client libraries send this ping automatically.
At the time of this writing, the node.js redis client library I tested (node_redis) did NOT do this automatically. You can do this yourself by adding a timer that sends PING command every minute or two. Below is an example of how to do this.
setInterval(function(){
console.log('redisClient => Sending Ping...');
redisClient.ping();
}, 60000); // 60 seconds
We have seen a few cases where a node_redis connection gets into a bad state and can no longer successfully send commands to Redis even though other clients are actively able to interact with Redis. If you see connection issues that last longer than some threshold (say 30 seconds), then you may want to add logic to your app that forcefully recreates the connection instead of waiting for node_redis to reconnect.
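A hedged sketch of that failsafe, assuming the node_redis (v3-style) API used above; the cache hostname, option names, and 30-second threshold are illustrative:

```javascript
const redis = require('redis');

const ERROR_THRESHOLD_MS = 30 * 1000; // recreate after ~30s of continuous errors
let firstErrorTime = null;
let redisClient;

function createClient() {
    const client = redis.createClient(6380, 'yourcache.redis.cache.windows.net', {
        auth_pass: process.env.REDIS_KEY,
        tls: { servername: 'yourcache.redis.cache.windows.net' }
    });

    client.on('ready', () => { firstErrorTime = null; }); // healthy again, reset the clock

    client.on('error', (err) => {
        console.log('redisClient error:', err.message);
        if (firstErrorTime === null) {
            firstErrorTime = Date.now();
        } else if (Date.now() - firstErrorTime > ERROR_THRESHOLD_MS) {
            console.log('Errors persisted for 30+ seconds; forcefully recreating the connection...');
            firstErrorTime = null;
            client.end(true);             // drop the broken connection without waiting for replies
            redisClient = createClient(); // swap in a fresh client
        }
    });

    return client;
}

redisClient = createClient();
```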
The most common problem we have seen with PHP clients is that they either don't support persistent connections or the ability to reuse connections is disabled by default. When you don't reuse connections, it means that you have to pay the cost of establishing a new connection, including the SSL/TLS handshake, each time you want to send a request. This can add a lot of latency to your request time and will manifest itself as a performance problem in your application. Additionally, if you have a high request rate, this can cause significant CPU churn on both the Redis client-side and server-side, which can result in other issues.
As an example, the Predis Redis client has a "persistent" connection property that is false by default. Setting the "persistent" property to true should improve behavior drastically.
- Enable session state only on required pages - This will avoid known session state provider performance problems.
- You can disable session state by setting the web.config enableSessionState option to false.
<system.web> <pages enableSessionState="false" /> </system.web>
- You can enable it on specific pages by setting the page directive's EnableSessionState option to true
<%@ Page EnableSessionState="true" %>
- Mark pages using Session State as ReadOnly whenever possible - this helps avoid locking contention.
<%@ Page EnableSessionState="ReadOnly" %>
- Avoid Session State (or at least use ReadOnly) on pages that have long load times - When a page with write access to the session state takes a long time to load, it holds the lock for that session until the load completes. This can prevent requests for other pages in the same session from loading. Also, the session state module in ASP.NET will, in the background, continue to ask for the session lock for any additional requests for that same session until the lock is available or until the executionTimeout is exceeded for the lock. This can generate additional load on your session state store.
- Make sure you understand the impact of session state locks. Read this article for an example of why this is important.
- Select your httpRuntime executionTimeout carefully - The executionTimeout you select is the duration that the session lock is held should the app crash without releasing the lock. Select a value that is as low as possible while still meeting your application's specific requirements (see the example below).
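For reference, a minimal web.config sketch of the setting above; the 30-second value is purely illustrative (executionTimeout is expressed in seconds):
<system.web> <httpRuntime executionTimeout="30" /> </system.web>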
Note:
None of these recommendations are specific to Redis - they are good recommendations regardless of which SessionStateProvider you use. Also, some of these recommendations are based on this article, which has additional recommendations beyond those specifically called out here.
- Set AbortConnect to false, then let the ConnectionMultiplexer reconnect automatically. See here for details.
- Reuse the ConnectionMultiplexer - do not create a new one for each request. The Lazy<ConnectionMultiplexer> pattern shown here is strongly recommended.
- Configure your ThreadPool settings to avoid timeouts (a minimal sketch follows this list).
- Be aware of the performance costs associated with different operations you are running. For instance, the "KEYS" command is an O(n) operation and should be avoided. The redis.io site has details around the time complexity for each operation that it supports.
- Consider turning on "Server GC". "Workstation GC" is the default and can impact latencies when garbage collection is happening.
- Most customers find that a single ConnectionMultiplexer is sufficient for their needs. However, if you have high throughput requirements, you may consider slowly increasing the number of connections to see if you get an improvement in throughput. Avoid setting it to an arbitrarily large number of connections as it may not give the desired benefit.
- Configure supported TLS settings. If you are targeting .NET 4.7 or later, you should not have to do anything because StackExchange.Redis will automatically use the OS-level settings when deciding which TLS versions to support (which is a good thing in most cases). If you are targeting an older version of .NET, then you should configure the client to use the highest TLS version that your client system supports (typically TLS 1.2). Once you move to a newer version of the .NET framework, you should probably remove this configuration and let the OS settings take precedence. This can be configured through the sslProtocols connection string entry (requires NuGet package version 1.2.3 or later), or through the ConfigurationOptions class as shown here:
var options = ConfigurationOptions.Parse(connectionString);
options.SslProtocols = System.Security.Authentication.SslProtocols.Tls12;
ConnectionMultiplexer.Connect(options);
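A minimal sketch of the ThreadPool configuration mentioned in the list above, assuming you can run code once at application startup; the value 200 is illustrative, not a recommendation for every workload.

```csharp
using System;
using System.Threading;

static class ThreadPoolSettings
{
    // Call once during application startup.
    public static void ConfigureMinThreads()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
        Console.WriteLine($"Current min threads: worker={minWorker}, IOCP={minIocp}");

        // Raise the floor so the pool does not throttle thread injection
        // during sudden bursts of load, which commonly shows up as timeouts.
        ThreadPool.SetMinThreads(200, 200);
    }
}
```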
We have seen a few rare cases where StackExchange.Redis fails to reconnect after a connection blip (for example, due to patching). Restarting the client or creating a new ConnectionMultiplexer will fix the issue. Here is some sample code that still uses the recommended Lazy<ConnectionMultiplexer> pattern while allowing apps to force a reconnection periodically. Make sure to update the code calling into the ConnectionMultiplexer so that it handles any ObjectDisposedException errors that occur as a result of disposing the old one.
using System;
using System.Threading;
using StackExchange.Redis;

// Source Code Usage License: https://gist.github.com/JonCole/34ca1d2698da7a1aa65ff781c37ecdea

static class Redis
{
    static long lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
    static DateTimeOffset firstError = DateTimeOffset.MinValue;
    static DateTimeOffset previousError = DateTimeOffset.MinValue;

    static object reconnectLock = new object();

    // In general, let StackExchange.Redis handle most reconnects,
    // so limit the frequency of how often this will actually reconnect.
    public static TimeSpan ReconnectMinFrequency = TimeSpan.FromSeconds(60);

    // If errors continue for longer than the below threshold, then the
    // multiplexer seems to not be reconnecting, so re-create the multiplexer.
    public static TimeSpan ReconnectErrorThreshold = TimeSpan.FromSeconds(30);

    static string connectionString = "TODO: CALL InitializeConnectionString() method with connection string";
    static Lazy<ConnectionMultiplexer> multiplexer = CreateMultiplexer();

    public static ConnectionMultiplexer Connection { get { return multiplexer.Value; } }

    public static void InitializeConnectionString(string cnxString)
    {
        if (string.IsNullOrWhiteSpace(cnxString))
            throw new ArgumentNullException(nameof(cnxString));

        connectionString = cnxString;
    }

    /// <summary>
    /// Force a new ConnectionMultiplexer to be created.
    /// NOTES:
    ///     1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnect()
    ///     2. Don't call ForceReconnect for Timeouts, just for RedisConnectionExceptions or SocketExceptions
    ///     3. Call this method every time you see a connection exception, the code will wait to reconnect:
    ///         a. for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
    ///         b. not reconnect more frequently than configured in "ReconnectMinFrequency"
    /// </summary>
    public static void ForceReconnect()
    {
        var utcNow = DateTimeOffset.UtcNow;
        var previousTicks = Interlocked.Read(ref lastReconnectTicks);
        var previousReconnect = new DateTimeOffset(previousTicks, TimeSpan.Zero);
        var elapsedSinceLastReconnect = utcNow - previousReconnect;

        // If multiple threads call ForceReconnect at the same time, we only want to honor one of them.
        if (elapsedSinceLastReconnect > ReconnectMinFrequency)
        {
            lock (reconnectLock)
            {
                utcNow = DateTimeOffset.UtcNow;
                elapsedSinceLastReconnect = utcNow - previousReconnect;

                if (firstError == DateTimeOffset.MinValue)
                {
                    // We haven't seen an error since last reconnect, so set initial values.
                    firstError = utcNow;
                    previousError = utcNow;
                    return;
                }

                if (elapsedSinceLastReconnect < ReconnectMinFrequency)
                    return; // Some other thread made it through the check and the lock, so nothing to do.

                var elapsedSinceFirstError = utcNow - firstError;
                var elapsedSinceMostRecentError = utcNow - previousError;

                var shouldReconnect =
                    elapsedSinceFirstError >= ReconnectErrorThreshold // make sure we gave the multiplexer enough time to reconnect on its own if it can
                    && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).

                // Update the previousError timestamp to be now (e.g. this reconnect request)
                previousError = utcNow;

                if (shouldReconnect)
                {
                    firstError = DateTimeOffset.MinValue;
                    previousError = DateTimeOffset.MinValue;

                    var oldMultiplexer = multiplexer;
                    CloseMultiplexer(oldMultiplexer);
                    multiplexer = CreateMultiplexer();
                    Interlocked.Exchange(ref lastReconnectTicks, utcNow.UtcTicks);
                }
            }
        }
    }

    private static Lazy<ConnectionMultiplexer> CreateMultiplexer()
    {
        return new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(connectionString));
    }

    private static void CloseMultiplexer(Lazy<ConnectionMultiplexer> oldMultiplexer)
    {
        if (oldMultiplexer != null)
        {
            try
            {
                oldMultiplexer.Value.Close();
            }
            catch (Exception)
            {
                // Example error condition: if accessing old.Value causes a connection attempt and that fails.
            }
        }
    }
}
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
import redis.clients.jedis.*;
import javax.net.ssl.*;

// Source Code Usage License: https://gist.github.com/JonCole/34ca1d2698da7a1aa65ff781c37ecdea

public class Redis {
    private static Object staticLock = new Object();
    private static JedisPool pool;
    private static String host;
    private static int port; // 6379 for NonSSL, 6380 for SSL
    private static int connectTimeout; // milliseconds
    private static int operationTimeout; // milliseconds
    private static String password;
    private static JedisPoolConfig config;

    // Should be called exactly once during App Startup logic.
    public static void initializeSettings(String host, int port, String password, int connectTimeout, int operationTimeout) {
        Redis.host = host;
        Redis.port = port;
        Redis.password = password;
        Redis.connectTimeout = connectTimeout;
        Redis.operationTimeout = operationTimeout;
    }

    // MAKE SURE to call the initializeSettings method first
    public static JedisPool getPoolInstance() {
        if (pool == null) { // avoid synchronization lock if initialization has already happened
            synchronized (staticLock) {
                if (pool == null) { // don't re-initialize if another thread beat us to it.
                    JedisPoolConfig poolConfig = getPoolConfig();
                    boolean useSsl = port == 6380 ? true : false;
                    int db = 0;
                    String clientName = "MyClientName"; // null means use default
                    SSLSocketFactory sslSocketFactory = null; // null means use default
                    SSLParameters sslParameters = null; // null means use default
                    HostnameVerifier hostnameVerifier = new SimpleHostNameVerifier(host);

                    pool = new JedisPool(poolConfig, host, port, connectTimeout, operationTimeout, password, db,
                            clientName, useSsl, sslSocketFactory, sslParameters, hostnameVerifier);
                }
            }
        }
        return pool;
    }

    public static JedisPoolConfig getPoolConfig() {
        if (config == null) {
            JedisPoolConfig poolConfig = new JedisPoolConfig();

            // Each thread trying to access Redis needs its own Jedis instance from the pool.
            // Using too small a value here can lead to performance problems, too big and you have wasted resources.
            int maxConnections = 200;
            poolConfig.setMaxTotal(maxConnections);
            poolConfig.setMaxIdle(maxConnections);

            // Using "false" here will make it easier to debug when your maxTotal/minIdle/etc settings need adjusting.
            // Setting it to "true" will result in better behavior when unexpected load hits in production.
            poolConfig.setBlockWhenExhausted(true);

            // How long to wait before throwing when pool is exhausted
            poolConfig.setMaxWaitMillis(operationTimeout);

            // This controls the number of connections that should be maintained for bursts of load.
            // Increase this value when you see pool.getResource() taking a long time to complete under burst scenarios
            poolConfig.setMinIdle(50);

            Redis.config = poolConfig;
        }

        return config;
    }

    public static String getPoolCurrentUsage()
    {
        JedisPool jedisPool = getPoolInstance();
        JedisPoolConfig poolConfig = getPoolConfig();

        int active = jedisPool.getNumActive();
        int idle = jedisPool.getNumIdle();
        int total = active + idle;
        String log = String.format(
                "JedisPool: Active=%d, Idle=%d, Waiters=%d, total=%d, maxTotal=%d, minIdle=%d, maxIdle=%d",
                active,
                idle,
                jedisPool.getNumWaiters(),
                total,
                poolConfig.getMaxTotal(),
                poolConfig.getMinIdle(),
                poolConfig.getMaxIdle()
        );

        return log;
    }

    private static class SimpleHostNameVerifier implements HostnameVerifier {
        private String exactCN;
        private String wildCardCN;

        public SimpleHostNameVerifier(String cacheHostname)
        {
            exactCN = "CN=" + cacheHostname;
            wildCardCN = "CN=*" + cacheHostname.substring(cacheHostname.indexOf('.'));
        }

        public boolean verify(String s, SSLSession sslSession) {
            try {
                String cn = sslSession.getPeerPrincipal().getName();
                return cn.equalsIgnoreCase(wildCardCN) || cn.equalsIgnoreCase(exactCN);
            } catch (SSLPeerUnverifiedException ex) {
                return false;
            }
        }
    }
}
ForceReconnect needs something to reset the firstError on successful use of the multiplexer, something like
firstError = DateTimeOffset.MinValue;
so that a connection blip doesn't trigger destroying and recreating the multiplexer. The code as is doesn't let the multiplexer just handle the reconnection itself very well, and needs a bit of a test harness to ensure it works as designed.
@danlarson Yeah, looking at the code again, I can see how we have a problem here when we hit this code but we don't end up doing the actual forced reconnect. Seems like this would mean that this would sometimes (every other time?) force a reconnect when it doesn't need to. Another problem that this code doesn't handle very well is the case where the multiplexer is connected to a clustered redis instance. If the connection to just one shard is broken, this logic forces a reconnect to all shards to fix the one broken connection. Not ideal. Ultimately, the underlying issue needs to be fixed inside StackExchange.Redis. Clustering makes this a harder problem to solve - we can't say that one successful request means that the connection issue is fixed because the success call might be from connection to a different shard. Maybe we reset the firstError if elapsed time is greater than some threshold? Thoughts?
Is there any way to access or log the Redis connection pool while using it through the Spring Data Redis template?
How can we force a reconnect in a Spring Boot reactive application that is using a Lettuce connection on Azure? We see it taking 15 minutes to reconnect in Azure, whereas a local Docker image takes less than 5 minutes to reconnect.
https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-lazyreconnect-cs-L38
I am planning on calling ForceReconnect if RedisTimeoutException occurs continuously for more than 5 minutes, but it says that "Don't call ForceReconnect for Timeouts".
I would like to know the reason for this.
Is there some problem with forcefully reconnecting on a RedisTimeoutException?
> Most customers find that a single ConnectionMultiplexer is sufficient for their needs. However, if you have high throughput requirements, you may consider slowly increasing the number of connections to see if you get an improvement in throughput. Avoid setting it to an arbitrarily large number of connections as it may not give the desired benefit.
How does one increase the number of connections? The provided static class allows for only a single ConnectionMultiplexer and ForceReconnection of only a single ConnectionMultiplexer...?
Thanks for your help
> How does one increase the number of connections?
You would have to create a pool of multiplexers to draw from, but I don't have example code for that currently. ForceReconnect is designed for a single connection multiplexer. If you do create a pool, make sure that each instance in the pool has its own set of variables for the ForceReconnect logic.
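For illustration only (this is not code from the gist), a rough sketch of what such a fixed-size pool of multiplexers might look like; the class name, pool size, and round-robin selection are all assumptions:

```csharp
using System;
using System.Threading;
using StackExchange.Redis;

static class MultiplexerPool
{
    const int PoolSize = 4; // illustrative; increase slowly and measure throughput
    static int counter = -1;

    static readonly Lazy<ConnectionMultiplexer>[] pool = CreatePool("TODO: connection string");

    static Lazy<ConnectionMultiplexer>[] CreatePool(string connectionString)
    {
        var items = new Lazy<ConnectionMultiplexer>[PoolSize];
        for (int i = 0; i < PoolSize; i++)
        {
            items[i] = new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(connectionString));
        }
        return items;
    }

    // Round-robin across the pool. As noted above, each multiplexer would also need its own
    // firstError/previousError/lastReconnectTicks state if combined with the ForceReconnect logic.
    public static ConnectionMultiplexer GetConnection()
    {
        int index = (int)((uint)Interlocked.Increment(ref counter) % PoolSize);
        return pool[index].Value;
    }
}
```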
 I would really appreciate">
> I would really appreciate a C# Serverless Function example. Nowhere can I find good practice information on that. Especially if I should be using the Lazy pattern as described here. Ideally, the StackExchange.Redis client would be mentioned here in addition to the HttpClient and DocumentClient.

Yes. You should. Here's some code I'm using for a thread-safe singleton that leverages the Lazy<T> model.

public class MyRedisConnectorHelper
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection = new Lazy<ConnectionMultiplexer>(() =>
    {
        string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
        return ConnectionMultiplexer.Connect(cacheConnection);
    });

    public static ConnectionMultiplexer Connection
    {
        get { return LazyConnection.Value; }
    }
}
You'll still need to add a config file as described in the Azure documentation. I'm actually using mine for a local Redis server I'm running on Ubuntu, but it works great!
I don't have a serverless Redis example right now. If I work on one later or tomorrow, I'll post it too. Good luck!
It's probably telling you that the value for cacheConnection is null. Just pass the string to the Multiplexer as in this example
I am investigating locking\deadlocking\thread-pool-starvation issues related to this code. I am not a redis expert so some help is appreciated.
Concerns:
- Lazy with sync IO inside: Connect has Task.Wait() and .Result
- Reconnect closes the multiplexer under lock while Close has .Result inside
#Lazy
If async/await code calls Lazy.Value and initialization happens, then all thread-pool threads are synchronously blocked waiting for initialization. If for any reason initialization gets stuck or is just a bit slow, a service under high load will simply be paralyzed and the thread pool will be forced to grow and add more and more threads, which all get stuck on Lazy<T>.Value. The recommendation to increase ThreadPool.MinThreads is like trying to quench the fire with gasoline.
#Reconnect
I looked into the Close implementation and found a few cases of .Result inside. Given that Close happens under the lock, the same issue as with Lazy.Value happens.
The code inside StackExchange.Redis is a mixture of await, .Wait, and .Result, which is a very dangerous approach. Is the code in this article still recommended for use in high-load production services, or are there better alternatives?
> I am investigating locking\deadlocking\thread-pool-starvation issues related to this code. I am not a redis expert so some help is appreciated.
A C# web application that uses this code intermittently gets killed off and complains of thread pool starvation, so it's very interesting that you are suggesting that this might be a problem caused by the implementation here. I'd be very interested to know the outcome of this.
Our case was like this:
- the service is restarted
- after startup, the service receives heavy async-IO load, spawning many tasks
- the load hangs in Monitor (lock) and Lazy.Value
- the thread pool quickly grows but it does not help - the application is still stuck; even tasks unrelated to Redis are stuck because they cannot be completed

The absence of a sync-context in modern ASP.NET Core and improved thread pool scaling in .NET do not mean that mixing await with .Result/.Wait is now a good idea. It is not and it never will be. From my point of view the code in this article should be deprecated, or at least rewritten using SemaphoreSlim instead of Lazy and lock. Also, only async IO should be used (ConnectAsync, CloseAsync).
If I understand the concern correctly, the main problem is that many threads are blocked on the locking at the same time, and the high level solution would be to make the creation of the multiplexer be asynchronous. Is that correct? At some point, it seems like you have to have a lock if you want to ensure only one connection object is created. I suppose you could create multiple then discard the extras but that has its own set of problems it can create (for instance, flooding the server with connection requests, many of which will just get closed).
The problem
Yes, there is a problem with Lazy and Lock.
Here is a simplified scenario for starvation: https://gist.github.com/yapaxi/cfc529bee9bd5361482051d45a62a571.
In a nutshell it has:
- an initialization task (get-google) which is pseudo-synchronously waited on in Lazy
- a loop that adds fake "load" that synchronously waits for the Lazy
- in the end, the "load" waits for the lazy and the lazy waits for the load to release thread-pool threads
- even though the thread pool grows, new thread-pool threads are not used to finish get-google until some kind of timeout happens (in the test case)

On sunny days get-google finishes in 30 seconds, sometimes in 1 minute, sometimes never. In the example case, most likely the DNS resolve locks or deadlocks. In the real case it is almost impossible to tell why Multiplexer.Connect is slow or near-deadlocks - too many things happen.
In the real world, 30+ seconds is sometimes enough for a cluster to consider a node dead.
A solution
The best way I see is to use an awaitable SemaphoreSlim instead of the lazy and lock. That will ensure only a single Multiplexer is created (a simplified sketch appears after this comment).
I do not know anything about Redis (I am only part of an outage investigation team) but I can help with multithreading.
Here is one approach: https://gist.github.com/yapaxi/9526ca734c8cc30173276c616324d33b
Be careful:
- I changed timeouts and thresholds for testing purposes
- I do not know how to handle Redis connection timeout properly so I just added timeout tasks
- I kept the logic in reconnect but reduced the nesting a bit
- It is hard to guess whether you could use the same lock in reconnect because this is an incomplete solution, so I decided to make Reconnect non-waiting to completely avoid deadlocks and transfer connection responsibility to the GetConnection() method: reconnect only nullifies the state.
Hope this helps. Questions are welcome.
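For readers following along, here is a minimal editorial sketch of the SemaphoreSlim-plus-async-IO approach described above. It is not the commenter's linked gist; it only covers one-time creation (no reconnect handling), and the class and field names are assumptions.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

static class AsyncRedisConnection
{
    static readonly SemaphoreSlim initLock = new SemaphoreSlim(1, 1);
    static volatile ConnectionMultiplexer multiplexer;
    static string connectionString = "TODO: set from configuration";

    // Await this instead of blocking on Lazy<T>.Value; only one caller creates the connection,
    // and no thread-pool thread is blocked while the connection is being established.
    public static async Task<ConnectionMultiplexer> GetConnectionAsync()
    {
        var existing = multiplexer;
        if (existing != null)
            return existing;

        await initLock.WaitAsync().ConfigureAwait(false);
        try
        {
            if (multiplexer == null)
            {
                // Async IO only; no .Wait()/.Result anywhere on this path.
                multiplexer = await ConnectionMultiplexer.ConnectAsync(connectionString).ConfigureAwait(false);
            }
            return multiplexer;
        }
        finally
        {
            initLock.Release();
        }
    }
}
```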
In Redis-LazyReconnect.cs, at line number 57, inside the lock, we should re-read the value of previousReconnect.
elapsedSinceLastReconnect = utcNow - previousReconnect;
This thread might have waited to acquire the lock while some other thread got it first and reconnected successfully, thus updating previousReconnect. All other threads would then be reading a stale value for this variable.