StackExchange.Redis timeout and “No connection is available to service this operation”











I have the following issues in our production environment (a web farm of 4 nodes behind a load balancer):



1) Timeout performing HGET key, inst: 3, queue: 29, qu=0, qs=29, qc=0, wr=0/0
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1699
This happens 3-10 times per minute.



2) No connection is available to service this operation: HGET key at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1666



I tried to implement what Marc suggested (maybe I interpreted it incorrectly): it is better to have a few connections to Redis than many.
Here is my implementation:



using System;
using System.Net;
using StackExchange.Redis;

public class SeRedisConnection
{
    private static ConnectionMultiplexer _redis;

    private static readonly object SyncLock = new object();

    public static IDatabase GetDatabase()
    {
        // (Re)connect if there is no multiplexer yet or it reports itself as disconnected.
        if (_redis == null || !_redis.IsConnected || !_redis.GetDatabase().IsConnected(default(RedisKey)))
        {
            lock (SyncLock)
            {
                try
                {
                    var configurationOptions = new ConfigurationOptions
                    {
                        AbortOnConnectFail = false
                    };
                    configurationOptions.EndPoints.Add(new DnsEndPoint(ConfigurationHelper.CacheServerHost,
                        ConfigurationHelper.CacheServerHostPort));

                    _redis = ConnectionMultiplexer.Connect(configurationOptions);
                }
                catch (Exception ex)
                {
                    IoC.Container.Resolve<IErrorLog>().Error(ex);
                    return null;
                }
            }
        }
        return _redis.GetDatabase();
    }

    public static void Dispose()
    {
        _redis.Dispose();
    }
}


Dispose is not actually being called anywhere right now. There are also some specifics of my usage that could cause this behavior (I only use hashes):
1. Add and remove hash entries - async
2. Get - sync
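
For comparison, the pattern that is commonly recommended for StackExchange.Redis is a single multiplexer created lazily and shared for the lifetime of the process. The following is only a minimal sketch under that assumption; the class and member names are illustrative, and it reuses the same application-specific ConfigurationHelper settings from the code above:

using System;
using System.Net;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazy<T> gives thread-safe, one-time initialization without explicit locking.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
        {
            var options = new ConfigurationOptions { AbortOnConnectFail = false };
            options.EndPoints.Add(new DnsEndPoint(ConfigurationHelper.CacheServerHost,
                ConfigurationHelper.CacheServerHostPort));
            return ConnectionMultiplexer.Connect(options);
        });

    // The multiplexer handles reconnection internally, so it is never re-created here.
    public static IDatabase GetDatabase()
    {
        return LazyConnection.Value.GetDatabase();
    }
}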



Could somebody help me figure out how to avoid this behavior?



Thanks a lot in advance!



SOLVED: increasing the client connection timeout after evaluating our network capabilities.
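
For reference, a minimal sketch of what "increasing the client connection timeout" can look like with StackExchange.Redis; the values below are purely illustrative, not the ones we actually settled on:

var options = new ConfigurationOptions
{
    AbortOnConnectFail = false,
    ConnectTimeout = 10000, // milliseconds allowed to establish the connection
    SyncTimeout = 5000      // milliseconds allowed for a synchronous call such as HGET
};
options.EndPoints.Add(new DnsEndPoint(ConfigurationHelper.CacheServerHost,
    ConfigurationHelper.CacheServerHostPort));
var redis = ConnectionMultiplexer.Connect(options);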



UPDATE 2: Actually, it didn't solve the problem. Once the cache volume grew beyond about 2 GB, I saw the same pattern: the timeouts happened roughly every 5 minutes, and our sites froze for a period of time every 5 minutes until the fork (save-to-disk) operation finished.
Then I found out that Redis has an option to fork and save to disk every X seconds:



save 900 1
save 300 10
save 60 10000


In my case it was "save 300 10": save every 5 minutes if at least 10 updates have happened. I also found out that the fork can be very expensive. Commenting out the "save" section resolved the problem completely. We can comment out the "save" section because we use Redis purely as an in-memory cache and don't need any persistence.
Our cache servers run the "Redis 2.4.6" Windows port: https://github.com/rgl/redis/downloads
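
For reference, this is roughly what the change looks like in redis.conf. Commenting out the save rules is what we did; newer Redis versions also accept an explicit empty save directive (sketch only, not our exact file):

# snapshot rules disabled - Redis is used purely as an in-memory cache
# save 900 1
# save 300 10
# save 60 10000
save ""   # supported in newer Redis versions: turns off RDB snapshots entirely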



Maybe this has been fixed in more recent versions of the MSOpenTech Windows port of Redis: http://msopentech.com/blog/2013/04/22/redis-on-windows-stable-and-reliable/
but I haven't tested it yet.



In any case, StackExchange.Redis has nothing to do with this issue and runs very stably in our production environment, thanks to Marc Gravell.



FINAL UPDATE:
Redis is a single-threaded solution. It is extremely fast, but problems emerge when it has to release memory (removing stale or expired items), because the one thread that must reclaim the memory (not a fast operation, whatever algorithm is used) is the same thread that has to serve GET and SET operations. This naturally shows up in a moderately loaded production environment. Even if you use a cluster with slaves, the same behavior appears once the memory limit is reached.
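
If you do stay on Redis, the memory-reclamation behaviour described above is shaped by the maxmemory settings; a sketch of typical redis.conf directives follows (the values are illustrative, and note that eviction still runs on the same single thread):

maxmemory 4gb
maxmemory-policy allkeys-lru   # evict least-recently-used keys once maxmemory is reached
maxmemory-samples 5            # number of keys sampled per eviction decision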










  • If you are using ASP.NET, please have a look at my answer at stackoverflow.com/questions/25416562/… - it fixed the queue timeout errors for me, i.e. your issues 1 and 2. I tried other ways (singletons and locks) to keep the number of connections down, but with no luck. Hope this helps.
    – Sharat Pandavula
    Sep 20 '14 at 8:33










  • Very helpful, thank you! I actually adjusted the time between saves to make sure there is never a huge queue of items waiting to be saved to disk, which would cause enough lag to time out a query.
    – tommed
    Feb 12 '15 at 13:33








  • Hey tommed, unfortunately we stopped using Redis altogether, because its single-threaded architecture kept producing timeouts. Example: we had 32 GB / 4-node cache servers (clustered). When the memory threshold was reached, Redis tried to release memory and the timeouts appeared. I admit we use Redis heavily in production, and it worked perfectly until the memory threshold was reached, so we chose another, multithreaded caching solution. But as I said, for your volume and configuration it may work out; just be sure to run load tests for the point where the memory threshold is reached.
    – George Anisimov
    Feb 16 '15 at 9:49








  • Thanks. We have to process 10 million records per day, each of which requires several GETs from Redis, and most of them arrive within a peak hour. We have accepted that there will be times when Redis syncs to disk and blocks its only thread, and we simply wait patiently until it completes. After tweaking the I/O sync times to limit this issue (at the expense of a higher risk of losing records) we have found a good balance where Redis works well. It seems a shame that there isn't a separate thread for I/O syncing!
    – tommed
    Mar 9 '15 at 17:15






  • @GeorgeAnisimov "So we chose another multithreaded caching solution" - can you tell us which solution you chose?
    – mohammed sameeh
    Dec 6 '17 at 23:17

















Tags: stackexchange.redis






edited Jul 5 '17 at 18:32 by Ross Presser
asked Apr 8 '14 at 7:52 by George Anisimov












1 Answer






It looks like in most cases this exception is a client-side issue. Previous versions of StackExchange.Redis used the Win32 socket API directly, which sometimes has a negative impact; ASP.NET's internal request handling is probably related to it as well.

The good news is that StackExchange.Redis's network infrastructure was completely rewritten recently. The latest version is 2.0.513. Try it - there is a good chance your problem will go away.
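
A minimal way to try the rewritten client, assuming a NuGet-based project (the version number is simply the one mentioned above):

PM> Install-Package StackExchange.Redis -Version 2.0.513

or, in an SDK-style project file:

<PackageReference Include="StackExchange.Redis" Version="2.0.513" />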






edited Nov 11 at 10:37 by Moritz
answered Nov 11 at 6:54 by Bennie Zhitomirsky





























