I wrote a simple concurrency scheduler, but it seems to have a performance issue at high levels of concurrency.
Here is the code (scheduler + concurrent rate limiter test)
At a surface level, the only thing I have questions about is the ordering of incrementing the wait group and enqueueing the work:
func (s *Scheduler) Enqueue(req interface{}) {
    select {
    case s.reqChan <- req:
        s.wg.Add(1)
    }
}
I don't think the above will cause much of a problem in practice with a workload this large, but I think it may be a logical race condition. At lower levels of concurrency and with smaller work sizes, it may enqueue a message, context-switch to a goroutine that starts work on that message, and only THEN add the work to the wait group.
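A minimal sketch of one way to close that window, assuming the same Scheduler fields (reqChan and wg) and that workers call s.wg.Done() after processing each request:

func (s *Scheduler) Enqueue(req interface{}) {
    // Increment the wait group before the request becomes visible to workers,
    // so wg.Wait() can never observe a zero count while work is still pending.
    s.wg.Add(1)
    s.reqChan <- req
}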
Next, are you sure the process method is thread-safe? I'd assume so based on the redis Go documentation. Does running with go run -race produce any output?
At some point it's completely reasonable and expected for performance to drop off. I would recommend running performance tests to see where latency and throughput start to degrade: maybe a pool of 10, 100, 500, 1000, 2500, 5000, 10000, or whatever makes sense (see the benchmark sketch further down). IMO it looks like there are 3 important variables to tune:
MaxActive
The biggest thing that jumps out is that it looks like redis.Pool is configured to allow an unbounded number of connections:
pool := &redis.Pool{
    MaxIdle:     50,
    IdleTimeout: 240 * time.Second,
    TestOnBorrow: func(c redis.Conn, t time.Time) error {
        _, err := c.Do("PING")
        return err
    },
    Dial: func() (redis.Conn, error) {
        return dial("tcp", address, password)
    },
}
// Maximum number of connections allocated by the pool at a given time.
// When zero, there is no limit on the number of connections in the pool.
MaxActive int
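As a hedged sketch of what a bounded pool could look like: the MaxActive value of 100 here is an arbitrary starting point to tune, and Wait: true makes Get() block when the cap is reached instead of handing back an exhausted-pool error.

pool := &redis.Pool{
    MaxIdle:     50,
    MaxActive:   100,  // cap on concurrent connections; tune alongside the worker-pool size
    Wait:        true, // block Get() at the cap instead of failing
    IdleTimeout: 240 * time.Second,
    TestOnBorrow: func(c redis.Conn, t time.Time) error {
        _, err := c.Do("PING")
        return err
    },
    Dial: func() (redis.Conn, error) {
        return dial("tcp", address, password)
    },
}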
I would personally try to understand where and when performance starts to drop off with respect to the size of your worker pool. This might make it easier to understand what your program is constrained by.
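For example, here is a rough sketch of that kind of sweep as a Go benchmark. runWorkload is a hypothetical placeholder for wiring up your scheduler and rate limiter test with a given number of worker goroutines; only the sweep structure is the point.

package scheduler_test

import (
    "fmt"
    "testing"
)

// Placeholder: construct the Scheduler with `workers` goroutines, enqueue a
// fixed batch of requests against the rate limiter, and wait for completion.
func runWorkload(workers int) {}

// Run the same workload at several worker-pool sizes to see where throughput
// flattens out and per-request latency starts to climb.
func BenchmarkWorkerPoolSizes(b *testing.B) {
    for _, workers := range []int{10, 100, 500, 1000, 2500, 5000, 10000} {
        b.Run(fmt.Sprintf("workers=%d", workers), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                runWorkload(workers)
            }
        })
    }
}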
My test results show that as the number of goroutines increases, the execution time per goroutine per take call increases nearly exponentially.
It appears to be an issue with the redis library's connection pool; here is the reply from the redis library community:
The problem is what you suspected: the pool connection lock, which, if your requests are small / quick, will be forcing the serialisation of your requests.
You should note that redis is single-threaded, so you should be able to obtain peak performance with just a single connection. This isn't quite true due to the round-trip delays from client to server, but in this type of use case a limited number of processors is likely the best approach.
I have some ideas on how we could improve pool.Get() / conn.Close(), but in your case tuning the number of routines would be the best approach.