@xeoncross
Last active October 22, 2019 14:45
Map of N locks contended by X goroutines. Use to limit access to a given function based on a key.

Map of N locks contended by X goroutines. An example of preventing workers (or HTTP handlers) from running the same process when another goroutine is already working on that "key". In this example there are 2 keys (pid % 2) shared between 10 goroutines, so only two goroutines (one per unique key) are ever processing in parallel; the other 8 wait their turn.

Developed when I had an expensive API request that the same client might end up calling repeatedly, instead of calling it only once and then sharing the result. Go had this issue with parallel requests for the same DNS record (though I don't know how they solved that).
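That call-once-and-share-the-result pattern is what the golang.org/x/sync/singleflight package now provides via Group.Do. A minimal, stdlib-only sketch of the same idea (the group/call types and Do method below are illustrative, loosely modeled on that package, not its actual implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// group collapses concurrent calls that share a key: the first caller runs
// fn, and later callers for the same key block and receive the same result.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val interface{}
	err error
}

func (g *group) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		// Another goroutine is already working on this key: wait, then share.
		g.mu.Unlock()
		c.wg.Wait()
		return c.val, c.err
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val, c.err = fn() // only one goroutine per key runs the expensive work
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val, c.err
}

func main() {
	var g group
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			v, _ := g.Do("expensive-api", func() (interface{}, error) {
				return "shared result", nil
			})
			fmt.Println(i, v)
		}(i)
	}
	wg.Wait()
}
```

Unlike the spin-loop below, waiters here do not retry the work after the lock frees up; they receive the first caller's result directly.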

package syncmaptest

import (
	"fmt"
	"sync"
	"testing"
	"time"
)

// lockMap maps a key to the pid of the goroutine currently working on it.
var lockMap = &sync.Map{}

func runIt(pid int, key int) {
	k := fmt.Sprintf("%d", key)

	// Spin until we are the goroutine that stored the key. If LoadOrStore
	// reports the key was already loaded, another goroutine holds the lock
	// and we need to wait.
	for {
		if _, loaded := lockMap.LoadOrStore(k, pid); loaded {
			fmt.Println(pid, "still locked", key)
			time.Sleep(time.Millisecond * 5)
			continue
		}
		break
	}

	// Always release the lock when done
	defer lockMap.Delete(k)
	fmt.Println(pid, "acquired lock", key)

	// Do expensive calculation or I/O here
	time.Sleep(time.Millisecond * 10)
	fmt.Println(pid, "released lock", key)
}

func Test(t *testing.T) {
	wg := &sync.WaitGroup{}
	for i := 0; i < 10; i++ {
		wg.Add(1) // Add before starting the goroutine, or Wait may return early
		go func(pid int) {
			defer wg.Done()
			runIt(pid, pid%2)
		}(i)
	}
	wg.Wait()
}
@xeoncross (Author)

Another suggestion was to create a sync.Map of locks.

	imu, _ := lockMap.LoadOrStore(key, &sync.Mutex{})
	mu := imu.(*sync.Mutex)
	mu.Lock()
	defer lockMap.Delete(key)
	defer mu.Unlock()

This looks like it works, but how do we clean up keys without race conditions? Note that deferred calls run last-in-first-out, so the Unlock runs before the Delete; one goroutine might then delete the key while another has already stored a new lock (or pulled the existing one out of the map).
