@arrieta
Created October 6, 2018 22:12
C++ Non-Owning Thread-Safe Object Cache
#include <chrono>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// In practice these would be actual types instead of aliases.
using Id     = int;
using Widget = std::string;

// Imagine this is an expensive, non-thread-safe operation that loads a widget.
std::shared_ptr<Widget> load_widget(Id id) {
    std::cout << "load_widget(" << id << ") called" << std::endl;
    return std::make_shared<Widget>(std::to_string(id));
}

// This cache hands out strong references in a thread-safe manner.
//
// When the last handed-out reference goes out of scope, the underlying Widget
// is destroyed. In other words, the cache does not keep Widgets alive unless
// they are held externally.
std::shared_ptr<Widget> fetch(Id id) {
    static std::unordered_map<Id, std::weak_ptr<Widget>> cache;
    static std::mutex cache_mux;

    std::lock_guard<std::mutex> lg(cache_mux); // thread-safe access

    if (auto p = cache[id].lock(); p) { // so much stuff in just this one line
        std::cout << "cache hit" << std::endl; // remove in production
        return p;
    } else {
        std::cout << "cache miss" << std::endl; // remove in production
        return (cache[id] = load_widget(id)).lock();
    }
}

int main() {
    {
        // some scope
        auto widget = fetch(1);
        std::cout << "widget @" << widget.get() << " contains " << *widget << "\n";

        // ask for the same widget again
        auto another_widget = fetch(1);
        std::cout << "widget @" << another_widget.get() << " contains "
                  << *another_widget << "\n";
    }

    // Out of the previous scope. The next call should load widget(1) again.
    {
        // some scope
        auto widget = fetch(1);
        std::cout << "widget @" << widget.get() << " contains " << *widget << "\n";

        // ask for the same widget again
        auto another_widget = fetch(1);
        std::cout << "widget @" << another_widget.get() << " contains "
                  << *another_widget << "\n";
    }

    // Out of the previous scope. Test multi-threaded access.
    {
        std::cout << "testing multi-threaded access" << std::endl;

        std::vector<std::thread> workers;

        auto worker = []() {
            // I won't care about interleaved output
            using namespace std::chrono_literals;
            auto widget = fetch(1); // every worker will use the same widget
            std::this_thread::sleep_for(1s);
        };

        for (std::size_t k = 0; k < 20; ++k) {
            workers.emplace_back(worker);
        }

        for (auto&& t : workers) {
            t.join();
        }
    }

    std::cout << "At this point the cache is 'empty' (in the sense that\n"
              << "it holds no strong references). A periodic pruning of\n"
              << "dead objects would be appropriate if there was a chance\n"
              << "for the cache to grow without bounds. This is unlikely\n"
              << "because Object Id's are likely finite and small in number.\n";
}
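
The closing message mentions periodically pruning dead entries. A minimal sketch of what that could look like follows; it is not part of the gist above, and it assumes the function-local statics are refactored into a small wrapper class so the map and mutex are reachable (the names WeakCache, fetch, and prune are illustrative):

#include <memory>
#include <mutex>
#include <unordered_map>

// Illustrative only: a wrapper that owns the map and mutex so that dead
// weak_ptr entries can be dropped on demand.
template <typename Key, typename T>
class WeakCache {
public:
    // Same semantics as fetch() above, but the expensive loader is passed in.
    template <typename Loader>
    std::shared_ptr<T> fetch(const Key& key, Loader&& load) {
        std::lock_guard<std::mutex> lg(mux_);
        if (auto p = cache_[key].lock(); p) return p;
        return (cache_[key] = load(key)).lock();
    }

    // Erase entries whose objects have already been destroyed, so the map
    // cannot grow without bounds when the key space is large.
    void prune() {
        std::lock_guard<std::mutex> lg(mux_);
        for (auto it = cache_.begin(); it != cache_.end();) {
            if (it->second.expired()) {
                it = cache_.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    std::unordered_map<Key, std::weak_ptr<T>> cache_;
    std::mutex mux_;
};

Calling prune() from a timer thread, or every N calls to fetch(), would keep the map proportional to the number of live Widgets rather than the number of Ids ever requested.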
@arrieta (Author) commented Oct 6, 2018

I found this design in Herb Sutter's CppCon 2016 presentation "Leak-Freedom in C++... By Default".

Possible output:

$ clang++ non-owning-ts-cache.cpp -std=c++17
$ ./a.out 
cache miss
load_widget(1) called
widget @0x7fe996402848 contains 1
cache hit
widget @0x7fe996402848 contains 1
cache miss
load_widget(1) called
widget @0x7fe996402878 contains 1
cache hit
widget @0x7fe996402878 contains 1
testing multi-threaded access
cache miss
load_widget(1) called
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
cache hit
At this point the cache is 'empty' (in the sense that
it holds no strong references). A periodic pruning of
dead objects would be appropriate if there was a chance
for the cache to grow without bounds. This is unlikely
because Object Id's are likely finite and small in number.
