@ThePhD
Created June 18, 2014 12:11
void ThreadPool::QueueWork( TWork item ) {
	static const std::size_t n = 256;
	typedef fixed_vector<proxy_cost_t, n> container_t;
	std::lock_guard<std::mutex> lockguard( additionalworklock );
	container_t workingcosts( threads.size( ) );
	for ( std::size_t i = 0; i < workingcosts.size( ); ++i ) {
		workingcosts[ i ].first = costs[ i ].id;
		workingcosts[ i ].second = costs[ i ].cost;
	}
	// We construct a fresh container every time.
	// If we don't and instead keep one in memory,
	// other threads will update the costs in the priority queue
	// but not trigger a re-heapify: this breaks the
	// priority queue's invariants
	// (unless we pop everything and re-push from the other thread,
	// which is a dangerous game, because then we have to lock on
	// all threads when doing cost updates).
	// Here we lose to-the-cycle-exact costs, but for the most part
	// maintain a good ordering without too much damage, thanks to
	// fixed_vector's "allocate-where-you-are" semantics.
	std::priority_queue<proxy_cost_t, container_t, compare_cost> priorities( compare_cost( ), std::move( workingcosts ) );
	proxy_cost_t top = priorities.top( );
	++costs[ top.first ].cost;
	QueueWorkAt( top.first, std::move( item ) );
}