Mark Papadakis (markpapadakis)
Many others and I get no Wikipedia or web-search results in Spotlight, either on Yosemite or on iOS 8.0+. Apparently, that's because Apple doesn't want us to get any results.

Try

curl "https://api.smoot.apple.com/search?q=Apple&locale=en-US&calendar=gregorian&key=andromeda" -H "X-Apple-UI-Scale: 1.000000"   -A "(OS X 14A389) Spotlight/916" -H "Accept-Language: en-us"

Chances are you are getting:

template<typename T>
static const T *BinarySearch(const T *const first, const T *const last, const T value, std::function<int32_t(const T &, const T &)> cmp)
{
	const auto n = last - first;
	int32_t top = n - 1, btm = 0;
	while (btm <= top)
	{
		// Overflow-safe midpoint: equals (btm + top) / 2 without forming the sum
		const auto mid = (btm & top) + ((btm ^ top) >> 1);
		const auto p = first + mid;
		const auto r = cmp(*p, value);
		if (!r)
			return p;
		else if (r < 0)
			btm = mid + 1;
		else
			top = mid - 1;
	}
	return nullptr;
}
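The `(btm & top) + ((btm ^ top) >> 1)` midpoint deserves a note: for non-negative operands it equals `(btm + top) / 2` without ever forming the overflow-prone sum. A standalone restatement:

```cpp
#include <cstdint>

// (a & b) keeps the bits set in both operands - each is counted twice in
// a + b - while ((a ^ b) >> 1) adds half of the bits set in exactly one of
// them, so the total equals (a + b) / 2 with no intermediate overflow.
// Valid for non-negative operands.
static constexpr int32_t Midpoint(const int32_t a, const int32_t b)
{
        return (a & b) + ((a ^ b) >> 1);
}

static_assert(Midpoint(3, 8) == 5, "midpoint of 3 and 8");
static_assert(Midpoint(INT32_MAX - 1, INT32_MAX) == INT32_MAX - 1,
              "no overflow near the top of the range");
```

The naive `(a + b) / 2` would overflow in that last case; the bitwise form never produces an intermediate value larger than either operand.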
// Ref: http://iaroslavski.narod.ru/quicksort/DualPivotQuicksort.pdf
template<typename T>
void DualPivotQuickSort(T *lo, T *hi, const std::function<int32_t(const T &, const T &)> &cmp)
{
	auto *const hiMinus1 = hi - 1;
	if (hi - lo < 3)
	{
		// Base case: ranges of 1 or 2 elements; swap the pair if out of order
		if (hi != lo && cmp(*lo, *hiMinus1) > 0)
			std::swap(*lo, *hiMinus1);
		return;
	}
//
// Dropping extra 0 bits, starting from the most significant bit, in order to create a better LUT for binary search.
// This is a practical example of that idea.
//
// You can e.g. serialize (LUT, totalValues) and, when you need to perform a binary search, consult the LUT first.
// This should do wonders, in particular, when each lookup requires a disk seek (even on SSDs).
//
// You may want to consider multiple LUTs (or skip lists) if the distribution of values is such that the majority of values
// are mapped to the same index. This should be trivial to accomplish, and should help mitigate distribution-related issues.
// A simple LUT-backed index for faster binary search, with LUT partition encoding.
// A Bloom filter can be attached to it for when you have many, many values - though in practice
// it is rarely needed, especially if the bits (resolution) is over 16.
//
// You need (1 << bits) * sizeof(T::key) for the LUT, e.g. if T::key is uint32_t, for a 16-bit LUT that's
// 262KB. Increasing the resolution results in higher lookup efficiency; reducing it results in lower memory requirements.
template<typename T>
static __attribute__((always_inline)) constexpr T MSB(const T n, const uint8_t span)
{
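The comments above describe consulting a LUT before binary searching. A minimal sketch of that idea, assuming sorted `uint32_t` keys partitioned by their top `bits` bits (the `PrefixLUT` name and layout are my illustration, not the gist's actual code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// lut[p] holds the index of the first key whose top `bits` bits are >= p,
// so a lookup only binary-searches the slice [lut[p], lut[p + 1]) instead
// of the whole array. Memory cost: ((1 << bits) + 1) * sizeof(uint32_t).
struct PrefixLUT
{
        uint8_t bits;
        std::vector<uint32_t> lut; // (1 << bits) + 1 entries

        PrefixLUT(const std::vector<uint32_t> &sortedKeys, const uint8_t b)
            : bits{b}, lut((size_t(1) << b) + 1)
        {
                const uint8_t shift = 32 - bits;
                size_t i = 0;

                // One pass over the sorted keys fills every partition boundary
                for (size_t p = 0; p + 1 < lut.size(); ++p)
                {
                        while (i < sortedKeys.size() && (sortedKeys[i] >> shift) < p)
                                ++i;
                        lut[p] = i;
                }
                lut.back() = sortedKeys.size();
        }

        bool lookup(const std::vector<uint32_t> &sortedKeys, const uint32_t key) const
        {
                const size_t p = key >> (32 - bits);
                const auto base = sortedKeys.begin();

                // Search only the partition that can contain this prefix
                return std::binary_search(base + lut[p], base + lut[p + 1], key);
        }
};
```

Serializing `(lut, totalValues)` as the comments suggest then amounts to writing the vector out alongside the key count.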
# CloudDS
2048 requests in sequence - percentiles captured in an HdrHistogram
All CloudDS caching options disabled.
Cluster size: 7
Replication factor: 3
Row size in columns: 10
Row size/payload: 2.5KB
Column family size in rows: 4MM
Cluster configuration: half of the 7 nodes are commodity HW with 8GB of RAM; the rest are blade-class nodes with 4GB of RAM. Gigabit link for interconnect.
Cluster in active/heavy use by many other production services (not idle).
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/file.h>
#include <unistd.h>
#include <linux/limits.h>
#include <errno.h>
#include <string.h>
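The gist preview ends at its includes; judging from `<sys/file.h>`, the topic is flock(2)-based file locking. A hedged sketch of that pattern (`acquire_lock` is my name, not the gist's code):

```c
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/*
 * Acquire an exclusive, non-blocking advisory lock on `path`.
 * Returns the held fd on success, -1 if open fails or the lock is taken.
 */
static int acquire_lock(const char *path)
{
        const int fd = open(path, O_CREAT | O_RDWR, 0644);

        if (fd == -1)
                return -1;

        /* LOCK_NB: fail immediately instead of blocking when another
         * process already holds the exclusive lock */
        if (flock(fd, LOCK_EX | LOCK_NB) == -1)
        {
                close(fd);
                return -1;
        }
        return fd;
}
```

The lock is advisory: it only coordinates processes that also call flock(2) on the same file, and it is released automatically when the descriptor is closed.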
template<typename T>
static inline constexpr T Max(void)
{
	// Signed: all ones shifted right so only (bit width - 1) ones remain; unsigned: all ones
	return std::numeric_limits<T>::is_signed ? (T)(((uint64_t)~0) >> (64 - (sizeof(T) * 8 - 1))) : (T)~0;
}

template<typename T>
static inline constexpr T Min(void)
{
	// Two's complement: min == -max - 1 for signed types, 0 for unsigned
	return std::numeric_limits<T>::is_signed ? (T)(-Max<T>() - 1) : (T)0;
}
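A self-contained restatement of the two bit tricks (the `MaxOf`/`MinOf` names are mine), checked at compile time against `std::numeric_limits`:

```cpp
#include <cstdint>
#include <limits>

// Signed max: a 0 sign bit followed by all 1s, i.e. 64 one-bits shifted
// right until only (bit width - 1) of them remain. Unsigned max: all ones.
template<typename T>
static constexpr T MaxOf()
{
        return std::numeric_limits<T>::is_signed
                   ? (T)(((uint64_t)~0) >> (64 - (sizeof(T) * 8 - 1)))
                   : (T)~0;
}

// Two's complement: min == -max - 1 for signed types, 0 for unsigned
template<typename T>
static constexpr T MinOf()
{
        return std::numeric_limits<T>::is_signed ? (T)(-MaxOf<T>() - 1) : (T)0;
}

static_assert(MaxOf<int32_t>() == std::numeric_limits<int32_t>::max(), "");
static_assert(MinOf<int32_t>() == std::numeric_limits<int32_t>::min(), "");
static_assert(MaxOf<uint16_t>() == std::numeric_limits<uint16_t>::max(), "");
```

The `static_assert`s make the compiler itself verify the identities for each instantiated type.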
- Responding with a file that contains 'Hello World'
- Using the latest releases of all benchmarked web servers
- Selected because of claims/benchmarks made w.r.t. their performance/speed
- HTTPSrv is using the same configuration (number of threads; responds with a similar file)
- Tried different configs for lighttpd, but it's still really slow
- Using a 12-core node at 2GHz with 16GB of RAM (roughly 2x slower than the system used in the test on that page)
- Except for nginx, the other web servers were tested with their default configuration (nginx with default settings ran slower)
- Clearly, the g-web claims are valid: faster by a wide margin than all other HTTP servers in this simple test case
  (except our optimized HTTPSrv, which, though, is built to support a minimal feature set;
  its only real use here is serving static files and, optionally, resizing images before
Interconnect: LAN (1Gbit)
Server: 12-core Xeon E5-2620 at 2GHz, 16GB RAM
Client running on an idle node with a similar h/w configuration. Used the latest wrk2 release:
https://github.com/giltene/wrk2
Tried different configuration options; results shown are for the configuration that gave the best results.
See also: https://gist.github.com/markpapadakis/dee39f95a404edfb8d6c
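For reproducibility, a typical wrk2 invocation for this kind of fixed-rate latency test looks like the following; the thread, connection, and rate values here are placeholders, not the parameters actually used in these runs:

```shell
# 8 threads, 256 connections, 60s run, fixed 50k req/s arrival rate,
# with a latency histogram (wrk2's -R and --latency flags)
wrk -t8 -c256 -d60s -R50000 --latency http://10.5.5.20/index.html
```

Unlike plain wrk, wrk2's fixed arrival rate (`-R`) avoids coordinated omission, which matters when reporting high percentiles.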
# Apache2: http://10.5.5.20/index.html
Requests/sec: 83704.15
> More or less the kind of throughput one would expect