Mark Papadakis (markpapadakis)

💭
Seeking Knowledge 24x7
View GitHub Profile
double AsDouble(const char *p, const uint32_t len)
{
        const char *it = p, *const e = p + len;
        double sign{1}, v{0};

        if (it == e)
                return 0;
        else if (*it == '-')
        {
                sign = -1;
                ++it;
        }

        while (it != e && *it >= '0' && *it <= '9')
                v = v * 10 + (*it++ - '0');

        if (it != e && *it == '.')
        {
                for (double s{0.1}; ++it != e && *it >= '0' && *it <= '9'; s /= 10)
                        v += (*it - '0') * s;
        }

        return sign * v;
}

static uint8_t DigitsCount(uint64_t v)
{
        for (uint8_t res{1};; res += 4, v /= 10000U)
        {
                if (likely(v < 10)) return res;
                if (likely(v < 100)) return res + 1;
                if (likely(v < 1000)) return res + 2;
                if (likely(v < 10000)) return res + 3;
        }
}
bool TrySend(connection *const c)
{
        iovec iov[128];
        int fd = c->fd;

sendv:
        auto it = c->respTail;
        int32_t r;
        bool haveCork;
@markpapadakis
markpapadakis / Lambda architecture and stream processing
Last active August 29, 2015 14:19
Lambda architecture and Stream Processing semantics
This originated from @jboner's tweet (https://twitter.com/jboner/status/588806186667024385):
I was going to email @benjchristensen, but @paulrpayne suggested this may not be the right way to conclude
our participation in a Twitter thread about Lambda architecture semantics, stream processing
and data partitioning.
Here are some of my thoughts on this topic, as well as my experience building and running such services.
The core concept of the Lambda architecture is that ingested/incoming events/messages are
forwarded to two different layers; one effectively buffers them as-is, or with little processing/transformation
markpapadakis / stackless_coros_httpreqs.cpp
Created March 20, 2015 09:23
Pseudo-code: a thread accepting requests and processing them without blocking
struct filepagecache_warmer
    : public coroutine
{
        int fd;
        const uint64_t offset;
        const uint64_t len;

        filepagecache_warmer(int _fd, const uint64_t _o, const uint64_t _l)
            : fd(_fd), offset(_o), len(_l)
        {
        }
};
markpapadakis / simpleStacklessCorosActors.cpp
Last active June 21, 2020 09:43
A very simple (first take) implementation of stack-less coroutines/actors
// https://gist.github.com/markpapadakis/8dba5c480c13b12a056e (example)
// https://medium.com/@markpapadakis/high-performance-services-using-coroutines-ac8e9f54d727
#include <switch.h>
#include <switch_print.h>
#include <switch_ll.h>
#include <switch_bitops.h>
#include <md5.h>
#include <text.h>
#include <network.h>
class Object
{
};
Object **list{nullptr};
uint32_t size{0};
..
..
size = newSize;
Interconnect: LAN (1GB)
Server: 12-core Xeon E5-2620 at 2GHz, 16GB RAM
Client ran on an idle node with a similar h/w configuration, using the latest wrk2 release:
https://github.com/giltene/wrk2
Tried different configuration options; the results are for the configuration that performed best.
See also: https://gist.github.com/markpapadakis/dee39f95a404edfb8d6c
# Apache2: http://10.5.5.20/index.html
Requests/sec: 83704.15
> More or less expected that kind of throughput
- responding with a file that contains 'Hello World'
- Using the latest releases of all benchmarked web servers
- selected because of claims/benchmarks made with regard to their performance/speed
- HTTPSrv is using the same configuration (number of threads; responds with a similar file)
- tried different configs for lighttpd, but it was still really slow
- Using a 12-core node at 2GHz with 16GB of RAM (roughly 2x slower than the system used in the test on that page)
- Except for nginx, the other web servers were tested with their default configuration (nginx with default settings ran slower)
- Clearly, g-web's claims are valid. Faster by a wide margin than all other HTTP servers in this simple test case
(except our optimized HTTPSrv, which, though, is built to support a minimal feature set
and its only real use here is serving static files, and, optionally, resizing images before
template<typename T>
static inline constexpr T Max(void)
{
        return std::numeric_limits<T>::is_signed ? (T)((((uint64_t)~0) >> (64 - (sizeof(T) * 8 - 1)))) : ((T)~0) - 1;
}
template<typename T>
static inline constexpr T Min(void)
{
        return std::numeric_limits<T>::is_signed ? (T)(-Max<T>() - 1) : (T)0;
}