Younès HAFRI (yhafri), Switzerland
@yhafri
yhafri / minipool.hpp
Created June 29, 2020 06:08 — forked from rofirrim/minipool.hpp
Very simple memory pool
#ifndef MINIPOOL_H
#define MINIPOOL_H
#include <cassert>
#include <cstddef>
#include <memory>
#include <new>
#include <utility>
/*
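The preview stops at the headers; as a rough illustration of what a "very simple memory pool" can look like, here is a free-list sketch of mine (illustrative only, not rofirrim's actual minipool):

#include <cstddef>
#include <memory>
#include <new>
#include <utility>

// Fixed-capacity pool: free slots are threaded into an intrusive free list.
template <typename T>
class pool {
    union slot {
        slot *next;                                   // when the slot is free
        alignas(T) unsigned char storage[sizeof(T)];  // when the slot is live
    };
    std::unique_ptr<slot[]> arena_;
    slot *free_ = nullptr;

public:
    explicit pool(std::size_t n) : arena_(new slot[n]) {
        for (std::size_t i = 0; i < n; ++i) {
            arena_[i].next = free_;
            free_ = &arena_[i];
        }
    }

    template <typename... Args>
    T *alloc(Args &&...args) {
        if (!free_) throw std::bad_alloc{};
        slot *s = free_;
        free_ = s->next;
        return new (s->storage) T(std::forward<Args>(args)...);
    }

    void free(T *p) {
        p->~T();
        auto *s = reinterpret_cast<slot *>(p);
        s->next = free_;
        free_ = s;
    }
};

// usage: pool<int> p{128}; int *x = p.alloc(42); p.free(x);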
yhafri / simpleStacklessCorosActors.cpp
Created June 21, 2020 09:43 — forked from markpapadakis/simpleStacklessCorosActors.cpp
A very simple (first take) implementation of stack-less coroutines/actors
// https://gist.github.com/markpapadakis/8dba5c480c13b12a056e (example)
// https://medium.com/@markpapadakis/high-performance-services-using-coroutines-ac8e9f54d727
#include <switch.h>
#include <switch_print.h>
#include <switch_ll.h>
#include <switch_bitops.h>
#include <md5.h>
#include <text.h>
#include <network.h>
// Count the decimal digits of v, testing four digit boundaries per iteration.
static uint8_t DigitsCount(uint64_t v)
{
    for (uint8_t res{1};; res += 4, v /= 10000U)
    {
        if (likely(v < 10)) return res;
        if (likely(v < 100)) return res + 1;
        if (likely(v < 1000)) return res + 2;
        if (likely(v < 10000)) return res + 3;
    }
}
yhafri / FastEditDistanceCandidates.cpp
Created June 21, 2020 09:41 — forked from markpapadakis/FastEditDistanceCandidates.cpp
From input string, identify all strings within a specific edit distance from it - based on deletions only.
// Compile a list of tokens based on deletions, respecting the max edit distance.
// The original implementation concatenated left/right string segments using memcpy() etc, but that didn't seem right.
// The idea is to instead partition S into one or more segments and build the hash from them; no memory copies needed.
// This is almost 8x faster than that approach, but it's still in the tens-of-microseconds range.
//
// (for a 31-character string with max edit distance = 2, it takes 24 microseconds, including the cost of computing the FNV rolling hash)
#include <switch.h>
#include <switch_print.h>
#include <ansifmt.h>
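The snippet above depends on switch.h; as a standalone sketch of the deletions-only idea (my reconstruction, with plain FNV-1a in place of the original's rolling hash), the trick is to hash around the deleted positions instead of concatenating segments with memcpy():

#include <cstddef>
#include <cstdint>
#include <string_view>
#include <vector>

// FNV-1a over s with the (sorted) positions in `skip` deleted; no copies made.
static uint64_t hash_skipping(std::string_view s, const std::vector<std::size_t> &skip) {
    uint64_t h = 1469598103934665603ULL; // FNV offset basis
    std::size_t k = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        if (k < skip.size() && skip[k] == i) { ++k; continue; } // deleted position
        h = (h ^ static_cast<uint8_t>(s[i])) * 1099511628211ULL; // FNV prime
    }
    return h;
}

// Emit hashes of all variants of s within `left` deletions; positions are
// chosen in increasing order so each subset is visited exactly once.
static void collect(std::string_view s, std::size_t from, std::size_t left,
                    std::vector<std::size_t> &skip, std::vector<uint64_t> &out) {
    out.push_back(hash_skipping(s, skip));
    if (!left) return;
    for (std::size_t i = from; i < s.size(); ++i) {
        skip.push_back(i);
        collect(s, i + 1, left - 1, skip, out);
        skip.pop_back();
    }
}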
yhafri / Trinity intro.md
Created June 21, 2020 09:40 — forked from markpapadakis/Trinity intro.md
Trinity Intro

Trinity is a modern C++ information-retrieval library for building queries, indexing documents and other content, running queries, and scoring the documents that match them. It facilitates the development of search engines and other systems and applications that depend on such functionality, and it has been designed with simplicity, performance, modularity, extensibility, and elegance in mind.

This is the initial release, so new features and API extensions may come in later releases. It has been under development for two weeks, and it will be updated frequently with improvements, enhancements, and fixes for any reported issues.

It's named after Trinity in the Matrix movies, and also after the trinity of (query, index, search), the core functions the library supports.


Trinity makes it easy to index documents and access all documents that match a query. It solves the recall problem and everything else about it, and it gives

#!/bin/bash
# This builds a self-contained archive for access to abseil containers+hashes
mkdir /tmp/BUILDYARD
cd /tmp/BUILDYARD
git clone --depth 1 https://github.com/abseil/abseil-cpp.git
cd abseil-cpp
mkdir build
cd build
cmake .. -DABSL_RUN_TESTS=ON -DABSL_USE_GOOGLETEST_HEAD=ON -DCMAKE_CXX_STANDARD=17
cmake --build . --target all
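As a quick smoke test of the build (assuming the headers and libraries from the tree above are on your include and link paths):

#include <cstdio>
#include <string>

#include "absl/container/flat_hash_map.h"

int main() {
    // flat_hash_map is one of the abseil containers the archive exposes
    absl::flat_hash_map<std::string, int> counts;
    ++counts["abseil"];
    ++counts["abseil"];
    std::printf("abseil -> %d\n", counts["abseil"]);
}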
// use it like so:
// auto l = co_await LM.lock(co_await this_coro{}, resource_type, resource_id);
// auto l = co_await LM.shared_lock(resource_type, resource_id);
//
// this_coro{} is a stub; using the await_transform() facilities of promise_type{}
// facilitates zero-cost access to the current coroutine handle, which is
// used for exclusive locks, e.g.
/*
struct this_coro final {
//
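A minimal self-contained sketch of the await_transform() trick described above (names are illustrative; this is not the gist's actual code):

#include <coroutine>

struct this_coro final {};

struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}

        // co_await this_coro{} yields the current coroutine's handle without
        // ever actually suspending: zero cost at runtime.
        auto await_transform(this_coro) noexcept {
            struct awaiter {
                std::coroutine_handle<promise_type> h;
                bool await_ready() const noexcept { return false; }
                bool await_suspend(std::coroutine_handle<promise_type> in) noexcept {
                    h = in;
                    return false; // do not suspend; resume immediately
                }
                auto await_resume() const noexcept { return h; }
            };
            return awaiter{};
        }
    };
};

// usage: the handle identifies the coroutine to, e.g., an exclusive lock manager
task demo() {
    auto h = co_await this_coro{};
    (void)h; // h.address() is unique to this running coroutine
}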
yhafri / tsunami.MD
Created June 15, 2020 22:08 — forked from AllenEllis/tsunami.MD
Faster UDP file transfers using tsunami

I recently had to move about 3 TB of video footage between residential gigabit fiber connections. The fastest speed I could get through Dropbox and other cloud providers was around 150 Mbps, so I opted for a more direct approach.

After using Resilio Sync to buffer the files onto an Ubuntu virtual machine, I installed Tsunami. With it I was able to get 600 Mbps with no tuning, from a server in Los Angeles to my residential connection in Pittsburgh.

tsunami

What it does:

  • Very fast file transfers, because it's UDP-based

What it does not:

  • Support transferring more than one file [1]
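From memory of the tsunami-udp command line (treat the exact invocations as an assumption and check your build's docs): the sender serves files with tsunamid, and the receiver connects and fetches them:

# sender: serve the file over UDP
tsunamid bigfile.mkv

# receiver: connect, fetch, and exit
tsunami connect sender.example.com get bigfile.mkv quit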
yhafri / puppeteer.js
Created May 19, 2020 20:55 — forked from rcarmo/puppeteer.js
Trying to get a single-page PDF screenshot out of puppeteer
const puppeteer = require('puppeteer');

// Promise-based sleep helper
function sleep(ms) {
  return new Promise(resolve => {
    setTimeout(resolve, ms);
  });
}

(async () => {
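  // The gist preview is truncated above; what follows is my sketch of a
  // plausible completion, not rcarmo's actual code.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  await sleep(1000); // let late-loading content settle

  // Measure the full page height, then emit a single tall PDF page
  const height = await page.evaluate(() => document.body.scrollHeight);
  await page.pdf({
    path: 'page.pdf',
    printBackground: true,
    width: '1920px',
    height: `${height}px`,
  });

  await browser.close();
})();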
yhafri / lazy-load-puppeteer.js
Created May 14, 2020 11:18 — forked from mcspx/lazy-load-puppeteer.js
Lazy-loading content in Puppeteer
// Promise-based wait helper
function wait (ms) {
  return new Promise(resolve => setTimeout(() => resolve(), ms));
}

export default async function capture(browser, url) {
  // Load the specified page
  const page = await browser.newPage();
  await page.goto(url, {waitUntil: 'load'});

  // Get the height of the rendered page
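  // The gist preview is truncated above; what follows is my sketch of a
  // plausible continuation of the lazy-load pattern, not necessarily the
  // gist's actual code.
  const bodyHandle = await page.$('body');
  const { height } = await bodyHandle.boundingBox();
  await bodyHandle.dispose();

  // Scroll one viewport at a time, pausing so lazy-loaded assets can load
  const viewportHeight = page.viewport().height;
  let scrolled = 0;
  while (scrolled + viewportHeight < height) {
    await page.evaluate(vh => window.scrollBy(0, vh), viewportHeight);
    await wait(100);
    scrolled += viewportHeight;
  }

  // Back to the top before capturing the full page
  await page.evaluate(() => window.scrollTo(0, 0));
  await wait(100);
  return page.screenshot({ fullPage: true });
}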