These are insights shared by various people on operating in a downturn. These notes are incomplete by nature, but I'm sharing this to learn in public.
Contents:
import { ethers } from 'ethers'
// To use this you need to read the Uniswap v2 contract for a pair.
// PRICE pulled from priceXCumulativeLast
// TIMESTAMPS pulled from _blockTimestampLast in getReserves()
// Mock data:
// in a real scenario you would fetch and store price & timestamp at an interval
// to mirror the contract calculating the TWAP on chain
const price0 = '529527205677379158060966860839'
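The snippet cuts off at the first accumulator reading, but a TWAP needs two observations. The sketch below fills in the rest of the arithmetic; the second reading and both timestamps are hypothetical values invented purely for illustration:

// Hypothetical second observation (the cumulative price only grows)
const price1 = (BigInt(price0) + 10n ** 30n).toString()
// Hypothetical _blockTimestampLast values, one hour apart
const timestamp0 = 1650000000
const timestamp1 = 1650003600
// TWAP over the window = (cumulative price delta) / (elapsed seconds);
// Uniswap v2 accumulators are UQ112x112 fixed point, so divide by 2**112
const twapFixed = (BigInt(price1) - BigInt(price0)) / BigInt(timestamp1 - timestamp0)
const twap = Number(twapFixed) / 2 ** 112
console.log('TWAP:', twap)

In a live setup the two readings would come from the pair contract via ethers at the start and end of the window, as the comments above describe.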
Lecture 1: Introduction to Research — [📝Lecture Notebooks]
Lecture 2: Introduction to Python — [📝Lecture Notebooks]
Lecture 3: Introduction to NumPy — [📝Lecture Notebooks]
Lecture 4: Introduction to pandas — [📝Lecture Notebooks]
Lecture 5: Plotting Data — [📝Lecture Notebooks]
I am testing Postgres insertion performance. I have a table with a single numeric column, with an index on it. I filled the database using this query:
insert into aNumber (id) values (564),(43536),(34560) ...
I inserted 4 million rows very quickly, 10,000 at a time, with the query above. After the database reached 6 million rows, performance drastically declined to 1 million rows every 15 minutes. Is there any trick to increase insertion performance? I need optimal insertion performance on this project. I am using Windows 7 Pro on a machine with 5 GB RAM.
#!/bin/bash -ex
# Benchmark environment: PostgreSQL 9.3 binaries and a dedicated data directory
export PGBIN=/usr/pgsql-9.3/bin
export PGUSER=postgres
export PGDATABASE=bench
export DATADIR=/dados/pgbench
export CLUSTER_LOG=/tmp/benchmark.log
# Physical cores per socket, read from /proc/cpuinfo
export TOTAL_CPUS=$(grep 'cpu cores' /proc/cpuinfo | uniq | awk '{print $NF}')
A few things that help here:
- Use an UNLOGGED table. This reduces the amount of data written to persistent storage by up to 2x.
- Create the table WITH (autovacuum_enabled=false). This saves CPU time and IO bandwidth on useless vacuuming of the table (since we never DELETE or UPDATE the table).
- Load rows with COPY FROM STDIN. This is the fastest possible approach to insert rows into a table.
- Keep column types minimal; for the timestamp, a column declared time timestamp with time zone is enough.
- Set synchronous_commit = off in postgresql.conf.
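As a concrete illustration, here is a minimal, hypothetical Node sketch combining three of these points (UNLOGGED, autovacuum off, COPY FROM STDIN) with the pg and pg-copy-streams packages. It reuses the aNumber table from the question, but the row source is synthetic:

import pg from 'pg'
import { from as copyFrom } from 'pg-copy-streams'
import { Readable } from 'node:stream'
import { pipeline } from 'node:stream/promises'

const client = new pg.Client() // connection settings taken from PG* env vars
await client.connect()
// UNLOGGED skips WAL writes; autovacuum_enabled=false skips pointless vacuums
await client.query(
  'CREATE UNLOGGED TABLE IF NOT EXISTS aNumber (id integer) WITH (autovacuum_enabled = false)'
)
// COPY streams every row over one connection instead of many INSERT statements
const sink = client.query(copyFrom('COPY aNumber (id) FROM STDIN'))
const rows = Readable.from(function* () {
  for (let i = 0; i < 1_000_000; i++) yield `${i}\n` // synthetic data for the sketch
}())
await pipeline(rows, sink)
await client.end()

Creating the index after the load, rather than before, also helps: the index is built once instead of being maintained for every inserted row.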
A while back I wrote a blog post explaining Markov chains and demonstrating different ways of finding their steady-state distribution in R. Now, I want to play with Markov chains as a graph. I'm going to pull examples from around the internet and answer the same questions in Cypher as the authors do with matrices. This gives me the opportunity to explore more advanced Cypher queries while working with a topic I enjoy very much (stochastic processes and Markov chains). So this is officially just for funsies.
I found three Markov chains online that I’m going to showcase, and they involve the following topics: