Modular Protocol Framework proposal

Author: Zsolt Felfoldi [email protected]

Abstract

This document proposes a standardized handshake and message/metadata format for peer-to-peer protocols. Implementing this standard in Ethereum-related protocols would make it easy to share mechanisms like flow control or micropayment incentivization. It could also replace devp2p protocol multiplexing, which separates protocols completely, and instead handle different protocols as (not necessarily disjoint) sets of supported messages or message/metadata combinations.
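To illustrate the idea, the sketch below shows a handshake that advertises capabilities as a set of supported message codes rather than as whole multiplexed protocols. All names here (MessageCode, Handshake, CommonMessages) are hypothetical and are not part of devp2p or of this proposal's wire format.

```go
package main

import "fmt"

// MessageCode identifies a single message type; hypothetical numbering,
// not taken from any existing devp2p protocol.
type MessageCode uint64

// Handshake advertises the peer's supported messages as a set instead of
// a list of whole, mutually exclusive protocols.
type Handshake struct {
	Supported map[MessageCode]struct{}
}

// CommonMessages returns the message codes both peers support; shared
// mechanisms (flow control, payments) can then be negotiated per message
// rather than per protocol.
func CommonMessages(a, b Handshake) []MessageCode {
	var common []MessageCode
	for code := range a.Supported {
		if _, ok := b.Supported[code]; ok {
			common = append(common, code)
		}
	}
	return common
}

func main() {
	local := Handshake{Supported: map[MessageCode]struct{}{1: {}, 2: {}, 5: {}}}
	remote := Handshake{Supported: map[MessageCode]struct{}{2: {}, 5: {}, 9: {}}}
	fmt.Println(CommonMessages(local, remote)) // e.g. [2 5] (map order varies)
}
```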

Server capacity

In the LES service model, capacity is defined as a scalar value assigned to each client, while the total capacity of simultaneously connected clients is limited (totalConnCap <= totalCapacity). The exact meaning of capacity values may depend on the server implementation and may change in subsequent versions. The behavior expected from the server is that, in any situation, the served amount of similar types of requests should be proportional to the capacity value. (Note: this behavior is also tested by https://github.com/ethereum/go-ethereum/blob/master/les/api_test.go). The server also specifies a minimum useful value for client capacity, minCapacity, which in practice means a flow control buffer large enough to allow the client to send all types of requests.
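A minimal sketch of the capacity bookkeeping described above, under the assumption that admission simply fails when the limit would be exceeded; capacityPool and its methods are illustrative names, not the go-ethereum implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// capacityPool is an illustrative model of the LES capacity constraint:
// the sum of the capacities of connected clients may not exceed the
// server's total capacity, and no client is admitted below minCapacity.
type capacityPool struct {
	totalCapacity uint64
	minCapacity   uint64
	totalConnCap  uint64
	clients       map[string]uint64
}

func (p *capacityPool) connect(id string, capacity uint64) error {
	if capacity < p.minCapacity {
		return errors.New("capacity below minimum useful value")
	}
	if p.totalConnCap+capacity > p.totalCapacity {
		return errors.New("total capacity of connected clients would be exceeded")
	}
	p.clients[id] = capacity
	p.totalConnCap += capacity
	return nil
}

// shareOf returns the fraction of serving resources a client should receive;
// the amount of served requests is expected to be proportional to this value.
func (p *capacityPool) shareOf(id string) float64 {
	return float64(p.clients[id]) / float64(p.totalConnCap)
}

func main() {
	pool := &capacityPool{totalCapacity: 1000, minCapacity: 100, clients: make(map[string]uint64)}
	fmt.Println(pool.connect("a", 300), pool.connect("b", 600)) // <nil> <nil>
	fmt.Println(pool.connect("c", 200))                         // rejected: limit exceeded
	fmt.Printf("client a share: %.2f\n", pool.shareOf("a"))     // 0.33
}
```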

The total capacity of the server can change dynamically for various reasons. The actual request serving capacity can change due to external circumstances and can only be properly measured while the server is actually serving requests.

### New LES server API calls implemented in https://github.com/ethereum/go-ethereum/pull/18230

```go
// TotalCapacity queries total available capacity for all clients
func (api *PrivateLightServerAPI) TotalCapacity() hexutil.Uint64

// SubscribeTotalCapacity subscribes to changed total capacity events.
// If onlyUnderrun is true then notification is sent only if the total capacity
// drops under the total capacity of connected priority clients.
//
// Note: actually applying decreasing total capacity values is delayed while the
```

Bandwidth model

The quality of service received through an LES connection depends on many factors and hard guarantees cannot be provided. What LES and its flow control design aim at is some degree of predictability. Servers have an estimate of their serving capacity for different types of requests and assign a maximum estimated cost value to each request type. They also define the concept of bandwidth, which equals the "minimum recharge rate" flow control parameter.

In the current implementation the request cost is interpreted as a conservative estimate for the serving time of the request in a single thread, in nanoseconds. After serving the request, the difference between the upper estimate and the actual serving time is added to the buffer as extra recharge (if positive). If some client buffers are already full then the unused amount of buffer recharge is also distributed among the currently recharging ones, proportionally to their bandwidth.
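The following is a simplified model of the buffer accounting described above, not the actual les/flowcontrol package: the conservative maximum cost estimate is charged before serving and the unused part is refunded afterwards. The names (clientBuffer, accept, served) are assumptions for this sketch, and the redistribution of unused recharge among other clients is omitted.

```go
package main

import "fmt"

// clientBuffer is a simplified model of LES flow control buffer accounting.
// Costs are expressed in nanoseconds of single-threaded serving time, as in
// the description above; the field names are illustrative.
type clientBuffer struct {
	bufValue uint64 // current buffer value
	bufLimit uint64 // maximum buffer value
	recharge uint64 // minimum recharge rate ("bandwidth"), units per second
}

// accept charges the conservative maximum cost estimate before serving.
// It returns false if the buffer cannot cover the estimate.
func (b *clientBuffer) accept(maxCost uint64) bool {
	if b.bufValue < maxCost {
		return false
	}
	b.bufValue -= maxCost
	return true
}

// served refunds the difference between the maximum estimate and the actual
// serving time after the request has been processed (if positive).
func (b *clientBuffer) served(maxCost, realCost uint64) {
	if realCost < maxCost {
		b.bufValue += maxCost - realCost
		if b.bufValue > b.bufLimit {
			b.bufValue = b.bufLimit
		}
	}
}

func main() {
	b := &clientBuffer{bufValue: 1_000_000, bufLimit: 1_000_000, recharge: 100_000}
	if b.accept(300_000) { // charge the upper estimate up front
		b.served(300_000, 120_000) // refund the unused part after serving
	}
	fmt.Println(b.bufValue) // 880000
}
```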

Since bandwidth is specified as maximum recharged units per

A payment channel protocol for LES incentivization

Author: Zsolt Felfoldi ([email protected])

Special thanks to:

  • Daniel A. Nagy for the idea of the "stamper" role
  • Viktor Tron and the rest of the Swarm team for their work on the SWAP, SWEAR and SWINDLE framework

Abstract

Probabilistic "light" verification of the beacon chain

Author: Zsolt Felföldi ([email protected])

Abstract

This document explores the possibility of a probabilistic syncing method on the beacon chain that requires significantly less network bandwidth than full verification while still providing a high level of safety based on the security assumption of Casper (that a 2/3 majority consensus of the active validator set is always honest). It also proposes a new representation of the active validator set in the crystallized state and an extra rule when changing the validator set in order to make the necessary verification steps possible and make the design "light client friendly".

Note: this proposal is based on the latest version of the specification at the time of writing:

State size limitation proposal

totalSize(block): the sum of the sizes of RLP-encoded trie nodes and contract codes in the account trie and storage tries. Items or subtrees referenced multiple times are counted multiple times. Computing it needs a database upgrade to store totalSubtreeSize for each internal node; the total size is then the total subtree size of the state root. Storing it in consensus is not needed.

sizeLimit(block): specified in the protocol and can only be changed for future blocks in a hard fork. (only applies to the state, chain history is not discussed here)
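A recursive sketch of how totalSubtreeSize could be maintained and checked against sizeLimit; the node structure here is hypothetical and much simpler than the actual go-ethereum state trie.

```go
package main

import "fmt"

// node is a hypothetical trie node used only to illustrate the bookkeeping;
// the real state trie has a different structure.
type node struct {
	encodedSize      uint64 // size of the RLP-encoded node (or contract code)
	children         []*node
	totalSubtreeSize uint64 // cached sum, stored per internal node in the database
}

// computeTotalSubtreeSize fills in totalSubtreeSize bottom-up. Subtrees that
// are referenced multiple times are counted each time they are reached,
// matching the definition of totalSize above.
func computeTotalSubtreeSize(n *node) uint64 {
	total := n.encodedSize
	for _, c := range n.children {
		total += computeTotalSubtreeSize(c)
	}
	n.totalSubtreeSize = total
	return total
}

// withinLimit checks the protocol-level constraint totalSize(block) <= sizeLimit(block).
func withinLimit(stateRoot *node, sizeLimit uint64) bool {
	return stateRoot.totalSubtreeSize <= sizeLimit
}

func main() {
	leaf := &node{encodedSize: 40}
	root := &node{encodedSize: 100, children: []*node{leaf, leaf}} // shared subtree counted twice
	fmt.Println(computeTotalSubtreeSize(root), withinLimit(root, 200)) // 180 true
}
```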

Individual entry size

Since internal account trie nodes do not belong to a single entry, we use an estimated value overheadEstimate on top of the actual entry size when calculating the individual entry size on which the actual pricing is based.
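As a sketch of the pricing input described above: overheadEstimate is the only name taken from the text, and the concrete value used here is made up for illustration.

```go
package main

import "fmt"

// overheadEstimate approximates the share of internal trie node bytes that
// cannot be attributed to a single entry; the value is illustrative only.
const overheadEstimate = 96 // bytes per entry, assumed for this sketch

// pricedEntrySize is the value pricing would be based on: the actual encoded
// entry size plus the estimated per-entry trie overhead.
func pricedEntrySize(actualEntrySize uint64) uint64 {
	return actualEntrySize + overheadEstimate
}

func main() {
	fmt.Println(pricedEntrySize(110)) // 206
}
```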

^Cpanic: boom
goroutine 150 [running]:
github.com/ethereum/go-ethereum/internal/debug.LoudPanic(0xdffc20, 0x1142b30)
/home/fefe/go/src/github.com/ethereum/go-ethereum/internal/debug/loudpanic.go:26 +0x4e
github.com/ethereum/go-ethereum/cmd/utils.StartNode.func1(0xc420112280)
/home/fefe/go/src/github.com/ethereum/go-ethereum/cmd/utils/cmd.go:79 +0x23a
created by github.com/ethereum/go-ethereum/cmd/utils.StartNode
/home/fefe/go/src/github.com/ethereum/go-ethereum/cmd/utils/cmd.go:65 +0xb7

You can run TrueBit verification games for C/C++ code on both Kovan and Rinkeby testnets, and we welcome you to test our code! Please start with this Docker container:

https://github.com/TrueBitFoundation/test-node-docker

We have a VM that is based on WASM, with some changes to make it easier to interpret. Here is some documentation: https://github.com/TrueBitFoundation/ocaml-offchain/wiki/Initializing-and-preprocessing-WebAssembly

Currently, the code has to be written in C/C++ or Rust, and for input and output the program has to read and write files. On the blockchain, the files are identified by their binary merkle roots. So, for example, if one would like to calculate a sha256 hash of a large file, the task would have the merkle root of the file as an argument. But if the merkle root is nonsense and somebody posts a bad solution, there is no efficient way to disprove it (the data availability problem). So there might be some problems with checking data that is not available in the blockchain itself.
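A sketch of how a file could be identified by a binary merkle root as described above; the chunk size, the use of sha256, and the handling of odd levels are assumptions for this example, not taken from the TrueBit code.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const chunkSize = 32 // bytes per leaf; an assumption for this sketch

// merkleRoot computes a binary merkle root over fixed-size chunks of data.
// Odd nodes at each level are paired with themselves; real implementations
// may pad or handle this differently.
func merkleRoot(data []byte) [32]byte {
	var level [][32]byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, sha256.Sum256(data[i:end]))
	}
	if len(level) == 0 {
		return sha256.Sum256(nil)
	}
	for len(level) > 1 {
		var next [][32]byte
		for i := 0; i < len(level); i += 2 {
			j := i + 1
			if j == len(level) {
				j = i // duplicate the last node on odd levels
			}
			combined := append(level[i][:], level[j][:]...)
			next = append(next, sha256.Sum256(combined))
		}
		level = next
	}
	return level[0]
}

func main() {
	root := merkleRoot([]byte("a large input file would be chunked like this"))
	fmt.Printf("%x\n", root) // the task would reference this root on-chain
}
```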