- ENR.dhtAddress(): the Kademlia address of a node, calculated according to the ID scheme
- ENR.cipher( packetData, key ): cipher algorithm
- ENR.symmEncryptKey( privKeyA, pubKeyB ): symmetric encryption key generator
- ENR.asymmEncrypt( packetData, recipient.pubKey ): asymmetric encrypt function
- ENR.signature( packetData, privKey ): digital signature
- ENR.powValid( packetHash, packetType, packetFormat ): proof of work validity check
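The list above only names the primitives. As one example of what an implementation might look like, here is a minimal Go sketch of ENR.symmEncryptKey, assuming plain ECDH on the node key's curve with SHA-256 as the KDF; the actual derivation scheme is not specified here.

```go
package discv5

import (
	"crypto/ecdsa"
	"crypto/sha256"
)

// symmEncryptKey derives a shared symmetric key from our private key and the
// peer's public key via ECDH. Hashing the shared x coordinate as a KDF is an
// assumption; the spec above does not fix the derivation function.
func symmEncryptKey(privKeyA *ecdsa.PrivateKey, pubKeyB *ecdsa.PublicKey) []byte {
	// ECDH: multiply the peer's public point by our private scalar.
	x, _ := pubKeyB.Curve.ScalarMult(pubKeyB.X, pubKeyB.Y, privKeyA.D.Bytes())
	key := sha256.Sum256(x.Bytes())
	return key[:]
}
```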
Running benchmark without bloombits
Clearing bloombits data...
Cleared bloombits data
Running filter benchmarks...
Finished running filter benchmarks
1m2.479473502s total  17.342566172s per million blocks
BenchmarkNoBloomBits-4   1   67490450777 ns/op
Running bloombits benchmark  section size: 512  compression method: 0
0xa0b325beb7cd53c031bf0ec17532ce1ffa363366
Here are some pointers to implementing the ULC mode. Just take a look at the relevant code, then feel free to ping me whenever you have questions (I guess it will happen a few times; I tried to document the most important parts, but some of the code is still a bit messy, for which I apologize in advance).
- adding the ultra light client option:
Add an extra option for light client mode where the user can specify a text/JSON ULC config file that contains a list of trusted server enodes and the N-out-of-M parameters (which may depend on how many servers we are connected to: there may be 5 servers listed, but we can still accept 2 out of 3 connected); see the config sketch after this list.
Geth parameters are implemented like this:
https://github.com/ethereum/go-ethereum/blob/master/cmd/utils/flags.go#L162
Search for LightModeFlag or LightServFlag to find out how they are wired through the code.
- requesting signed headers
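For the ULC config file mentioned above, a minimal sketch of what the JSON shape and loader could look like; the type, function, and field names (ulcConfig, loadULCConfig, trustedServers, minAgreeing, outOf) are illustrative assumptions, not existing geth APIs.

```go
package ulc

import (
	"encoding/json"
	"os"
)

// ulcConfig is a hypothetical shape for the ULC config file: a list of
// trusted LES server enodes plus the N-out-of-M acceptance parameters.
type ulcConfig struct {
	TrustedServers []string `json:"trustedServers"` // enode:// URLs of trusted servers
	MinAgreeing    int      `json:"minAgreeing"`    // N: connected trusted servers that must confirm a header
	OutOf          int      `json:"outOf"`          // M: connected trusted servers considered
}

// loadULCConfig reads and parses the config file given on the command line.
func loadULCConfig(path string) (*ulcConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := new(ulcConfig)
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}
```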
I've collected some points to be addressed at the Nov 27 LES collaboration kickoff meeting:
- approximately how much time can you contribute to LES-related tasks in the foreseeable future?
- preferred communication channels and methods?
  - gitter channel
  - github issue tracking
  - two standup calls per week
  - regularly scheduled discussion calls for individual tasks
  - choose a suitable service for voice calls and screen sharing
- questions about individual tasks: who is interested in which one (not necessarily committing to them during the first chat, just discussing)
There are two more or less separate directions I'd like to pursue for testing LES. One of them is testing the actual protocol functionality through all available APIs and platforms in a synthetic environment: an isolated private chain and test nodes (probably created with Puppeth), with reproducible API call test sequences under controlled peer connections/disconnections. The other direction is testing automatic peer connectivity behavior and request load distribution in a real-world setting, with our own LES servers doing actual service on the mainnet while logging certain events and collecting them in a database.
Right now LES is practically unusable on the public network because of serious peer connectivity issues, so I believe the "real world" testing is more urgent at the moment. There are a number of factors that probably contribute to the connectivity problems:
- Peer discovery
LES uses an experimental DHT that allows node capability advertisement. This DHT has some problems, and it would make sense to log its behavior as part of the real-world test setup.
This is my proposal for efficiently garbage collecting tries. It is low overhead and easy to maintain, so hopefully not painful in the long term. In the short term it is a bit painful because it changes the database structure and also requires a protocol change for fast sync (though not for LES). Still, I think it is manageable, and I feel that this is the "right" way to store/access/reference trie nodes. It could also significantly increase both EVM and fast sync performance.
- old format: (trie node hash) -> (trie node)
- new format: (key prefix) (trie node hash) (creation block number) -> (trie node)
- new format for contract storage tries: (contract key) (storage key prefix) (trie node hash) (creation block number) -> (trie node)
This GC scheme proposal is somewhat related to my first proposal https://gist.github.com/zsfelfoldi/5c4f36fb8a898acd092a62dea4336f88 but it is also closer to the reference counting approach because it does not duplicate trie nodes, it just stores extra reference data. Instead of just counting references to nodes, it actually stores all refs, which is still not a big overhead but greatly simplifies the GC process. Iterating through the old tries is not necessary; we can just scan the db and remove expired refs and data. Since nodes are still referenced by hash, there is also no need to modify the fast sync protocol.
Note: I still believe that reorganizing nodes by position (as happens in my first proposal) might yield some performance improvements; on the other hand, if we can keep most of the state in memory then this is not really relevant. This GC variant is definitely easier and safer to implement.
- Trie nodes are stored as they are now:
  (trie node hash) -> (trie node)
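To make the new key layout concrete, a small Go sketch of how a reference-entry key could be assembled; the helper name refKey and the big-endian block-number encoding are assumptions, while the field order follows the formats listed above. Big-endian encoding keeps the refs for one node ordered by creation block, so the GC pass stays a plain database scan, with no trie iteration.

```go
package gctrie

import (
	"encoding/binary"

	"github.com/ethereum/go-ethereum/common"
)

// refKey assembles a reference-entry key in the proposed new format:
// (key prefix) (trie node hash) (creation block number) -> (trie node).
func refKey(prefix []byte, nodeHash common.Hash, createdAt uint64) []byte {
	key := make([]byte, 0, len(prefix)+len(nodeHash)+8)
	key = append(key, prefix...)
	key = append(key, nodeHash[:]...)
	var num [8]byte
	binary.BigEndian.PutUint64(num[:], createdAt)
	return append(key, num[:]...)
}
```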
Here are the two basic structures required for chain-based logging. This is just a rough draft; no need to follow it exactly.
ObserverChain (implemented in the les/observer package) creates observer blocks, stores them in the database, and can access them later. Observer blocks have the following fields:
- PrevHash common.Hash
- Number uint64
- UnixTime uint64
- TrieRoot common.Hash // root hash of a trie.Trie structure that is updated for every new block
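A direct Go transcription of the fields listed above; only the draft fields are included, and the encoding plus any signature handling are left open.

```go
package observer

import "github.com/ethereum/go-ethereum/common"

// Block is the observer block structure from the draft above.
type Block struct {
	PrevHash common.Hash // hash of the previous observer block
	Number   uint64      // position in the observer chain
	UnixTime uint64      // creation time of this block
	TrieRoot common.Hash // root of the trie.Trie updated for every new block
}
```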
Checkpoint syncing requires a CHT, which is a trie that maps block numbers to block hashes and TDs (total chain difficulty values).
https://github.com/zsfelfoldi/go-ethereum/wiki/Canonical-Hash-Trie
Knowing the root hash of this trie allows the client to access the entire chain history securely with Merkle proofs, so only the last few thousand headers need to be downloaded. Right now we have a hardcoded trusted CHT root hash in geth.
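To make the CHT mapping concrete, a hedged sketch of how one entry could be encoded, assuming the trie key is the big-endian block number and the value is the RLP encoding of the (hash, TD) pair; this mirrors the scheme in go-ethereum's light package, but treat the details here as illustrative.

```go
package cht

import (
	"encoding/binary"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rlp"
)

// chtNode is the value stored in the CHT for one block number:
// the canonical block hash and the total difficulty up to that block.
type chtNode struct {
	Hash common.Hash
	Td   *big.Int
}

// chtEntry returns the trie key/value pair for one block: the key is the
// big-endian encoded block number, the value the RLP encoding of (hash, td).
func chtEntry(number uint64, hash common.Hash, td *big.Int) (key, value []byte, err error) {
	var enc [8]byte
	binary.BigEndian.PutUint64(enc[:], number)
	value, err = rlp.EncodeToBytes(chtNode{hash, td})
	return enc[:], value, err
}
```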