Eric Tang (ericxtang)
@ericxtang
ericxtang / lp-go-client-status.json
Last active January 16, 2018 22:15
[
  {
    "date": "1/16/2018",
    "yesterday": [
      "New release - 0.1.10"
    ],
    "today": [
      "Add more HTTP error codes - https://github.com/livepeer/go-livepeer/pull/268"
    ]
  },

Engineering Planning 2/20/2018

Protocol Explorer

  • Feedback for bond task flow (everyone)
  • Task flows for Unbond, ClaimEarning, WithdrawStake, WithdrawFee (Josiah)
  • Airdrop experience depending on discussions this week, not expected to be fully designed by eow (Josiah)
  • Update SDK bindings (Eric to coordinate with Josiah)

Protocol

  • Truebit collaboration: compile ffmpeg to wasm, set up a repo similar to the Truebit scrypt repo (Yondon/Josh)
  • Improve test coverage to 100% (Yondon)

First of all, I realized my load testing was actually on an AWS transcoder...

But this can be replicated in the devenv. The prerequisite is fixing the current protocol deployment script so the protocol is in a running state after deployment. (It's currently paused by default, but @yondon is working on a new branch; hopefully it can be merged back in soon.)

With that fixed, we can do the following:

  1. Deploy a fresh protocol
  2. Run a broadcasting node (also the bootnode)
  3. Run a transcoding node
  4. Stream a video into the broadcasting node (rtmp://localhost:1935). At this point, the transcoder should pick up the work.
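For step 4, any RTMP-capable encoder works; for example, a stock ffmpeg invocation against the broadcaster's RTMP port (the input file and stream key below are placeholders, not part of the devenv):

```shell
# Stream a local file into the broadcasting node over RTMP.
# sample.mp4 and the "movie" stream key are illustrative placeholders.
ffmpeg -re -i sample.mp4 -c:v libx264 -c:a aac -f flv rtmp://localhost:1935/movie
```

`-re` throttles the read to the input's native frame rate, which mimics a live source instead of pushing the whole file at once.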
Received verify request #11
{ address: '0x162b462d81456fdb486b8f3b0316d13878829045',
blockNumber: 2078382,
transactionHash: '0x23d73eccc50a573830677e0da005396a61054852f03291da75c5afb2dd47c0b2',
transactionIndex: 0,
blockHash: '0x0a2e563ae6d348eaabf2078cfb353e25d5e1154e055464d63ff7849943e5bba6',
logIndex: 0,
removed: false,
event: 'VerifyRequest',
args:
[Erics-MBP-2][ericxtang]-> bash test.sh
Testing core
?   	github.com/livepeer/lpms/core	[no test files]
Testing vidplayer
E0430 12:48:45.262234   53379 player.go:62] Error getting stream: error
E0430 12:48:45.262380   53379 player.go:62] Error getting stream: error
PASS
ok  	github.com/livepeer/lpms/vidplayer	0.025s
Testing vidlistener

Behaviors:

  • You take responsibility rather than pass the buck
  • You write
  • You establish expectations
  • You love 1:1s

Skills:

Table-stakes

  • Ability to find/attract, hire, and retain a strong engineering team that can execute on the company's vision and priorities.

Here is an implementation proposal for an S3-based object store solution. The S3-based object store is an improvement over our current p2p relay-based video delivery approach.

Object Store in the broadcaster

  • Implement object store broadcaster as S3Broadcaster in the networking interface (instead of BasicBroadcaster).
  • When broadcaster creates a job, it waits until a transcoder is assigned. At this time, the broadcaster asks the transcoder for an "object store location" via an http request.
  • The broadcaster gets the write destination (S3 bucket and sub-directory) and credential from the transcoder.
    • Need to research S3 permissioning
  • S3Broadcaster writes the following data into the sub-directory:
    • segment .ts files
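As a sketch of the write path, the broadcaster could key each segment under the transcoder-assigned sub-directory. The `<subdir>/<seqNo>.ts` layout and the function name below are illustrative assumptions, not a settled part of the proposal:

```go
package main

import "fmt"

// s3SegmentKey builds the object key a hypothetical S3Broadcaster would write
// a segment to, given the transcoder-assigned sub-directory and the segment
// sequence number. The "<subdir>/<seqNo>.ts" layout is an assumption for
// illustration only.
func s3SegmentKey(subdir string, seqNo uint64) string {
	return fmt.Sprintf("%s/%d.ts", subdir, seqNo)
}

func main() {
	// Example: sub-directory handed out by the transcoder for one job.
	fmt.Println(s3SegmentKey("job-42/source", 7))
}
```

Keeping the key derivation in one pure function makes it trivial for broadcaster and transcoder to agree on where each segment lives.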
I0716 12:31:47.058626 7376 mediaserver.go:213] Current deposit is: 1323974057998998372
I0716 12:31:47.058667 7376 mediaserver.go:244] Cannot automatically detect the video profile - setting it to {P720p30fps16x9 4000k 30 1280x720 16:9}
I0716 12:31:47.058726 7376 mediaserver.go:362]
Video Created With ManifestID: 1220abae04d7b02747344968b64e9482c237227f1ce5dcd7ec31e93efbcace3e30da252d6eb1413c21cbcdeadf985eb54c295fdf54c2d0dabc9db8e75992dff01bf3
I0716 12:31:47.058739 7376 mediaserver.go:363]
hlsStrmID: 1220abae04d7b02747344968b64e9482c237227f1ce5dcd7ec31e93efbcace3e30da252d6eb1413c21cbcdeadf985eb54c295fdf54c2d0dabc9db8e75992dff01bf3P720p30fps16x9
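The two log lines above show that the HLS stream ID is simply the manifest ID with the video profile name appended. A minimal sketch of that composition (the helper name is assumed for illustration):

```go
package main

import "fmt"

// hlsStreamID appends the video profile name to the manifest ID, matching the
// relationship between ManifestID and hlsStrmID in the log above.
// (Helper name is an assumption for illustration.)
func hlsStreamID(manifestID, profileName string) string {
	return manifestID + profileName
}

func main() {
	// Truncated manifest ID used purely as an example input.
	fmt.Println(hlsStreamID("1220abae...", "P720p30fps16x9"))
}
```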
infura call pending
@ericxtang
ericxtang / gist:11e073ca0eddbc11ab7912d045daefed
Created November 14, 2018 16:50
Offchain Eng Considerations Outline

Offchain engineering considerations

  • PM contains an interactive protocol for ticket generation. B first asks O for a random number, and then starts sending tickets based on the result from O. The generation of the random number should be secure, and the random number should NOT be leaked; otherwise B will be able to forge tickets that will never win for O.
  • PM double-spend issue.
  • PM risk management issue.
  • O writes its serviceURI onchain. As a result, it should be DDoS-resistant. There is an incentive for Os to DDoS each other in order to take down the competition.
  • Price discovery remains an open research topic. For example, it's possible for Os to arbitrage by negotiating with another O if Bs don't negotiate with every O. This is not necessarily a bad thing - if the intermediate Os are facilitating market efficiency.
  • VOD payment risk / data availability issue. VOD jobs have the potential to be spread across many Os. However, PM depends on the assumption that if O doesn't perform,
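On the first point, one standard way to keep O's random number both unleaked and unforgeable until use is a hash commitment: O sends H(r) up front, B builds tickets against the commitment, and r is only revealed at redemption. This is a generic commit/verify sketch, not the PM spec:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// commit draws a fresh 32-byte random number r and returns (r, H(r)).
// O keeps r secret and hands only the commitment H(r) to B.
func commit() (r [32]byte, c [32]byte) {
	if _, err := rand.Read(r[:]); err != nil {
		panic(err) // crypto/rand failure is unrecoverable here
	}
	return r, sha256.Sum256(r[:])
}

// verify checks a revealed r against the earlier commitment, so B cannot
// claim a number O never committed to, and O cannot swap r after the fact.
func verify(c [32]byte, r [32]byte) bool {
	return sha256.Sum256(r[:]) == c
}

func main() {
	r, c := commit()
	fmt.Println(verify(c, r)) // true
}
```

Because `crypto/rand` is a CSPRNG and only the hash leaves O, B learns nothing about r until reveal time, which is exactly the property the bullet asks for.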