Jay R Bolton (jayrbolton) — GitHub gists
import parsec as p

# Tokenizer helpers: every lexeme consumes any trailing whitespace
whitespace = p.regex(r'\s+')
ignore = p.many(whitespace)
lexeme = lambda parser: parser.skip(ignore)

# Token parsers for a small boolean-expression grammar
lparen = lexeme(p.string('('))
rparen = lexeme(p.string(')'))
symbol = lexeme(p.regex(r'[\d\w_-]+'))  # word characters and hyphens
and_op = lexeme(p.string('and'))
or_op = lexeme(p.string('or'))
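
A sketch of how these lexemes could combine into a full parser for expressions like "a and (b or c)". The grammar below is my own illustration, not part of the original snippet:

@p.generate
def paren_expr():
    # "(" expression ")"
    yield lparen
    node = yield expression
    yield rparen
    return node

@p.generate
def term():
    # a parenthesized expression or a bare symbol (backtracking choice)
    node = yield (paren_expr ^ symbol)
    return node

@p.generate
def expression():
    # one term followed by zero or more ("and"/"or", term) pairs,
    # folded left-associatively into (op, left, right) tuples
    node = yield term
    rest = yield p.many((and_op ^ or_op) + term)
    for op, right in rest:
        node = (op, node, right)
    return node

# expression.parse('a and (b or c)')  =>  ('and', 'a', ('or', 'b', 'c'))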
- Ability to iterate on anything in late stages: being able to deploy small updates to a late-stage
  tool without bottlenecks. This requires a lot of technical things in place (i.e. modularity,
  versioning, CI). E.g.: how can we add (or remove) a feature from the SDK and deploy that new
  version to users within a week or two?
- Developers should work (and communicate) directly with users
- Don't prioritize process tools; let devs choose those tools
- Most of agile is not about process or ritual but about technical requirements such as modularity, versioning, CI, etc.
- Make sure you can refactor internals without breaking the interface (e.g. versioning and dependency management in apps)
- Don't add features to software until you find that you need them
- Define the product, not the project
from datetime import datetime
from pynwb import NWBFile, NWBHDF5IO
from pynwb.ophys import ImageSegmentation, TwoPhotonSeries, OpticalChannel

# Build a minimal file with placeholder positional args ('a', 'b', 'c', ...)
# (preview truncated; ImageSegmentation and TwoPhotonSeries are presumably
# used further down in the gist)
nwbfile = NWBFile('a', 'b', 'c', datetime.now())
mod = nwbfile.create_processing_module('a', 'b', 'c')
optical_channel = OpticalChannel('a', 'b', 'c', 500.)
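
NWBHDF5IO is imported above but unused in this preview; presumably the gist goes on to write the file to disk. A guess at that next step (not from the original):

# write the in-memory file out to HDF5 and close the handle
io = NWBHDF5IO('test.nwb', mode='w')
io.write(nwbfile)
io.close()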
WARNING: exception encountered when trying to cache the index files:
Traceback (most recent call last):
File "/kb/module/lib/kb_Bowtie2/util/Bowtie2IndexBuilder.py", line 178, in _put_cached_index
save_result = ws.save_objects(save_params)
File "/kb/module/lib/Workspace/WorkspaceClient.py", line 901, in save_objects
[params], self._service_ver, context)
File "/kb/module/lib/Workspace/baseclient.py", line 268, in call_method
return self._call(url, service_method, args, context)
File "/kb/module/lib/Workspace/baseclient.py", line 183, in _call
raise ServerError(**err['error'])
@jayrbolton
jayrbolton / kb-sdk-test-logs.txt
Last active May 10, 2018 23:57
Logs of running SDK tests
git submodule init
git submodule update
cp submodules_hacks/AuthConstants.pm submodules/auth/Bio-KBase-Auth/lib/Bio/KBase/
Running unit tests
nose2 -s test_scripts/py_module_tests -t src/java/us/kbase/templates
Loading test config from /vagrant/kb_sdk/test_scripts/test.cfg
Authorization url: https://ci.kbase.us/services/auth/api/legacy/KBase/Sessions/Login
ant test -DKBASE_COMMON_JAR=kbase/common/kbase-common-0.0.23.jar
Buildfile: /vagrant/kb_sdk/build.xml
@jayrbolton
jayrbolton / sdk-feedback.md
Last active April 11, 2018 21:21
KBase SDK app dev experience feedback

SDK developer experience feedback

This doc contains some initial dev feedback from a first-time SDK app developer. Some of the feedback takes the form of confused questions, which may itself give some idea of the SDK's usability.

My feedback generally fits into a few categories:

  • Faster testing turnaround
  • Code mixing/stubbing and package management
  • Config consolidation
  • Docs on SDK apps
@jayrbolton
jayrbolton / crypto-identities.md
Created January 11, 2018 16:56
crypto identity systems

Cryptographic user profiles could be used to exchange keys, sign, and encrypt data in-band across many dats or apps. Every user has an id, encryption public/private keys, and signing public/private keys.

There are a couple of ways I've been thinking about implementing this (a code sketch of the identity record itself follows below):

Method 1: discovery-swarm server

  • Every user listens to a discovery-swarm with their user id
  • Other users can join the swarm and send messages to that user, encrypted to that user's pubkey
  • We can use Signal's double ratchet algorithm here for user-to-user messaging over the swarm
  • The user data (keys and contacts) are all saved in a hyperdb and can be replicated across many devices
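
A minimal sketch of such a user identity record, assuming PyNaCl for the two keypairs; the record shape and field names are my illustration, not from the note:

from nacl.public import PrivateKey   # Curve25519 keypair for encryption
from nacl.signing import SigningKey  # Ed25519 keypair for signing

def new_identity(user_id):
    # generate fresh encryption and signing keypairs for a user profile
    enc = PrivateKey.generate()
    sign = SigningKey.generate()
    return {
        'id': user_id,
        'encryption_pubkey': bytes(enc.public_key).hex(),
        'signing_pubkey': bytes(sign.verify_key).hex(),
        # private halves stay on the user's own devices, replicated only
        # through the user's hyperdb
        'encryption_privkey': bytes(enc).hex(),
        'signing_privkey': bytes(sign).hex(),
    }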
@jayrbolton
jayrbolton / frp-app-arch.js
Last active January 2, 2018 23:43
different frp app architecture
// Usually, streams for the app are created in a tree:
// - different event streams from different parts of the page get passed into
//   functions that create objects of streams
// - those streams can also get passed into other functions down the tree to
//   create nested objects of streams
// The downside of this is that it's hard for different sections of the app to
// communicate. You might want to access and modify a stream from a branch that
// lives far down on another branch. Sometimes the order in which streams get
// created even becomes circular.
// An example is a global notification message stream. You might want to set it
// in 100 places in your app, but you only want to handle and render it in one
// place. With the tree structure, you get a lot of repetitive code.
// To solve this, we can treat the app's object of streams as shallow &
// *concatenative*, rather than a nested tree:
// - models take the app's object of streams as a parameter and return an
//   object of streams
// - models are chained together in this way: one model's returned streams get
//   merged into the app object that is passed to the next model
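
// A sketch of that chaining (my illustration, not from the gist). Only
// stream.create and stream.scanMerge are taken from the counter example
// further down; the (pairs, initialValue) signature of scanMerge is an
// assumption about the ../stream module.
const stream = require('../stream')

const notifications = (app) => ({notifyMsg: stream.create()})
const counterModel = (app) => {
  const increment = stream.create()
  const count = stream.scanMerge([[increment, (sum) => sum + 1]], 0)
  return {increment, count}
}

// fold the models left to right, merging each returned object of streams
// into one shared, shallow app object; later models can reach app.notifyMsg
// (or any earlier stream) without threading it down a tree
const app = [notifications, counterModel].reduce(
  (acc, model) => Object.assign(acc, model(acc)), {})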
@jayrbolton
jayrbolton / beaker-extension.md
Created December 13, 2017 01:00
dats dats dats dats

Instead of the model of one dat being one web page / app, you could split it into four dats (a toy manifest tying them together is sketched below):

  1. Schema: a syntax describing the content format (e.g. filenames, datatypes, lengths, etc.)
  2. Content: the actual content itself (social feeds, videos, map data, etc.)
  3. Application code: the JavaScript code that turns the content into a user interface, and handles updates
  4. User identity: crypto keys (signing and encryption public keys, with a self-signed certificate, etc.) for the user

The advantage here is that there would be some nice decoupling and standardization when making apps:

  • Different application dats can render the same content
  • Apps can be considered compatible when they render and update content using the same schema
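
A toy illustration of how an app could tie the four dats together; the manifest shape and placeholder archive keys below are invented, not from the note:

# hypothetical manifest referencing the four archives by their dat keys
app_manifest = {
    'schema':   'dat://<schema-archive-key>',
    'content':  'dat://<content-archive-key>',
    'code':     'dat://<app-code-archive-key>',
    'identity': 'dat://<user-identity-archive-key>',
}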
const R = require('ramda')
const stream = require('../stream')
const h = require('../html') // presumably used further down for rendering

const counter = (initial) => {
  const [inc, dec, reset] = [stream.create(), stream.create(), stream.create()]
  const sum = stream.scanMerge([
    [inc, R.add(1)]
  , [dec, R.add(-1)]
  , [reset, R.always(0)]
  ], initial) // the preview cut off mid-call; closing scanMerge with
              // `initial` as the seed value is my assumption
  return {inc, dec, reset, sum}
}