This assumes the account has already been connected to Context.IO. See http://context.io/docs/lite/users/email_accounts for the field definitions of :id and :label.
POST /users/:id/email_accounts/:label
- 200 - Account added and being scanned
pub trait Fooable {
    // Trait items are public through the trait itself, so no `pub` here.
    fn foo();
}

// `impl` blocks take no visibility modifier.
impl Fooable for String {
    fn foo() {
        println!("Hello world");
    }
}

fn main() {
    // Call the trait's associated function through the implementing type.
    <String as Fooable>::foo();
}
def let_once(method, &block)
  before(:all) do
    @let_once ||= {}
    @let_once[method] = block.call
  end

  let(method) do
    @let_once[method]
  end
end
package event

// Events returns the set of events describing the change from prev -> next.
//
// This is a fairly complex operation, and needs to handle the following
// edge cases:
// * A message could be copied from one folder to another, which we can only
//   determine based on message ID (which isn't present in the snapshots)
// * A message could be moved from one folder to another, which also relies on
//   message ID
My work emphasizes horizontal scalability, simple, well-defined organizational boundaries, and pervasive introspection through logging, metrics, and alerting. My tools of choice are usually streams of immutable data, distributed data stores, canonical and unambiguous interface description languages, and containerized runtimes.
tl;dr
We currently need to solve a couple of problems in the mailservice:
The current approach to event notification is to write SQS events to signal state changes. For instance, when a new account is created, the process creating the account writes an event to one of 64 "created" queues, which is consumed by the sync process. We could create a partitioned set of "deleted" queues and write deletion events into them, but this approach feels brittle: it relies on SQS events being written everywhere accounts are modified, and we are already doing this in at least two places (the API and the CIO Kafka topic watcher).
package main

import (
	"crypto/tls"
	"crypto/x509"
	"flag"
	"fmt"
	"log"
	"time"
)
// Code generated by protoc-gen-go.
// source: src/demo/demo.proto
// DO NOT EDIT!

/*
Package demo is a generated protocol buffer package.

It is generated from these files:
	src/demo/demo.proto
var parsedEvent = JsonParser.Default.Parse<ParsedDataEventWebhook>(json);

// Verify the signature to validate the webhook came from RP.
var key = "CIO_SECRET"; // this should be your CIO auth secret
var encoding = BinaryStringEncoding.Utf8;
var algorithmProvider = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA256");
var contentBuffer =
    CryptographicBuffer.ConvertStringToBinary(parsedEvent.checksum, encoding);
var keyBuffer = CryptographicBuffer.ConvertStringToBinary(key, encoding);