Sam Goto (samuelgoto)


Manipulations:

  • Scaling
  • Cropping
  • Rotation
  • Angle the photo is taken from
  • Ambient light
  • Colours shown on a PC monitor or in the printed photo

https://poly.google.com/view/9-b6-yqrwEe

Reasons why we need view-source declarativeness:

How do I reproduce this? I can download the model, but how do I get that lighting? That sky? How do I even learn these words? How do I position the camera? How would I even learn about the existence of a camera?

Holographic projections: how do I learn about those?

The routing problem in content-centric networks is efficiently and resiliently mapping content names to hosts.

The overall idea of the user-oriented name system is to address a subset of the problem where the content is addressed by author first and filename second (but in a manner that still de-couples content from hosts).

UNS, much like DNS, works as a hierarchical resolution algorithm. It talks to root-level ANS servers to find trackers given an author name.

Trackers, much like BitTorrent trackers, let hosts register their ability/desire to serve a specific content name; those registrations are later used to answer user queries about where to find the content.

UNS servers exchange data between themselves with a gossip protocol, propagating the routing tables (mapping usernames to trackers) to each other, eventually converging.
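The resolution flow above can be sketched as a toy in-memory model (class and field names are my own, not from any UNS spec; the gossip layer is omitted):

```python
class Tracker:
    """Registers which hosts can serve a given content name (BitTorrent-style)."""
    def __init__(self):
        self.registrations = {}  # content name -> set of hosts

    def register(self, content_name, host):
        self.registrations.setdefault(content_name, set()).add(host)

    def lookup(self, content_name):
        return self.registrations.get(content_name, set())


class RootServer:
    """Root-level server: maps author names to trackers."""
    def __init__(self):
        self.routing = {}  # author name -> Tracker

    def tracker_for(self, author):
        return self.routing.get(author)


def resolve(root, author, filename):
    """Author-first, filename-second resolution to a set of hosts."""
    tracker = root.tracker_for(author)
    if tracker is None:
        return set()
    return tracker.lookup(f"{author}/{filename}")


# Usage: one root, one tracker, one registered host.
root = RootServer()
t = Tracker()
root.routing["samuelgoto"] = t
t.register("samuelgoto/model.glb", "host-a.example")
print(resolve(root, "samuelgoto", "model.glb"))  # {'host-a.example'}
```

In the real design the `routing` table would be what the gossip protocol propagates and converges on; here it is just a dict.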

Discovery

<link rel="alternate" href="/api" type="application/vnd.microforms">
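A client could discover that endpoint by scanning the page for matching `<link>` tags. A minimal sketch using Python's stdlib parser (the media type is the one from the tag above; the class name is mine):

```python
from html.parser import HTMLParser

class AltLinkFinder(HTMLParser):
    """Collects href values of <link rel="alternate"> tags with a given type."""
    def __init__(self, media_type):
        super().__init__()
        self.media_type = media_type
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and a.get("type") == self.media_type:
            self.hrefs.append(a.get("href"))

html = '<link rel="alternate" href="/api" type="application/vnd.microforms">'
finder = AltLinkFinder("application/vnd.microforms")
finder.feed(html)
print(finder.hrefs)  # ['/api']
```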

Self-described payloads


JSON-LD encapsulation of forms:

{
  "@context": "https://example.com",
  "@type": "MyType",
  "action": {
    "@context": "https://w3c.org/2018/forms",
    "@type": "Form",
    ...
  }
}
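A consumer of such a payload doesn't need to know the outer schema; it can walk the tree and pick out nodes by `@type`. A sketch, assuming compacted JSON-LD (the `method` field is an illustrative assumption):

```python
import json

def find_by_type(node, type_name, found=None):
    """Recursively collect dict nodes whose @type matches type_name."""
    if found is None:
        found = []
    if isinstance(node, dict):
        if node.get("@type") == type_name:
            found.append(node)
        for value in node.values():
            find_by_type(value, type_name, found)
    elif isinstance(node, list):
        for item in node:
            find_by_type(item, type_name, found)
    return found

payload = json.loads("""
{
  "@context": "https://example.com",
  "@type": "MyType",
  "action": {
    "@context": "https://w3c.org/2018/forms",
    "@type": "Form",
    "method": "POST"
  }
}
""")
forms = find_by_type(payload, "Form")
print(len(forms))  # 1
```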

A JSON-LD-based representation, along the lines of CoreML/ONNX/TF, for serving:

{
  "@context": "https://w3c.org/2018/deeplearning",
  "@type": "Model",
  "network": {
    "@type": "NeuralNetwork",
    ...
  }
}
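One payoff of a self-described model payload is that a server can dispatch to a backend from the `@type` fields alone. A toy sketch (the backend names are made up, not real runtimes):

```python
def pick_backend(model):
    """Choose an inference backend from the payload's network @type."""
    network = model.get("network", {})
    backends = {
        "NeuralNetwork": "nn-runtime",  # hypothetical backend name
    }
    return backends.get(network.get("@type"), "fallback")

model = {
    "@context": "https://w3c.org/2018/deeplearning",
    "@type": "Model",
    "network": {"@type": "NeuralNetwork"},
}
print(pick_backend(model))  # nn-runtime
```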

ATOM feeds in JSON-LD.

{
  "@context": "http://www.w3.org/2005/Atom",
  "@type": "Feed",
  "title": "Example Feed",
  "subtitle": "A subtitle.",
  "links": [{
    ...
  }]
}
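Mapping an existing Atom XML feed into that shape is mostly mechanical. A sketch with the stdlib XML parser (the key mapping mirrors the example; it is my assumption, not a defined spec):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_to_jsonld(xml_text):
    """Convert a minimal Atom feed into the JSON-LD shape sketched above."""
    root = ET.fromstring(xml_text)

    def text(tag):
        el = root.find(ATOM_NS + tag)
        return el.text if el is not None else None

    return {
        "@context": "http://www.w3.org/2005/Atom",
        "@type": "Feed",
        "title": text("title"),
        "subtitle": text("subtitle"),
        "links": [
            {"href": link.get("href")}
            for link in root.findall(ATOM_NS + "link")
        ],
    }

atom = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <subtitle>A subtitle.</subtitle>
  <link href="http://example.org/"/>
</feed>"""
feed_json = feed_to_jsonld(atom)
print(feed_json["title"])  # Example Feed
```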

Alternatives considered

TODO(goto): go over this.

ARML

ARML is an XML-based data format to describe and interact with AR scenes.

Deep Learning Processing Units

A 10-15% cost of interpretation seems like a non-starter to me when they are trying to squeeze out every last bit of performance. I'm seeing some convergence in the industry around "inference model" file formats / representations.

I would challenge that assertion. You're going to get a larger drop on an Android phone when thermal throttling kicks in.

Prior Art