Gist SeijiEmery/756ff11e6350023db375 (last active October 31, 2016 22:24)
micropacket architecture:
- Instead of sending C structs across the wire (with multiple specialized packet types),
  we send lists of key/value micropackets for each XXXXX field / property.
  To update an entity whose position, rotation, and angular + linear velocity have changed, you would:
  - Send the entity's id ("Entity.id", entity.id)
  - Send the entity's position, rotation, and velocity:
      [ ("Entity.position", entity.position), ("Entity.rotation", entity.rotation),
        ("Entity.velocity", entity.velocity), ("Entity.angularVelocity", entity.angularVelocity) ]
  - This gets packed into one big packet, prepended with a UDP header and a timestamp
    (or some synchronized representation of the current time)
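A minimal sketch of that update as data (the names `Micropacket` and `buildEntityUpdate` are hypothetical, and values are byte-rounded here rather than bit-packed as the real wire format would be):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch: a micropacket is just a key/value pair, and an entity
// update is a flat list of them, with the entity's id sent first. Values are
// raw bytes here for simplicity; on the wire they'd be compressed bit-fields,
// and keys would be sent as varint key ids rather than strings.
struct Vec3 { float x, y, z; };

struct Micropacket {
    std::string key;            // eg. "Entity.position"
    std::vector<uint8_t> value; // encoded value
};

template <typename T>
std::vector<uint8_t> toBytes (const T& v) {
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
    return std::vector<uint8_t>(p, p + sizeof(T));
}

// Build an update containing only the fields that changed.
std::vector<Micropacket> buildEntityUpdate (
    uint64_t id, const Vec3& pos, const Vec3& rot,
    const Vec3& vel, const Vec3& angVel)
{
    return {
        { "Entity.id",              toBytes(id)     },
        { "Entity.position",        toBytes(pos)    },
        { "Entity.rotation",        toBytes(rot)    },
        { "Entity.velocity",        toBytes(vel)    },
        { "Entity.angularVelocity", toBytes(angVel) },
    };
}
```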
- Important features:
  - Keys are strings, but are transmitted across the wire as varint-sized key ids.
  - The values are C++ types (QUuid, glm::vec3, float, etc.), but encoded into some compressed format
    with an explicit bit size, etc. As far as the protocol is concerned, these are just a bag of bits
    (not bytes -- we can reduce bandwidth (and increase world complexity) if we compress across byte
    boundaries, and pack/unpack at each end).
  - When two clients/servers begin communication, they exchange a lookup table used to translate these ids
    and decode compressed data.
    (eg. you'd declare 'I am exposing "Entity.position" as the 9-bit key 0x9a, and it is a glm::vec3
    encoded using our XYZ compression, which gives it a field size of 39 bits' (plus the 9-bit key))
  - Each client uses its own translation table, and stores the lookup tables of the servers/clients
    it is communicating with (since they're not necessarily using the same one).
  - The actual format you're transmitting packets with (ie. the binary structure of a sent packet,
    minus the header and other info, which is fixed) is defined at runtime, and is generated
    automatically depending on the number of keys you're sending and the formats you're using to
    compress your data. A side effect of this is that you couldn't inspect (and comprehend) a packet's
    contents by looking at it in a hex editor.
  - The compression type (eg. for angular velocity) can be changed without breaking binary compatibility.
  - New properties can be added, and existing ones removed, without breaking binary compatibility.
  - Adding new compression formats and data types would break binary compatibility (since clients/servers
    wouldn't have the appropriate translation functions in the C++ source code). We could solve this,
    but we'd need a mechanism for a client to say, 'hey server, that compression format isn't
    supported on my end -- resend it using something else instead' (or vice versa). The downside
    is that clients/servers would have to maintain multiple translation interfaces for different users, and
    send/re-encode their packets multiple times instead of just encoding + broadcasting them once.
  - Keys are only sent when their associated value changes, but we *would* rebroadcast everything every
    x seconds (distributed using some algorithm) to ensure that everything stays in sync if packets are
    lost, etc. Clients do not ask servers for dropped packets (since they don't know anything about
    how the server transmits updates).
  - This protocol would only be used for data that can be broadcast incrementally and asynchronously
    (ie. if a few packets are lost then some data is out of sync for up to a few seconds, but gets
    corrected as soon as a packet with new data does get through). The timestamp would be used to order
    packets that may be received out of order, and to give the physics system some timing data to do
    interpolation.
A protocol format:
  Packets:
    [ UDP Header ]
    [ packet-id ]
    [ timestamp ]
    [ key1 ] [ value 1 ]
    [ key2 ] [ value 2 ]
    [ key3 ] [ value 3 ]
    [ key4 ] [ value 4 ]
    [ END ]
  Comm Header (translation packet):
    [ UDP Header ]
    [ packet-id ]
    [ num fields ]
    ==== Fields ====
    (field 1) [ key_ptr ] [ type_id ] [ size ]
    (field 2) [ key_ptr ] [ type_id ] [ size ]
    ...
    (field n) [ key_ptr ] [ type_id ] [ size ]
    ==== Strings ====
    (key 1) [ key str... ] '\0'
    (key 2) [ key str... ] '\0'
    ...
    (key n) [ key str... ] '\0'
  (key_ptr(s) are local 16-bit ptrs into the string array (which is written to the end of
  the packet). This probably isn't the best way to transmit tables of strings, but it is
  one approach.
  I also didn't take into account UDP packet size limitations, so this table might
  potentially need to be split into multiple packets.)
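As a concrete sketch of that translation-packet body (UDP header and packet-id omitted; the `FieldDecl` name, the one-byte num-fields count, and the little-endian 16-bit key_ptr are all assumptions for illustration):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One declared field: its string key, plus an id for the compression/type
// it uses and the resulting field size in bits.
struct FieldDecl { std::string key; uint8_t typeId; uint8_t bitSize; };

// Encode [num fields], then fixed-size field records holding 16-bit
// key_ptr offsets, then the '\0'-terminated string table at the end.
std::vector<uint8_t> encodeTranslationTable (const std::vector<FieldDecl>& fields) {
    std::vector<uint8_t>  strings; // trailing string block
    std::vector<uint16_t> offsets; // key_ptr for each field
    for (const auto& f : fields) {
        offsets.push_back(static_cast<uint16_t>(strings.size()));
        strings.insert(strings.end(), f.key.begin(), f.key.end());
        strings.push_back('\0');
    }
    std::vector<uint8_t> out;
    out.push_back(static_cast<uint8_t>(fields.size()));  // [ num fields ]
    for (size_t i = 0; i < fields.size(); ++i) {         // [ key_ptr ] [ type_id ] [ size ]
        out.push_back(offsets[i] & 0xff);
        out.push_back(offsets[i] >> 8);
        out.push_back(fields[i].typeId);
        out.push_back(fields[i].bitSize);
    }
    out.insert(out.end(), strings.begin(), strings.end()); // ==== Strings ====
    return out;
}
```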
In addition to this we'd have a message protocol (for transactions, cross-network script
calls, etc.), which would be unrelated (this protocol is only for entity/avatar/physics
updates). There are also other network packets we'd be sending (ie. audio) that would not
be affected.
Old suggestion for a dynamic network / packet infrastructure for hifi, written while working as an intern.
Details are fairly rough, but the general idea was to replace programmer-defined packets with a dynamic algorithmic / procedural system. Goal would be to compress entity packets significantly, reducing bandwidth and allowing more complex scenes, etc.
Never implemented afaik.