conda activate sandbox
pip install --pre tiled[all] databroker[all] ipython
Note: macOS uses zsh by default, which requires single quotes around the brackets, e.g. 'databroker[all]'. Windows, by contrast, requires that you not use quotes.
mkdir sandbox
cd sandbox/
Create a file config.yml in this directory:
trees:
  # Create a Mongo "database" in memory.
  # When the server is stopped the database will be forgotten.
  - tree: databroker.experimental.server_ext:MongoAdapter.from_mongomock
    path: /
    args:
      directory: ./data_files
Within sandbox/, create a directory data_files/ and then start the tiled server.
mkdir data_files
tiled serve config config.yml
In another Terminal (or in Jupyter) you can upload data like this from Python:
In [1]: import tiled
In [2]: from tiled.client import from_uri
In [3]: c = from_uri("http://127.0.0.1:8000?api_key=fd2c689d82a4d3cff20ce471eb9d03b304faff5681f2650fa9fde954de6b8bca")
In [4]: c
Out[4]: <Node {}>
In [5]: c.write_array([1,2,3], metadata={"hello": "world"})
Out[5]: <ArrayClient shape=(3,) chunks=((3,),) dtype=int64>
In [6]: c
Out[6]: <Node {'c6da9ac3-7475-48c6-b66c-643c1ba4d1df'}>
This issues two requests: one to create the new dataset and upload the metadata, and another to upload the data. For large datasets this becomes 1 + N requests, where the data is chunked over N separate uploads, each of reasonable size.
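The request accounting above can be sketched with a little arithmetic. The 100 MiB per-request chunk budget below is an assumption for illustration only; tiled's actual chunking policy may differ.

```python
import math

# Hypothetical per-request chunk budget (an assumption for illustration;
# tiled's real chunking policy may differ).
CHUNK_BYTES = 100 * 1024 * 1024  # 100 MiB

# A float64 array with one billion elements occupies 8 GB.
arr_nbytes = 8 * 1_000_000_000

# One request creates the dataset and uploads the metadata...
metadata_requests = 1
# ...and the data itself is split across N chunked uploads.
n_chunks = math.ceil(arr_nbytes / CHUNK_BYTES)

total_requests = metadata_requests + n_chunks
print(total_requests)  # → 78 under the assumed 100 MiB budget
```

The small [1, 2, 3] array in the transcript fits in a single chunk, so it costs only the two requests described above.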