The storage API consists of two separate sets of interfaces:
- Storage Implementation Interface: the main interface to implement when creating a custom storage backend.
- Storage Adapter Interface: the interface that upper layers, such as the end-user application, use to access the storage backend.
There are three supported storage types:
- The storage backend fully handles all metadata (CERN EOS)
- The storage backend provides metadata which is cached (local storage, most external storages)
- The storage backend doesn't provide any metadata, only blob storage (object storage such as S3)
The responsibilities of this interface are split across five interfaces so that parts can be reused across storage implementations.
Splitting the interface makes it possible to separate the various parts of the storage interface and allows an implementation to either reuse existing parts (such as metadata storage in the database) or provide its own (reading metadata from a custom backend).
Data: responsible for storing file data; it has no knowledge of any metadata besides the file path
readStream(string $path): resource
writeStream(string $path, resource $data)
delete(string $path)
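As a minimal sketch, the Data interface could be declared in PHP like this (since `resource` is not a valid PHP type declaration, streams are documented through docblocks):

```php
<?php

// Sketch of the Data interface: pure blob access, no metadata
// beyond the file path.
interface Data {
	/** @return resource a readable stream for the file at $path */
	public function readStream(string $path);

	/** @param resource $data stream to copy the file contents from */
	public function writeStream(string $path, $data);

	public function delete(string $path);
}
```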
Tree: handles directories and file tree operations (listing contents, renaming)
exists(string $path): bool
newFolder(string $path)
deleteFolder(string $path)
listFolderContents(string $path): string[]
move(string $source, string $target)
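A matching sketch for the Tree interface; the `string[]` return type is expressed as `array` plus a docblock, since PHP has no typed arrays:

```php
<?php

// Sketch of the Tree interface: operates purely on paths.
interface Tree {
	public function exists(string $path): bool;

	public function newFolder(string $path);

	public function deleteFolder(string $path);

	/** @return string[] the paths of the entries inside the folder */
	public function listFolderContents(string $path): array;

	public function move(string $source, string $target);
}
```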
MetaRead: provides read access to metadata
getMeta(string $path): MetaData
getFolderContentsMeta(string $path): MetaData[]
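Sketched as a PHP interface, assuming a MetaData value class is defined elsewhere:

```php
<?php

// Sketch of the MetaRead interface: read-only metadata access by path.
interface MetaRead {
	public function getMeta(string $path): MetaData;

	/** @return MetaData[] metadata for each entry inside the folder */
	public function getFolderContentsMeta(string $path): array;
}
```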
MetaWrite: provides write access to metadata
setMeta(string $path, array $data)
move(string $source, string $target)
remove(string $path)
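And the corresponding write side as a sketch:

```php
<?php

// Sketch of the MetaWrite interface: mutates metadata only, never file data.
interface MetaWrite {
	public function setMeta(string $path, array $data);

	public function move(string $source, string $target);

	public function remove(string $path);
}
```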
MetaTree: provides id-based access to metadata and tree traversal
getMetaById(int $id): MetaData
getFolderContentsMetaById(int $id): MetaData[]
getParentsById(int $id): MetaData[]
traverse(string $path): Traversable<MetaData>
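A sketch of the MetaTree interface; `Traversable<MetaData>` becomes plain `Traversable` with a docblock, since PHP has no generics:

```php
<?php

// Sketch of the MetaTree interface: id-based lookups plus recursive traversal.
interface MetaTree {
	public function getMetaById(int $id): MetaData;

	/** @return MetaData[] */
	public function getFolderContentsMetaById(int $id): array;

	/** @return MetaData[] the metadata of all parent folders */
	public function getParentsById(int $id): array;

	/** @return Traversable<MetaData> the metadata of every entry below $path */
	public function traverse(string $path): Traversable;
}
```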
The storage adapter hides the differences between the storage implementation types from the user of the storage interface.
The adapter takes one or more classes which implement the various implementation interfaces.
Different implementations of the storage adapter can be used to add functionality such as metadata caching or spreading data over multiple data stores.
Adapter\Adapter: adapter for storage implementations which fully manage their own metadata
Requires one Data, Tree, MetaRead and MetaTree instance
readStream(string $path): resource
writeStream(string $path, resource $data)
newFolder(string $path)
delete(string $path)
rename(string $source, string $target)
exists(string $path): bool
getMeta(string $path): MetaData
getMetaById(int $id): MetaData
getFolderContents(string $path): MetaData[]
getFolderContentsById(int $id): MetaData[]
traverse(string $path): Traversable<MetaData>
getParentsById(int $id): MetaData[]
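A sketch of how this adapter could be put together; the constructor signature and delegation bodies are assumptions for illustration, not the definitive implementation:

```php
<?php

namespace Adapter;

// Sketch: since the backend manages its own metadata, every call is a
// straight delegation to the matching implementation interface.
class Adapter {
	public function __construct(
		private \Data $data,
		private \Tree $tree,
		private \MetaRead $metaRead,
		private \MetaTree $metaTree,
	) {}

	public function readStream(string $path) {
		return $this->data->readStream($path);
	}

	public function getMeta(string $path): \MetaData {
		return $this->metaRead->getMeta($path);
	}

	/** @return \MetaData[] */
	public function getFolderContents(string $path): array {
		return $this->metaRead->getFolderContentsMeta($path);
	}

	// ...the remaining operations delegate the same way.
}
```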
Adapter for storage implementations where the metadata needs to be updated manually after write operations
Requires one Data, Tree, MetaRead, MetaWrite and MetaTree instance
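One way such an adapter could keep the metadata in step with writes (a sketch; the class name, constructor and the fields passed to setMeta are assumptions):

```php
<?php

// Hypothetical adapter that pushes a metadata update after each write.
class WriteThroughAdapter {
	public function __construct(
		private Data $data,
		private MetaWrite $metaWrite,
	) {}

	/** @param resource $data */
	public function writeStream(string $path, $data) {
		$this->data->writeStream($path, $data);
		// Record that the entry changed; the exact fields are illustrative.
		$this->metaWrite->setMeta($path, ['mtime' => time()]);
	}
}
```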
Adapter\Caching: adapter for storage implementations where metadata should be cached.
Requires two MetaRead and two Tree instances (one each for the backend and for the cache), plus one Data, one MetaTree and one MetaWrite instance
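The cache-fallback read path could look roughly like this (a sketch: the two-argument constructor, `NotFoundException` and `MetaData::toArray()` are assumed helpers, not part of the interfaces above; the intersection type requires PHP 8.1):

```php
<?php

// Sketch of the read path of a caching adapter: serve metadata from the
// cache when present, otherwise fetch it from the backend and cache it.
class Caching {
	public function __construct(
		private MetaRead $backendMeta,     // e.g. the Local storage
		private MetaRead&MetaWrite $cache, // e.g. the DBCache
	) {}

	public function getMeta(string $path): MetaData {
		try {
			return $this->cache->getMeta($path);
		} catch (NotFoundException $e) { // assumed exception type
			$meta = $this->backendMeta->getMeta($path);
			$this->cache->setMeta($path, $meta->toArray()); // toArray() is assumed
			return $meta;
		}
	}
}
```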
Adapter\FullMeta: adapter for storage implementations where all metadata needs to be managed manually
Requires one Data, Tree, MetaRead and MetaWrite instance
Takes one Tree and one MetaRead instance as the source and synchronizes it with a MetaRead and MetaWrite instance
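A sketch of what that synchronization could look like, assuming MetaData exposes `toArray()` and `isFolder()` helpers (neither is defined in the interfaces above):

```php
<?php

// Walk the source tree and mirror each entry's metadata into the target.
function sync(Tree $sourceTree, MetaRead $sourceMeta, MetaWrite $targetMeta, string $path = '/') {
	foreach ($sourceTree->listFolderContents($path) as $child) {
		$meta = $sourceMeta->getMeta($child);
		$targetMeta->setMeta($child, $meta->toArray()); // toArray() is assumed
		if ($meta->isFolder()) { // isFolder() is assumed
			sync($sourceTree, $sourceMeta, $targetMeta, $child);
		}
	}
}
```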
Local implements Data, Tree and MetaRead; all metadata is read from the underlying filesystem.
DBCache implements Tree, MetaRead, MetaWrite and MetaTree, with all data stored in the database.
Adapter\Caching reads its metadata from the DBCache, updates the DBCache when needed, and reads and writes the files through Local.
ObjectStore implements Data and only reads and writes blobs from the object store.
DBCache implements Tree, MetaRead, MetaWrite and MetaTree, with all data stored in the database.
Adapter\FullMeta handles maintaining all metadata in the DBCache.
EOS implements Data, Tree, MetaRead, MetaWrite and MetaTree and handles all metadata operations itself
Adapter\Adapter only has to pass all operations down to EOS.
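How the three example stacks could be wired together (a sketch; all constructor arguments, and the `$db`/`$s3Client` placeholders, are assumptions for illustration):

```php
<?php

// Local storage: files on disk, metadata cached in the database.
$local = new Local('/var/data');
$cache = new DBCache($db);
$storage = new Adapter\Caching($local, $cache);

// Object storage: blobs in S3, all metadata kept in the database.
$blobs = new ObjectStore($s3Client, 'bucket');
$storage = new Adapter\FullMeta($blobs, new DBCache($db));

// EOS: the backend handles everything itself, so the plain adapter
// just delegates; EOS implements all four required interfaces.
$eos = new EOS('eos.example.org');
$storage = new Adapter\Adapter($eos, $eos, $eos, $eos);
```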
From https://gist.github.com/labkode/a84a8f66920a6cb9355c:
There are several places in oC where we need the metadata of all parent folders
Not sure what you mean by "This is needed to avoid passing storage implementation objects to upper layers"