This guide aligns the Exonomy app with IPFS and OrbitDB best practices for modeling both data and process, ensuring robust, decentralized handling of Exonomy’s voucher and chat functionality while laying the groundwork for future integration with Exocracy and expanded P2P features:
In a decentralized application, IPFS and OrbitDB complement each other by addressing different aspects of data storage and management:
IPFS is responsible for storing the actual content of the voucher system. Each file (e.g., metadata, images, videos) is chunked, hashed, and assigned a Content Identifier (CID). These CIDs represent immutable, content-addressed storage, ensuring data integrity. However, IPFS alone does not manage the logical structure or relationships between assets, making it unsuitable for complex queries or relational data management.
For example:
- Metadata: Stored in IPFS as a JSON file, containing structured information about the voucher (e.g., title, description, timestamp). This file has its own CID.
- Assets: Any related files (e.g., images, videos) are also stored in IPFS, each with a unique CID.
- Directories: IPFS can organize multiple assets into a directory structure, but the directory CID merely aggregates the content—it does not enforce logical relationships or support queries.
OrbitDB provides a decentralized database layer for managing the logical relationships between vouchers and their assets. It acts as the system’s brain, offering queryable and updatable records:
- Each voucher has a unique voucher_id (e.g., a UUID or custom identifier).
- This voucher_id acts as the primary key in OrbitDB, logically representing the voucher and its related assets.
- The metadata_cid and asset_cids fields in the OrbitDB document store link the voucher to its metadata and associated assets in IPFS.
- Voucher as a Logical Container:
OrbitDB defines a document schema where each voucher is represented as a single record. This record includes:
- voucher_id: A unique identifier for the voucher.
- metadata_cid: The CID pointing to the voucher's metadata stored in IPFS.
- asset_cids: A list of CIDs pointing to the voucher’s related assets stored in IPFS.
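As a reference, here is a minimal TypeScript sketch of that record shape; the field names follow the schema above, while the interface name and types are illustrative assumptions.
interface VoucherRecord {
  voucher_id: string;    // primary key, used as the indexBy field in the document store
  metadata_cid: string;  // CID of the metadata JSON stored in IPFS
  asset_cids: string[];  // CIDs of related assets stored in IPFS
}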
- Querying and Retrieval:
- In OrbitDB: Queries are performed on the document store using voucher_id as the primary key, allowing efficient lookups.
- In IPFS: Once CIDs (e.g., metadata_cid, asset_cids) are retrieved from OrbitDB, they are used to fetch the actual content from IPFS.
This separation ensures that relational logic remains flexible while the underlying content remains immutable.
- Immutability and Updates:
While IPFS ensures that content (metadata and assets) is immutable, updates to the voucher (e.g., adding new assets or modifying metadata) are handled by creating new CIDs in IPFS and updating the corresponding fields in the OrbitDB document. This approach provides a versioned history for auditability while maintaining logical consistency.
- Access Control:
OrbitDB can incorporate fine-grained access control by encrypting records or selectively exposing specific fields to authorized users. For example:
- Public vouchers can include metadata in OrbitDB that is readable by all users.
- Private vouchers can encrypt their metadata and assets, with decryption keys shared only with authorized parties.
- Relational Design Principles:
IPFS handles the physical layer (storing immutable assets), while OrbitDB handles the logical layer (defining relationships). This combination allows:
- A single voucher_id in OrbitDB to act as the logical "container" for all associated metadata and assets.
- Fine-grained querying and retrieval of vouchers without needing to traverse the IPFS directory structure manually.
- Efficient deduplication and storage in IPFS through content addressing, while OrbitDB remains the point of interaction for application logic.
By combining IPFS and OrbitDB, you create a modular, decentralized architecture where each tool plays to its strengths. IPFS ensures content integrity and immutability, while OrbitDB adds relational structure and querying capabilities. This approach is scalable and future-proof, handling both robust content storage and logical management.
Below is a proposed list of all the coding examples for the concepts discussed in Step 1. Each section includes a title, a brief description, and a suggested file name (where applicable).
This step initializes the tools and dependencies required for working with IPFS and OrbitDB, ensuring your environment is ready for development. This documentation uses the more performant Bun rather than the more popular npm. Bun's ESM (ECMAScript Module) handling is designed to be faster and more efficient than Node.js, supporting native ES module syntax (import/export) while also being highly compatible with CommonJS (require) modules. It automatically resolves file extensions, supports package exports, and allows seamless mixing of ESM and CommonJS in most cases. The caveat is that some Node.js libraries, especially older ones, may rely on specific quirks of Node.js' ESM implementation, which could lead to compatibility issues in Bun. Testing is key when working with libraries like IPFS and OrbitDB.
- Install IPFS CLI
Install IPFS Command-Line Interface (CLI):
curl -s https://dist.ipfs.tech/go-ipfs/v0.21.0/ipfs-install.sh | bash
Verify the installation:
ipfs --version
- Install OrbitDB and Related Libraries
Initialize a Bun project and install the required libraries:
mkdir Exonomy
cd Exonomy
bun init --yes
bun add ipfs orbit-db orbit-db-storage-adapter level uuid
Set up a local IPFS node.
- Start an IPFS Node (CLI)
Start the IPFS daemon for local operations:
ipfs init
ipfs daemon
- Start an IPFS Node (Programmatically)
Create a script start-ipfs.ts to run an IPFS instance programmatically:
import IPFS from 'ipfs';

async function startIpfsNode() {
  const node = await IPFS.create();
  console.log('IPFS node is running');
  console.log('Node ID:', (await node.id()).id);
  return node;
}

startIpfsNode().catch(console.error);
Set up and configure an OrbitDB instance.
- Create a Script to Start OrbitDB
Save the script as start-orbitdb.ts:
import IPFS from 'ipfs';
import OrbitDB from 'orbit-db';

async function startOrbitDb() {
  // Start IPFS
  const ipfs = await IPFS.create();
  // Start OrbitDB
  const orbitdb = await OrbitDB.createInstance(ipfs);
  console.log('OrbitDB instance created');
  // Create a database
  const db = await orbitdb.docs('vouchers', { indexBy: 'voucher_id' });
  console.log('Database created:', db.address.toString());
  return { orbitdb, db };
}

startOrbitDb().catch(console.error);
- Run the IPFS daemon:
ipfs daemon
- Run the script to start OrbitDB:
bun start-orbitdb.ts
Check the output for successful initialization.
- Description: Example of creating a metadata JSON object for a voucher and storing it in IPFS.
- File Name: add-metadata-to-ipfs.ts
import { create as createIpfsNode } from 'ipfs-core';
import { v4 as uuidv4 } from 'uuid';
// Initialize a local IPFS node
const ipfs = await createIpfsNode();
// Define the voucher metadata
const voucherMetadata = {
voucher_id: uuidv4(), // Generate a unique voucher ID
metadata_cid: '',
asset_cids: [
'bafybeib32jjsmv7qup3zjlxl5oysybw52fqnxljjcjs3viyox7xqsbxave',
'bafybeid2dgn5ht3qqo2s4l5yf66jwno73voacgtpzj7mnxe5c7qse25tvi',
],
};
async function addMetadataToIpfs() {
try {
// Add the voucher metadata to the local IPFS node
const { cid } = await ipfs.add(JSON.stringify(voucherMetadata));
// Save the CID of the added metadata
voucherMetadata.metadata_cid = cid.toString();
console.log('Metadata CID:', voucherMetadata.metadata_cid);
return voucherMetadata;
} catch (error) {
console.error('Error adding metadata to IPFS:', error);
}
}
// Run the function
addMetadataToIpfs();
- IPFS Node Setup: We create a local IPFS node programmatically using ipfs-core, ensuring the application is fully decentralized and does not rely on external gateways.
- Voucher Metadata: A JSON object is created for the voucher metadata, including a unique voucher_id (generated using uuidv4) and sample asset CIDs.
- Add Metadata to IPFS: The metadata is serialized into a JSON string and added to the local IPFS node using the node's add() method.
- Store the CID: The resulting CID of the added metadata is converted to a string and stored back into the metadata_cid field of the voucher metadata object.
This file ensures that voucher metadata is added to a local IPFS node, and the resulting CID is returned, maintaining complete decentralization. It uses Bun for dependency management and is compatible with our environment.
- Description: Storing voucher-related assets (e.g., images, videos) in IPFS and retrieving their CIDs.
- File Name: add-assets-to-ipfs.ts
import * as fs from 'fs/promises';
import { create } from 'ipfs-core';
// Initialize IPFS node
const startIpfsNode = async () => {
const ipfs = await create();
console.log('IPFS node is running');
return ipfs;
};
const addAssetsToIpfs = async (filePaths: string[]) => {
try {
const ipfs = await startIpfsNode();
const assetCids: string[] = [];
for (const filePath of filePaths) {
const fileContent = await fs.readFile(filePath); // Read file content
const added = await ipfs.add(fileContent); // Add file to IPFS
assetCids.push(added.cid.toString()); // Store the CID
console.log(`File added to IPFS: ${filePath}, CID: ${added.cid.toString()}`);
}
await ipfs.stop(); // Stop the IPFS node
return assetCids;
} catch (error) {
console.error('Error adding assets to IPFS:', error);
}
};
// Example usage
const filePaths = ['./assets/image.jpg', './assets/video.mp4'];
addAssetsToIpfs(filePaths).then((cids) => console.log('Asset CIDs:', cids));
- IPFS Node Initialization: A local IPFS node is started programmatically using ipfs-core.
- Reading Asset Files: Assets are read from the local filesystem using fs.promises.
- Adding Files to IPFS: Each file is added to the IPFS node, and its CID is captured in an array.
- Returning CIDs: After processing all assets, their CIDs are returned for further use (e.g., linking to metadata).
We must replace the filePaths array with the paths to our actual asset files. This script ensures full decentralization without any reliance on external IPFS gateways.
- Description: Organizing voucher assets into an IPFS directory and retrieving the directory CID.
- File Name: create-ipfs-directory.ts
Here’s the implementation for create-ipfs-directory.ts, which organizes voucher assets into an IPFS directory:
import { create } from 'ipfs-core';
import * as fs from 'fs/promises';
import * as path from 'path';
// Initialize IPFS node
const startIpfsNode = async () => {
const ipfs = await create();
console.log('IPFS node is running');
return ipfs;
};
const createIpfsDirectory = async (voucherId: string, assetFilePaths: string[]) => {
try {
const ipfs = await startIpfsNode();
const filesToAdd = [];
for (const filePath of assetFilePaths) {
const fileContent = await fs.readFile(filePath);
const fileName = path.basename(filePath); // Extract file name from the path
filesToAdd.push({ path: `${voucherId}/${fileName}`, content: fileContent }); // Structure for IPFS directory
}
// Add all files as a directory to IPFS.
// addAll() returns an async iterable; the final entry is the wrapping directory.
let directoryCid = '';
for await (const result of ipfs.addAll(filesToAdd, { wrapWithDirectory: true })) {
  directoryCid = result.cid.toString();
}
console.log(`IPFS directory created for voucher ${voucherId}, CID: ${directoryCid}`);
await ipfs.stop(); // Stop the IPFS node
return directoryCid;
} catch (error) {
console.error('Error creating IPFS directory:', error);
}
};
// Example usage
const voucherId = '123e4567-e89b-12d3-a456-426614174000';
const assetFilePaths = ['./assets/image.jpg', './assets/video.mp4'];
createIpfsDirectory(voucherId, assetFilePaths).then((cid) =>
console.log('Directory CID:', cid)
);
- Voucher ID Integration: Each file is assigned a structured path in the directory using the voucherId.
- Files-to-Add Structure: Files are prepared with a path (including the voucher ID) and their content for creating an IPFS directory.
- wrapWithDirectory Option: Ensures that all files are grouped under a single directory CID.
- Directory CID Return: The CID representing the root of the directory is returned for linking to metadata.
This script maintains decentralization, storing all voucher-related assets in a structured IPFS directory. The voucherId ensures that each directory is uniquely identifiable.
- Description: Setting up an OrbitDB document store for managing vouchers and defining the schema for the voucher records.
- File Name: initialize-orbitdb.ts
- Description: Example of adding a voucher record to the OrbitDB document store with voucher_id, metadata_cid, and asset_cids.
- File Name: add-voucher-to-orbitdb.ts
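A minimal sketch of what add-voucher-to-orbitdb.ts could contain, assuming the vouchers docstore created earlier (indexed by voucher_id) and CIDs already returned by IPFS; the CID values are placeholders:
import { create } from 'ipfs-core';
import OrbitDB from 'orbit-db';
import { v4 as uuidv4 } from 'uuid';

const ipfs = await create();
const orbitdb = await OrbitDB.createInstance(ipfs);
const db = await orbitdb.docs('vouchers', { indexBy: 'voucher_id' });

// Add a voucher record that links the logical voucher to its IPFS content
const voucherId = uuidv4();
await db.put({
  voucher_id: voucherId,
  metadata_cid: 'bafybeib32jjsmv7qup3zjlxl5oysybw52fqnxljjcjs3viyox7xqsbxave', // placeholder
  asset_cids: ['bafybeid2dgn5ht3qqo2s4l5yf66jwno73voacgtpzj7mnxe5c7qse25tvi'], // placeholder
});
console.log('Voucher added:', voucherId);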
- Description: Querying OrbitDB for a voucher record using voucher_id and retrieving associated CIDs.
- File Name: query-voucher-from-orbitdb.ts
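A minimal sketch of query-voucher-from-orbitdb.ts, assuming the vouchers docstore from the previous example is already open; get() returns an array of documents matching the indexed voucher_id, while query() supports predicate-based lookups:
// Look up a voucher by its primary key (the indexBy field)
const [voucher] = db.get('123e4567-e89b-12d3-a456-426614174000'); // example ID
if (voucher) {
  console.log('metadata_cid:', voucher.metadata_cid);
  console.log('asset_cids:', voucher.asset_cids);
}

// Predicate-based query when the key is not known in advance
const withAssets = db.query((doc) => doc.asset_cids && doc.asset_cids.length > 0);
console.log('Vouchers with assets:', withAssets.length);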
- Description: Retrieving metadata and assets from IPFS using the CIDs obtained from OrbitDB.
- File Name: fetch-from-ipfs.ts
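A minimal sketch of fetch-from-ipfs.ts, assuming a local ipfs-core node and a CID obtained from OrbitDB; cat() returns an async iterable of Uint8Array chunks that are concatenated before parsing:
import { create } from 'ipfs-core';

const ipfs = await create();

// Fetch and parse a JSON document (e.g., voucher metadata) by its CID
async function fetchJsonFromIpfs(cid: string) {
  const chunks: Uint8Array[] = [];
  for await (const chunk of ipfs.cat(cid)) {
    chunks.push(chunk);
  }
  return JSON.parse(Buffer.concat(chunks).toString());
}

const metadata = await fetchJsonFromIpfs(
  'bafybeib32jjsmv7qup3zjlxl5oysybw52fqnxljjcjs3viyox7xqsbxave' // placeholder CID from OrbitDB
);
console.log('Fetched metadata:', metadata);
await ipfs.stop();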
- Description: Updating the metadata or assets of a voucher in OrbitDB by adding new CIDs and maintaining a version history.
- File Name: update-voucher-in-orbitdb.ts
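A minimal sketch of the core of update-voucher-in-orbitdb.ts, assuming the vouchers docstore is open and the updated metadata has already been re-added to IPFS to produce a new CID; the version and history fields are illustrative conventions, not built-in OrbitDB features:
async function updateVoucherMetadata(db: any, voucherId: string, newMetadataCid: string) {
  const [current] = db.get(voucherId);
  if (!current) throw new Error('Voucher not found');

  // Keep the old CID in a history array so every previous version stays retrievable
  await db.put({
    ...current,
    metadata_cid: newMetadataCid,
    version: (current.version ?? 0) + 1,
    history: [...(current.history ?? []), current.metadata_cid],
    updated_at: Date.now(),
  });
}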
- Description: Encrypting voucher metadata and assets for private vouchers and selectively sharing decryption keys.
- File Name: access-control.ts
- Description: Maintaining a version history of voucher updates using OrbitDB, ensuring traceability.
- File Name: version-control.ts
- Description: Establishing a logical structure where the voucher ID in OrbitDB acts as the primary key linking to multiple IPFS CIDs.
- File Name: integrate-ipfs-orbitdb.ts
- Description: Optimizing queries in OrbitDB and fetching associated data from IPFS efficiently.
- File Name: efficient-queries.ts
- Description: End-to-end implementation of adding, querying, and updating a voucher with metadata and assets.
- File Name: real-world-example.ts
In a decentralized P2P application, voucher metadata is stored on IPFS as immutable files with cryptographic signatures. When creating a voucher, the metadata is signed with a private key and stored on IPFS, generating a CID. OrbitDB stores the CID, the public key, a checksum of the metadata, and dynamic fields like status. To update a voucher (e.g., redeem it), a new metadata version is created, signed, and stored on IPFS, while OrbitDB is updated with the new CID, checksum, and status.
Validation ensures consistency: The IPFS metadata is fetched using the CID, the checksum is recomputed and compared to the OrbitDB entry, and the signature is verified using the stored public key. This process ensures data integrity, prevents tampering, and maintains synchronization between IPFS and OrbitDB without centralized servers. Access control relies entirely on cryptographic keys, ensuring that only the entity with the private key can update voucher metadata, while all peers can validate the data independently.
This approach ensures controlled writes to OrbitDB using cryptographic signatures, and data consistency between OrbitDB and IPFS using validation techniques.
When creating a voucher:
- The voucher metadata is stored on IPFS with a digital signature.
- OrbitDB stores the CID of the IPFS file, the public key, and a checksum/hash for validation.
import { create } from "ipfs-http-client"; // IPFS client
import OrbitDB from "orbit-db"; // OrbitDB
import * as crypto from "crypto";
// Initialize IPFS
const ipfs = create({ url: "https://ipfs.infura.io:5001" });
// Generate Key Pair for Signing
const keyPair = crypto.generateKeyPairSync("rsa", {
modulusLength: 2048,
publicKeyEncoding: { type: "spki", format: "pem" },
privateKeyEncoding: { type: "pkcs8", format: "pem" },
});
const { publicKey, privateKey } = keyPair;
// Voucher metadata
const voucher = {
id: "123",
status: "active",
owner: "userX",
};
// Create a digital signature of the metadata
const signVoucher = (data: any, privateKey: string) => {
const sign = crypto.createSign("SHA256");
sign.update(JSON.stringify(data));
sign.end();
return sign.sign(privateKey, "base64");
};
const signature = signVoucher(voucher, privateKey);
// Add the signature to the metadata
const voucherWithSignature = {
...voucher,
signature,
};
// Store the metadata in IPFS
async function storeOnIPFS(data: object) {
const { cid } = await ipfs.add(JSON.stringify(data));
return cid.toString();
}
// Store on IPFS and update OrbitDB
(async () => {
const cid = await storeOnIPFS(voucherWithSignature);
// Initialize OrbitDB
const orbitdb = await OrbitDB.createInstance(ipfs);
const db = await orbitdb.keyvalue("voucher-db");
await db.load();
// Add metadata to OrbitDB
const checksum = crypto
.createHash("sha256")
.update(JSON.stringify(voucher))
.digest("hex");
await db.put(voucher.id, {
cid,
publicKey,
checksum,
status: voucher.status,
});
console.log("Voucher created and stored!");
})();
When a voucher is redeemed:
- A new version of the metadata is created, signed, and stored on IPFS.
- The corresponding OrbitDB entry is updated with the new CID and checksum.
// Update voucher metadata
async function redeemVoucher(voucherId: string, db: any, privateKey: string) {
const currentVoucher = db.get(voucherId);
if (!currentVoucher) throw new Error("Voucher not found!");
// Fetch the current metadata from IPFS (cat() yields chunks) and drop its old signature
const chunks: Uint8Array[] = [];
for await (const chunk of ipfs.cat(currentVoucher.cid)) chunks.push(chunk);
const { signature: oldSignature, ...currentMetadata } = JSON.parse(Buffer.concat(chunks).toString());
const updatedVoucher = {
  ...currentMetadata,
  status: "redeemed",
};
// Sign the updated voucher
const updatedSignature = signVoucher(updatedVoucher, privateKey);
const updatedVoucherWithSignature = {
...updatedVoucher,
signature: updatedSignature,
};
// Store updated metadata on IPFS
const updatedCid = await storeOnIPFS(updatedVoucherWithSignature);
// Update OrbitDB
const updatedChecksum = crypto
.createHash("sha256")
.update(JSON.stringify(updatedVoucher))
.digest("hex");
await db.put(voucherId, {
cid: updatedCid,
publicKey: currentVoucher.publicKey,
checksum: updatedChecksum,
status: updatedVoucher.status,
});
console.log("Voucher redeemed!");
}
When querying vouchers, validate:
- The metadata in IPFS matches the checksum in OrbitDB.
- The signature in the IPFS metadata is valid using the stored public key.
// Verify digital signature
const verifySignature = (data: any, signature: string, publicKey: string) => {
const verify = crypto.createVerify("SHA256");
verify.update(JSON.stringify(data));
verify.end();
return verify.verify(publicKey, signature, "base64");
};
// Validate CID, metadata, and signature
async function validateVoucher(voucherId: string, db: any) {
const entry = db.get(voucherId);
if (!entry) throw new Error("Voucher not found!");
// Fetch metadata from IPFS (cat() returns an async iterable of chunks)
const chunks: Uint8Array[] = [];
for await (const chunk of ipfs.cat(entry.cid)) chunks.push(chunk);
const metadata = JSON.parse(Buffer.concat(chunks).toString());
// Separate the signature from the signed payload
const { signature, ...originalData } = metadata;
// Verify checksum (computed over the metadata without its signature,
// matching how it was computed when the voucher was stored)
const computedChecksum = crypto
  .createHash("sha256")
  .update(JSON.stringify(originalData))
  .digest("hex");
if (computedChecksum !== entry.checksum) {
  throw new Error("Checksum mismatch!");
}
// Verify the signature using the public key stored in OrbitDB
const isSignatureValid = verifySignature(
  originalData,
  signature,
  entry.publicKey
);
if (!isSignatureValid) {
throw new Error("Invalid signature!");
}
console.log("Voucher is valid!");
return metadata;
}
Access control is achieved through cryptographic signatures:
- Signing ensures only the voucher creator can issue updates (no centralized admin).
- Verification ensures changes are valid.
- Decentralized Authorization: Private keys enable self-sovereign identity for entities that create or update vouchers.
- Tamper-Proofing: CIDs in IPFS and cryptographic signatures ensure data integrity.
- Query Validation: OrbitDB data is cross-checked with IPFS for consistency.
- Ease of Use: All validation logic is baked into the app, making it seamless for end users.
Definition: Splitting large data into smaller parts to manage storage and transfer efficiently.
Explanation: The process of data chunking involves breaking a large file, like a PDF or a video, into smaller parts (chunks) to make it easier to store and transfer, particularly when using systems like IPFS. Each chunk is a manageable piece of the file, and by processing them separately, we can improve performance and reliability.
In IPFS, file chunking is primarily handled by the IPFS implementation itself, which splits files into smaller chunks based on the chunking algorithm configured in the IPFS node. Developers usually don’t need to manually chunk files unless custom processing is required. The default chunking method is fixed-size, typically 256 KB per chunk, but content-aware chunking algorithms like Rabin are also available. These algorithms break files based on their content structure, improving deduplication and storage efficiency, especially for files with repetitive patterns. Choosing the right chunking method is crucial for balancing efficiency and deduplication needs.
- Default Behavior:
- IPFS uses content-based chunking (such as Rabin) or fixed-size chunking by default. These algorithms are designed to optimize storage and deduplication.
- Developer Responsibility:
- We don’t need to manually chunk files unless we want to preprocess or hash chunks ourselves before uploading.
- Examples of when manual chunking might be useful:
- Custom hash verification before storing in IPFS.
- Dividing large files into logical sections (e.g., chapters in an eBook).
- IPFS Handles the Rest:
- IPFS ensures that chunks are stored, deduplicated, and referenced via a Content Identifier (CID).
- We simply provide the file or data, and IPFS takes care of chunking, linking chunks in a Merkle DAG, and creating the CID.
While not mandatory, understanding or implementing manual chunking helps when:
- Working with custom data workflows.
- Storing large or streaming files with specific requirements.
- Ensuring consistency in chunk sizes or specific encryption needs before passing chunks to IPFS.
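For illustration, here is how a chunking strategy could be selected explicitly; this is a sketch assuming js-ipfs's chunker option on add() (the CLI equivalent is the --chunker flag), with the default remaining fixed 256 KB chunks when nothing is specified:
import { create } from 'ipfs-core';
import * as fs from 'fs/promises';

const ipfs = await create();
const content = await fs.readFile('./assets/video.mp4'); // example file

// Fixed-size chunking (the default is size-262144, i.e., 256 KB chunks)
const fixed = await ipfs.add(content, { chunker: 'size-262144' });

// Content-aware (Rabin) chunking, which can improve deduplication for repetitive data
const rabin = await ipfs.add(content, { chunker: 'rabin-262144' });

console.log('Fixed-size CID:', fixed.cid.toString());
console.log('Rabin CID:', rabin.cid.toString());
await ipfs.stop();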
Here’s how this works:
- Cryptographic Hashing:
- Each chunk of data produced during the chunking process is hashed using a cryptographic hash function.
- The default hash algorithm is SHA-256, which produces a unique output based on the content of the chunk. If two chunks have the same content, they will have the same hash.
- Content-Addressed Storage:
- The hash of each chunk becomes its unique identifier (CID).
- The CID is used to locate and retrieve that chunk from the IPFS network.
- Merkle DAG:
- IPFS organizes the chunks into a Merkle Directed Acyclic Graph (Merkle DAG).
- The hashes of individual chunks are combined to create a root hash, representing the entire file.
- Deduplication:
- If a chunk already exists on the network (determined by its hash), it is not stored again. This optimizes storage.
Even though IPFS automatically handles chunking and hashing, understanding the process is important for use cases like:
- Verifying data integrity: The hash ensures that data retrieved from IPFS is exactly what was uploaded.
- Custom workflows: Developers can preprocess or rehash data for additional security or application-specific logic.
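As a small illustration of content addressing (a sketch using ipfs-core): adding identical bytes twice yields the same CID, so the second add stores nothing new, and any fetched content can be re-verified against the CID it was requested by:
import { create } from 'ipfs-core';

const ipfs = await create();

// Identical content hashes to the same CID, so it is deduplicated automatically
const first = await ipfs.add('exonomy voucher payload');
const second = await ipfs.add('exonomy voucher payload');
console.log(first.cid.toString() === second.cid.toString()); // true

// Retrieval is addressed by the same hash, which is how integrity is verified
const chunks: Uint8Array[] = [];
for await (const chunk of ipfs.cat(first.cid)) chunks.push(chunk);
console.log(Buffer.concat(chunks).toString()); // 'exonomy voucher payload'

await ipfs.stop();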
IPFS uses two types of Content Identifiers (CIDs): CIDv0 and CIDv1.
- CIDv0: The older version, which uses Base58 encoding and is primarily used for backward compatibility with legacy systems and tools. It supports only the SHA-256 hash algorithm. CIDv0 is increasingly becoming obsolete.
- CIDv1: A more modern, flexible version that supports multiple hash algorithms, including SHA-256 and others. CIDv1 uses Base32 encoding and is designed to be extensible and future-proof. It’s better for supporting newer features and is now the default in most modern IPFS versions, especially for new operations and data storage.
- In older IPFS versions, the default CID format was CIDv0. This was primarily for compatibility with older tools and systems that expected Base58 encoded SHA-256 hashes.
- In newer IPFS versions, the default CID format has switched to CIDv1. CIDv1 is more flexible, supporting multiple hash algorithms and Base32 encoding. It’s designed to be future-proof and works better with modern IPFS features.
- If you're using newer versions of IPFS, CIDv1 is the default format.
- CIDv0 is still supported for backward compatibility but is becoming less common in modern IPFS setups.
For our use with OrbitDB and IPFS, CIDv1 is the preferred format, especially with new IPFS versions.
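A small sketch of converting a legacy CIDv0 string to CIDv1 in code, assuming the multiformats package (which js-ipfs uses internally); CIDv1 strings are Base32-encoded by default:
import { CID } from 'multiformats/cid';

// Parse a legacy Base58 CIDv0 and re-encode it as a Base32 CIDv1
const legacy = CID.parse('QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG'); // example CIDv0
const modern = legacy.toV1();

console.log('CIDv0:', legacy.toString());
console.log('CIDv1:', modern.toString()); // bafy... (Base32 by default)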
- Database Design:
- Use separate databases for distinct functionalities, as sketched below:
- vouchers: For metadata and lifecycle logs.
- transactions: For transaction history and ownership transfers.
- chats: For decentralized chatrooms per voucher.
- profiles: For user DIDs and public profile data.
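A minimal sketch of opening these four stores on a single OrbitDB instance; the store names and types mirror the list above, and the choice of docstore versus event log per store is an assumption:
import { create } from 'ipfs-core';
import OrbitDB from 'orbit-db';

const ipfs = await create();
const orbitdb = await OrbitDB.createInstance(ipfs);

// One database per functional area
const vouchers = await orbitdb.docs('vouchers', { indexBy: 'voucher_id' }); // metadata and lifecycle
const transactions = await orbitdb.eventlog('transactions');                // append-only transfer history
const chats = await orbitdb.eventlog('chats');                              // chatrooms (one log per voucher in practice)
const profiles = await orbitdb.docs('profiles', { indexBy: 'owner_did' });  // DIDs and public profiles

console.log('vouchers address:', vouchers.address.toString());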
- Replication and Consistency:
- Leverage OrbitDB’s pub/sub mechanism for eventual consistency.
- Use fine-grained access controls for databases (e.g., voucher owners have write access to related chatrooms).
- Indexes and Queries:
- Utilize document stores (orbitdb.docs) for searchable voucher data.
- Use indexable fields (e.g., state, owner_did) for efficient querying.
- Conflict Resolution:
- Design schema to avoid conflicts:
- Use timestamps and version numbers for updates.
- Ensure ownership transfers are atomic to prevent duplication.
- Encryption and Privacy:
- Encrypt sensitive fields (e.g., user chats, voucher terms) before storing.
- Keep public data (e.g., voucher catalog) unencrypted for discovery.
- Voucher Metadata:
- Store metadata on IPFS with a pointer to its OrbitDB entry for ownership and state tracking.
- Example:
{ "cid": "bafy...hash", "owner_did": "did:example:123", "state": "active", "terms": "Redeemable for $10 value", "media": { "image": "/vouchers/{voucher_id}/image.jpg", "video": "/vouchers/{voucher_id}/video.mp4" } }
- Voucher Lifecycle:
- Update the OrbitDB entry upon state changes (e.g., active → redeemed).
- Maintain a log of ownership and status updates.
- Chat Integration:
- Create an OrbitDB event log for each voucher’s chatroom.
- Use the voucher owner DID as the admin/moderator key for permissions (see the sketch below).
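A minimal sketch of the chat integration above, assuming orbitdb is already running and that the voucher owner's DID maps to the OrbitDB identity key ownerIdentityId; only that key is granted write access to the chatroom log:
const voucherId = '123e4567-e89b-12d3-a456-426614174000'; // example
const ownerIdentityId = orbitdb.identity.id;              // the owner's write key (assumption)

// One append-only event log per voucher chatroom, writable only by the owner key
const chatroom = await orbitdb.eventlog(`chat.${voucherId}`, {
  accessController: { write: [ownerIdentityId] },
});

await chatroom.add({ from: ownerIdentityId, text: 'Welcome to this voucher chat', ts: Date.now() });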
- Wallet & Social Feed:
- Use a replicated OrbitDB store for the wallet’s internal and external feed.
- Maintain pointers to voucher metadata in the feed for fast lookup.
- Data Syncing:
- Use OrbitDB’s event-based replication for syncing voucher updates.
- Sync media via Bluetooth/Wi-Fi Direct when offline.
- Hotspot Profiles:
- Use IPFS’s swarm and OrbitDB’s peers to discover nearby devices for local sync.
- Decentralized Moderation:
- Store moderation actions (e.g., flagging chats) as OrbitDB log entries.
- Assign ownership-based permissions for moderation roles.
- Exocracy Integration:
- Define a shared schema for linking vouchers to Exocracy projects.
- Maintain logs of Exocracy project milestones in OrbitDB.
- Hybrid State Management:
- Use OrbitDB with Pinia for syncing app state (e.g., voucher catalog, chat updates) across peers.
- Crypto Payments:
- Integrate payment hashes and metadata into OrbitDB for transaction history.
- Store references to external wallet transactions in IPFS for auditability.
- CIDv1: Encoded in Base32, supports multiple hash algorithms, and is future-proof.
- CIDv0: Legacy, Base58 encoding, limited flexibility.
ipfs cid format --cid-base=base32 QmTz3...abcd
Definition: Group related files into a single directory for better organization.
- Efficient sharing of grouped files.
- Versioning is seamless when files change.
ipfs add -r ./voucher-files
# Creates directory CID
OrbitDB does not support SQL-like queries (e.g., joins). Use document stores for hierarchical or relational-like structures.
Restrict access using encryption or public/private key pairs.
Encrypt data for a specific user:
import * as crypto from 'crypto';
const publicKey = userPublicKey; // Target user
const encryptedData = crypto.publicEncrypt(publicKey, Buffer.from('voucher data'));
// Only the user can decrypt this data with their private key
Use orbitdb.docs for storing documents with indexes.
await db.put({ voucherId: '1234', status: 'active', owner: 'userDid123' });
const result = db.query((doc) => doc.owner === 'userDid123');
Use version numbers for updates:
await db.put({ voucherId: '1234', version: 1, status: 'active' });
await db.put({ voucherId: '1234', version: 2, status: 'redeemed' });
Atomic Transfers:
OrbitDB stores are CRDT-based; combine this with version numbers so ownership changes merge deterministically and duplication is prevented, as in the sketch below.
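A minimal sketch of a version-guarded transfer on the vouchers docstore (an assumption); OrbitDB's CRDT log merges concurrent writes deterministically, and the incremented version plus timestamp make competing transfers detectable on read:
async function transferOwnership(db: any, voucherId: string, newOwnerDid: string) {
  const [current] = db.get(voucherId);
  if (!current) throw new Error('Voucher not found');

  // Bump the version so peers can detect and order competing transfers
  await db.put({
    ...current,
    owner_did: newOwnerDid,
    version: (current.version ?? 0) + 1,
    transferred_at: Date.now(),
  });
}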
Encrypt vouchers and expose only specific ones.
const data = JSON.stringify({ voucherId: '1234', owner: 'userDid123' });
const encrypted = crypto.publicEncrypt(userPublicKey, Buffer.from(data));
OrbitDB logs are append-only, so updates are new entries rather than in-place edits. Use version metadata to keep every prior state traceable.
Implement delegation using capability chains, where a DID delegates permissions through signed attestations.
A replicated store ensures redundancy and syncs internal/external feeds across peers.
Store pointers as CID links in the feed for fast lookups.
await db.put({ voucherId: '1234', metadata: 'QmTz3...abcd' });
OrbitDB uses PubSub for syncing updates efficiently.
db.events.on('replicated', () => {
console.log('Database synced!');
});
IPFS swarm and OrbitDB peers can sync local devices.
ipfs swarm peers
# Finds nearby devices
Log moderation actions in OrbitDB logs.
await db.add({ action: 'delete', voucherId: '1234', by: 'moderatorDid' });
Define moderators based on wallet or DID ownership, storing roles in OrbitDB.
Define schemas for linking vouchers to projects.
await db.put({ projectId: 'proj123', vouchers: ['1234', '5678'] });
Logs allow tracking all actions, not just milestones, for better audit trails.
Use OrbitDB and Pinia to sync app states.
const state = reactive({ catalog: db.all() });
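A slightly fuller sketch using a Pinia store, assuming a Vue 3 app and an already-opened vouchers docstore; the catalog refreshes whenever OrbitDB replicates entries from peers:
import { defineStore } from 'pinia';

export const useCatalogStore = defineStore('catalog', {
  state: () => ({ catalog: [] as any[] }),
  actions: {
    bind(db: any) {
      // Load the current state, then refresh on every replication event
      this.catalog = db.query(() => true);
      db.events.on('replicated', () => {
        this.catalog = db.query(() => true);
      });
    },
  },
});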
Store hashes in OrbitDB and payment proofs in IPFS for auditability.
await db.put({ paymentHash: 'hash123', metadata: 'QmTz3...abcd' });
- Entity Definitions: User, Voucher.
- Attributes: voucherId, status.
- Relationships: Voucher links to User.
- Constraints: Uniqueness of voucherId.
- Indexes: Index by voucherId.
- Storage Details: Metadata in IPFS, indexes in OrbitDB.
- Access Control: Encrypt per user.