A migration guide for refactoring your application code to use the new JS IPFS core API.
Impact key:
- π easy - simple refactoring in application code
- π medium - involved refactoring in application code
- π hard - complicated refactoring in application code
- Migrating from callbacks
- Migrating from `PeerId`
- Migrating from `PeerInfo`
- Migrating to Async Iterables
- Migrating from `addFromFs`
- Migrating from `addFromURL`
- Migrating from `addFromStream`
## Migrating from callbacks

Callbacks are no longer supported in the API. If your application primarily uses callbacks you have two main options for migration:
Impact π
Switch to using the promise API with async/await. Instead of continuing in a callback, the program continues after the awaited call, and the functions from which the call is made become `async` functions.
e.g.

```js
function main () {
  ipfs.id((err, res) => {
    console.log(res)
  })
}

main()
```

Becomes:

```js
async function main () {
  const res = await ipfs.id()
  console.log(res)
}

main()
```
Impact π
Alternatively, you could "callbackify" the API: use a module to convert the promise API back to a callback API, either permanently or as an interim measure.
e.g.

```js
function main () {
  ipfs.id((err, res) => {
    console.log(res)
  })
}

main()
```

Becomes:

```js
const callbackify = require('callbackify')

const ipfsId = callbackify(ipfs.id)

function main () {
  ipfsId((err, res) => {
    console.log(res)
  })
}

main()
```
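If you'd rather not add a dependency, Node.js's built-in `util.callbackify` works the same way (a minimal sketch, not from the original guide):

```js
const { callbackify } = require('util')

// Wrap the promise-returning call in a callback-style function
const ipfsId = callbackify(() => ipfs.id())

function main () {
  ipfsId((err, res) => {
    if (err) throw err
    console.log(res)
  })
}

main()
```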
## Migrating from `PeerId`

Libp2p `PeerId` instances are no longer returned from the API. If your application is using the crypto capabilities of `PeerId` instances then you'll want to convert the peer ID string returned by the new API back into libp2p `PeerId` instances.
Impact π
Peer ID strings are also CIDs, so converting them is simple:

```js
const peerId = PeerId.createFromCID(peerIdStr)
```
You can get hold of the `PeerId` class using npm or in a script tag:

```js
const PeerId = require('peer-id')
const peerId = PeerId.createFromCID(peerIdStr)
```

```html
<script src="https://unpkg.com/peer-id/dist/index.min.js"></script>
<script>
  const peerId = window.PeerId.createFromCID(peerIdStr)
</script>
```
## Migrating from `PeerInfo`

Libp2p `PeerInfo` instances are no longer returned from the API. Instead, plain objects of the form `{ id: string, addrs: Multiaddr[] }` are returned. To convert these back into a `PeerInfo` instance:
Impact π
Instantiate a new `PeerInfo` and add addresses to it:

```js
const peerInfo = new PeerInfo(PeerId.createFromCID(info.id))
info.addrs.forEach(addr => peerInfo.multiaddrs.add(addr))
```
You can get hold of the `PeerInfo` class using npm or in a script tag:

```js
const PeerInfo = require('peer-info')
const PeerId = require('peer-id')

const peerInfo = new PeerInfo(PeerId.createFromCID(info.id))
info.addrs.forEach(addr => peerInfo.multiaddrs.add(addr))
```

```html
<script src="https://unpkg.com/peer-info/dist/index.min.js"></script>
<script src="https://unpkg.com/peer-id/dist/index.min.js"></script>
<script>
  const peerInfo = new window.PeerInfo(window.PeerId.createFromCID(info.id))
  info.addrs.forEach(addr => peerInfo.multiaddrs.add(addr))
</script>
```
## Migrating to Async Iterables

Async iterables are a language-native way of streaming data. The IPFS core API previously supported two different stream implementations - Pull Streams and Node.js Streams. Like those two implementations, streaming iterables come in different forms for different purposes:
- source - something that can be consumed. Analogous to a "source" pull stream or a "readable" Node.js stream
- sink - something that consumes (or drains) a source. Analogous to a "sink" pull stream or a "writable" Node.js stream
- transform - both a sink and a source where the values it consumes and the values that can be consumed from it are connected in some way. Analogous to a transform in both Pull and Node.js streams
- duplex - similar to a transform but the values it consumes are not necessarily connected to the values that can be consumed from it
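To make these roles concrete, here is a minimal illustrative sketch (the generator and helper names are invented for this example) using plain async generators and the `it-pipe` utility used later in this guide:

```js
const pipe = require('it-pipe')

// source: something that can be consumed
async function * source () {
  yield Buffer.from('hello ')
  yield Buffer.from('world')
}

// transform: consumes a source and yields derived values
async function * toUpperCase (source) {
  for await (const chunk of source) {
    yield chunk.toString().toUpperCase()
  }
}

// sink: consumes (drains) a source and returns a result
async function collect (source) {
  const chunks = []
  for await (const chunk of source) {
    chunks.push(chunk)
  }
  return chunks
}

const result = await pipe(source(), toUpperCase, collect)
console.log(result) // [ 'HELLO ', 'WORLD' ]
```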
More information and examples here: https://gist.github.com/alanshaw/591dc7dd54e4f99338a347ef568d6ee9
List of useful modules for working with async iterables: https://github.com/alanshaw/it-awesome
Note that iterables might gain many helper functions soon: https://github.com/tc39/proposal-iterator-helpers
Modern Node.js readable streams are async iterable, so there are no changes to any APIs that you'd normally pass a stream to. However, the `*ReadableStream` APIs have been removed. To migrate from the `*ReadableStream` methods, there are a couple of options:
Impact π
Use a for/await loop to consume an async iterable.
e.g.

```js
const readable = ipfs.catReadableStream('QmHash')

readable.on('data', chunk => {
  console.log(chunk.toString())
})

readable.on('end', () => {
  console.log('done')
})
```

Becomes:

```js
const source = ipfs.cat('QmHash')

for await (const chunk of source) {
  console.log(chunk.toString())
}

console.log('done')
```
Impact π
Convert the async iterable to a readable stream.
e.g.

```js
const readable = ipfs.catReadableStream('QmHash')

readable.on('data', chunk => {
  console.log(chunk.toString())
})

readable.on('end', () => {
  console.log('done')
})
```

Becomes:

```js
const toStream = require('it-to-stream')

const readable = toStream.readable(ipfs.cat('QmHash'))

readable.on('data', chunk => {
  console.log(chunk.toString())
})

readable.on('end', () => {
  console.log('done')
})
```
Sometimes applications will "pipe" Node.js streams together, using the `.pipe` method or the `pipeline` utility. There are 2 possible migration options:
Impact π
Use `it-pipe` and a for/await loop to concat data from an async iterable.
e.g.

```js
const { pipeline, Writable } = require('stream')

let data = Buffer.alloc(0)

const concat = new Writable({
  write (chunk, enc, cb) {
    data = Buffer.concat([data, chunk])
    cb()
  }
})

pipeline(
  ipfs.catReadableStream('QmHash'),
  concat,
  err => {
    console.log(data.toString())
  }
)
```

Becomes:

```js
const pipe = require('it-pipe')

const concat = async source => {
  let data = Buffer.alloc(0)
  for await (const chunk of source) {
    data = Buffer.concat([data, chunk])
  }
  return data
}

const data = await pipe(
  ipfs.cat('QmHash'),
  concat
)

console.log(data.toString())
```

...which, by the way, could more succinctly be written as:

```js
const concat = require('it-concat')

const data = await concat(ipfs.cat('QmHash'))
console.log(data.toString())
```
Impact π
Convert the async iterable to a readable stream.
e.g.

```js
const { pipeline, Writable } = require('stream')

let data = Buffer.alloc(0)

const concat = new Writable({
  write (chunk, enc, cb) {
    data = Buffer.concat([data, chunk])
    cb()
  }
})

pipeline(
  ipfs.catReadableStream('QmHash'),
  concat,
  err => {
    console.log(data.toString())
  }
)
```

Becomes:

```js
const toStream = require('it-to-stream')
const { pipeline, Writable } = require('stream')

let data = Buffer.alloc(0)

const concat = new Writable({
  write (chunk, enc, cb) {
    data = Buffer.concat([data, chunk])
    cb()
  }
})

pipeline(
  toStream.readable(ipfs.cat('QmHash')),
  concat,
  err => {
    console.log(data.toString())
  }
)
```
Commonly in Node.js you have a readable stream of a file from the filesystem that you want to add to IPFS. There are 2 possible migration options:
Impact π
Use `it-pipe` and a for/await loop to collect all items from an async iterable.
e.g.

```js
const fs = require('fs')
const { pipeline, Writable } = require('stream')

const items = []

const all = new Writable({
  objectMode: true,
  write (chunk, enc, cb) {
    items.push(chunk)
    cb()
  }
})

pipeline(
  fs.createReadStream('/path/to/file'),
  ipfs.addReadableStream(),
  all,
  err => {
    console.log(items)
  }
)
```

Becomes:

```js
const fs = require('fs')
const pipe = require('it-pipe')

const items = []

const all = async source => {
  for await (const chunk of source) {
    items.push(chunk)
  }
}

await pipe(
  fs.createReadStream('/path/to/file'), // Because Node.js streams are iterable
  ipfs.add,
  all
)

console.log(items)
```

...which, by the way, could more succinctly be written as:

```js
const fs = require('fs')
const pipe = require('it-pipe')
const all = require('it-all')

const items = await pipe(
  fs.createReadStream('/path/to/file'),
  ipfs.add,
  all
)

console.log(items)
```
Impact π
Convert the async iterable to a readable stream.
e.g.

```js
const fs = require('fs')
const { pipeline, Writable } = require('stream')

const items = []

const all = new Writable({
  objectMode: true,
  write (chunk, enc, cb) {
    items.push(chunk)
    cb()
  }
})

pipeline(
  fs.createReadStream('/path/to/file'),
  ipfs.addReadableStream(),
  all,
  err => {
    console.log(items)
  }
)
```

Becomes:

```js
const toStream = require('it-to-stream')
const fs = require('fs')
const { pipeline, Writable } = require('stream')

const items = []

const all = new Writable({
  objectMode: true,
  write (chunk, enc, cb) {
    items.push(chunk)
    cb()
  }
})

pipeline(
  fs.createReadStream('/path/to/file'),
  toStream.transform(ipfs.add),
  all,
  err => {
    console.log(items)
  }
)
```
Pull Streams can no longer be passed to IPFS API methods and the `*PullStream` APIs have been removed. To pass a pull stream directly to an IPFS API method, first convert it to an async iterable using `pull-stream-to-async-iterator`.
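As an illustrative sketch of that conversion (assuming the module's default export takes a pull stream source and returns an async iterable - check its docs; the example data here is hypothetical):

```js
const toIterable = require('pull-stream-to-async-iterator')
const pull = require('pull-stream')

// A pull stream source of file content
const pullSource = pull.values([Buffer.from('hello world')])

// Convert it to an async iterable and pass it straight to the IPFS API
for await (const file of ipfs.add(toIterable(pullSource))) {
  console.log(file.path)
}
```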
To migrate from the `*PullStream` methods, there are a couple of options:
Impact π
Use a for/await loop to consume an async iterable.
e.g.

```js
pull(
  ipfs.catPullStream('QmHash'),
  pull.through(chunk => {
    console.log(chunk.toString())
  }),
  pull.onEnd(err => {
    console.log('done')
  })
)
```

Becomes:

```js
for await (const chunk of ipfs.cat('QmHash')) {
  console.log(chunk.toString())
}

console.log('done')
```
Impact π
Convert the async iterable to a pull stream.
e.g.

```js
pull(
  ipfs.catPullStream('QmHash'),
  pull.through(chunk => {
    console.log(chunk.toString())
  }),
  pull.onEnd(err => {
    console.log('done')
  })
)
```

Becomes:

```js
const toPull = require('async-iterator-to-pull-stream')

pull(
  toPull.source(ipfs.cat('QmHash')),
  pull.through(chunk => {
    console.log(chunk.toString())
  }),
  pull.onEnd(err => {
    console.log('done')
  })
)
```
Frequently, applications will use `pull()` to create a pipeline of pull streams.
Impact π
Use `it-pipe` and `it-concat` to concat data from an async iterable.
e.g.

```js
pull(
  ipfs.catPullStream('QmHash'),
  pull.collect((err, chunks) => {
    console.log(Buffer.concat(chunks).toString())
  })
)
```

Becomes:

```js
const pipe = require('it-pipe')
const concat = require('it-concat')

const data = await pipe(
  ipfs.cat('QmHash'),
  concat
)

console.log(data.toString())
```
You might have a pull stream source of a file from the filesystem that you want to add to IPFS. There are 2 possible migration options:
Impact π
Use `it-pipe` and `it-all` to collect all items from an async iterable.
e.g.

```js
const fs = require('fs')
const toPull = require('stream-to-pull-stream')

pull(
  toPull.source(fs.createReadStream('/path/to/file')),
  ipfs.addPullStream(),
  pull.collect((err, items) => {
    console.log(items)
  })
)
```

Becomes:

```js
const fs = require('fs')
const pipe = require('it-pipe')
const all = require('it-all')

const items = await pipe(
  fs.createReadStream('/path/to/file'), // Because Node.js streams are iterable
  ipfs.add,
  all
)

console.log(items)
```

Or more succinctly:

```js
const fs = require('fs')
const all = require('it-all')

const items = await all(ipfs.add(fs.createReadStream('/path/to/file')))

console.log(items)
```
Impact π
Convert the async iterable to a pull stream.
e.g.

```js
const fs = require('fs')
const toPull = require('stream-to-pull-stream')

pull(
  toPull.source(fs.createReadStream('/path/to/file')),
  ipfs.addPullStream(),
  pull.collect((err, items) => {
    console.log(items)
  })
)
```

Becomes:

```js
const fs = require('fs')
const streamToPull = require('stream-to-pull-stream')
const itToPull = require('async-iterator-to-pull-stream')

pull(
  streamToPull.source(fs.createReadStream('/path/to/file')),
  itToPull.transform(ipfs.add),
  pull.collect((err, items) => {
    console.log(items)
  })
)
```
The old APIs like `ipfs.add`, `ipfs.cat`, `ipfs.ls` and others were "buffering APIs", i.e. they collect all the results into memory before returning them. The new JS core interface APIs are streaming by default in order to reduce memory usage, reduce time to first byte and to provide better feedback. The following are examples of switching from the old `ipfs.add`, `ipfs.cat` and `ipfs.ls` to the new APIs:
Impact π
Adding files.
e.g.

```js
const results = await ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
])

// Note that ALL files have already been added to IPFS
results.forEach(file => {
  console.log(file.path)
})
```

Becomes:

```js
const addSource = ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
])

for await (const file of addSource) {
  console.log(file.path) // Note these are logged out as they are added
}
```

Alternatively you can buffer up the results using the `it-all` utility:

```js
const all = require('it-all')

const results = await all(ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
]))

results.forEach(file => {
  console.log(file.path)
})
```
Often you just want the last item (the root directory entry) when adding multiple files to IPFS:

```js
const results = await ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
])

const lastResult = results[results.length - 1]

console.log(lastResult)
```

Becomes:

```js
const addSource = ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
])

let lastResult
for await (const file of addSource) {
  lastResult = file
}

console.log(lastResult)
```

Alternatively you can use the `it-last` utility:

```js
const last = require('it-last')

const lastResult = await last(ipfs.add([
  { path: 'root/1.txt', content: Buffer.from('one') },
  { path: 'root/2.txt', content: Buffer.from('two') }
]))

console.log(lastResult)
```
Impact π
Reading files.
e.g.

```js
const fs = require('fs').promises

const data = await ipfs.cat('/ipfs/QmHash')

// Note that here we have read the entire file
// i.e. `data` holds ALL the contents of the file in memory
await fs.writeFile('/tmp/file.iso', data)

console.log('done')
```

Becomes:

```js
const pipe = require('it-pipe')
const toIterable = require('stream-to-it')
const fs = require('fs')

// Note that as chunks arrive they are written to the file and memory can be freed and re-used
await pipe(
  ipfs.cat('/ipfs/QmHash'),
  toIterable.sink(fs.createWriteStream('/tmp/file.iso'))
)

console.log('done')
```

Alternatively you can buffer up the chunks using the `it-concat` utility (not recommended!):

```js
const fs = require('fs').promises
const concat = require('it-concat')

const data = await concat(ipfs.cat('/ipfs/QmHash'))

await fs.writeFile('/tmp/file.iso', data.slice())

console.log('done')
```
Impact π
Listing directory contents.
e.g.

```js
const files = await ipfs.ls('/ipfs/QmHash')

// Note that ALL files in the directory have been read into memory
files.forEach(file => {
  console.log(file.name)
})
```

Becomes:

```js
const filesSource = ipfs.ls('/ipfs/QmHash')

for await (const file of filesSource) {
  console.log(file.name) // Note these are logged out as they are retrieved from the network/disk
}
```

Alternatively you can buffer up the directory listing using the `it-all` utility:

```js
const all = require('it-all')

const results = await all(ipfs.ls('/ipfs/QmHash'))

results.forEach(file => {
  console.log(file.name)
})
```
## Migrating from `addFromFs`

The `addFromFs` API method has been removed and replaced with a helper function, `globSource`, that is exported from `js-ipfs`/`js-ipfs-http-client`. See the API docs for `globSource` for more info.
Impact π
e.g.

```js
const IpfsHttpClient = require('ipfs-http-client')
const ipfs = IpfsHttpClient()

const files = await ipfs.addFromFs('./docs', { recursive: true })

files.forEach(file => {
  console.log(file)
})
```

Becomes:

```js
const IpfsHttpClient = require('ipfs-http-client')
const { globSource } = IpfsHttpClient
const ipfs = IpfsHttpClient()

for await (const file of ipfs.add(globSource('./docs', { recursive: true }))) {
  console.log(file)
}
```
## Migrating from `addFromURL`

The `addFromURL` API method has been removed and replaced with a helper function, `urlSource`, that is exported from `js-ipfs`/`js-ipfs-http-client`. See the API docs for `urlSource` for more info.
Impact π
e.g.

```js
const IpfsHttpClient = require('ipfs-http-client')
const ipfs = IpfsHttpClient()

const files = await ipfs.addFromURL('https://ipfs.io/images/ipfs-logo.svg')

files.forEach(file => {
  console.log(file)
})
```

Becomes:

```js
const IpfsHttpClient = require('ipfs-http-client')
const { urlSource } = IpfsHttpClient
const ipfs = IpfsHttpClient()

for await (const file of ipfs.add(urlSource('https://ipfs.io/images/ipfs-logo.svg'))) {
  console.log(file)
}
```
## Migrating from `addFromStream`

The `addFromStream` API method has been removed. This was an alias for `add`. Please see the section on Migrating to Async Iterables for instructions on how to migrate to async iterables from streams.
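For example, a stream that was previously passed to `addFromStream` can be passed directly to `add` and the results consumed as an async iterable (an illustrative sketch; the file path is hypothetical):

```js
const fs = require('fs')

// Node.js streams are async iterable, so the same stream that was
// previously passed to addFromStream can be passed directly to add
for await (const file of ipfs.add(fs.createReadStream('/path/to/file'))) {
  console.log(file.path)
}
```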