Rationale: Indicate the type of model being loaded more explicitly in code.
// Before:
await tf.loadModel('http://server/model.json');
// After:
await tf.loadLayersModel('http://server/model.json');
// Before:
await tf.loadFrozenModel('http://server/model.pb', 'http://server/weights-manifest.json');
// After:
await tf.loadGraphModel('http://server/model.json');
// Also note that models converted from TensorFlow now use a single model.json file
// to hold both the model topology and the weights manifest.
Rationale: Consolidate the growing list of optional arguments of the function.
// Before:
const strict = false;
await tf.loadModel('http://server/model.json', strict);
// After:
await tf.loadLayersModel('http://server/model.json', {strict: false});
Rationale: Consolidate the growing list of optional arguments of the function.
const requestInit = {credentials: 'include'};
const fetchFunc = myCustomFetchFunc;
// Before:
await tf.loadModel(tf.io.browserHTTPRequest(
'http://server/model.json', requestInit, null, fetchFunc));
// After:
await tf.loadLayersModel(tf.io.browserHTTPRequest('http://server/model.json', {
requestInit,
fetchFunc,
onProgress: fraction => console.log(`Model loading is ${fraction * 100}% done`)
}));
// Also note the function name change and the new `onProgress` field.
Rationale: Reflect the exact type of model; avoid confusion with TensorFlow GraphDef-based models.
// Before:
console.log(myModel instanceof tf.Model);
// After:
console.log(myModel instanceof tf.LayersModel);
Rationale: Repeated get() calls are a common mistake that harms performance.
const x = tf.randomNormal([3, 3]);
// Before:
console.log(x.get(2, 1));
// After:
const nestedArray = await x.array();
console.log(nestedArray[2][1]);
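In plain-JS terms (no tfjs dependency), the tensor's data lives in one flat, row-major buffer, so a single bulk download via array() is far cheaper than fetching elements one at a time. This sketch shows the equivalence between nested indexing and flat stride arithmetic:

```javascript
// A flat, row-major backing buffer, like the one behind a 3x3 tensor.
const rows = 3, cols = 3;
const flatData = Float32Array.from({length: rows * cols}, (_, k) => k);

// What array() conceptually returns: one nested copy of all the data.
const nested = [];
for (let i = 0; i < rows; i++) {
  nested.push(Array.from(flatData.subarray(i * cols, (i + 1) * cols)));
}

console.log(nested[2][1]);           // 7
console.log(flatData[2 * cols + 1]); // 7 -- same element via flat indexing
```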
Rationale: Encourage async usage to reduce the likelihood of blocking UI.
const x = tf.randomNormal([3, 3]);
// Before:
const myBuffer = x.buffer();
// After:
const myBuffer = await x.buffer();
// or
const myBuffer = x.bufferSync();
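As a plain-JS sketch (not the tfjs implementation), the read/write view a buffer exposes is just a flat backing array plus stride arithmetic for get and set; makeBuffer below is a hypothetical helper:

```javascript
// Hypothetical 2-D buffer: flat storage, row-major get/set.
function makeBuffer(rows, cols) {
  const values = new Float32Array(rows * cols);
  return {
    get: (i, j) => values[i * cols + j],
    set: (v, i, j) => { values[i * cols + j] = v; },
  };
}

const buf = makeBuffer(3, 3);
buf.set(42, 2, 1);
console.log(buf.get(2, 1)); // 42
```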
Rationale: Make it clear that these functions don't work in Node.js.
// Before:
const imageTensor = tf.fromPixels(canvasElement1);
await tf.toPixels(imageTensor, canvasElement2);
// After:
const imageTensor = tf.browser.fromPixels(canvasElement1);
await tf.browser.toPixels(imageTensor, canvasElement2);
Rationale: To be consistent with TensorFlow (Python) and to use a more logical argument order.
// Before:
tf.batchNormalization(x, mean, variance, varianceEpsilon, scale, offset);
// After:
tf.batchNorm(x, mean, variance, offset, scale, varianceEpsilon);
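To make the new argument order (offset before scale) concrete, here is the per-element batch-normalization formula in plain JS; batchNormScalar is a hypothetical illustration, not a tfjs API:

```javascript
// y = (x - mean) / sqrt(variance + epsilon) * scale + offset
function batchNormScalar(x, mean, variance, offset, scale, epsilon) {
  return (x - mean) / Math.sqrt(variance + epsilon) * scale + offset;
}

// (3 - 1) / sqrt(4 + 0) * 2 + 0.5 = 2.5
console.log(batchNormScalar(3, 1, 4, 0.5, 2, 0)); // 2.5
```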
Rationale: Avoid unnecessary overhead of string creation.
// Before:
tf.util.assert(x.rank >= 3, `x is required to be 3D or higher, but is ${x.rank}D`);
// After:
tf.util.assert(x.rank >= 3, () => `x is required to be 3D or higher, but is ${x.rank}D`);
Rationale: Make it clear that the method is async; avoid confusion with the built-in forEach construct.
// Before:
await myDataset.forEach(item => console.log(item));
// After:
await myDataset.forEachAsync(item => console.log(item));
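The distinction matters because the built-in Array.prototype.forEach cannot await its callback. A plain-JS sketch of the awaiting behavior the renamed method provides:

```javascript
// Await each callback in turn, unlike the built-in forEach.
async function forEachAsync(items, fn) {
  for (const item of items) {
    await fn(item);
  }
}

(async () => {
  const seen = [];
  await forEachAsync([1, 2, 3], async item => { seen.push(item * 10); });
  console.log(seen.join(',')); // 10,20,30
})();
```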
tf.LayersModel.fitDataset() now expects each item to be an object with two fields, xs and ys, instead of an array of two elements.
Rationale: Make it explicit which items are the features and which are the labels (targets). Avoid surprising behavior during batching.
const model = tf.sequential({layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
const xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const ys = [[0], [1], [2]];
const xDataset = tf.data.array(xs).map(x => tf.tensor(x));
const yDataset = tf.data.array(ys).map(y => tf.tensor(y));
// Before:
const dataset = tf.data.zip([xDataset, yDataset]).batch(2);
await model.fitDataset(dataset, {epochs: 4});
// After:
const dataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(2);
// Note that the features are specified explicitly by the key 'xs' and
// the labels (targets) by the key 'ys'.
await model.fitDataset(dataset, {epochs: 4});
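Conceptually, named zipping makes each dataset item a {xs, ys} object, so batching can never mix up features and labels. A plain-JS sketch (zipNamed and batch are hypothetical stand-ins for the tf.data operations):

```javascript
// Pair each feature row with its label under explicit keys.
function zipNamed(xs, ys) {
  return xs.map((x, i) => ({xs: x, ys: ys[i]}));
}

// Group items into fixed-size batches.
function batch(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const items = zipNamed([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0], [1], [2]]);
const batches = batch(items, 2);
console.log(batches.length);      // 2 (a batch of 2 and a batch of 1)
console.log(batches[0][1].ys[0]); // 1 -- the label stays attached to its row
```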
Rationale: Consistency among all tfjs-vis functions.
// Before:
tfvis.render.linechart(dataToPlot, divElement, options);
tfvis.render.scatterplot(dataToPlot, divElement, options);
tfvis.render.barchart(dataToPlot, divElement, options);
tfvis.render.histogram(dataToPlot, divElement, options);
tfvis.render.heatmap(dataToPlot, divElement);
// After:
tfvis.render.linechart(divElement, dataToPlot, options);
tfvis.render.scatterplot(divElement, dataToPlot, options);
tfvis.render.barchart(divElement, dataToPlot, options);
tfvis.render.histogram(divElement, dataToPlot, options);
tfvis.render.heatmap(divElement, dataToPlot);
Rationale: Succinctness and clarity.
// Before:
tfvis.show.confusionMatrix(divElement, confusionMatrix, classNames);
// After:
tfvis.show.confusionMatrix(divElement, {
values: confusionMatrix,
tickLabels: classNames
});
Rationale: Avoid confusion between axis labels and tick labels.
// Before:
tfvis.show.confusionMatrix(divElement, {
values,
labels: ['apple', 'banana', 'orange']
});
tfvis.render.heatmap(divElement, {
values,
xLabels: ['x1', 'x2', 'x3'],
yLabels: ['y1', 'y2', 'y3']
});
// After:
tfvis.show.confusionMatrix(divElement, {
values,
tickLabels: ['apple', 'banana', 'orange']
});
tfvis.render.heatmap(divElement, {
values,
xTickLabels: ['x1', 'x2', 'x3'],
yTickLabels: ['y1', 'y2', 'y3']
});
Rationale: In v1.0.0, tf.data.generator() expects an actual JavaScript generator function. In 0.x, it took a plain function and hence was a misnomer.
let i = 0;
// Before:
const gen = tf.data.generator(() => ({value: ++i, done: i === 10}));
// After:
function* dataGenerator() {
  for (let i = 1; i < 10; i++) {
    yield i;
  }
}
const gen = tf.data.generator(dataGenerator);
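In plain-JS terms, a generator function returns an iterator whose next() produces exactly the {value, done} objects that the 0.x API made callers construct by hand:

```javascript
// A function* is called like a normal function but returns an iterator.
function* count() {
  yield 'a';
  yield 'b';
}

const it = count();
console.log(JSON.stringify(it.next())); // {"value":"a","done":false}
console.log(JSON.stringify(it.next())); // {"value":"b","done":false}
console.log(JSON.stringify(it.next())); // {"done":true}
```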
Hi, I'm currently on tensorflow 2.0 alpha (python) and tensorflow.js 1.0.0.
I just used the tensorflowjs_converter.
I have 3 shard1of3, 1of2 and 3of3.bin.
they've all loaded in the browser. I checked the element inspector. They're all at 200 and "OK". Also recognized as octet stream.
I'm using tf.loadLayersModel but nothing happens. the model json and the bin files have loaded. even the settings file. Is there any other file i should be waiting for to guarantee the async await function when loading in tfjs does not fail? thanks.