I'm stepping through the execution of featuresAt in Mapbox GL JS. This is to take notes and understand how it all works.
Initiating the feature search based on the example in the Mapbox GL JS API docs...
map.on('click', function(e) {
    map.featuresAt(e.point, {radius: 5}, function(err, features) {
        if (err) throw err;
        document.getElementById('features').innerHTML = JSON.stringify(features, null, 2);
    });
});
Stepping into map.featuresAt...
featuresAt: function(point, params, callback) {
    var coord = this.transform.pointCoordinate(Point.convert(point));
    this.style.featuresAt(coord, params, callback);
    return this;
},
What exactly is transform.pointCoordinate doing?
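My working mental model (a sketch of my own, not the library code) is that pointCoordinate un-projects the clicked screen pixel into the map's tile coordinate space, producing a Coordinate with column, row, and zoom. Ignoring rotation and pitch, the same kind of Coordinate could be computed from a longitude/latitude with standard Web Mercator math:
// Conceptual stand-in, not the actual transform.pointCoordinate: compute the
// Web Mercator position of a location, expressed in tile units at `zoom`.
function lngLatToCoordinate(lng, lat, zoom) {
    var scale = Math.pow(2, zoom);
    var latRad = lat * Math.PI / 180;
    return {
        column: (lng + 180) / 360 * scale,
        row: (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * scale,
        zoom: zoom
    };
}
The real pointCoordinate starts from a screen pixel rather than a longitude/latitude, so it also has to account for the current center, bearing, and pitch of the map.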
Stepping into this.style.featuresAt...
featuresAt: function(coord, params, callback) {
    var features = [];
    var error = null;

    if (params.layer) {
        params.layer = { id: params.layer };
    }

    util.asyncEach(Object.keys(this.sources), function(id, callback) {
        var source = this.sources[id];
        source.featuresAt(coord, params, function(err, result) {
            if (result) features = features.concat(result);
            if (err) error = err;
            callback();
        });
    }.bind(this),
    // asyncEach calls this function when done
    function() {
        if (error) return callback(error);

        features.forEach(function(feature) {
            feature.layer = this._layers[feature.layer].json();
        }.bind(this));

        callback(null, features);
    }.bind(this));
}
Ok, it looks like the style object keeps track of all of the sources. It loops through the sources using a helper function, asyncEach. asyncEach calls an asynchronous function on every element of an array and invokes its last argument, a callback, once all of those calls have completed.
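A minimal sketch of such a helper, based on that description (my own simplification, not the actual util.asyncEach source):
// Run `fn` on every item; call `done` once every per-item callback has fired.
function asyncEach(array, fn, done) {
    var remaining = array.length;
    if (remaining === 0) return done();
    array.forEach(function(item) {
        fn(item, function() {
            if (--remaining === 0) done();
        });
    });
}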
Stepping into source.featuresAt...
This brings us to a differently named function, _vectorFeaturesAt.
exports._vectorFeaturesAt = function(coord, params, callback) {
    if (!this._pyramid)
        return callback(null, []);

    var result = this._pyramid.tileAt(coord);
    if (!result)
        return callback(null, []);

    this.dispatcher.send('query features', {
        uid: result.tile.uid,
        x: result.x,
        y: result.y,
        scale: result.scale,
        source: this.id,
        params: params
    }, callback, result.tile.workerID);
}
We are passing the coord into a tileAt function. The tile I am focusing on is in West Sacramento, 14/2659/6286. tileAt returns a result object that has a scale, a tile, and an x,y value. The x,y returned by tileAt is the clicked position expressed in the coordinate space of the vector tile itself. In Mapbox GL, the coordinate space of a vector tile is hard-coded to be 4096 x 4096.
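To make that 4096 x 4096 space concrete, here is the conversion I believe is happening (my own sketch under that assumption, not the actual tileAt code): rescale the coordinate to the clicked tile's zoom, subtract the tile's x/y, and multiply the fractional remainder by the tile extent.
// My sketch of the idea, not the library's tileAt: express a Coordinate in a
// specific tile's 4096 x 4096 coordinate space.
function positionInTile(coord, tileZ, tileX, tileY) {
    var k = Math.pow(2, tileZ - coord.zoom);  // rescale to the tile's zoom level
    return {
        x: (coord.column * k - tileX) * 4096,
        y: (coord.row * k - tileY) * 4096
    };
}
For a point that actually falls inside the tile, x and y should come out between 0 and 4096.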
Could you explain how TilePyramid works, particularly tileAt? My read of the word "pyramid" is that this is where loaded tiles are tracked, and that the quadtree of tiles in the Google Z/X/Y tile scheme can be thought of as a pyramid.
The tile object itself has many things in it, and it is probably the core object holding the correctly formed vectors the renderer uses to draw. This object also does not have a z/x/y value in it that matches the tile we clicked on.
We are sending a task to the dispatcher, which will run our RTree query in a separate thread. Notice that one of the parameters is an ID used to identify the specific worker thread delegated to a given tile.
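My understanding of that machinery (a simplified sketch of the pattern, not the actual Dispatcher code) is that it is a thin wrapper over postMessage that remembers which callback goes with which outstanding task:
// Simplified sketch of the pattern, not the real Dispatcher: post a named
// task to a worker and stash the callback so the worker's reply can be
// routed back to it later.
function Dispatcher(workers) {
    this.workers = workers;   // array of Web Workers
    this.callbacks = {};
    this.nextId = 0;
}

Dispatcher.prototype.send = function(type, data, callback, workerID) {
    var id = this.nextId++;
    this.callbacks[id] = callback;
    this.workers[workerID || 0].postMessage({ type: type, id: id, data: data });
};
When the worker posts its result back, the message would carry the same id so the stored callback can be invoked with the worker's (err, result).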
The actual query takes place in a function called query features in worker.js.
'query features': function(params, callback) {
    var tile = this.loaded[params.source] && this.loaded[params.source][params.uid];
    if (tile) {
        tile.featureTree.query(params, callback);
    } else {
        callback(null, []);
    }
}
Here is where we actually query the RTree.
FeatureTree.prototype.query = function(args, callback) {
    if (this.toBeInserted.length) this._load();

    var params = args.params || {},
        radius = (params.radius || 0) * 4096 / args.scale,
        x = args.x,
        y = args.y,
        result = [];

    var matching = this.rtree.search([ x - radius, y - radius, x + radius, y + radius ]);
    for (var i = 0; i < matching.length; i++) {
        var feature = matching[i].feature,
            layers = matching[i].layers,
            type = vt.VectorTileFeature.types[feature.type];

        if (params.$type && type !== params.$type)
            continue;
        if (!geometryContainsPoint(feature.loadGeometry(), type, new Point(x, y), radius))
            continue;

        var geoJSON = feature.toGeoJSON(this.coord.x, this.coord.y, this.coord.z);

        for (var l = 0; l < layers.length; l++) {
            var layer = layers[l];

            if (params.layer && layer !== params.layer.id)
                continue;

            result.push(util.extend({layer: layer}, geoJSON));
        }
    }
    callback(null, result);
};
We hit the RTree with a search based on an envelope around the buffered click point. If we get matches, we then have to check whether the click point buffer actually intersects each matching geometry.
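To make the envelope concrete, here is the arithmetic from the query above with made-up numbers (treating scale as the factor relating the 4096-unit tile extent to the radius units, which is my reading of the code):
// Worked example using the formula from FeatureTree.query above; the
// specific numbers are hypothetical.
var params = { radius: 5 };                    // pixels, from the click handler
var args = { scale: 512, x: 2000, y: 3000 };   // made-up values for illustration

var radius = (params.radius || 0) * 4096 / args.scale;  // 5 * 4096 / 512 = 40 tile units
var bbox = [args.x - radius, args.y - radius, args.x + radius, args.y + radius];
// => [1960, 2960, 2040, 3040]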
It looks like the query can take a parameter specifying a search for a specific geometry type; that is what (params.$type && type !== params.$type) is for. We can also pass a parameter to specify which layer we want to query. All of this is optional.
We then convert this vector tile feature into GeoJSON - providing an object in WGS84 for us to use.
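Based on the result.push(util.extend({layer: layer}, geoJSON)) line, plus the layer expansion back in Style#featuresAt, I would expect each entry handed to the original callback to look roughly like this (all values are invented for illustration):
// Hypothetical shape of one entry in the `features` array; the property
// values here are made up.
var feature = {
    layer: { id: 'road', type: 'line', source: 'mapbox' },  // full style layer JSON
    type: 'Feature',
    geometry: { type: 'LineString', coordinates: [[-121.55, 38.58], [-121.54, 38.58]] },
    properties: { name: 'Example Street' }
};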
One obvious use case of featuresAt would be to select and highlight a given map feature and provide the user with contextual information. Now that we have the desired vector tile feature, how do we extend this to tell the renderer to style this specific feature differently for the user?
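One approach I can imagine (a sketch of my own, not something from this walkthrough, and assuming the version in use exposes map.addLayer and map.setFilter): keep a dedicated highlight layer whose filter matches nothing, then point its filter at the clicked feature.
// Sketch of one possible highlight approach; the layer/source names and the
// 'name' property are hypothetical, and map.addLayer / map.setFilter are
// assumed to be available.
map.addLayer({
    id: 'highlight',
    type: 'line',
    source: 'mapbox',
    'source-layer': 'road',
    paint: { 'line-color': '#ff0000', 'line-width': 4 },
    filter: ['==', 'name', '']           // matches nothing useful initially
});

map.on('click', function(e) {
    map.featuresAt(e.point, { radius: 5 }, function(err, features) {
        if (err || !features.length) return;
        map.setFilter('highlight', ['==', 'name', features[0].properties.name]);
    });
});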
I like that we are getting GeoJSON of the vector tile features; however, isn't this going to be simplified, non-original vector data? How would this be useful?