An attempt to discuss a question that came up in a conversation on Twitter: does the tendency towards schemaless data transfer make type systems less relevant?
Short answer: No. Data transfer, data serialization, schemas, and type systems are all independent of each other.
Given a datastore with no schema, what can you know about the shape and meaning of the data stored in it? Barely anything. You may know that valid values can be represented using some particular serialization format, such as JSON. Maybe you even know that values can only be JSON objects, as opposed to arrays or primitive values. Those are constraints that a system may apply to data. (In my tweet I called that a "meta-schema", but I realize that's not actually a fitting term.) Such constraints allow for concrete expectations about the shape of the data that can possibly be stored in and retrieved from the system.
What makes such constraints not a schema is that a schemaless system has no way to partition the space of all possible values into multiple separate groups. There is only one group of values. Despite that limitation of the system itself, it is possible to split the value space into separate value spaces outside of it. For example, you could expect every item to contain, at a certain location, a key-value pair that indicates the item's type. And then there would be some code somewhere that knows how to construct valid values of that type and knows how to read the necessary pieces of data from the representation the schemaless system provides. After going through such a stage, you wouldn't be able to distinguish "late-matched" schemaless data from "early-matched" schema-conforming data.
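To make that concrete, here is a minimal sketch of such late matching in Haskell. Everything in it is made up for illustration: the flat `Item` representation, the convention of a `"type"` entry, and the `User` and `Post` cases.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- Hypothetical flat representation of one item from the schemaless store.
type Item = Map String String

-- The separate value spaces we carve out of the single space of items.
data Parsed = User { userName :: String }
            | Post { postTitle :: String }

-- Dispatch on the conventional "type" entry; anything unexpected fails.
parseItem :: Item -> Maybe Parsed
parseItem item = do
  tag <- Map.lookup "type" item
  case tag of
    "user" -> User <$> Map.lookup "name" item
    "post" -> Post <$> Map.lookup "title" item
    _      -> Nothing
```

Until `parseItem` runs, a user item and a post item are indistinguishable to the system; afterwards, the type system can tell them apart.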
Saying that data is schemaless really only states the absence of a schema checker in some particular part of a system that deals with data, such as a database or an API. At some point, though, schema checking has to happen for any data to become meaningful. So you can think of a schemaless system as a system that delays schema checking, or as a system with implicitly externalized schema checking. That's why I said in my tweet that "it's not a simple way of looking at data": the lack of a schema makes all items maximally polymorphic and thus opaque.
JSON, a popular format for schemaless datastores and interfaces, can be expressed as a sum type.
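Here is a sketch of such a sum type in Haskell; the `Value` type of the aeson library is defined along essentially the same lines:

```haskell
import Data.Map  (Map)
import Data.Text (Text)

-- One constructor per JSON case; Array and Object recurse,
-- so values can nest to any depth.
data JsonValue
  = JsonNull
  | JsonBool   Bool
  | JsonNumber Double  -- Double approximates JSON's number grammar
  | JsonString Text
  | JsonArray  [JsonValue]
  | JsonObject (Map Text JsonValue)
```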
That type definition allows for arbitrarily big and deeply nested JSON values. It also makes it impossible to construct values that are not valid JSON.
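For illustration, here is a value of that type for a small, made-up document:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Map as Map

-- {"name": "Ada", "tags": ["admin"]}
example :: JsonValue
example = JsonObject (Map.fromList
  [ ("name", JsonString "Ada")
  , ("tags", JsonArray [JsonString "admin"])
  ])
```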
Schemalessness and static types are not a contradiction.
Instantiating a value that represents a JSON document is only half of what usually needs to happen when processing data from a schemaless source. The next step is to map such values to values that express the meaning of the data more plainly and make it possible to apply functions to it. A generic JSON value could, for example, be turned into a value that represents a user profile, a blog post, or a page view, and that has the corresponding static type.
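A sketch of that second step, reusing the `JsonValue` type from above; the `UserProfile` shape and its field names are invented for the example:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Map  as Map
import           Data.Text (Text)

-- The domain type we actually want to compute with.
data UserProfile = UserProfile
  { profileName  :: Text
  , profileEmail :: Text
  }

-- Map a generic JSON value to a UserProfile, failing on any shape mismatch.
toUserProfile :: JsonValue -> Maybe UserProfile
toUserProfile (JsonObject o) =
  UserProfile <$> textField "name" o <*> textField "email" o
  where
    textField key obj = case Map.lookup key obj of
      Just (JsonString t) -> Just t
      _                   -> Nothing
toUserProfile _ = Nothing
```

Any value whose shape doesn't match simply yields `Nothing`, which is exactly the delayed schema check described earlier.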
Whether or not some part of a system enforces a schema has little influence on how easy or useful it is to make use of a type system.