import { GraphbackDataProvider, GraphbackPlugin } from "graphback"
import { DocumentNode, GraphQLSchema } from 'graphql'
import { IResolvers, PubSubEngine } from 'apollo-server-express'
import { PgKnexDBDataProvider } from '@graphback/runtime-knex'
import Knex from 'knex'

interface DataProviderModelMap {
  [modelName: string]: GraphbackDataProvider
}

/**
 * Graphback config that is defined by the user in their API code.
 *
 * modelDefs:
 *   The input model that Graphback uses to create a GraphQL schema and resolvers.
 * schema:
 *   If provided, the schema will be used instead of modelDefs,
 *   since a schema can carry more resolvers and other information.
 * resolvers:
 *   Supply your own resolvers and we will merge them with the Graphback resolvers.
 * dataProviders:
 *   Provide your data providers as a mapping keyed by model name,
 *   or provide a single one and it will be used for all models.
 * pubSub:
 *   Provide your PubSub engine. Optional, since users may not use subscriptions.
 * schemaPlugins:
 *   An array of custom schema plugins,
 *   executed in the order they appear in the array.
 */
interface GraphbackConfig {
  modelDefs?: DocumentNode | DocumentNode[] | string | string[]
  schema?: GraphQLSchema
  resolvers?: IResolvers | IResolvers[]
  dataProviders: DataProviderModelMap | GraphbackDataProvider
  pubSub?: PubSubEngine
  schemaPlugins?: GraphbackPlugin[]
}

function makeGraphback(config: GraphbackConfig) {
  // do some work and return generated stuff
  return { typeDefs: '', resolvers: {}, dataSources: {} }
}

const db = Knex({ client: 'pg' })
const postgresProvider = new PgKnexDBDataProvider(db)

const { typeDefs, resolvers, dataSources } = makeGraphback({
  modelDefs: `
    """
    @model
    """
    type Note {
      id: ID!
      title: String
    }
  `,
  dataProviders: postgresProvider
})
The client-side generator uses this, but you can also get the schema from a running service. That is the first conclusion.
On this. Will client document generation be moved out of Graphback? As an option, the template could implement writing the runtime schema to a file.
Fully agree.
That is separate discussion. We might still keep createResources on limited scope.
Worth adding: I do not particularly like makeGraphback
because it does not clearly state what we are making, and it accepts tons of objects/arrays etc. that developers might not fully understand.
However, there is also confusion around database-specific methods, since people can connect some of the objects to a different database or a REST API.
I see this as a bit of a tradeoff:
const resolversBuilder = new MongoResolvers(...);
resolversBuilder.validate();
resolversBuilder.setService("User", userService)
resolversBuilder.setDataSource("Task", taskRestDs)
resolversBuilder.getSchemaString()
resolversBuilder.getResolvers();
This seems a little confusing.
- Would users explicitly have to set the service and data source for each model? Or would there be some kind of default set in the constructor that you would then override with setters?
- What are userService and taskRestDs here? Are these instantiated classes?
Would users explicitly have to set the service and data source for each model? Or would there be some kind of default set in the constructor that you would then override with setters?
No. The service currently contains the data source, so passing a service means passing its data source.
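To make that point concrete, here is a minimal, self-contained sketch (all class and method names here are hypothetical, not real Graphback exports) of a service that owns its data provider, so wiring the service into a resolver builder implicitly wires in its data source:

```typescript
// Hypothetical sketch: none of these names are real Graphback exports.
interface DataProvider {
  findAll(model: string): Promise<object[]>
}

// A stand-in provider backed by an in-memory store.
class InMemoryProvider implements DataProvider {
  private store: Record<string, object[]> = {
    Note: [{ id: '1', title: 'hello' }]
  }
  async findAll(model: string): Promise<object[]> {
    return this.store[model] ?? []
  }
}

// The service wraps the provider, so passing a service to a
// resolver builder also passes its data source.
class CrudService {
  constructor(private provider: DataProvider) {}
  findAll(model: string): Promise<object[]> {
    return this.provider.findAll(model)
  }
}

const service = new CrudService(new InMemoryProvider())
```

In this shape, setService("User", userService) alone is enough; no separate setDataSource call is needed unless a model deviates from the default.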
Technically, we specify our main data source by choosing Mongo or Postgres (Knex), then can override specific elements with different data sources if needed. However, for use cases where there is no Mongo or Postgres, this means we would need to override every possible object with our own service. Do we see cases like this? What does Graphback offer developers in these cases?
const resolversBuilder = new MongoResolvers(...);
How would you see these resolvers being added to Graphback?
Something like:
const resolversBuilder = new MongoResolvers(...);
...
...
const apolloServer = new ApolloServer({
typeDefs: resolversBuilder.getSchema(),
resolvers: resolversBuilder.getResolvers()
});
?
I typically see the schema still being in the project as a file. Why? Because you need to see it to add your own resolvers. Then you merge the Graphback-generated resolvers with your own resolvers and scalars and call it a day.
const resolversBuilder = new MongoResolvers(...);
...
...
const apolloServer = new ApolloServer({
typeDefs: schemaString,
resolvers: [resolversBuilder.getResolvers(), resolversBuilder.getScalars(), myresolvers]
});
I typically see the schema still being in the project as a file. Why? Because you need to see it to add your own resolvers.
So we would still need two steps in this case?
- Run Graphback to generate the schema file.
- Run the server and load the schema string from the file into GraphbackResolvers to build the resolver context.
This might still be achievable without the need for a file, if MongoResolvers actually wraps the full runtime implementation and gives you everything:
const graphbackBuilder = new GraphbackResolvers({model, crudService, dbProvider});
graphbackBuilder.setDataSource('Note', customDbProvider);
const { schemaSDL, resolvers } = graphbackBuilder.build();
// 1. Execute schema plugins
// 2. Build resolvers with data sources from the builder
// 3. Output the built schema with resolvers
const apolloServer = new ApolloServer({
typeDefs: schemaSDL,
resolvers
});
We will give developers much more than just a black-box schema and resolvers. They can add these to their project, etc.
Plus we actually give developers starter templates, which others do not provide.
Our templates can ignore whether the schema is custom or generated, but people could still generate a schema and work with it.
If we aim to provide a single-method API, let's:
- Kill templates
- Kill services and data sources
- Have two methods, createMongo and createPostgres
But that is not going to work for us, as we need more integrations that will sometimes exclude each other.
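A sketch of what that single-method-per-database API would look like (createMongo and createPostgres are illustrative names from this discussion, not real Graphback exports, and the bodies are stubs):

```typescript
// Illustrative only: these are names from the discussion above,
// not real Graphback exports.
interface GraphbackArtifacts {
  typeDefs: string
  resolvers: Record<string, object>
}

function createPostgres(modelDefs: string): GraphbackArtifacts {
  // a real implementation would wire Knex-backed resolvers here
  return { typeDefs: modelDefs, resolvers: { Query: {} } }
}

function createMongo(modelDefs: string): GraphbackArtifacts {
  // a real implementation would wire MongoDB-backed resolvers here
  return { typeDefs: modelDefs, resolvers: { Query: {} } }
}

// One call per database keeps getting started short, but every new
// integration needs yet another top-level function, and mutually
// exclusive integrations cannot share a single entry point.
const { typeDefs, resolvers } = createPostgres(`
  type Note {
    id: ID!
    title: String
  }
`)
```

The simplicity is real, but so is the scaling problem: each new data source multiplies the number of entry points.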
https://graphql-mesh.com is another good example of how those guys solved a similar problem; worth checking their API.
Fair points
graphql-mesh.com is another good example of how those guys solved a similar problem; worth checking their API.
Their docs are incomplete, I only see YAML configs.
We will give developers much more than just a black-box schema and resolvers. They can add these to their project, etc.
True, but developers want simplicity and should be able to start with a minimal config API, adding more if they need it.
Plus we actually give developers starter templates, which others do not provide.
The fewer starter templates we provide, the better in a way. Shouldn't starter templates focus on the non-Graphback elements of the application, like Apollo versus GraphQL.js, Mongo versus Postgres, and how developers organise their architecture (we could have a GraphQL Modules template)?
If we need a template for every very specific difference in the Graphback API, it might mean the API is too difficult to make simple changes to without having to change it all.
We could combine both approaches, where the resolver building would still be abstracted into the API:
interface GraphbackConfig {
schema: GraphQLSchema
resolvers?: ResolverBuilder | ResolverBuilder[]
pubSub?: PubSubEngine
schemaPlugins?: GraphbackPlugin[]
}
import { buildSchema } from 'graphql'

const modelSchema = buildSchema(`
"""
@model
"""
type Note {
id: ID!
title: String
}
`)
const resolverBuilder = new PostgresResolverBuilder(modelSchema, {})
// very minimal Graphback configuration
const { schema, resolvers, dataSources } = makeGraphback({
schema: modelSchema,
resolvers: [resolverBuilder] // accepts multiple resolver builders and merges them after doing the building, making it easy to mix data sources
})
I have put some thought into this and am putting some info here.
Database-driven API
Runtime interfaces should be different for the databases we support.
Each database will have a different getting-started flow, as you usually work with a single database anyway.
This will allow us to build a nice API, and to add features specific to Mongo or Postgres where those databases support them.
It will also improve our docs.
Focus on resolvers
We need to focus on "resolvers" as the name. "Backend" is kind of confusing; we are building a resolver layer over the data.
Utilizing the builder pattern for the API
This way we can add practically any interesting feature in the future to a single object with multiple methods.
It will be easy to track our API, and there will be no issues with different input formats, as the builders will incorporate that already.
Having these two gives us ultimate flexibility with simplicity (assuming we start with something easy).
We can also support multiple use cases.
For example the simplest API usage will look as follows:
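The thread ends here. As a sketch only, following the database-driven, builder-pattern proposal above (the PostgresResolverBuilder name and its methods are hypothetical, and the bodies are stubs), that simplest usage might look like:

```typescript
// Hypothetical builder, per the database-driven, builder-pattern proposal.
class PostgresResolverBuilder {
  private dataSources: Record<string, object> = {}
  constructor(private modelDefs: string) {}

  // optional per-model override, returning `this` to allow chaining
  setDataSource(model: string, ds: object): this {
    this.dataSources[model] = ds
    return this
  }

  // assemble the schema string and resolver map in one step;
  // a real implementation would run schema plugins and build CRUD resolvers
  build(): { schemaSDL: string; resolvers: Record<string, object> } {
    return { schemaSDL: this.modelDefs, resolvers: { Query: {} } }
  }
}

// the simplest possible usage: one builder, one build() call
const { schemaSDL, resolvers } = new PostgresResolverBuilder(`
  """
  @model
  """
  type Note {
    id: ID!
    title: String
  }
`).build()
```

The happy path stays at two lines, while setDataSource and future builder methods remain available for anyone who needs to deviate from the defaults.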