@craicoverflow
Last active May 26, 2020 14:53
Pseudo-prototype of new runtime API
import { GraphbackDataProvider, GraphbackPlugin } from "graphback"
import { DocumentNode, GraphQLSchema } from 'graphql'
import { IResolvers, PubSubEngine } from 'apollo-server-express'
import { PgKnexDBDataProvider } from '@graphback/runtime-knex'
import Knex from 'knex'
interface DataProviderModelMap {
  [modelName: string]: GraphbackDataProvider
}

/**
 * Graphback config that is defined by the user in their API code.
 *
 * modelDefs:
 * The input model that Graphback uses to create a GraphQL schema and resolvers
 * schema:
 * The schema, if provided, will be used instead of modelDefs,
 * because a schema can carry more resolvers and other information
 * resolvers:
 * Supply your own resolvers and we will merge them with the Graphback resolvers
 * dataProviders:
 * Provide your data providers with a mapping to your model name,
 * or provide a single one and it will be used for all models
 * pubSub:
 * Provide your PubSub. This is optional since users may not use subscriptions
 * schemaPlugins:
 * Users can provide an array of custom schema plugins.
 * They will be executed in the order they appear in the array
 */
interface GraphbackConfig {
  modelDefs?: DocumentNode | DocumentNode[] | string | string[]
  schema?: GraphQLSchema
  resolvers?: IResolvers | IResolvers[]
  dataProviders: DataProviderModelMap | GraphbackDataProvider
  pubSub?: PubSubEngine
  schemaPlugins?: GraphbackPlugin[]
}

function makeGraphback(config: GraphbackConfig) {
  // do some work and return generated stuff
  return { typeDefs: '', resolvers: {}, dataSources: {} }
}

// Knex.Config is a type, not a factory: initialize Knex with a connection config
const db = Knex({ client: 'pg', connection: { /* connection options */ } })
const postgresProvider = new PgKnexDBDataProvider(db)

const { typeDefs, resolvers, dataSources } = makeGraphback({
  modelDefs: `
  """
  @model
  """
  type Note {
    id: ID!
    title: String
  }
  `,
  dataProviders: postgresProvider
})
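Since dataProviders accepts either a single provider or a model-name map, the runtime has to disambiguate that union at startup. A minimal, self-contained sketch of how the lookup could work (an assumption for illustration, not the actual Graphback implementation; the `Sketch` types are stand-ins):

```typescript
// Minimal sketch (assumption, not the actual Graphback code) of how the
// dataProviders union could be disambiguated: a lone provider exposes the
// data-access methods directly, while a map is keyed by model name.
interface DataProviderSketch {
  findAll(): Promise<unknown[]>
}
type ProviderMapSketch = { [modelName: string]: DataProviderSketch }

function resolveProvider(
  providers: ProviderMapSketch | DataProviderSketch,
  modelName: string
): DataProviderSketch | undefined {
  // A single provider carries the methods itself and applies to every model
  if (typeof (providers as DataProviderSketch).findAll === "function") {
    return providers as DataProviderSketch
  }
  // Otherwise treat the value as a per-model map
  return (providers as ProviderMapSketch)[modelName]
}
```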
@craicoverflow
Author

@wtrocki here is a high-level overview of how I think the Graphback API entry should look. What happens under the hood is secondary for now.

@craicoverflow
Author

DataProviderModelMap

I really liked the idea of being able to provide an instantiated data provider to Graphback and have it assemble everything. This is easy when you only pass one and assume all models use it, but more difficult when allowing a map between various model types. Also, if someone wanted their own data provider with a different constructor signature, this approach would not work.

We would probably need to remove baseType from the ctor though; this could be done by Graphback in a secondary setModel function or something. But in fact we should not pass the GraphQLObject at all.
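The setModel idea could look roughly like this (a hypothetical sketch; the class and member names are illustrative, not real Graphback API):

```typescript
// Hypothetical sketch of the "setModel" idea: the provider constructor takes
// only connection config, and Graphback attaches the model in a second step
// instead of receiving a GraphQLObject in the ctor.
interface ModelInfo {
  name: string
}

class DetachedDataProvider {
  private model?: ModelInfo

  constructor(private readonly config: Record<string, unknown>) {}

  // Graphback would call this after construction
  setModel(model: ModelInfo): void {
    this.model = model
  }

  modelName(): string | undefined {
    return this.model ? this.model.name : undefined
  }
}
```

This keeps the constructor signature free for user-defined providers while still letting Graphback wire the model in later.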

@wtrocki

wtrocki commented May 21, 2020

Reviewing now!

@craicoverflow
Author

I will move this to an issue tomorrow

@craicoverflow
Author

For some reason I thought IResolvers was from graphql not apollo, so that will need to change.

@wtrocki

wtrocki commented May 21, 2020

I would love to chat about this as I see a couple of issues in this API that we can easily fix.

As some nice reading, would you consider checking the API from:

@craicoverflow
Author

Will do, we can have a call tomorrow then

@craicoverflow
Author

Just a question: as these are all ORMs, what specifically is the problem with the current API? Something with the data provider? It will help put my reading into context.

@wtrocki

wtrocki commented May 22, 2020

Our API needs to cater to different scenarios:

  • Someone wants to get out-of-the-box support for a specific database with defaults.
    The current API is kinda good at this as we provide helpers etc.

  • Someone wants to make a change to the Service or DataLayer.
    Very hard in the current scenario.

  • Supporting different use cases (reading the model from a file / getting access to the compiled schema).

  modelDefs?: DocumentNode | DocumentNode[] | string | string[]
  schema?: GraphQLSchema

This is a sign of a badly designed API. Model definitions can actually be a full schema; there is no need to bring that in.
However there is a challenge: if we provide model definitions only, we lose the ability to access dynamically added fields.
This makes the entire implementation really hard (relationships code, I'm here to kill you).
I think we need to agree first on whether we kill the schema as a file (generation of the schema).
Passing the full schema would resolve some issues and simplify our code.

With that in mind the entire ./model folder makes less sense, as you will be able to just use your own schema, add some annotations, etc.
However this generates a challenge: the generated schema is loosely connected with the runtime resolvers.

@wtrocki

wtrocki commented May 22, 2020

Mapping

We now have a data provider per type that uses information from the schema.
If we provide a helper where you can pass the entire schema to Mongo or Postgres, it will return an entire object containing the data facade.

A ServiceFacade can be another helper that wraps the DataFacades.

On top of that, users will be able to use helper methods on both the Data and Service facades to swap some elements or even pass them along with the schema (as you have in the example above).

In this approach, by seeing the runtime API as actually 3 different apps for 3 different use cases, we will be able to achieve the following features:

  • Simplicity: create a runtime for a single database using defaults, with mapping, in just 2 lines of code
  • Ability to connect different datasources (even user ones) to the Service and CRUD layer.

@wtrocki

wtrocki commented May 22, 2020

Challenges

Generally, there could be more required in the future than just:

  pubSub?: PubSubEngine
  schemaPlugins?: GraphbackPlugin[]

That is why we will probably not be able to avoid multiple helpers for the single-line approach, each accepting different interfaces.
For example, createOffixRuntime would have different options for diffing strategies etc.

@wtrocki

wtrocki commented May 22, 2020

const { typeDefs, resolvers, dataSources } = makeGraphback

I'm not 100% sure that using a single makeGraphback will work. After adding AuthService etc. we need to be able to control the ServiceLayer and DataLayer independently. That is why researching other APIs was a kinda cool exercise to see how simple they are.

@craicoverflow
Author

I'm not 100% sure that using a single makeGraphback will work. After adding AuthService etc. we need to be able to control the ServiceLayer and DataLayer independently.

Aren't we going to remove CRUD service?

@craicoverflow
Author

craicoverflow commented May 22, 2020

Our API needs to cater to different scenarios:

  • Someone wants to get out-of-the-box support for a specific database with defaults.
    The current API is kinda good at this as we provide helpers etc.

This works well when you assume all models should use the same data provider. To keep it as helpers, it could look something like:

const pgProviders = createKnexPGProviders(schema, dbConfig, optionalArrayOfModelsThisShouldMapTo);
const mongoProviders = createMongoProviders(schema, mongoConfig, optionalArrayOfModelsThisShouldMapTo);
const NoteProvider = new MyCustomDataProvider(someConfig);

const dataSources = {
  ...pgProviders,
  ...mongoProviders,
  Note: NoteProvider
}
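A generic sketch of what a helper like the hypothetical createKnexPGProviders above might do internally — build one provider per model name, optionally restricted to a subset (the names mirror the snippet and are assumptions, not real Graphback API):

```typescript
// Generic sketch of a createXProviders-style helper: build one provider per
// model name, optionally filtered to a subset of models (mirroring the
// hypothetical optionalArrayOfModelsThisShouldMapTo parameter above).
interface ProviderSketch {
  model: string
}

function createProviders(
  allModels: string[],
  makeProvider: (model: string) => ProviderSketch,
  only?: string[]
): { [model: string]: ProviderSketch } {
  const selected = only ? allModels.filter(m => only.includes(m)) : allModels
  const map: { [model: string]: ProviderSketch } = {}
  for (const model of selected) {
    map[model] = makeProvider(model)
  }
  return map
}
```

Because the helper returns a plain map keyed by model name, several such maps can be spread together and individual entries overridden, exactly as in the dataSources example above.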
  • Someone wants to make a change to the Service or DataLayer.
    Very hard in the current scenario.

In the current API, yes, it is very hard because of the dependency on helpers. In my opinion the data provider should not take the GraphQLObject type, because this automatically makes it harder to create one without a helper.

  • Supporting different use cases (reading the model from a file / getting access to the compiled schema).

  modelDefs?: DocumentNode | DocumentNode[] | string | string[]
  schema?: GraphQLSchema

This is a sign of a badly designed API. Model definitions can actually be a full schema; there is no need to bring that in.
However there is a challenge: if we provide model definitions only, we lose the ability to access dynamically added fields.
This makes the entire implementation really hard (relationships code, I'm here to kill you).

Model definitions here are basically a schema string or multiple schema strings, which Graphback would build into a single schema object. We could change this to:

schema: GraphQLSchema | DocumentNode | string

which might be an easier API, with the same result.

I think we need to agree first on whether we kill the schema as a file (generation of the schema).

There is no real reason to generate a schema file if we output the typeDefs; the end user can write them to a file themselves if required.

With that in mind the entire ./model folder makes less sense, as you will be able to just use your own schema, add some annotations, etc.

Agreed

However this generates a challenge: the generated schema is loosely connected with the runtime resolvers.

@wtrocki

wtrocki commented May 22, 2020

There is no real reason to generate a schema file.

The client-side generator is using this, but you can get the schema from a running service as well.
That is the first conclusion.
This will literally kill the Graphback CLI - generate will be pointless if it is only going to be used for the client.

In the current API, yes, it is very hard because of the dependency on helpers. In my opinion the data provider should not take the GraphQLObject type, because this automatically makes it harder to create one without a helper.

Yes. This is clear to me, but when I tried to address this I ended up removing tons of stuff and breaking a lot of things.

@wtrocki

wtrocki commented May 26, 2020

I have put some thought into this and am putting some info here.

Database driven API

Runtime interfaces should be different for each of the databases we support.
Each database will have a different getting-started experience, as you usually work with a single database anyway.

This will allow us to get a nice API, with some features specifically for Mongo or Postgres that are particular to those databases, etc.
It will also improve our docs.

Focus on resolvers

We need to focus on "resolvers" as the name. "Backend" is kinda confusing - we are building a resolvers layer with the data etc.

Utilizing builder pattern for API

const resolversBuilder = new MongoResolvers(...);
resolversBuilder.validate();

resolversBuilder.setService("User", userService)
resolversBuilder.setDataSource("Task", taskRestDs)

resolversBuilder.getSchemaString()
resolversBuilder.build().getResolvers();

This way we can add practically any interesting feature in the future to a single object with multiple methods.
It will be easy to track our API and there will be no issues with different formats, as the builders will incorporate that already.

Having these two gives us ultimate flexibility with simplicity (assuming we start with something easy).
We can also support multiple use cases.

For example the simplest API usage will look as follows:

const resolversBuilder = new MongoResolvers(...);
const resolvers = resolversBuilder.build();
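The builder idea above could be fleshed out along these lines (a self-contained sketch; MongoResolvers, the service/datasource types, and the build output are stand-ins, not the real Graphback classes):

```typescript
// Self-contained sketch of the proposed resolver-builder shape. The names and
// the build() output are assumptions; a real implementation would generate
// CRUD resolvers per model from the schema.
type ResolverMapSketch = Record<string, Record<string, unknown>>

class ResolversBuilderSketch {
  private services = new Map<string, unknown>()
  private dataSources = new Map<string, unknown>()

  constructor(private readonly schemaString: string) {}

  // Override the service for a single model; chainable
  setService(model: string, service: unknown): this {
    this.services.set(model, service)
    return this
  }

  // Override the data source for a single model; chainable
  setDataSource(model: string, dataSource: unknown): this {
    this.dataSources.set(model, dataSource)
    return this
  }

  getSchemaString(): string {
    return this.schemaString
  }

  // Stub: a real builder would produce resolvers wired to the
  // registered services and data sources
  build(): ResolverMapSketch {
    return { Query: {}, Mutation: {}, Subscription: {} }
  }
}
```

Keeping every capability behind methods on one object is what makes the API easy to extend later without new top-level config formats.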

@craicoverflow
Author

The client-side generator is using this, but you can get the schema from a running service as well.
That is the first conclusion.

On this. Will client document generation be moved out of Graphback? As an option, the template could implement writing the runtime schema to a file.

@wtrocki

wtrocki commented May 26, 2020

Fully agree.
That is a separate discussion. We might still keep createResources with a limited scope.

@wtrocki

wtrocki commented May 26, 2020

Worth adding - I do not particularly like makeGraphback because it does not clearly state what we are making, and it also accepts tons of objects/arrays etc. that developers might not particularly understand.

However, there is also confusion related to database-specific methods, as people can connect some of the objects to a different database or a REST API.
I see this as kinda a tradeoff.

@craicoverflow
Author

const resolversBuilder = new MongoResolvers(...);
resolversBuilder.validate();

resolversBuilder.setService("User", userService)
resolversBuilder.setDataSource("Task", taskRestDs)

resolversBuilder.getSchemaString()
resolversBuilder.getResolvers();

This seems a little confusing.

  1. Would users explicitly have to set the service and data source for each model? Or would there be some kind of default set in the constructor which you would then override with setters?
  2. What are userService and taskRestDs here? Are they instantiated classes?

@wtrocki

wtrocki commented May 26, 2020

Would users explicitly have to set the service and data source for each model? Or would there be some kind of default set in the constructor which you would then override with setters?

No. A service currently contains a datasource, so passing a service means passing a datasource.

Technically we specify our main data source by choosing Mongo or Postgres (Knex), then we can override specific elements with different datasources if needed. However, for use cases where there is no Mongo or Postgres, this will mean we need to override every possible object with our own service. Do we see cases like this? What does Graphback offer developers in these cases?

@craicoverflow
Author

const resolversBuilder = new MongoResolvers(...);

How would you see these resolvers being added to Graphback?

Something like:

const resolversBuilder = new MongoResolvers(...);
...
...
const apolloServer = new ApolloServer({
  typeDefs: resolversBuilder.getSchema(),
  resolvers: resolversBuilder.getResolvers()
});

?

@wtrocki

wtrocki commented May 26, 2020

I typically see the schema still being in the project as a file. WHY? Because you need to see it to add your own resolvers. Then you merge the Graphback-generated resolvers with your own resolvers and scalars and call it a day.

const resolversBuilder = new MongoResolvers(...);
...
...
const apolloServer = new ApolloServer({
  typeDefs: schemaString,
  resolvers: [resolversBuilder.getResolvers(), resolversBuilder.getScalars(), myresolvers]

});
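Passing an array of resolver maps, as in the snippet above, implies a merge step. A minimal sketch of such a merge (an assumption for illustration; a real setup could use mergeResolvers from graphql-tools, and here later maps win on conflicting fields):

```typescript
// Minimal sketch of merging several resolver maps into one, as the resolvers
// array above implies. Later maps win on conflicting fields (a design
// assumption, not Graphback's documented behavior).
type ResolverFieldMap = Record<string, Record<string, unknown>>

function mergeResolverMaps(maps: ResolverFieldMap[]): ResolverFieldMap {
  const merged: ResolverFieldMap = {}
  for (const map of maps) {
    for (const [typeName, fields] of Object.entries(map)) {
      // Merge field resolvers per type, preserving earlier entries
      // unless a later map defines the same field
      merged[typeName] = { ...merged[typeName], ...fields }
    }
  }
  return merged
}
```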

@craicoverflow
Author

craicoverflow commented May 26, 2020

I typically see the schema still being in the project as a file. WHY? Because you need to see it to add your own resolvers.

So we would still need two steps in this case?

  1. Run graphback to generate the schema file.
  2. Run the server and load the schema string from the file into GraphbackResolvers to build the resolver context.

This might still be achievable without the need for a file, if MongoResolvers actually wraps the full runtime implementation and gives you everything:

const graphbackBuilder = new GraphbackResolvers({ model, crudService, dbProvider });
graphbackBuilder.setDataSource('Note', customDbProvider);

const { schemaSDL, resolvers } = graphbackBuilder.build();

// 1. Execute schema plugins
// 2. Build resolvers with data sources from builder
// 3. Output built schema with resolvers

const apolloServer = new ApolloServer({
  typeDefs: schemaSDL,
  resolvers
});

@wtrocki

wtrocki commented May 26, 2020

We will give developers much more than just a black-box schema and resolvers. They can add this to their project etc.
Plus, we are actually giving developers starter templates - others do not provide that.

Our templates can ignore whether the schema is custom or generated, but people could still generate a schema and work with it.
If we are targeting a single-method thing, let's:

  • Kill templates
  • Kill services and datasources

and have 2 methods - createMongo and createPostgres. But that is not going to work for us, as we need more integrations that will sometimes exclude each other.

https://graphql-mesh.com is another good example of how those guys resolved a similar problem - worth checking their API

@craicoverflow
Author

Fair points

graphql-mesh.com is another good example of how those guys resolved a similar problem - worth checking their API

Their docs are incomplete, I only see YAML configs.

@craicoverflow
Author

We will give developers much more than just black box schema and resolvers. They can add this to their project etc.

True, but developers want simplicity and should be able to start with a minimal-config API, adding more if they need it.

Plus we actually giving developers starter templates - others do not provide that.

The fewer starter templates we provide, the better, in a way. Shouldn't starter templates focus on the non-Graphback elements of the application, like Apollo versus GraphQL.js, Mongo versus Postgres, or how developers organise their architecture (we could have a GraphQL Modules template)?

If we need a template for very specific differences in the Graphback API, it might mean that the API is too difficult to make simple changes to without having to change it all?

@craicoverflow
Author

craicoverflow commented May 26, 2020

We could combine both approaches, where the resolver building would still be abstracted into the API:

interface GraphbackConfig {
  schema: GraphQLSchema
  resolvers?: ResolverBuilder | ResolverBuilder[]
  pubSub?: PubSubEngine
  schemaPlugins?: GraphbackPlugin[]
}

const modelSchema = buildSchema(`
"""
@model
"""
type Note {
  id: ID!
  title: String
}
`)

const resolverBuilder = new PostgresResolverBuilder(modelSchema, {})

// very minimal Graphback configuration
const { schema, resolvers, dataSources } = makeGraphback({
  schema: modelSchema,
  resolvers: [resolverBuilder] // accepts multiple resolver builders and merges them after doing the building, making it easy to mix data sources
})
