- Get the database name that you want to copy from and paste it onto the `originalDbName` variable at `normalize.js:16`
- Create a `normalized_db` database using the schema at `schema.sql`
- Run `node --max-old-space-size=16384 normalize.js` (I had to raise the memory limit to the max, otherwise it would fail when copying abeam)
- Run `time psql normalized_db < normalized-indexes.sql` to measure how long it takes to index the normalized db
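For reference, the variable being edited might look like this (a sketch; the database name below is a placeholder, not a real value):

```js
// normalize.js, around line 16 (hypothetical shape)
const originalDbName = "my_source_db"; // paste the database to copy from here
```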
```js
// Trigger a browser download of the given URI by creating a temporary
// <a> element, clicking it, and removing it again.
const downloadURI = (uri, name) => {
  const link = document.createElement("a");
  link.download = name;
  link.href = uri;
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};

// Track which files have already been downloaded.
const downloaded = new Set();
```
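A hypothetical usage, with a made-up data URI, showing how the `downloaded` set can guard against repeat downloads:

```js
const name = "report.csv";
if (!downloaded.has(name)) {
  downloadURI("data:text/csv;charset=utf-8," + encodeURIComponent("a,b\n1,2"), name);
  downloaded.add(name);
}
```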
To improve the response time of our reports, as well as to enable more features such as visualizations, we are comparing two possible solutions for storing, querying, and exporting precompiled datasets: Elasticsearch and PostgreSQL. Within a dataset of up to millions of documents, adding up to GBs, we want to evaluate how each option performs at those operations.
```js
/*
 * Usage:
 *   $ node decode.js https://test-idp1.gakunin.nii.ac.jp/idp/profile/SAML2/Redirect/SSO?SAMLRequest=nVPBjtowEP2VyPckNgvqyiKsKKgq0rYbkWwPe6mMMyxuEzv1THbp39cJUNFqlwOnWDPPzzPvvUzv9k0dvYBH42zGRMLZ3WyKqqlbOe9oZ9fwqwOkKMAsyqGRsc5b6RQalFY1gJK0LOZf7uUo4bL1jpx2NYvmiOAp8C6cxa4BX4B%2FMRoe1%2FcZ2xG1KNOUAnmMrUie1c%2FOGptYYxKlkx9tWuzMZuNqoF2C6NL%2BhVGaPxQli5bhlrGKhqH%2FoTLVm1yhnIbJtqaGI9EaKuNBU1oUDyxaLTP2XUy4vp2Mt2IrPqgbzicCbkMLsYOVRVKWMjbiIx5zEd%2FwUnApxnLEk8l48sSi%2FLj4R2MrY58vq7Q5gFB%2BLss8Piz17eRCALCDCXJ43J%2Bpf5lWnSRns%2F8EjkWiXecRNgohHKfpGf%2FJ8a%2BBcLXMXW3072sc%2F%2BR8o%2Bgyuq%2BYKt4OUNn2OyOBpZCXunavCw%2BKIGPkO2DpabBjCqEaMhnyRLC%2FKpML17TKG%2Bxlhr3SdBL6nHhRBx3XsL1G9oswLXVPHcp5%2BLw6X%2FWhCSGEqvTKYus8HY15a57ZofeOHH%2B75%2F%2Ft7A8%3D
 *
 * Output (truncated):
 *   <?xml version="1.0"?><samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" AssertionConsumerServiceURL="https://test-sp1.gakunin.nii.ac.jp/Shibboleth.sso/SAML2/POST" Destination="https://test-idp1.gakunin.nii.ac.jp/idp/profile/SAML2/Redirect/SSO" ID="_150c854f1f1
 */
```
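The script itself isn't reproduced here, but under the SAML HTTP-Redirect binding the `SAMLRequest` parameter is the deflated, base64-encoded, URL-encoded AuthnRequest, so a minimal sketch of such a decoder could look like this (not necessarily the actual `decode.js`):

```js
// decode.js — a minimal sketch. The HTTP-Redirect binding encodes the
// AuthnRequest as deflate -> base64 -> URL-encode, so we reverse each step.
const zlib = require("zlib");

const url = new URL(process.argv[2]);
// searchParams.get() already undoes the URL-encoding.
const samlRequest = url.searchParams.get("SAMLRequest");
// Raw inflate: the binding uses DEFLATE without the zlib header.
const xml = zlib.inflateRawSync(Buffer.from(samlRequest, "base64"));
console.log(xml.toString("utf8"));
```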
- Make sure the org has an entry in `org_domain`.
- Prepare the configuration for passport-saml. All keys are accepted, but the minimum necessary are: `idpIssuer` (the IdP's ID), `entryPoint` (the IdP's endpoint for receiving SAML requests), `cert` (the IdP's public certificate) and `callbackUrl` (Coursebase's endpoint for the SAML response).
- Serialize the configuration object with `JSON.stringify(config)`. (If you copy and paste it, do so in a Node environment, since the browser can print serialized strings with escaping issues.) For example:
```js
let config = {
  idpIssuer: "https://app.onelogin.com/saml/metadata/c7ad6f53-52e0-4ff1-996c-3222c0850812",
  entryPoint: "https://greyhound-dev.onelogin.com/trust/saml2/http-redirect/sso/930729",
  cert: "-----BEGIN CERTIFICATE-----\\nMIID4jCCAsqgAwIBAgIUAipD1o1iXRi/BVk4iHeBGhei+S4wDQYJKoZIhvcNAQEF\\nBQAwRzESMBAGA1UECgwJR3JleWhvdW5kMRUwEwYDVQQLDAxPbmVMb2dp
```
This guide is based on https://www.freecodecamp.org/news/how-to-get-https-working-on-your-local-development-environment-in-5-minutes-7af615770eec/
- Generate `rootCA.key`: `openssl genrsa -des3 -out rootCA.key 2048`
- Generate the `rootCA.pem` root certificate: `openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem`
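The linked guide then signs a domain certificate with this root CA; once that's done, the cert can be plugged into a local Node server. A minimal sketch, assuming the signed key and certificate were saved as `server.key` and `server.crt` (file names are assumptions):

```js
const https = require("https");
const fs = require("fs");

const server = https.createServer(
  {
    key: fs.readFileSync("server.key"),   // assumed file name
    cert: fs.readFileSync("server.crt"),  // assumed file name
  },
  (req, res) => res.end("hello over https")
);

server.listen(3000, () => console.log("Listening on https://localhost:3000"));
```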
You'll need to:

- clone this branch for api2
- clone this branch for ui
- run these migrations
- add this entry to your `/etc/hosts`: `127.0.0.1 coursebase.onelogin.com`
- set up your `.env` in the coursebase repo to use `HTTP_SERVER_PORT=3000`
- set up your `.env` in the ui repo to use `DEFAULT_API_V2_URL=http://coursebase.onelogin.com:3000/v2` (the last three entries are consolidated below)
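Put together, the entries from those last three steps look like this (values copied from the steps above):

```
# /etc/hosts
127.0.0.1 coursebase.onelogin.com

# coursebase repo .env
HTTP_SERVER_PORT=3000

# ui repo .env
DEFAULT_API_V2_URL=http://coursebase.onelogin.com:3000/v2
```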
```reason
open Task;

/* Alias for the Task type from the Task module. */
type task('a) = Task.t('a);

/* Abstract types for the SAML domain. */
type issuer;
type nameId;
type email;
type user;

/* Wrap a user that has passed validation so the distinction
   shows up in the types. */
type validatedUser = ValidatedUser(user);
type unvalidatedUser;
```
Let's first analyze how to write the queries. I experimented with `graphql_ppx` using a few different features (nullable and non-nullable variables, fragments, automatic conversion of the response into a record). The type safety is very neat: both the input variables and the response have specific types (no `Js.Json.t`!), and the queries/mutations are validated against the schema. This makes it really easy to write queries, although I worry about integrating new versions of the schema, since the repos are separate (we should probably re-fetch the schema as part of CI).
As for the drawbacks, there are not many:

- The ppx really messes up my language server sometimes.
- I haven't tried much, but I couldn't find an easy way to use refmt to format the queries.
- I couldn't find anything about custom directives, the lack of which would prevent us from using some features of the GraphQL clients (for example, managing local state with Apollo's `@client` directive).
- I haven't