^ Italo ^ As we move to microservices and break up monoliths, we need to seamlessly route requests to the right place ^ Each service should not have to care about auth, rate limiting, etc., so we offload this to the gateway ^ Existing solutions were not sufficient: they were tightly coupled to their authentication providers, and we wanted something agnostic of the auth provider.
^ Italo ^ Again, we needed this centralised - we merged OAuth from APIv2, OT, Bob, etc. into one service ^ Each of our applications can now use social login, impersonation, etc. with little code ^ First step towards SSO ^ We needed an in-house one because of price - we have too many customers to use Auth0 ^ Couldn't use OSS because of the complexity of our legacy systems
^ Italo
- All requests to APIs will go through the Gateway
- When creating a new service you don't need to take care of Auth, Rate Limiting, CORS...
- Sorry, but you need to learn how to configure the Gateway! (see the sketch after this list)
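To make "configure the Gateway" concrete, here is a minimal sketch of what a route definition might look like. All field names are hypothetical - the real schema lives in MongoDB and will differ:

```typescript
// Hypothetical shape of a gateway route entry - only to illustrate
// the idea that auth, rate limiting and CORS are declared per route.
interface GatewayRoute {
  path: string;            // incoming path pattern to match
  upstream: string;        // service the request is proxied to
  auth: boolean;           // should the gateway validate the token?
  rateLimit?: { requests: number; perSeconds: number };
  cors?: { origins: string[] };
}

const ordersRoute: GatewayRoute = {
  path: "/orders/*",
  upstream: "http://orders-service.internal:8080",
  auth: true,              // the gateway checks the token, not the service
  rateLimit: { requests: 100, perSeconds: 60 },
  cors: { origins: ["https://www.example.com"] },
};
```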
^ Kieran ^ These were built and (mostly) working already ^ The OT was already integrated and (mostly) working ^ Created a tiger team for integration with APIv2
^ Kieran ^ Change to JWT (because we can store data in the token, plus signing is easy) ^ Event-driven architecture - user.updated, etc. ^ Proper OAuth flow - the URLs and flow we already had in APIv2 were totally strange
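A quick sketch of why JWTs were attractive, using the `jsonwebtoken` npm package. The claim names and the shared secret are placeholders (an RS256 key pair is more likely in practice):

```typescript
import jwt from "jsonwebtoken"; // npm package `jsonwebtoken`

// Data travels inside the token, and signing is a one-liner.
const token = jwt.sign(
  { sub: "user-123", email: "user@example.com", roles: ["customer"] },
  "shared-secret",
  { expiresIn: "1h" }
);

// Any service can verify the signature and read the claims locally,
// without calling back to the auth service on every request.
const claims = jwt.verify(token, "shared-secret");
console.log(claims);
```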
^ Kieran ^ Implementing features from the old auth (impersonation, social login) ^ Creating auth-service-consumers and adding the requisite events to APIv2 and Bob ^ Adding JWT compatibility to APIv2 ^ A route migration tool to import APIv2 routes into the gateway ^ Proxying APIv2 token routes to the real OAuth URLs ^ Nginx proxy Maxim magic
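As a rough sketch of what an auth-service-consumer does: subscribe to events such as `user.updated` and keep the app's local copy of the user in sync. The transport isn't stated in the talk, so this is transport-agnostic, and the payload shape is an assumption:

```typescript
// Hypothetical payload for a `user.updated` event - the real schema
// emitted by APIv2/Bob is not shown here.
interface UserUpdatedEvent {
  type: "user.updated";
  userId: string;
  changes: { email?: string; name?: string };
  occurredAt: string; // ISO-8601 timestamp
}

// The consumer keeps the local user record in sync with the auth service.
async function handleUserUpdated(event: UserUpdatedEvent): Promise<void> {
  // Idempotent upsert: replaying the same event twice must be safe.
  await upsertLocalUser(event.userId, event.changes);
}

// Stub standing in for whatever persistence the consuming app uses.
async function upsertLocalUser(
  id: string,
  changes: UserUpdatedEvent["changes"]
): Promise<void> {
  console.log(`updating user ${id}`, changes);
}
```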
^ Max
^ Nikita
- DB migration - hashing a hash (see the sketch after this list)
- Wrong DB instance type
- Wrong auth-service instance type
- Wrong event payload
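"Hashing a hash" is the standard trick for migrating password hashes in bulk, without waiting for each user to log in: wrap the old hash in the new algorithm. A sketch, assuming (purely for illustration) hex-encoded legacy SHA-1 hashes being wrapped in bcrypt:

```typescript
import { createHash } from "crypto";
import bcrypt from "bcrypt"; // npm package `bcrypt`

// Migration pass: wrap every legacy hash in bcrypt, so weak hashes
// disappear from the DB immediately instead of at next login.
async function migrateLegacyHash(legacySha1Hash: string): Promise<string> {
  return bcrypt.hash(legacySha1Hash, 10); // stores bcrypt(sha1(password))
}

// Login check against a migrated record: re-derive the inner hash
// from the submitted password, then let bcrypt compare the outer one.
async function verifyPassword(
  password: string,
  storedHash: string
): Promise<boolean> {
  const innerHash = createHash("sha1").update(password).digest("hex");
  return bcrypt.compare(innerHash, storedHash);
}
```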
^ Nikita
- No index for country
- Website was unable to handle logout
- Very hard to test social login
^ Nikita
^ Nikita
^ Italo ^ Fixed all the things ^ Over-scaled ^ Load testing with Gatling ^ Wrote a lot of tests
^ Italo ^ We provisioned the same number of APIv2 and Intfood instances as a "fake live" ^ These instances ran the changes required to work with the auth service ^ More Maxim magic
^ Maxim ^ How we did blue/green: switching something in nginx, something with the ldd compiler ^ Fighting policy enforcement
^ We were live, but things were still a bit buggy; at this point we didn't want to roll back again, so we pushed on
^ Kieran ^ Customers not being able to log in after a password change in Bob - took ages to identify, because Bob ^ Refresh token problems - instability of the auth service, quickly fixed ^ Token TTL - the website was not able to refresh tokens, so the TTL is a month. Breaking, because it was originally 1h. A small & calculated security risk while the JS stuff is fixed.
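For context on the refresh problem: the client is supposed to exchange its refresh token for a new access token before the old one expires. A minimal sketch of that exchange, assuming a standard OAuth 2.0 token endpoint - the URL and client id below are placeholders, not our real ones:

```typescript
// Hypothetical token endpoint - the real auth-service URL is internal.
const TOKEN_URL = "https://auth.example.com/oauth/token";

interface TokenResponse {
  access_token: string;
  refresh_token: string;
  expires_in: number; // seconds until the access token expires
}

// Standard OAuth 2.0 refresh_token grant. If the client never calls
// this (as the website didn't), the only workaround is a long TTL.
async function refreshAccessToken(refreshToken: string): Promise<TokenResponse> {
  const res = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "website", // placeholder client id
    }),
  });
  if (!res.ok) throw new Error(`token refresh failed: ${res.status}`);
  return (await res.json()) as TokenResponse;
}
```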
^ Kieran
^ Kieran ^ We cache tokens from the auth-service in Redis so we can look them up quickly ^ Because the TTL was so large, our puny 1GB Redis instance quickly filled up ^ Increased it to 4GB, but it filled up again ^ Now we have a 59GB Redis instance that will still get full
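The arithmetic is unforgiving: every cached token lives in Redis for as long as its TTL, so a month-long TTL means a month of accumulation. A sketch of the caching pattern, assuming the node-redis (v4) client - the key naming is made up:

```typescript
import { createClient } from "redis"; // npm package `redis` (v4+)

const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect(); // top-level await: requires an ES module

// Cache a validated token so the gateway can skip a round trip to the
// auth service. EX ties the cache entry's lifetime to the token TTL -
// which is exactly why a one-month TTL fills the instance.
async function cacheToken(
  token: string,
  claimsJson: string,
  ttlSeconds: number
): Promise<void> {
  await redis.set(`token:${token}`, claimsJson, { EX: ttlSeconds });
}

async function lookupToken(token: string): Promise<string | null> {
  return redis.get(`token:${token}`);
}
```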
^ Italo ^ Now the TTL will be stored in the token itself ^ The rate limit will also be moved out of Redis ^ So no more $$$ spent on Redis, and a happy Roberto
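Once the expiry lives in the token's `exp` claim, the gateway can validate tokens locally and drop the Redis lookup entirely. A sketch, again with the `jsonwebtoken` package:

```typescript
import jwt from "jsonwebtoken";

// With the TTL inside the token, validation is pure CPU: check the
// signature and the `exp` claim (jwt.verify does both) - no Redis hit.
function validateRequestToken(token: string, publicKeyOrSecret: string) {
  try {
    return jwt.verify(token, publicKeyOrSecret); // throws if expired or tampered with
  } catch {
    return null; // reject the request at the gateway, before any service
  }
}
```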
^ Italo ^ OpenTracing IDs ^ A file-based gateway configuration option (instead of MongoDB) ^ Integration with different auth providers ^ Nicer configuration