Script for the actual "workshop" part of the Rocky Mountain Ruby 2014 talk

Rocky Mountain Ruby

SOA from the Start - Workshop Part


What will we show

We are building a set of applications that will show deals (aka 'inventory items') available in or near a given city. These items can also be organized by 'category' (aka tags), to cater to the various customers' areas of interest.

To bootstrap things, and to get us focused on some key areas of developing in a SOA, we have gone ahead and built these apps out at varying stages. The workshops will focus on showing some of this code, interspersed with exercises to add features or to refactor.

The Front-End app (Mini-LivingSocial)

  • a classic web app, showing:
    • a list of deals (inventory items) in a market (city)
      • links to the pages for nearby cities
    • a list of deals (inventory items) nearby (i.e., in cities within a given range in miles)
    • details about a single deal
      • look at an item's image, price, and description
      • a section with deals nearby
    • a page for deals of a particular category (i.e., inventory items that are tagged)

Three services

  • Inventory service
    • general inventory information (price, market, title, image url, description)
    • endpoint for items anchored in a particular city
    • endpoint for all items in cities nearby a given city
    • endpoint for a list of all inventory items
    • various endpoints to add / manipulate inventory items
  • Tags service
    • endpoints for tag management
      • list of all tags
      • fetching info about a single tag (includes list of items tagged)
      • creating / manipulating tags
    • endpoints for managing tagged inventory items
      • return a list of all inventory items tagged with a given tag
      • return a single tag + item combination
      • create an "item is tagged" relationship (i.e., tag an item)
      • delete an "item is tagged" relationship (i.e., 'untag' an item)
  • Cities service
    • city info, like name, country, state, lat/long
    • endpoints for
      • finding a city by ID
      • listing all cities
      • listing all cities in a given country
      • listing all cities near a given city (in a given range)
      • creating / manipulating cities


Setting up the apps locally


We know, we just told you that we never want developers to have to set up all the services on their local machines. Unless, of course, you would like to make changes to the service apps ... which is exactly what we are going to be doing from now on.

  • Clone the four applications

    • git clone https://github.com/timbogit/deals.git
    • git clone https://github.com/timbogit/cities_service.git
    • git clone https://github.com/timbogit/tags_service.git
    • git clone https://github.com/timbogit/inventory_service.git
  • In each of the 3 ..._service apps:

    • $ bundle install
    • $ bundle exec rake db:create
    • $ bundle exec rake db:migrate
    • $ bundle exec rake db:seed
    • $ foreman start
  • In the front-end deals app:

    • $ bundle install
    • $ foreman start

Heroku setup (optional, but recommended)

  • visit Heroku and sign up for a free developer account
  • we are loosely following the "Getting Started with Rails 4.x on Heroku" guide:
    • $ heroku login
    • in each of the ..._service local git repos, do:
      • $ heroku apps:create <my_unique_heroku_app_name>
      • $ heroku git:remote -a <my_unique_heroku_app_name> -r development
      • $ git push development master
      • $ heroku run rake db:migrate
      • $ heroku run rake db:seed
        • NOTE: the cities-service DB seeds exceed the row limit for the "Hobby Dev" pg instance; either trim down the seeds file to < 10k entries, or upgrade to "Hobby Basic" (~ $9 / month)
      • $ heroku ps:scale web=1
      • visit your app, via a browser, or maybe via $ heroku open
  • Other useful heroku commands:
    • heroku logs : shows your app's rails logs
    • heroku run rails console : bring up a rails console on your application server
    • heroku maintenance:[on|off] : dis-/enables your app and shows a maintenance page
    • heroku ps : lists all your app's dynos, and what they are running
    • heroku ps:scale web=[0|1] : dis-/enables your web worker dynos
    • heroku config:set MY_ENV_VAR=foo --remote development : sets an environment variable (e.g., RAILS_ENV) on your app's server
      • We set the RAILS_ENV and RACK_ENV variables on some of the heroku apps to development this way ... more on this later.

So, ... where are the tests for this code?

Well, there are no tests. Yeah, yeah, we know ... we will go into TDD hell, and all.

But seriously, why are there no tests? Here are the main reasons:

  • These projects will never see any production traffic
  • We have tested all this manually via some cool 'service explorer' JavaScript UI
  • The underlying service framework code is well-tested in our production projects
    • much of what you'll see was very much "copy & paste" inspired by LivingSocial code
  • we are lazy, and we want you to do all of our work for us
    • adding some tests will be part of the workshop


Development and Deployment Workflow


Manage several environments

To effectively develop, test and deploy in a SOA, you will need (at least) three environment 'fabrics':

  • local development machine (running tests and used for developing)

    • all your local development is done here
    • only check out the services you actually need to change
    • service yml's for all dependent services point into the development fabric (next)
  • remote development (or staging) fabric

    • all services run here at a known good domain name
    • e.g., cities-service-development, tags-service-development, inventory-service-development
    • once development is complete, new (feature) branches get promoted to here
    • data here is 'production like', so that production conditions can be simulated; it is probably seed data, though
    • after some quality assurance, code will be promoted to the next (production) stage
  • production fabric

    • stable production level code
    • data is the actual production data
    • this is the code and service instances that the end customers will see

How is this implemented?

  • every app has (yml) configuration files that declare where its dependent services are located:
    • for the test, development and production environments (e.g., the respective sections in config/tags_service.yml and config/cities_service.yml in the inventory_service repository; a sketch of such a file follows this list)
    • the development sections point at the well-known development fabric instances of the dependent service, while production sections point at production services
    • the test sections most likely point at the well-known development fabric instances, or the actual production services
      • this way, tests can retrieve (test) data from such dependencies without having to run them all locally
      • the service responses can be stored in canned responses for future runs (e.g. using gems like vcr ... about which we will talk later)
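
As a hedged illustration (the exact keys and hostnames below are assumptions; check the actual yml files in the repositories), a config/cities_service.yml inside inventory_service could look like this:

	development:
	  url: http://cities-service-development.herokuapp.com/api/v1
	test:
	  url: http://cities-service-development.herokuapp.com/api/v1
	production:
	  url: http://cities-service.herokuapp.com/api/v1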

Working with Heroku

  • Heroku has good documentation for working with multiple environments

  • Create your development fabric instance of a service via the heroku command line tool

    • heroku create --remote development

    ...or rename your current heroku remote via

    • git remote rename heroku development
  • Create your production fabric instance of a service

    • heroku create --remote production
  • By default, heroku apps run in production environment mode; to have your development fabric instances all easily point at each other, change their RACK_ENV and RAILS_ENV environment settings to development, like so:

    • heroku config:set RACK_ENV=development RAILS_ENV=development --remote development --app <my_dev_heroku_appname>
  • As heroku ignores all branches that are not master, you need to push your local feature branches to the remote master branch

    • I.e., to push a branch called feature to your development fabric instance, you need to do:

      git push development feature:master

  • As an example of my current git set-up for inventory_service:

     $ git remote -v
     development     git@heroku.com:inventory-service-development.git (fetch)
     development     git@heroku.com:inventory-service-development.git (push)
     origin  git@github.com:timbogit/inventory_service.git (fetch)
     origin  git@github.com:timbogit/inventory_service.git (push)
     production      git@heroku.com:inventory-service.git (fetch)
     production      git@heroku.com:inventory-service.git (push)
    

Exercise "Configuring Service Dependencies"

  • The inventory_service depends on cities_service and tags_service. Change the services' config yml files (tags_service.yml and cities_service.yml) to point at your locally running versions of these dependent services by changing the development sections of the two yaml files. [Keep your test and production yml file entries pointed at the heroku-deployed apps, as they are.]
  • Make similar changes to the cities_service repository, so that it points at the locally running dependent tags_service in its development fabric
  • Change the deals application's RemoteInventory model to not hardcode the service it depends on, but to move the configuration of the remote service into a YAML configuration file (a hedged sketch follows)
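
For the last bullet, here is a minimal sketch of what that could look like (the YAML layout, file name, and constant name are illustrative assumptions, not the actual deals code):

	# config/inventory_service.yml (hypothetical layout):
	#   development:
	#     url: http://localhost:3001/api/v1
	#   production:
	#     url: http://inventory-service.herokuapp.com/api/v1
	require 'yaml'

	class RemoteInventory
	  # load the per-environment service location once, instead of
	  # hardcoding the URL in the model
	  SERVICE_CONFIG = YAML.load_file(
	    Rails.root.join('config', 'inventory_service.yml')
	  )[Rails.env]

	  def self.base_url
	    SERVICE_CONFIG['url']
	  end
	end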


Documenting and Generating your APIs


APIs will only be used when there is documentation on how to use them. We recommend you think about how to structure and document your APIs from day one.

Standards and Consistency

"The Good Thing About Standards: there are so many of them" (TM)

Even if you say your API is RESTful, that doesn't really say much in detail. Spend the time to negotiate guidelines internal to your organization. Spend some time thinking about all the various options and degrees of freedom, and then, most importantly, stick to them. The principle of least surprise will pay off for your clients, and for yourself.

  • What HTTP status (success and error) codes will you use for which situations?
    • 204 or 200 for POSTs/PUTs?
    • consistency around the 4xx code range
  • Will there be additional error messaging in the body?
  • Where does authentication go?
    • header, url, ...
  • Consistency in resource hierarchy
    • Truly "RESTful", or are there some "RPC" like endpoints (/inventory_items/near_city)
  • True 'hypermedia' with self-discover, or simply show IDs?
    • {'id': 123'} vs. {'uri': 'http://some.host.com/api/resource/123'}
  • What about versioning information?

Specifying your Endpoints

"What do you mean, specifying the endpoints ... can't you read the routes file?!"

Just not good enough, sorry! You just spent all this time agreeing on standards and consistency; now spend a little more to define your API in a "machine digestible", declarative format.

This specification should preferably be published at a (well-known) endpoint in your application, so that your clients can auto-discover your APIs ... and maybe even auto-generate client code.

Using an IDL

We have found during our careers that IDLs are a very convenient thing, and that the benefits by far outweigh the effort. And no: you don't need to learn JEE, WS-* or XML for all this.

JSON-RPC, ProtoBuf, Thrift (all of which have Ruby bindings), and the like, all follow the same principle:

You specify your API in some schema, and then client libraries / gems, and often even service-side stubs, are generated for you ... either via a separate 'build step' (e.g., rake tasks) ahead of time, or even 'on the fly' when clients are consuming the IDL specification of your published endpoints. Best of all, most of these tools work cross-language (Java shop, anyone?), and often have support for auto-generating human-readable docs.

What we use: Swagger

Specification via JSON Schema

"Swagger™ is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services."

In essence, you use JSON to specify your endpoints, according to a well-defined schema. A suite of surrounding tools helps with auto-generating the JSON based on annotations or DSLs, and with auto-generating client and service stub code (across a growing set of languages). The project is open-source, maintained by Wordnik.

See here and here for the inventory-service's JSON specs.

Ruby Tools

Here are some of the Ruby-specific projects surrounding swagger.

There are two (competing?) gems that allow for specifying your APIs in a Ruby DSL, and then generating the JSON specification via a separate rake task. See some sample swagger-docs DSL code that describes the inventory-service.
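
For flavor, here is a short sketch in the style of the swagger-docs DSL (hedged: consult the gem's README for the exact, current syntax):

	# app/controllers/api/v1/inventory_items_controller.rb (sketch)
	class Api::V1::InventoryItemsController < ApplicationController
	  swagger_controller :inventory_items, 'Inventory Items'

	  swagger_api :show do
	    summary 'Fetches a single inventory item'
	    param :path, :id, :integer, :required, 'ID of the inventory item'
	    response :not_found
	  end

	  def show
	    # ... regular controller logic ...
	  end
	end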

Unfortunately, neither of these two gems is entirely mature, but hey: it's Open-Source, so go and contribute!

Swagger-codegen is the general project to generate client code by parsing your Swagger JSON specs. Ruby support exists, albeit less 'stable' than for Scala. Swagger-grape is a gem to add Swagger-compliant documentation to a grape API.

Killer feature: Service Explorer

A swagger subproject called swagger-ui is essentially a single HTML page, plus some spiffy CSS and JavaScript, that consumes your JSON API specs. From these specs it generates a very useful 'service explorer' UI, which lists all your endpoints and gives you the option to fire single requests for each of them against your server.

You can see it in action in each of the root pages for our sample services:

Exercise "Generating Documentation for your APIs"

We are going to replace the hard-coded swagger-specs in tags-service with swagger_yard-generated ones

  • Delete all contents of the public directory of your application (simply leaving a dummy index.html in there if you like)
  • Add the following gem definition to the development and production groups in your Gemfile: gem 'swagger_yard', :git => 'git://github.com/tpitale/swagger_yard', :branch => 'master'
  • change your routes.rb to mount the swagger yard engine, like so: mount SwaggerYard::Engine, at: "/swagger" unless Rails.env.test?
  • add an initializer (as documented here) for swagger_yard.
  • Document some (/all?) of the tags_controller.rb actions using swagger yard, and try hitting the /swagger/doc/ path in your local tags-service instance.
  • Tip / Solution: This commit in a tags-service branch shows all changes needed to switch all documentation over to swagger_yard


Caching Service Responses Client-Side


Making service calls can be expensive for your application. But fortunately, the service clients can often tolerate data that is slightly stale.

A good way to exploit this fact is to cache service responses client-side, i.e.: inside the front-end applications, or inside other services depending on your service. The general rule here is that the fastest service calls are the ones you never need to make.

Here are some things we learned when considering these points:

Build caching into your client gem

  • Often, you will develop your own Ruby client gems for the services you build.
    • we do it at LivingSocial, on top of a general low-level gem that takes care of all general HTTP communication and JSON (de)serialization
  • Offer the option to inject a cache into your client gem
    • use the injected cache to store all service call responses
    • spend some time thinking about your cache keys (e.g., include the gem and API version, and the full service URL)
  • Require as little as feasibly possible about the cache object that is injected
    • we usually just require that it have a #fetch method that takes a cache key and a block to execute on cache miss (see the sketch after the next list)

Have the clients decide if / how to cache

  • Some clients might not want to cache at all, so they should be able to just disable caching in your gem
  • It is the client applications' responsibility to set cache expiration policies, cache size, and caching back-end
    • Rails client apps could simply use Rails.cache; others a Dalli::Client instance, an ActiveSupport::Cache::MemoryStore, or even something 'hand-rolled' by your clients.
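
Pulling these points together, here is a minimal sketch of such an injectable cache in a hypothetical client gem (class and method names are illustrative, not LivingSocial's actual code):

	# Hypothetical client gem: callers may inject any cache object that
	# responds to #fetch(key) { ... }; without one, every call goes over HTTP.
	class InventoryClient
	  class << self
	    attr_accessor :cache
	  end

	  def self.with_possible_cache(cache_key, &blk)
	    cache ? cache.fetch(cache_key, &blk) : blk.call
	  end

	  def self.find_item(id)
	    # cache key includes gem and API version, plus the full request path
	    with_possible_cache("inventory_client/1.0/v1/inventory_items/#{id}") do
	      get_json("/api/v1/inventory_items/#{id}.json")
	    end
	  end

	  def self.get_json(path)
	    # placeholder for the low-level HTTP + JSON (de)serialization gem
	    raise NotImplementedError, "wire up your HTTP library here: #{path}"
	  end
	end

	# A Rails client app might opt in via: InventoryClient.cache = Rails.cache
	# ...or use a bounded in-memory store:
	#   InventoryClient.cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes)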

Exercise "Client-side caching"

In this exercise you will build 'client-gem like' code (albeit inside the inventory_service application) that allows for caching; you will also set up your inventory_service code to administer a cache for this code.

  • Abstract the common functionality of the RemoteTag and RemoteCity classes into a common superclass (e.g., named RemoteModel)

    • Add an accessor for RemoteModel.cache, so the application code can choose the cache to use
    • implement a RemoteModel.with_possible_cache(cache_key, &blk), and wrap all calls to cities_service and tags_service inside it.
    • make sure the cache_key passed takes the full service call URL and the API version into account
  • Add administration of the cache inside the tags_service.rb and cities_service.rb initializers in inventory_service. You can choose whichever cache store you like. (We chose ActiveSupport::Cache::MemoryStore for simplicity)

  • Tip / Solution: solutions to this exercise can be found in this commit of the inventory_service application



Testing with dependent services


Approaches

There are three general approaches:

1. Mocking / Stubbing of service calls

  • Tests (or a general test helper) use stubbing and mocking of the service calls, so that no HTTP calls are ever made
  • Results of the service calls are assumed / 'faked', so that the rest of the code under test can be exercised based on these assumptions
  • Pros:
    • fast
  • Cons:
    • The full 'integrated' functionality will never be exercised by the tests
    • If the dependent APIs ever change, the code under test will never know
    • Often lots of (boring, boilerplate, distracting) code is written to bootstrap test cases

2. Tests always call dependencies

  • The code under test calls all dependent services (in production/development fabric, or locally installed) every time the tests run
  • Pros:
    • The test results are always true / reflect the actual service environment and APIs
  • Cons:
    • slow
    • tests can never run in isolation, as services (in production/development fabric, or locally installed) always need to be available during test runs
    • Changes in the dependent services' data can cause 'false negative' test failures

3. Tests call dependencies once

  • Mixture of the previous two approaches
  • Code under tests calls dependent services once, records the responses, and then replays them in future runs.
  • Pros:
    • fast most of the time (except when recording)
    • the test results are, most of the time, true integration results that reflect the actual service environment and APIs
    • dependent services never need to be run (or even installed) locally
  • Cons:
    • recorded 'canned responses' can get big
    • one will need to find a good frequency for re-recording the canned responses

Testing with VCR

We recommend using approach number 3 above, and we mainly use VCR.

VCR is very easy to configure, works well with Rails, and it can hook into a large variety of HTTP gems (including Typhoeus, which is used in our sample services).

As an example, we used it for a single test inside of the cities_service repository, performing the following steps:

  • Added the vcr gem to the :test gem group in the Gemfile of cities_service
  • Changed the test_helper.rb to configure VCR to hook into typhoeus, and to record cassettes (i.e., VCR's term for the recorded, to-be-replayed service responses) into a special fixtures/vcr_cassettes directory, like so:
	VCR.configure do |c|
	  c.hook_into :typhoeus
	  c.cassette_library_dir = 'fixtures/vcr_cassettes'
	end
  • Recorded, and subsequently used, the cassettes in the tests for the RemoteTag class whenever a service call is made in the code under test, i.e.:
	VCR.use_cassette('tags_bacon')  do
	  RemoteTag.find_by_name('bacon').name.must_equal 'bacon'
	end

A smarter way to mock

As mentioned above, while mocking has its disadvantages, it certainly helps with increasing the speed of test suites. To take advantage of this, we have recently found ourselves addressing some of the shortcomings of mocking by building mock objects right into a service's client library.

While we are planning to write up a separate blog post with more details about this approach, here are some key points:

  • When building a client library (or in our case, a Ruby gem), we provide for a way to configure (at least) two backend alternatives: one that makes actual HTTP calls to the respective service in order to assemble the library's response objects; and a second, 'fake' backend, which never makes actual network calls to retrieve response objects, but instead chooses them from a pool of well-known response objects.
  • These well-known response objects expose the same API as the objects returned from actual network service calls, and they come pre-loaded inside the mock backend's registry of responses.
  • As part of their test suite set-up, applications under test place the service client library into 'mock mode', thereby configuring it to serve responses entirely out of the mock backend's registry of pre-loaded response objects
  • To serve the needs of special-case, non-standard request situations, the client library allows for creating additional mock response objects, and for adding / removing them to / from the mock backend's response object registry.
  • When the client application is running in a production environment, the production-specific setup will configure the client library to use the actual HTTP service-based backend instead.

This approach allows the mock objects to evolve in lock-step with the client library version, which will increase the client application's confidence level that it is testing against the same API as the actual objects returned by the latest service API version.

Additionally, none of the usual cumbersome and boilerplate code to create and register mock objects for the various tests needs to be written: the mock backend comes pre-configured with a variety of standard responses which the application code under test will simply use without any additional configuration.
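
Here is a stripped-down sketch of the idea (all names are illustrative; the real client libraries are more elaborate):

	# Hypothetical tags-service client with two swappable backends.
	class TagsClient
	  # never touches the network; serves from a registry of
	  # well-known, pre-loaded response objects
	  class FakeBackend
	    def initialize
	      @registry = { 'bacon' => { 'name' => 'bacon', 'item_ids' => [1, 2] } }
	    end

	    # for special-case, non-standard request situations
	    def register(name, response)
	      @registry[name] = response
	    end

	    def find_tag(name)
	      @registry.fetch(name)
	    end
	  end

	  # the production backend makes real HTTP calls
	  class HttpBackend
	    def find_tag(name)
	      raise NotImplementedError, "HTTP call for tag #{name} goes here"
	    end
	  end

	  class << self
	    attr_writer :backend

	    def backend
	      @backend ||= HttpBackend.new
	    end

	    # called from the client application's test suite set-up
	    def mock_mode!
	      @backend = FakeBackend.new
	    end

	    def find_tag(name)
	      backend.find_tag(name)
	    end
	  end
	end

	# In test_helper.rb:  TagsClient.mock_mode!
	# TagsClient.find_tag('bacon')  # => served from the registry, no HTTP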

Exercise "Testing with Dependent Services"

  • Add vcr and all necessary configuration to your inventory_service repository
  • Write some tests (e.g., unit tests for RemoteTag and / or RemoteCity) that:
    • exercise, and record results for, calling cities_service for the city of a given InventoryItem
    • exercise, and record results for, calling tags_service for all tags given a list of InventoryItems


Optimizing for client performance


Restricting the response size

Benchmarking our services at LivingSocial, we noticed that the lion's share of slow APIs grew linearly slower with the size of the (JSON) response.

Once there is enough data, connection setup / teardown times and even DB queries are dwarfed by the time spent in:

  • result serialization into JSON
  • shipping large JSON payloads over the wire
  • de-serializing the JSON client-side

Here are some tips on how we went about addressing these issues.

Result paging

  • Provide the client with an option to request a limited amount of results whenever a list of objects is returned (e.g., #index or #search like actions)
  • A very simple and yet effective way of providing such "poor man's paging" is to accept limit and offset parameters for such list endpoints (sketched after this list)
  • To improve performance, experiment with optimal default and maximum values for such a limit parameter
    • finding the 'sweet spot' that is acceptable for both the clients and the service depends very much on your data and use cases, and you will find yourself iterating a couple of times on the best number for such a limit.
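
A minimal sketch of such limit / offset handling in a Rails index action (the default and maximum values are placeholders to tune):

	class InventoryItemsController < ApplicationController
	  DEFAULT_LIMIT = 25   # placeholder; tune against real traffic
	  MAX_LIMIT     = 100  # protects the service from huge responses

	  def index
	    limit  = [(params[:limit] || DEFAULT_LIMIT).to_i, MAX_LIMIT].min
	    offset = params[:offset].to_i

	    render json: InventoryItem.limit(limit).offset(offset)
	  end
	end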

Content Representations

  • Not every client needs all the information the service can know about an 'entity'
  • A good way to provide flexibility is to honor requests for different representations of a 'thing' (a sketch follows this list)
    • some clients might just need two or three fields of potentially 10s/100s of existing fields
    • try and make it easy for clients to define (and iterate over) the best representation for their use case
  • Good candidates for information worth removing are any pieces of information that come from a secondary service (e.g., the tags or the city name for an inventory item).
    • some clients might be able to make such secondary requests themselves to the authoritative service for such information (e.g., to cities-service and tags-service)
    • other clients might want to have this aggregated information be returned by a service, but only when requesting a single object (e.g., one inventory item), not when asking for an entire list of them.
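
As a hedged sketch of the representation idea (the parameter name and the *_uri helpers are assumptions):

	def show
	  item = InventoryItem.find(params[:id])

	  case params[:representation]
	  when 'small'
	    # just hyperlinked URIs pointing into cities-service / tags-service
	    render json: item.as_json(only: [:id, :title, :price],
	                              methods: [:city_uri, :tags_uri])
	  else
	    # 'full': aggregate city and tag details via secondary service calls
	    render json: item.as_json(methods: [:city, :tags])
	  end
	end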

Make the service work less

Retrieving information from the DB and serializing it takes valuable time in the API request/response cycle. Here are some ways to avoid incurring this time.

HTTP Conditional GET

Using ETag, Last-Modified and Cache-Control response headers is standardized, yet flexible ... and still very often unused in API services.

Rails has great support for honoring and setting the respective HTTP request / response headers, allowing clients to specify which state of a service object they already know, and the service to declare when and how this information becomes stale.

While it is not easy to find Ruby HTTP client libraries that automatically honor / send these headers, browsers will certainly honor them out of the box, and so will reverse proxies (see the next section).
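
For instance, Rails' built-in fresh_when helper covers the common case (a sketch, not the sample services' actual code):

	def show
	  @item = InventoryItem.find(params[:id])

	  # renders normally on a first request; answers '304 Not Modified'
	  # when the client sends a matching If-None-Match / If-Modified-Since
	  fresh_when etag: @item, last_modified: @item.updated_at, public: true
	end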

Using a Reverse Proxy

For our most-trafficked internal API services, LivingSocial relies heavily on Varnish, a reverse proxy with excellent performance and scaling characteristics:

  • some endpoints are sped up by a factor of 50x
  • varnish is flexible enough to function as a 'hold the fort' cache
    • if the service itself is down, varnish can return the last good (= 2XX Status Code) response
  • It can be configured to cache based on the full URI, including or excluding headers
    • tip 1: sort all query parameters, so that any reverse proxy can achieve a higher cache hit rate (see the sketch below)
    • tip 2: send all parameters (like authentication) that do not affect the JSON responses in request headers, and make varnish ignore these headers for its cache key
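
Tip 1 can be as simple as canonicalizing the query string client-side (a hedged sketch):

	require 'cgi'

	# Sort query parameters so that logically identical requests produce
	# byte-identical URIs, and thus identical reverse-proxy cache keys.
	def canonical_query(params)
	  params.sort.map { |k, v| "#{k}=#{CGI.escape(v.to_s)}" }.join('&')
	end

	canonical_query('offset' => 0, 'limit' => 25)
	# => "limit=25&offset=0"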

Exercises "Performance tuning"

  1. Add a 'small' and a 'full' inventory item representation
     a. 'full' is the currently existing representation, which makes dependent calls out to cities-service and tags-service for all city and tagging information on inventory items
     b. 'small' represented inventory items just have the hyperlinked URI for their city and their tags inside cities-service and tags-service
     c. make the representation selectable via a representation query parameter, which will be honored by all endpoints that return inventory items (#show, #index, #in_city, #near_city)

  2. Add "limit and offset" based paging of results a. allow for paging through API-returned inventory items by letting the client request a smaller number of results (via a limit query parameter), and starting at a particular offset (via an additional offset parameter) b. make these paging parameters be honored by all endpoints that return lists of inventory items (#index, #in_city, #near_city)

  3. Make sure to look at the various places in the application_controller and the inventory_items_controller that implement service-side HTTP Conditional GET logic
     a. Come up with a curl request that makes the inventory_items#show endpoint return a 304 response, based on an If-Modified-Since request header
     b. Come up with a curl request that makes the inventory_items#show endpoint return a 304 response, based on an If-None-Match request header

Tip / Solution: see the following inventory-service branch



Versioning your APIs


Change happens! You will encounter new requirements for your service. What do you do when you have existing clients that would break if you changed your API to meet the new requirements? Most of the time, "lock-step upgrades" of your service and all your clients are simply not an option.

The answer: versioning!

You can make changes to an existing service by bumping the version number on its API endpoints. Existing clients will keep functioning, while new clients can use the updated / new features of your "API V2".

Where does the version go?

There are many options for specifying a version for the API. Here are the most common approaches we have seen:

  • In a special API header

    • 'X-LivingSocial-API-Version: 2'
    • RESTful 'purists' will probably go this route, as they do not think a version of a resource should ever be part of the resource's URI; it is in essence still the same resource you are describing.
  • In a query parameter

    • /api/cities/123.json?version=2
    • some API designers choose this approach, but we don't like it, as it seems less 'obvious', and it muddies the waters around parameters for representations, search terms, etc.
  • In a path parameter

    • /api/v2/cities/123.json
    • we usually use this approach, as it is simple and seems most intuitive to us (a routes sketch follows this list).
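
A sketch of what that looks like in a Rails routes file (controllers are namespaced per version, so both versions are served side by side):

	# config/routes.rb (sketch)
	Rails.application.routes.draw do
	  namespace :api do
	    namespace :v1 do
	      resources :cities, only: [:index, :show]
	    end
	    namespace :v2 do
	      resources :cities, only: [:index, :show]
	      # v2-only additions would live here
	    end
	  end
	end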

What numbering scheme should I use?

Most implementers tend to either use consecutive integers (v1, v2, v3, ...) or dates (v20140424, v20130811, ...). Whatever you do, we encourage using something that allows for easy and efficient 'comparisons', so it is clear which API is the later version. I.e., we discourage schemes like "v_new", "v_old", "vEasterEdition", or even "v08112013" ( ... if you use dates, use the ISO format, where later dates yield larger integers).

Deprecating APIs

Any API you publish will live for a long time, ... especially public APIs! Invest some thought pre-launch into your deprecation policy.

  • Make sure you have a way to identify your clients for any given version of your API.
    • maybe require a client_name (header?) parameter for every call to your service, and make sure to log it (see the sketch after this list)
  • Think about how to notify / contact your clients about any updates to your APIs
    • Do you have email addresses / contact details of all client app owners?
    • Internally, we use mailing lists for communicating changes, bugfixes, etc.
  • Along with any API, be sure to publish a deprecation and support policy
    • that way, client apps can plan ahead, ... and you have a better position to tell them "get off this version" :-)
  • For internal APIs, be sure to set expectations / responsibilities around which team is responsible for upgrading your client applications
    • Is the client app team responsible for acting on it, or will the service team be on the hook? (At LivingSocial, it's usually the service team that sends PRs to all client apps.)
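
A hedged sketch of the client_name idea (the header name and log format are assumptions):

	class ApplicationController < ActionController::Base
	  before_action :identify_client

	  private

	  # Reject anonymous callers and log who called what; these logs tell
	  # you which clients still use an API version you want to deprecate.
	  def identify_client
	    client = request.headers['X-Client-Name'] || params[:client_name]
	    return head(:bad_request) unless client

	    Rails.logger.info("api_client=#{client} path=#{request.path}")
	  end
	end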

Walk-through of an example:

Someone in our business department thinks it's a great idea to not only show a list of inventory items filtered by category (... tags, for us techies), but also to present the customer with a list of cities that our business department would like to categorize (... again, we as techies hear "tagging").

The engineers get together and think that we can probably best implement this by allowing things other than just inventory items to be tagged inside the existing tags-service.

Let's walk through the changes to tags-service (in the features/tag_entities branch) that were necessary to allow for tagging cities (or, in principle, arbitrary 'entities'), and to expose these capabilities as a V2 API ... all while keeping the V1 API unchanged. That means there will be no service interruption for inventory-service's usage of the v1 API endpoints.

The v2 API's swagger JSON spec is best viewed in tags-service's swagger UI by pointing it at this API spec:

http://tags-service-development.herokuapp.com/api_docs/v2/api-docs.json

By the way: we also changed cities-service (in the features/tag_cities branch) to have it call tags-service for tagging information about a given city, so that these tags can be displayed in the city JSON representation.

Exercise "API Versioning"

  • Use tags-service's API V2 to tag some cities with (existing, or newly created) tags.
    • hint: curl -v -H "Content-type: application/json" -X POST 'http://localhost:3000/api/v2/tags/bacon/tagged_items.json' -d '{"id":1, "item_type": "city"}'
  • Add a simple "show me all cities tagged with <tag_name>" page to the deals application, using data from cities-service and tags-service
