The second part of the series takes an in-depth look at how we structure source code and ship components in the context of a monorepo. There is a big difference between scaffolding with 'polymer init' and having a codebase that is easy to release and still maintainable over a longer period of time. A well-organized codebase of components reduces development drag and lowers cost.
The web component resides in an ES6 JavaScript module and describes its dependencies with an npm package.json file. Here we can reference other dependencies that come to us as ES6 modules. This is where we hit the first little catch-22: Node.js cannot yet interpret ES6 files with a .js extension and load them via the import keyword without third-party tools (Babel et al.). This functionality is only used in the browser, and polymer build is the tool of choice to bundle all files. There is also a polymer lint tool that checks for code style errors and understands the specifics of Polymer-based web components; it can be complemented by additional rules baked into tools like ESLint.
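To make this concrete, here is a minimal sketch of what such a component module can look like. The element name, class and properties are hypothetical; the point is that the component is a plain ES6 module whose bare import specifier is resolved from the dependencies declared in package.json, and that registration uses the browser's built-in custom elements API.

```javascript
// my-badge.js - a minimal sketch of a component module (all names are hypothetical)
// The bare specifier below is resolved via the dependencies listed in package.json.
import {PolymerElement, html} from '@polymer/polymer/polymer-element.js';

class MyBadge extends PolymerElement {
  static get template() {
    return html`<span>[[count]]</span>`;
  }

  static get properties() {
    return {
      // deserializes the "count" attribute into a numeric property
      count: {type: Number, value: 0}
    };
  }
}

// registering the element uses the browser's built-in custom elements API
customElements.define('my-badge', MyBadge);
```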
The full modern ES spec applies: make sure your JavaScript kung fu is up to date and write modern JavaScript instead of mixing old and new style code. If you find yourself implementing a lot of logic inside your components, bear in mind that documenting, testing and deploying a component is more effort, and more pieces to juggle, than doing the same for a simple ES6 module. Every line of logic you put into a component is something you pay for (in wait time and test/release infrastructure). Moving that logic into plain ES6 modules also enables code sharing and reduces copy and paste, as the sketch below shows. Releasing small JavaScript modules on npm is an easy-to-learn craft that just needs a bit of attention, tooling and consistency in what you are doing.
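As a sketch of what "logic in a plain module" means here (the helper and file names are made up): a formatting rule lives in its own file, can be unit-tested without any web component machinery, and is simply imported by every element that needs it.

```javascript
// format.js - a hypothetical shared helper, released and tested as a plain ES6 module
// Inside a component: import {formatPrice} from './format.js';
export function formatPrice(cents, currency = 'EUR') {
  // keeps the formatting rule in one place instead of copy-pasting it into components
  return new Intl.NumberFormat('de-DE', {style: 'currency', currency}).format(cents / 100);
}
```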
- polymer build is a good default
- A build is required for deployed elements
- You might have multiple builds
- The PRPL pattern allows progressive apps that push critical resources, render the initial route fast, pre-cache the remaining routes and lazy-load them on demand
In order to deploy the component to a website (a project using it or a demonstration site) you need to run a dedicated build script. This script resides in the consuming app, in our case this is
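A sketch of what that build configuration could look like in the consuming app's polymer.json, covering the multiple builds mentioned above; the entrypoint, shell and build names are assumptions, while the presets are the ones polymer build ships with.

```json
{
  "entrypoint": "index.html",
  "shell": "src/my-app.js",
  "builds": [
    {"name": "modern", "preset": "es6-unbundled"},
    {"name": "legacy", "preset": "es5-bundled"}
  ]
}
```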
Given a modern browser, a Polymer component should be easy to load directly in the browser, without any helper tool. The JavaScript spec allows the import keyword to load dependencies, and the API to register web components is baked into the browser as well.
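A minimal sketch of that direct-in-browser loading (the file and element names are the hypothetical ones from above; bare npm specifiers still need a dev server or build step to be rewritten, so the demo imports by relative path):

```html
<!doctype html>
<html>
  <body>
    <!-- loading the component module directly; no bundler involved -->
    <script type="module" src="./my-badge.js"></script>
    <my-badge count="3"></my-badge>
  </body>
</html>
```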
- Unit-like tests
- test attributes, properties and events
- avoid testing HTTP backends - use mocks instead
The Selenium-based toolkit from Polymer is not the fastest and relies on several binaries to be downloaded beforehand. The Selenium server gets started, a bundle is created and the tests are executed. It takes a while and does not really scale over multiple cores. The server can probably be kept open, but that is in no way a documented standard feature.
Focusing on low-hanging fruit is OK at first: test defined attributes and events fired on interactions, as in the sketch below. Make sure to use the helpers for i18n and a11y early on, as introducing them later costs far more effort than you save initially.
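A sketch of such a unit-like test in the Mocha/Chai style that web-component-tester sets up; the <my-badge> element and its API are assumptions carried over from the earlier example.

```javascript
suite('my-badge', () => {
  let el;

  setup(() => {
    // create a fresh element instance for every test
    el = document.createElement('my-badge');
    document.body.appendChild(el);
  });

  teardown(() => {
    el.remove();
  });

  test('deserializes the count attribute into its property', () => {
    el.setAttribute('count', '3');
    assert.equal(el.count, 3);
  });

  test('fires an event on interaction', (done) => {
    // hypothetical event fired by the element when it is clicked
    el.addEventListener('badge-clicked', () => done());
    el.click();
  });
});
```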
You might want to keep an eye on HTTP interactions and mocks for them, to avoid making external HTTP calls; I will get back to that topic a little later when we look at API abstractions and REST. In general I prefer to skip the respective iron-ajax elements, as they invite adding state to the component that then has to be managed, and stick to fetch calls, since the modern JS spec covers all we need on a language level. The topic needs to be covered if you are working with APIs.
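A sketch of the kind of mock I mean: replace window.fetch in the test setup so the component under test never makes a real network call (the response payload here is made up).

```javascript
let originalFetch;

setup(() => {
  originalFetch = window.fetch;
  // every fetch in this suite resolves with a canned JSON response
  window.fetch = () =>
    Promise.resolve(new Response(JSON.stringify({items: []}), {
      status: 200,
      headers: {'Content-Type': 'application/json'}
    }));
});

teardown(() => {
  // restore the real fetch so other suites are not affected
  window.fetch = originalFetch;
});
```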
In order to keep the dependencies small, I suggest sticking to basic documentation of the element and keeping more detailed instructions in the element catalogue. The cut should be made where too many different cases are covered or too many other elements are needed to show a feature. A complete API documentation and a basic glimpse of what the element is about are needed on the element level. Complex elements with many use cases can require complete demo applications. This is where things get tricky.
Stick to hard linting using polymer lint, and maybe an even more paranoid setup encompassing more style-oriented rules for your team. The goal is to weed out issues that break code early on. Run linting before the tests execute; as soon as your test suite passes, you can be fairly sure a visual test is very likely to work. As cumbersome as regular automated checks might seem at first, compare them to manual testing, especially when components end up using other components.
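Wiring that order into npm scripts is enough to enforce it: npm runs a pretest script automatically before test, so linting always happens first. A sketch (the test command is whatever your setup uses):

```json
{
  "scripts": {
    "lint": "polymer lint",
    "pretest": "npm run lint",
    "test": "polymer test"
  }
}
```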
- Components get published to npm
- Use an npm organization for the components (see the sketch after this list)
- Documentation gets deployed as a "website"
- Post-deployment test
- Simple smoke test that ensures the deployment is complete
- Bump the version a notch
- Build docs
- Deploy docs, including a post-deployment check
- Publish to npm
- Ping everyone
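As a sketch of how the npm side of this list can look in a component's package.json: a scoped name under the organization plus an explicit publish config, and npm lifecycle scripts hooking the doc build and post-deployment check into version bump and publish. The organization name and the referenced scripts (build:docs, deploy:docs, smoke-test) are assumptions.

```json
{
  "name": "@acme/my-badge",
  "version": "1.4.2",
  "publishConfig": {
    "access": "public"
  },
  "scripts": {
    "version": "npm run build:docs",
    "postpublish": "npm run deploy:docs && npm run smoke-test"
  }
}
```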
- Document every Polymer component and generate docs into an analysis.json file (see the sketch after this list)
- Document every extra JavaScript file as well
- Either inline all icons into one module-local file, or document the used iconography in an extra file
- Using exact versions helps drastically to document dependencies in a repeatable way
- Generate a changelog
- Have a Readme.md linking demos, docs and releases, and containing a basic outline of the development process
- Use npm keywords wisely
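Two of these points map directly onto package.json: an analyze script that writes the generated documentation data to analysis.json (polymer analyze prints to stdout), and dependencies pinned to exact versions instead of ranges. A sketch, with made-up package names:

```json
{
  "scripts": {
    "analyze": "polymer analyze > analysis.json"
  },
  "dependencies": {
    "@polymer/polymer": "3.4.1",
    "@acme/format": "1.2.0"
  }
}
```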