I was recently tasked with scraping data from a few thousand Word
documents, each of which contained nested docx documents that were
referenced by w:altChunk elements in the body of the main
./word/document.xml file, like so:
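A minimal sketch of what such a reference looks like inside the document body (the relationship id here is illustrative; the real id resolves to the nested docx via the part's relationships file):

```xml
<w:body>
  <!-- regular paragraphs, then a pointer to an embedded chunk: -->
  <w:altChunk r:id="rId7"/>
</w:body>
```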
// this requires `namedExports` to work with rollup:
import { join } from "lodash";
// this works with both webpack and rollup and yields a smaller package:
// import join from "lodash/join";
// this works everywhere, of course:
// import _ from "lodash";
// this works everywhere because it's an external module and isn't bundled by webpack or rollup:
{
  "requires": true,
  "lockfileVersion": 1,
  "dependencies": {
    "@types/bluebird": {
      "version": "3.5.23",
      "resolved": "https://registry.npmjs.org/@types/bluebird/-/bluebird-3.5.23.tgz",
      "integrity": "sha512-xlehmc6RT+wMEhy9ZqeqmozVmuFzTfsaV2NlfFFWhigy7n6sjMbUUB+SZBWK78lZgWHA4DBAdQvQxUvcB8N1tw=="
    },
    "@types/fs-extra": {
declare module "kld-intersections" {
  class Shape {}
  export class Point2D {
    constructor(x: number, y: number);
  }
  class Bezier2 extends Shape {
The standard typings for react-dnd suck. In particular:
- They ignore the fact that the props received by the wrapped component are a union of the props passed in to the react-dnd HOC and the props collected by the react-dnd collectors
- Items are assigned the type Object
- DropResults are assigned the type Object

These types are a work in progress, which means they suck too. They just suck a little less than the standard typings.
You're using the awesome JSDOM package to process a few hundred HTML files and your process crashes with an error like this:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
If you're sure your code is not accidentally holding on to a reference to (nodes from) prior window objects, then the most likely cause of the apparent memory leak is that
{
  "name": "svg.js-test",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "acorn": {
      "version": "5.7.1",
      "resolved": "https://registry.npmjs.org/acorn/-/acorn-5.7.1.tgz",
      "integrity": "sha512-d+nbxBUGKg7Arpsvbnlq61mc12ek3EY8EQldM3GPAhWJ1UVxC6TDGbIvUMNU6obBX3i1+ptCIzV4vq0gFPEGVQ=="
When you've got a data.frame in R and want to write it to CSV and then import it into a Python pandas instance, lazy code like this will fail to parse your string and date fields:
<some_file.R>
write.csv(my_data_frame, "my_data.csv", row.names=FALSE)
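On the pandas side, the symptom is that date columns come back as plain strings (dtype `object`) unless you tell `read_csv` which columns to parse. A small self-contained sketch (the column names are made up for illustration):

```python
import io

import pandas as pd

# Simulate the CSV that write.csv produces: dates are just text.
csv_text = "name,created_at\nalice,2018-07-01\nbob,2018-07-02\n"

# Lazy read: non-numeric columns land as dtype object (strings).
lazy = pd.read_csv(io.StringIO(csv_text))

# Explicit read: name the date columns so pandas parses them.
parsed = pd.read_csv(io.StringIO(csv_text), parse_dates=["created_at"])

print(lazy["created_at"].dtype)    # object
print(parsed["created_at"].dtype)  # datetime64[ns]
```

The same principle applies to any column whose type can't be inferred from the text alone: be explicit on the reading side, because `write.csv` preserves none of R's type information.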
First, the h in dhcast is for hierarchy: it's useful when you need
to (A) cast (go from long to wide data), where (B) casting should observe a
given variable hierarchy, and (optionally) aggregate your data (summarize related
records in the long dataset) at the same time.
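For orientation, a plain (non-hierarchical) cast with aggregation using reshape2's `dcast` looks roughly like this sketch; the data and column names are made up:

```r
library(reshape2)

# long data: one row per (customer, item) purchase
long <- data.frame(
  customer = c("a", "a", "b"),
  item     = c("apples", "apples", "kale"),
  qty      = c(1, 2, 5)
)

# cast long -> wide, summing duplicate (customer, item) rows
wide <- dcast(long, customer ~ item, value.var = "qty", fun.aggregate = sum)
```

What `dcast` cannot do on its own is respect a hierarchy among the casting variables, which is the gap dhcast is meant to fill.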
Let's say, for example, that we want to predict some time-varying attribute of a grocery store's customers based on their produce purchases. Furthermore, we want to