{% paginate collection.products by 16 %}
<div class="products__collection">
<ul class="product collection__grid">
{% for product in collection.products %}
{% assign prod_id = forloop.index | plus:paginate.current_offset %}
{% include 'product-grid-item' with prod_id %}
import logging

from airflow.models import DagBag


def callback_subdag_clear(context):
    """Clears a subdag's tasks on retry."""
    dag_id = "{}.{}".format(
        context['dag'].dag_id,
        context['ti'].task_id,
    )
    execution_date = context['execution_date']
    # Fetch the subdag from the DagBag and clear its task instances for this run.
    subdag = DagBag().get_dag(dag_id)
    subdag.clear(start_date=execution_date, end_date=execution_date,
                 only_failed=False, only_running=False, confirm_prompt=False)
#!/bin/bash
# Runs airflow-dags tests.
# Set Nose defaults if no arguments are passed from CLI.
CWD="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
NOSE_ARGS=$@
if [ -z "$NOSE_ARGS" ]; then
  NOSE_ARGS=" \
  --with-coverage \
The documentation for how to deploy a pipeline with extra, non-PyPI, pure-Python packages on GCP is missing some detail. This gist shows how to package and deploy an external pure-Python, non-PyPI dependency to a managed Dataflow pipeline on GCP.

TL;DR: Your external package needs to be a Python (source/binary) distribution, properly packaged and shipped alongside your pipeline. It is not enough to only specify a tar file with a setup.py.

Your external package must have a proper setup.py. What follows is an example setup.py for our ETL package. It is used to package version 1.1.1 of the etl library. The library requires three native PyPI packages to run; these are specified in the install_requires field. The package also ships with custom external JSON data, declared in the package_data section. Finally, the setuptools.find_packages function searches for all available packages and returns them.
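The gist's actual setup.py is not reproduced here, so the following is a minimal sketch of the file the paragraph describes; the three dependency names and the JSON data path are placeholders rather than the etl library's real requirements.

import setuptools

setuptools.setup(
    name='etl',
    version='1.1.1',
    # Placeholder requirements -- the gist's three native PyPI packages go here.
    install_requires=[
        'requests',
        'python-dateutil',
        'pytz',
    ],
    # Ship the custom external JSON data with the package (path is illustrative).
    package_data={
        'etl': ['data/*.json'],
    },
    # Discover all sub-packages of the library automatically.
    packages=setuptools.find_packages(),
)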
# Install brew
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

# Install composer
brew install homebrew/php/composer

### PHPCS
composer global require "squizlabs/php_codesniffer=*"

# Add to your .bash_profile
# In your Jenkins job configuration, select "Add build step > Execute shell", and paste this script's contents.
# Replace `______your-plugin-name______`, `______your-wp-username______` and `______your-wp-password______` as needed.

# main config
WP_ORG_USER="______your-wp-username______" # your WordPress.org username
WP_ORG_PASS="______your-wp-password______" # your WordPress.org password
PLUGINSLUG="______your-plugin-name______"
CURRENTDIR=`pwd`
MAINFILE="______your-plugin-name______.php" # this should be the name of the main PHP file in your WordPress plugin
from pyspark.serializers import AutoBatchedSerializer, PickleSerializer


# Function to convert Python objects to Java objects
def _to_java_object_rdd(rdd):
    """Return a JavaRDD of Object by unpickling.

    It will convert each Python object into a Java object by Pyrolite, whether
    the RDD is serialized in batch or not.
    """
    rdd = rdd._reserialize(AutoBatchedSerializer(PickleSerializer()))
    return rdd.ctx._jvm.org.apache.spark.mllib.api.python.SerDe.pythonToJava(rdd._jrdd, True)

# Convert DataFrame to an RDD
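The snippet stops right as it begins converting a DataFrame. As a rough usage sketch (not part of the original gist), assuming a Spark 2.x setup where the mllib SerDe path used above is available, the helper could be applied to a DataFrame's underlying RDD like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("to-java-rdd-sketch").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])

# Map Rows to plain Python lists first so Pyrolite can unpickle them on the
# JVM side without needing any pyspark-specific classes.
java_rdd = _to_java_object_rdd(df.rdd.map(lambda row: list(row)))
print(java_rdd.count())  # JavaRDD methods are reachable through Py4J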