@khirotaka
Last active October 17, 2019 15:03
FROM pytorch/pytorch:0.4.1-cuda9-cudnn7-devel

# System packages: build tools, an editor, and the system-level Python and
# Java dependencies (default-jdk is required by python-weka-wrapper3).
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        build-essential \
        vim \
        python-pil \
        python-matplotlib \
        python-pygraphviz \
        default-jdk \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Python packages pinned to the versions listed in the memo below.
RUN conda install -y --channel conda-forge fasttsne && \
    pip install --user \
        Orange3==3.18.0 \
        pandas==0.23.4 \
        javabridge==1.0.17 \
        python-weka-wrapper3==0.1.6 \
        scikit-learn==0.20.0

# Reference implementation of "Unsupervised Scalable Representation Learning
# for Multivariate Time Series".
RUN git clone https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries.git
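
Assuming this file is saved as Dockerfile, the image can be built with docker build -t usrl . and started with nvidia-docker run -it usrl (the usrl tag is an arbitrary example here; GPU access needs the NVIDIA container runtime to match the CUDA 9 base image).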

List

Unsupervised Scalable Representation Learning for Multivariate Time Series

Abstract

Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice. In this paper, we tackle this challenge by proposing an unsupervised method to learn universal embeddings of time series. Unlike previous works, it is scalable with respect to their length and we demonstrate the quality, transferability and practicability of the learned representations with thorough experiments and comparisons. To this end, we combine an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
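
The encoder side of the method is a stack of causal dilated convolutions whose output is pooled over time, which is what makes the representation length-invariant. Below is a minimal PyTorch sketch of that idea, written against the torch 0.4.1 API pinned above; the block names, channel counts, and depth (CausalConvBlock, Encoder, hidden=40, depth=4) are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """1D convolution made causal by padding only on the left (the past)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super(CausalConvBlock, self).__init__()
        self.pad = (kernel_size - 1) * dilation    # amount of left padding
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                          # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))                # pad the past, never the future
        return self.relu(self.conv(x))

class Encoder(nn.Module):
    """Causal dilated conv stack + max pooling over time -> fixed-size embedding."""
    def __init__(self, in_ch, hidden=40, depth=4, out_dim=160):
        super(Encoder, self).__init__()
        layers, ch = [], in_ch
        for i in range(depth):
            layers.append(CausalConvBlock(ch, hidden, dilation=2 ** i))
            ch = hidden
        self.network = nn.Sequential(*layers)
        self.linear = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = self.network(x)                        # (batch, hidden, time)
        h, _ = torch.max(h, dim=2)                 # pool over time: length-invariant
        return self.linear(h)                      # (batch, out_dim)

For example, Encoder(in_ch=1)(torch.randn(8, 1, 125)) returns an (8, 160) embedding, and the same module accepts any input length.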

Memo

  • Python 3.6
  • NumPy (numpy) v1.15.2
  • Matplotlib (matplotlib) v3.0.0
  • Orange (Orange) v3.18.0
  • pandas (pandas) v0.23.4
  • python-weka-wrapper3 v0.1.6 (for multivariate time series)
  • PyTorch (torch) v0.4.1 with CUDA 9.0
  • scikit-learn (sklearn) v0.20.0
  • SciPy (scipy) v1.1.0
  • UCI/UCR datasets
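
The other half of the method is the triplet loss with time-based negative sampling: windows from the same series are pulled together and windows from other series are pushed apart. Below is a simplified sketch under assumed shapes (batch as (n_series, channels, time)); the function name and window parameters are made up here, and the paper's actual sampling is more elaborate (positives are subseries of the anchor's window, with random lengths).

import random
import torch
import torch.nn.functional as F

def time_based_triplet_loss(encoder, batch, window=64, n_negatives=4):
    """Simplified triplet loss with time-based negative sampling."""
    n, _, t = batch.shape
    s_a = random.randint(0, t - window)            # anchor window start
    s_p = random.randint(0, t - window)            # positive: same series, another time
    anchor = encoder(batch[:, :, s_a:s_a + window])
    positive = encoder(batch[:, :, s_p:s_p + window])
    # Pull the anchor toward the positive (maximize their dot product).
    loss = -F.logsigmoid((anchor * positive).sum(dim=1)).mean()
    for _ in range(n_negatives):
        s_n = random.randint(0, t - window)
        perm = torch.randperm(n)                   # windows from (mostly) other series
        negative = encoder(batch[perm][:, :, s_n:s_n + window])
        # Push the anchor away from each negative.
        loss = loss - F.logsigmoid(-(anchor * negative).sum(dim=1)).mean()
    return loss

With the sketched Encoder above, time_based_triplet_loss(Encoder(in_ch=1), torch.randn(16, 1, 256)) returns a scalar ready for backward().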

Representation Flow for Action Recognition

Abstract

In this paper, we propose a convolutional layer inspired by optical flow algorithms to learn motion representations. Our representation flow layer is a fully-differentiable layer designed to capture the ‘flow’ of any representation channel within a convolutional neural network for action recognition. Its parameters for iterative flow optimization are learned in an end-to-end fashion together with the other CNN model parameters, maximizing the action recognition performance. Furthermore, we newly introduce the concept of learning ‘flow of flow’ representations by stacking multiple representation flow layers. We conducted extensive experimental evaluations, confirming its advantages over previous recognition models using traditional optical flows in both computational speed and performance. The code is publicly available.
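
As a reading aid, here is a heavily simplified differentiable sketch of the core idea: a few gradient-descent iterations on the brightness-constancy residual between two consecutive feature maps, with the step size exposed as a learned parameter so the flow optimizer is trained end-to-end. The actual layer uses a TV-L1-style update with several learned hyper-parameters; the class name, Sobel kernels, and iteration count below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RepresentationFlow(nn.Module):
    """Toy flow layer: iterative flow refinement on CNN feature maps."""
    def __init__(self, n_iter=10):
        super(RepresentationFlow, self).__init__()
        self.n_iter = n_iter
        self.step = nn.Parameter(torch.tensor(0.1))    # learned update step size
        sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8
        self.register_buffer("kx", sobel.view(1, 1, 3, 3))                   # d/dx
        self.register_buffer("ky", sobel.t().contiguous().view(1, 1, 3, 3))  # d/dy

    def spatial_grad(self, f, k):
        b, c, h, w = f.shape
        g = F.conv2d(f.contiguous().view(b * c, 1, h, w), k, padding=1)
        return g.view(b, c, h, w)

    def forward(self, f1, f2):                         # two consecutive feature maps
        u = torch.zeros_like(f1)                       # horizontal flow per channel
        v = torch.zeros_like(f1)                       # vertical flow per channel
        ix = self.spatial_grad(f1, self.kx)
        iy = self.spatial_grad(f1, self.ky)
        it = f2 - f1                                   # temporal derivative
        for _ in range(self.n_iter):
            r = ix * u + iy * v + it                   # brightness-constancy residual
            u = u - self.step * r * ix                 # gradient step on 0.5 * r ** 2
            v = v - self.step * r * iy
        return torch.cat([u, v], dim=1)                # flow stacked as extra channels

For two feature maps of shape (2, 8, 14, 14), the layer returns a (2, 16, 14, 14) flow tensor that later convolutions can consume, and self.step receives gradients from the recognition loss.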

Memo

Dataset

  1. Kinetics human action video dataset.
  2. HMDB: a large video database for human motion recognition