https://jany.st/post/2018-02-05-cross-compiling-tensorflow-for-jetson-tx1-with-bazel.html
/**
 * MIT License
 *
 * Copyright (c) 2018 Dat Nguyen
 *
 * Trie (aka Prefix Tree) is a type of tree data structure. A node contains a
 * list of child nodes.
 *
 * Applications:
 * =============
 * * Word completion: quickly validate whether a word is correctly typed.
 */
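The header above belongs to a C++ implementation; as a quick illustration of the structure it describes, here is a minimal Python sketch of a trie with insertion and exact-word lookup (the class and method names are mine, not the original file's):

class TrieNode:
    def __init__(self):
        self.children = {}    # maps a character to a child TrieNode
        self.is_word = False  # marks the end of a complete word


class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        """Word completion check: has `word` been inserted?"""
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word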
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "(lldb) Debug binary_tree",
      "type": "cppdbg",
      // The remaining fields were cut off in the original; these are typical
      // values for an lldb launch configuration (the program path is assumed).
      "request": "launch",
      "program": "${workspaceFolder}/binary_tree",
      "cwd": "${workspaceFolder}",
      "MIMode": "lldb"
    }
  ]
}
"""Move Forward | |
Start from the first element in the array (A), index is i. Move forward by A[i] steps max. | |
The algorithm is to return true/false, to indicate whether we can move from the first element | |
to the last element in the array | |
A = [2, 0, 1, 2, 0, 3] --> True | |
# Assumption: | |
------------- | |
* A[i] > 0 and there are [1 ... N] possible steps A[i] can move if A[i] = N. |
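The implementation itself is not shown in the excerpt, so the following is a sketch of one standard (greedy) way to solve the problem; the function name is mine:

def can_move_forward(A):
    """Return True if the last index of A is reachable from index 0."""
    reachable = 0  # furthest index reachable so far
    for i, steps in enumerate(A):
        if i > reachable:  # we got stuck before reaching index i
            return False
        reachable = max(reachable, i + steps)
    return reachable >= len(A) - 1


assert can_move_forward([2, 0, 1, 2, 0, 3]) is True
assert can_move_forward([1, 0, 3]) is False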
"""Give a list of image filenames, this script compute the mean and std over all the images. | |
Requires: | |
* OpenCV | |
* tqdm | |
In this example, I use `Stanford Dogs Datasets" (~20k images) | |
Example Outputs: (8 cores CPU i7 4970K) | |
(env) dat@desktop:****/StanfordDogs$ python compute_mean_std.py |
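The body of the script is not shown, so here is a single-process sketch of the computation (the original apparently parallelises over CPU cores); the function names and the per-channel BGR layout are assumptions:

import cv2
import numpy as np
from tqdm import tqdm


def image_sums(filename):
    # Read as BGR, scale to [0, 1], and return per-channel sums plus pixel count.
    img = cv2.imread(filename).astype(np.float64) / 255.0
    pixels = img.reshape(-1, 3)
    return pixels.sum(axis=0), (pixels ** 2).sum(axis=0), pixels.shape[0]


def compute_mean_std(filenames):
    total, total_sq, count = np.zeros(3), np.zeros(3), 0
    for fname in tqdm(filenames):
        s, sq, n = image_sums(fname)
        total, total_sq, count = total + s, total_sq + sq, count + n
    mean = total / count
    std = np.sqrt(total_sq / count - mean ** 2)  # Var[x] = E[x^2] - E[x]^2
    return mean, std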
""" | |
Problem: | |
-------- | |
We would like to explore which method perform row iteration in the most efficient way. | |
Create a data frammes with 3 columns and 100,000 rows | |
Results: | |
-------- | |
vector: Iterated over 100000 rows in 0.029180 | Sample at idx [0]: (1, 100000) | |
zip: Iterated over 100000 rows in 0.073447 | Sample at idx [0]: [1, 100000] |
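The benchmark code itself is not included, so below is an illustrative reconstruction; the column names, the generated data, and the way each variant builds its rows are assumptions:

import time
import numpy as np
import pandas as pd

N = 100000
df = pd.DataFrame({"a": np.arange(1, N + 1),
                   "b": np.arange(N, 0, -1),
                   "c": np.random.rand(N)})


def benchmark(name, make_iterator):
    start = time.time()
    rows = list(make_iterator())
    print("%s: Iterated over %d rows in %f | Sample at idx [0]: %s"
          % (name, len(rows), time.time() - start, rows[0]))


# "vector": zip over the underlying NumPy arrays (the fastest in the results above).
benchmark("vector", lambda: zip(df["a"].values, df["b"].values))
# "zip": zip the pandas Series directly, building a list per row.
benchmark("zip", lambda: (list(pair) for pair in zip(df["a"], df["b"])))
# A common alternative for comparison.
benchmark("itertuples", lambda: df.itertuples(index=False))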
ffmpeg -framerate 25 -i img%05d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4 |
""" | |
Thiss script would convert a pre-trained TF model to a servable version for TF Serving. | |
A pre-trained model can be downloaded here | |
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo | |
Requirements: | |
* A directory contains pretrained model (can be download above). | |
* Edit three arguments `frozen_graph`, `model_name`, `base_dir` accordingly |
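The conversion code is not shown in the excerpt. A sketch of one common way to do this in TF 1.x follows; the example paths are placeholders, and the input/output tensor names are the standard ones used by the object detection zoo graphs (assumptions, not taken from the original script):

import tensorflow as tf

# Placeholder values mirroring the three arguments named in the docstring.
frozen_graph = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"
model_name = "ssd_mobilenet_v1_coco"
base_dir = "/tmp/serving_models"
version = 1

export_dir = "%s/%s/%d" % (base_dir, model_name, version)

with tf.Graph().as_default() as graph:
    with tf.gfile.GFile(frozen_graph, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=graph) as sess:
        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        # Standard input/output tensors of the detection zoo graphs.
        inputs = {"inputs": graph.get_tensor_by_name("image_tensor:0")}
        outputs = {name: graph.get_tensor_by_name(name + ":0")
                   for name in ["detection_boxes", "detection_scores",
                                "detection_classes", "num_detections"]}
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs=inputs, outputs=outputs)
        builder.add_meta_graph_and_variables(
            sess,
            tags=[tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
            })
        builder.save()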
"""An example of how to use tf.Dataset in Keras Model""" | |
import tensorflow as tf # only work from tensorflow==1.9.0-rc1 and after | |
_EPOCHS = 5 | |
_NUM_CLASSES = 10 | |
_BATCH_SIZE = 128 | |
def training_pipeline(): | |
# ############# | |
# Load Dataset |
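    # (The original gist is cut off at this point; the rest of this function is a
    #  sketch. The MNIST data and the small dense model are assumptions, not the
    #  original code.)
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    y_train = tf.keras.utils.to_categorical(y_train, _NUM_CLASSES)

    dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
               .shuffle(buffer_size=10000)
               .batch(_BATCH_SIZE)
               .repeat())

    # #############
    # Build a small model and pass the dataset directly to fit()
    # (supported by tf.keras from TF 1.9 onwards).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(_NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(dataset, epochs=_EPOCHS,
              steps_per_epoch=len(x_train) // _BATCH_SIZE)


if __name__ == "__main__":
    training_pipeline()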
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork a repository and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes, or simply not know what to do, when you're first learning the process. I certainly had considerable trouble with it at first, and I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete: part of the process described here, another part there, common hangups in a different place, and so on.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo.
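For reference, the command-line side of that standard flow looks roughly like this; the repository URLs and branch name below are placeholders, not taken from the tutorial:

# Clone your fork, then keep a reference to the original project so you can sync with it later.
git clone https://github.com/<your-username>/<project>.git
cd <project>
git remote add upstream https://github.com/<original-owner>/<project>.git

# Do your work on a topic branch rather than on master.
git checkout -b my-feature
# ... edit and commit ...
git push origin my-feature   # then open the pull request from GitHub's web UI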