@jskDr
Last active April 10, 2016

Simple example of Skflow (Sklearn + Tensorflow)
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example of Skflow (Sklearn + Tensorflow)\n",
"- Skflow and NumPy are imported for machine learning and data generation, respectively."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"import skflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Modeling\n",
"A simple two-feature dataset, including an interaction term and Gaussian noise, is generated."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"((100, 2), (100,))"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = np.random.randn(100,2)\n",
"y = X[:,0]*0.1 + X[:,1]*0.5 + X[:,0]*X[:,1]*0.5 + np.random.randn(X.shape[0])*0.1\n",
"X.shape, y.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Skflow\n",
"### Tensorflow Linear Regressor\n",
"- As a simple example, a linear regressor trained *adaptively* by gradient descent is tested. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Step #100, epoch #25, avg. train loss: 0.25266\n",
"Step #200, epoch #50, avg. train loss: 0.17804\n"
]
},
{
"data": {
"text/plain": [
"0.39044678864627635"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lr = skflow.TensorFlowLinearRegressor()\n",
"lr.fit(X, y)\n",
"#lr.predict( X)\n",
"lr.score(X, y)"
]
},
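{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `score` above is the coefficient of determination $R^2 = 1 - SS_{res}/SS_{tot}$ (Skflow regressors follow the Sklearn scoring convention). As a quick sketch, not part of the original gist, it can be recomputed by hand from the regressor's predictions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: recompute R^2 manually; this should match lr.score(X, y)\n",
"y_pred = lr.predict(X).ravel()\n",
"ss_res = np.sum((y - y_pred)**2)\n",
"ss_tot = np.sum((y - np.mean(y))**2)\n",
"1 - ss_res/ss_tot"
]
},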
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### Comparing with Sklearn\n",
"- Here a comparison with plain Sklearn is provided for reference; this part is not needed for Skflow (Sklearn + Tensorflow) itself."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.43013970416637026"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn import linear_model\n",
"\n",
"lr_sk = linear_model.LinearRegression()\n",
"lr_sk.fit(X, y)\n",
"lr_sk.score(X, y)"
]
},
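{
"cell_type": "markdown",
"metadata": {},
"source": [
"Both linear models miss the $x_0 x_1$ interaction term baked into the data, which caps their scores well below 1. As a sketch, not part of the original gist, expanding the features with `PolynomialFeatures` lets plain Sklearn capture the interaction:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: add the interaction term explicitly so a linear model can fit it\n",
"from sklearn.preprocessing import PolynomialFeatures\n",
"\n",
"X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)\n",
"lr_poly = linear_model.LinearRegression()\n",
"lr_poly.fit(X_poly, y)\n",
"lr_poly.score(X_poly, y)"
]
},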
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### Tensorflow DNN Regressor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### No Validation Example"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Step #100, epoch #25, avg. train loss: 0.09820\n",
"Step #200, epoch #50, avg. train loss: 0.02068\n",
"Step #300, epoch #75, avg. train loss: 0.01647\n",
"Step #400, epoch #100, avg. train loss: 0.01374\n",
"Step #500, epoch #125, avg. train loss: 0.01311\n",
"Step #600, epoch #150, avg. train loss: 0.01281\n",
"Step #700, epoch #175, avg. train loss: 0.00998\n",
"Step #800, epoch #200, avg. train loss: 0.01160\n",
"Step #900, epoch #225, avg. train loss: 0.01019\n",
"Step #1000, epoch #250, avg. train loss: 0.00893\n"
]
},
{
"data": {
"text/plain": [
"0.97070717726486322"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dnn_lr = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=1000)\n",
"dnn_lr.fit(X, y)\n",
"dnn_lr.score(X, y)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"#### Validation Example"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Step #100, epoch #50, avg. train loss: 0.08954\n",
"Step #200, epoch #100, avg. train loss: 0.02279\n",
"Step #300, epoch #150, avg. train loss: 0.01784\n",
"Step #400, epoch #200, avg. train loss: 0.01357\n",
"Step #500, epoch #250, avg. train loss: 0.01114\n",
"Step #600, epoch #300, avg. train loss: 0.00976\n",
"Step #700, epoch #350, avg. train loss: 0.00904\n",
"Step #800, epoch #400, avg. train loss: 0.00872\n",
"Step #900, epoch #450, avg. train loss: 0.00860\n",
"Step #1000, epoch #500, avg. train loss: 0.00810\n",
"Step #1100, epoch #550, avg. train loss: 0.00804\n",
"Step #1200, epoch #600, avg. train loss: 0.00784\n",
"Step #1300, epoch #650, avg. train loss: 0.00760\n",
"Step #1400, epoch #700, avg. train loss: 0.00767\n",
"Step #1500, epoch #750, avg. train loss: 0.00755\n",
"Step #1600, epoch #800, avg. train loss: 0.00749\n",
"Step #1700, epoch #850, avg. train loss: 0.00754\n",
"Step #1800, epoch #900, avg. train loss: 0.00708\n",
"Step #1900, epoch #950, avg. train loss: 0.00718\n",
"Step #2000, epoch #1000, avg. train loss: 0.00713\n",
"Step #2100, epoch #1050, avg. train loss: 0.00723\n",
"Step #2200, epoch #1100, avg. train loss: 0.00690\n",
"Step #2300, epoch #1150, avg. train loss: 0.00666\n",
"Step #2400, epoch #1200, avg. train loss: 0.00683\n",
"Step #2500, epoch #1250, avg. train loss: 0.00703\n",
"Step #2600, epoch #1300, avg. train loss: 0.00668\n",
"Step #2700, epoch #1350, avg. train loss: 0.00675\n",
"Step #2800, epoch #1400, avg. train loss: 0.00672\n",
"Step #2900, epoch #1450, avg. train loss: 0.00635\n",
"Step #3000, epoch #1500, avg. train loss: 0.00657\n",
"Step #3100, epoch #1550, avg. train loss: 0.00651\n",
"Step #3200, epoch #1600, avg. train loss: 0.00624\n",
"Step #3300, epoch #1650, avg. train loss: 0.00631\n",
"Step #3400, epoch #1700, avg. train loss: 0.00645\n",
"Step #3500, epoch #1750, avg. train loss: 0.00639\n",
"Step #3600, epoch #1800, avg. train loss: 0.00600\n",
"Step #3700, epoch #1850, avg. train loss: 0.00620\n",
"Step #3800, epoch #1900, avg. train loss: 0.00593\n",
"Step #3900, epoch #1950, avg. train loss: 0.00581\n",
"Step #4000, epoch #2000, avg. train loss: 0.00631\n",
"Step #4100, epoch #2050, avg. train loss: 0.00600\n",
"Step #4200, epoch #2100, avg. train loss: 0.00596\n",
"Step #4300, epoch #2150, avg. train loss: 0.00589\n",
"Step #4400, epoch #2200, avg. train loss: 0.00562\n",
"Step #4500, epoch #2250, avg. train loss: 0.00568\n",
"Step #4600, epoch #2300, avg. train loss: 0.00578\n",
"Step #4700, epoch #2350, avg. train loss: 0.00588\n",
"Step #4800, epoch #2400, avg. train loss: 0.00568\n",
"Step #4900, epoch #2450, avg. train loss: 0.00597\n",
"Step #5000, epoch #2500, avg. train loss: 0.00596\n",
"Sc1: train, test ==> 0.982527671463 0.930882513244\n",
"Step #100, epoch #50, avg. train loss: 0.08954, avg. val loss: 0.06130\n",
"Step #200, epoch #100, avg. train loss: 0.02279, avg. val loss: 0.02545\n",
"Step #300, epoch #150, avg. train loss: 0.01784, avg. val loss: 0.02095\n",
"Step #400, epoch #200, avg. train loss: 0.01357, avg. val loss: 0.01788\n",
"Step #500, epoch #250, avg. train loss: 0.01114, avg. val loss: 0.01611\n",
"Step #600, epoch #300, avg. train loss: 0.00976, avg. val loss: 0.01517\n",
"Step #700, epoch #350, avg. train loss: 0.00904, avg. val loss: 0.01464\n",
"Step #800, epoch #400, avg. train loss: 0.00872, avg. val loss: 0.01427\n",
"Step #900, epoch #450, avg. train loss: 0.00860, avg. val loss: 0.01423\n",
"Step #1000, epoch #500, avg. train loss: 0.00810, avg. val loss: 0.01373\n",
"Step #1100, epoch #550, avg. train loss: 0.00804, avg. val loss: 0.01357\n",
"Step #1200, epoch #600, avg. train loss: 0.00784, avg. val loss: 0.01303\n",
"Step #1300, epoch #650, avg. train loss: 0.00760, avg. val loss: 0.01275\n",
"Step #1400, epoch #700, avg. train loss: 0.00767, avg. val loss: 0.01263\n",
"Step #1500, epoch #750, avg. train loss: 0.00755, avg. val loss: 0.01263\n",
"Step #1600, epoch #800, avg. train loss: 0.00749, avg. val loss: 0.01235\n",
"Step #1700, epoch #850, avg. train loss: 0.00754, avg. val loss: 0.01232\n",
"Step #1800, epoch #900, avg. train loss: 0.00708, avg. val loss: 0.01211\n",
"Step #1900, epoch #950, avg. train loss: 0.00718, avg. val loss: 0.01207\n",
"Step #2000, epoch #1000, avg. train loss: 0.00713, avg. val loss: 0.01185\n",
"Step #2100, epoch #1050, avg. train loss: 0.00723, avg. val loss: 0.01179\n",
"Step #2200, epoch #1100, avg. train loss: 0.00690, avg. val loss: 0.01173\n",
"Step #2300, epoch #1150, avg. train loss: 0.00666, avg. val loss: 0.01149\n",
"Step #2400, epoch #1200, avg. train loss: 0.00683, avg. val loss: 0.01178\n",
"Step #2500, epoch #1250, avg. train loss: 0.00703, avg. val loss: 0.01230\n",
"Step #2600, epoch #1300, avg. train loss: 0.00668, avg. val loss: 0.01238\n",
"Step #2700, epoch #1350, avg. train loss: 0.00675, avg. val loss: 0.01254\n",
"Step #2800, epoch #1400, avg. train loss: 0.00672, avg. val loss: 0.01281\n",
"Sc2: train, test ==> 0.982173184945 0.949184821666\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Stopping. Best step:\n",
" step 2341 with loss 0.009069059044122696\n"
]
}
],
"source": [
"# Note: in scikit-learn >= 0.18, train_test_split moved to sklearn.model_selection\n",
"from sklearn.cross_validation import train_test_split\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X,\n",
" y,\n",
" test_size=0.2,\n",
" random_state=42)\n",
"\n",
"X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,\n",
" test_size=0.2, random_state=42)\n",
"\n",
"val_monitor = skflow.monitors.ValidationMonitor(X_val, y_val,\n",
" early_stopping_rounds=500,\n",
" n_classes=0)\n",
"\n",
"\n",
"dnn_lr1 = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=5000)\n",
"dnn_lr1.fit(X_train, y_train)\n",
"sc1_train = dnn_lr1.score(X_train, y_train)\n",
"sc1_test = dnn_lr1.score(X_test, y_test)\n",
"print('Sc1: train, test ==>', sc1_train, sc1_test)\n",
"\n",
"dnn_lr2 = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=5000)\n",
"dnn_lr2.fit(X_train, y_train, val_monitor)\n",
"sc2_train = dnn_lr2.score(X_train, y_train)\n",
"sc2_test = dnn_lr2.score(X_test, y_test)\n",
"print('Sc2: train, test ==>', sc2_train, sc2_test)"
]
},
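{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch, not part of the original gist, the early-stopped model can also be evaluated with an explicit test-set RMSE via `predict`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: test-set RMSE of the early-stopped regressor\n",
"y_test_pred = dnn_lr2.predict(X_test).ravel()\n",
"np.sqrt(np.mean((y_test - y_test_pred)**2))"
]
},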
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
jskDr commented Apr 10, 2016

This is the first version of the code.