{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://cognitiveclass.ai/\">\n",
"    <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Logo/SNLogo.png\" width=\"200\" align=\"center\">\n",
"</a>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1>Lab - Classifying Images using IBM Watson Visual Recognition in Python</h1>\n",
"\n",
"<h1>Introduction</h1>\n",
"\n",
"## Objectives\n",
"\n",
"After completing this lab you will be able to:\n",
"\n",
"- How to operate the Watson Visual Recognition API and OpenCV using the Python Programming Language\n", | |
"- Understand the advantage of using the Watson Visual Recognition API over the Graphic User Interface on the Browser\n", | |
"- How to automate the training, and testing of your Visual Recognition model.\n", | |
"\n", | |
"So instead of logging in to your IBM Cloud account so that you can upload a picture that you want to classify, you can upload an image to your Visual Recognition model by running piece of python code. \n" | |
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
"<font size=\"3\"><strong>Click on the links to go to the following sections:</strong></font>\n",
"<br>\n",
"<h2>Table of Contents</h2>\n",
"<ol>\n",
"    <li><a href=\"#ref1\">IBM Watson Package</a></li>\n",
"    <li><a href=\"#ref2\">Plotting images in Jupyter Notebooks</a></li>\n",
"    <li><a href=\"#ref3\">Classify images with IBM Watson API</a></li>\n",
"    <li><a href=\"#ref5\">Exercises</a></li>\n",
"</ol> \n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref1\"></a>\n",
"\n",
"<h2>IBM Watson Package</h2>\n",
"In order to run this lab we need to import two packages.\n",
"<ul>\n",
"    <li>IBM Watson: which allows access to the Watson Visual Recognition API</li>\n",
"    <li>OpenCV: a package that will help us with image processing</li>\n",
"</ul>\n",
"The code below will install Watson Developer Cloud and OpenCV. \n", | |
"\n", | |
"To run, click on the code cell below and press \"shift + enter\".\n", | |
"\n", | |
"<b>NOTE - The Watson Developer Cloud Package has been deprecated and has been replaced by the IBM Watson Package </b>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install --upgrade ibm-watson opencv-python"
]
},
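{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (this cell is an optional addition, not part of the original lab), the sketch below simply imports the two packages and prints the OpenCV version, so you can confirm the installation succeeded before moving on.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: confirm that both packages import cleanly.\n",
"import ibm_watson\n",
"import cv2\n",
"\n",
"print(\"OpenCV version:\", cv2.__version__)\n",
"print(\"ibm_watson imported successfully\")"
]
},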
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>What (or who) do you see in the following image?</h4>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Donald_Trump_Justin_Trudeau_2017-02-13_02.jpg\" width=\"400\"></img> \n",
"<b>URL</b>: \n",
"<i>[https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Donald_Trump_Justin_Trudeau_2017-02-13_02.jpg](https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Donald_Trump_Justin_Trudeau_2017-02-13_02.jpg)</i>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref2\"></a>\n",
"\n",
"<h2>Plotting images in Jupyter Notebooks</h2>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's use a function to help us display images from a URL: The function below with the name <code>plt_image</code> grabs the image from the internet provided that you supply the web address of the image.<br>\n", | |
"\n", | |
"URL stands for Uniform Resource Locator, which in this case the web address of our image.\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import urllib.request\n",
"from matplotlib import pyplot as plt\n",
"from pylab import rcParams\n",
"\n",
"def plt_image(image_url, size = (10,8)):\n",
"\n",
"    # Downloads an image from a URL, and displays it in the notebook\n",
"    urllib.request.urlretrieve(image_url, \"image.jpg\")  # downloads file as \"image.jpg\"\n",
"    image = cv2.imread(\"image.jpg\")\n",
"    \n",
"    # If image is in color, then correct color coding from BGR to RGB\n",
"    if len(image.shape) == 3:\n",
"        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
"    \n",
"    rcParams['figure.figsize'] = size[0], size[1]  # set image display size\n",
"\n",
"    plt.axis(\"off\")\n",
"    plt.imshow(image, cmap=\"Greys_r\")\n",
"    plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's grab the image above from the internet and plot it out.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image_url = 'http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Donald_Trump_Justin_Trudeau_2017-02-13_02.jpg'\n",
"plt_image(image_url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref3\"></a>\n",
"\n",
"<h2>Classify images with IBM Watson API</h2>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Setting the API key for IBM Watson Visual Recognition</h4>\n", | |
"\n", | |
"<p>In order for you to use the IBM Watson Visual Recognition API, you will need the API key of the Visual Recognition instance that you have created in the previous sections.</p>\n", | |
"\n", | |
"<p>Log into your IBM Cloud account with the following link.</p> <a href=\"https://cocl.us/CV0101EN_IBM_Cloud_Login\">https://cloud.ibm.com</a>\n", | |
"<ol>\n", | |
" <li>Click on <b>Services</b></li>\n", | |
" <li>Under Services, click on your Watson Visual Recognition Instance</li>\n", | |
" <li>Copy the <b>API Key</b> and past it in the code cell below</li>\n", | |
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/API_Key.png\" width=\"680\">\n", | |
" <li>Then press \"ctrl + enter\" to run the code cell.</li>\n", | |
"</ol>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Paste your API key for IBM Watson Visual Recognition below:\n", | |
"my_apikey = 'PudqJXF5_gR8BXt5eJJUaAq8MCPKsbx-bhE_jfEnPHk1'" | |
]
},
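{
"cell_type": "markdown",
"metadata": {},
"source": [
"A side note (this cell is an optional addition, not part of the original lab): hard-coding a credential in a notebook makes it easy to leak when the notebook is shared. A safer pattern is to read the key from an environment variable and fall back to the value above only when the variable is not set. The variable name <code>VISUAL_RECOGNITION_APIKEY</code> below is just an illustrative choice.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: prefer an environment variable over a hard-coded key.\n",
"# 'VISUAL_RECOGNITION_APIKEY' is a hypothetical variable name; set it in your\n",
"# shell (e.g. export VISUAL_RECOGNITION_APIKEY=...) before starting Jupyter.\n",
"import os\n",
"\n",
"my_apikey = os.environ.get('VISUAL_RECOGNITION_APIKEY', my_apikey)"
]
},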
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Initialize Watson Visual Recognition</h4>\n",
"Let's create a Watson Visual Recognition client; it will allow you to make calls to the Watson Visual Recognition API.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ibm_watson import VisualRecognitionV3\n",
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n",
"authenticator = IAMAuthenticator(my_apikey)\n",
"\n",
"visrec = VisualRecognitionV3('2018-03-19', \n",
"                             authenticator=authenticator)"
]
},
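{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat worth knowing (this cell is an optional addition, not part of the original lab): if your Visual Recognition instance lives in a region other than the SDK's default, you may also need to point the client at your instance's endpoint with <code>set_service_url</code>. The URL below is the us-south endpoint, used here only as an example; copy the actual URL from your instance's credentials page.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: set the service endpoint explicitly. The URL below is an example\n",
"# (us-south); replace it with the URL from your own instance's credentials.\n",
"visrec.set_service_url('https://api.us-south.visual-recognition.watson.cloud.ibm.com')"
]
},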
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Identifying Objects in the Image\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can see that there are two persons in the picture above. But does the computer knows this?</p>\n", | |
"\n", | |
"<p>Let's call the <b>classify</b> method from the Watson Image Recognition API to see what objects our Image Recognition Model can identify from this picture.</p>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"image_url = 'http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Donald_Trump_Justin_Trudeau_2017-02-13_02.jpg'\n",
"\n",
"# threshold is set to 0.6, that means only classes that has a confidence score of 0.6 or greater will be shown\n", | |
"classes = visrec.classify(url=image_url,\n", | |
" threshold='0.6',\n", | |
" classifier_ids='default').get_result()\n", | |
"\n", | |
"plt_image(image_url)\n", | |
"print(json.dumps(classes, indent=2))" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Under the field classes, you should see the class person, and other classes with their corresponding confidence score. You might get other classes other than person depending on your Visual Recognition model.\n" | |
]
},
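{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the structure of that JSON concrete, the small sketch below (an addition to the lab, not original content) digs the single highest-scoring class out of the <code>classes</code> result returned above. It assumes the previous classification cell ran successfully and returned at least one class.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Navigate the nested JSON: images -> classifiers -> classes,\n",
"# then pick the entry with the highest confidence score.\n",
"result_classes = classes['images'][0]['classifiers'][0]['classes']\n",
"top = max(result_classes, key=lambda c: c['score'])\n",
"print(\"Top class: {} (score {:.2f})\".format(top['class'], top['score']))"
]
},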
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Getting Watson Visual Recognition results as a dataframe</h4>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>The problem with the <b>classify</b> method is that it gave an output that is extremely confusing to look at. The output is in a format called JSON which stands for JavaScript Object Notation, we can cleanup the presentation of our output by using the data structure called dataframe in the <b>pandas</b> library.</p>\n", | |
"\n", | |
"<p>In the code cell below we use a function called <b>getdf_visrec</b> which uses a dataframe that can help us easily sort the classified labels by confidence score in descending order.</p>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas.io.json import json_normalize\n", | |
"\n", | |
"def getdf_visrec(url, apikey = my_apikey):\n", | |
" \n", | |
" json_result = visrec.classify(url=url,\n", | |
" threshold='0.6',\n", | |
" classifier_ids='default').get_result()\n", | |
" \n", | |
" json_classes = json_result['images'][0]['classifiers'][0]['classes']\n", | |
" \n", | |
" df = json_normalize(json_classes).sort_values('score', ascending=False).reset_index(drop=True)\n", | |
" \n", | |
" return df" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"url = 'http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/76011_MAIN._AC_SS190_V1446845310_.jpg'\n", | |
"plt_image(url)\n", | |
"getdf_visrec(url)" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"url = 'http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/2880px-Kyrenia_01-2017_img04_view_from_castle_bastion.jpg'\n", | |
"plt_image(url)\n", | |
"getdf_visrec(url)" | |
] | |
}, | |
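{
"cell_type": "markdown",
"metadata": {},
"source": [
"So far we have always passed a <code>url</code> to <b>classify</b>. The classify method also accepts a local file through its <code>images_file</code> parameter, which is handy when the image is not hosted anywhere. The sketch below (an addition to the lab, not original content) reuses the <code>image.jpg</code> file that <code>plt_image</code> downloaded in an earlier cell.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Classify a local file instead of a URL. 'image.jpg' is the file that\n",
"# plt_image saved in the working directory in an earlier cell.\n",
"with open('image.jpg', 'rb') as images_file:\n",
"    local_result = visrec.classify(images_file=images_file,\n",
"                                   threshold='0.6',\n",
"                                   classifier_ids='default').get_result()\n",
"\n",
"json_normalize(local_result['images'][0]['classifiers'][0]['classes'])"
]
},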
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref5\"></a>\n",
"\n",
"<h2>Exercises</h2>\n",
"<h3>Question 1</h3>\n",
"<p>Write a function called plot_image that takes an url of jpg format image, fixes the color scheme of the image, and sets the dimension of the image to 12, 9 and plots it.</p>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Write your code below and press Shift+Enter to execute\n", | |
"import cv2\n", | |
"import urllib.request\n", | |
"from matplotlib import pyplot as plt\n", | |
"from pylab import rcParams\n", | |
"\n", | |
"def plt_image(image_url, size = (12,9)):\n", | |
"\n", | |
" # Downloads an image from a URL, and displays it in the notebook\n", | |
" urllib.request.urlretrieve(image_url, \"image.jpg\") # downloads file as \"image.jpg\"\n", | |
" image = cv2.imread(\"image.jpg\")\n", | |
" \n", | |
" # If image is in color, then correct color coding from BGR to RGB\n", | |
" if len(image.shape) == 3:\n", | |
" image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n", | |
" \n", | |
" rcParams['figure.figsize'] = size[0], size[1] #set image display size\n", | |
"\n", | |
" plt.axis(\"off\")\n", | |
" plt.imshow(image, cmap=\"Greys_r\")\n", | |
" plt.show()\n", | |
"\n", | |
"url = 'https://commons.wikimedia.org/wiki/Category:Picture_composition#/media/File:Strawberry_anyone?_(2087040708).jpg'\n", | |
"plt_image(url)" | |
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n", | |
"\n", | |
"<!-- The answer is below:\n", | |
"url = 'Paste_your_Image_link_here' \n", | |
"\n", | |
"import urllib.request\n", | |
"from matplotlib import pyplot as plt\n", | |
"from pylab import rcParams\n", | |
"\n", | |
"def plot_image(image_url, size=(12, 9)):\n", | |
" # Downloads an image from a URL, and displays it in the notebook\n", | |
" urllib.request.urlretrieve(image_url, \"exercise_image.jpg\") # downloads file as \"exercise_image.jpg\"\n", | |
" exercise_image = cv2.imread(\"exercise_image.jpg\")\n", | |
"\n", | |
" # If image is in color, then correct color coding from BGR to RGB\n", | |
" if len(exercise_image.shape) == 3:\n", | |
" exercise_image = cv2.cvtColor(exercise_image, cv2.COLOR_BGR2RGB)\n", | |
" \n", | |
" # set image to correct size\n", | |
" rcParams['figure.figsize'] = size[0], size[1]\n", | |
" plt.axis(\"off\")\n", | |
" plt.imshow(exercise_image, cmap=\"Greys_r\")\n", | |
" plt.show()\n", | |
"\n", | |
"plot_image(url)\n", | |
"--->\n" | |
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 2</h3>\n",
"<p>Write a function called analyze_image that takes in a url of a jpg format image, and identifies <b>ALL</b> objects in the image. Further, it uses a pandas dataframe to convert the result into a table format</p>\n" | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n", | |
"# Write your code below and press Shift+Enter to execute \n", | |
"my_apikey = 'PudqJXF5_gR8BXt5eJJUaAq8MCPKsbx-bhE_jfEnPHk1'\n", | |
"\n", | |
"from ibm_watson import VisualRecognitionV3\n", | |
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n", | |
"authenticator = IAMAuthenticator(my_apikey)\n", | |
"\n", | |
"visrec = VisualRecognitionV3('2018-03-19', \n", | |
" authenticator=authenticator)\n", | |
"\n", | |
"import json\n", | |
"\n", | |
"exercise_url = 'https://commons.wikimedia.org/wiki/Category:Picture_composition#/media/File:Strawberry_anyone?_(2087040708).jpg'\n", | |
"\n", | |
"from pandas.io.json import json_normalize\n", | |
"\n", | |
"def analyze_image(url, apikey = my_apikey):\n", | |
" \n", | |
" json_result = visrec.classify(url=url,\n", | |
" threshold='0.6',\n", | |
" classifier_ids='default').get_result()\n", | |
" \n", | |
" json_classes = json_result['images'][0]['classifiers'][0]['classes']\n", | |
" \n", | |
" df = json_normalize(json_classes).sort_values('score', ascending=False).reset_index(drop=True)\n", | |
" \n", | |
" return df" | |
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n", | |
"\n", | |
"<!-- The answer is below:\n", | |
"url = 'Paste_your_image_url_here'\n", | |
"from pandas.io.json import json_normalize\n", | |
"\n", | |
"def analyze_image(url):\n", | |
" \n", | |
" # The threshold value specifies the minimum score an object class must have to be displayed in the\n", | |
" # in the result. Setting this value to 0.0 returns ALL object classes (i.e. all objects) in the image\n", | |
" # as identified by our Visual Recognition Model\n", | |
"\n", | |
" json_result = visrec.classify(url=url,\n", | |
" threshold='0.0',\n", | |
" classifier_ids='default').get_result()\n", | |
" \n", | |
" json_classes = json_result['images'][0]['classifiers'][0]['classes']\n", | |
" \n", | |
" df = json_normalize(json_classes).reset_index(drop=True).reset_index(drop=True)\n", | |
" return df\n", | |
"plt_image(url)\n", | |
"analyze_image(url)\n", | |
"-->\n" | |
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1>Thank you for completing this notebook</h1>\n",
"You can read more about Watson Visual Recognition APIs from the following link.\n",
"<a href=\"https://cloud.ibm.com/apidocs/visual-recognition?code=python\">https://cloud.ibm.com/apidocs/visual-recognition</a>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
"<h2>Get IBM Watson Studio free of charge!</h2>\n",
"    <p><a href=\"https://cloud.ibm.com/catalog/services/watson-studio\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Logo/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Author\n",
"\n",
"<a href=\"https://www.linkedin.com/in/nayefaboutayoun/\" target=\"_blank\">Nayef Abou Tayoun</a>\n",
"\n",
"## Change Log\n",
"\n",
"| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n",
"| ----------------- | ------- | ---------- | ---------------------------------- |\n",
"| 2020-08-27 | 2.0 | Shubham | Moved lab to course repo in GitLab |\n",
"\n",
"<hr>\n",
"\n",
"<h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. </h3>\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python",
"language": "python",
"name": "conda-env-python-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.11"
},
"widgets": {
"state": {},
"version": "1.1.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}