YOLOV5-training-for-Vision-AI.ipynb
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "YOLOV5-training-for-Vision-AI.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "authorship_tag": "ABX9TyP+YFdanKGfLbWG7DXtsA/L",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/gist/lakshanthad/b47a1d1a9b4fac43449948524de7d374/yolov5-training-for-sensecap-a1101.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "This notebook will guide you through training your own AI model using YOLOv5!"
      ],
      "metadata": {
        "id": "4wCnaloE6aew"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Step 1.** Select **GPU** as the hardware accelerator, if it is not already selected, by navigating to `Runtime --> Change Runtime Type --> Hardware accelerator --> GPU`"
      ],
      "metadata": {
        "id": "6VrcRq-U6G2v"
      }
    },
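    {
      "cell_type": "markdown",
      "source": [
        "*(Optional, added example)* Before going further, you can confirm that Colab actually attached a GPU. The cell below is a minimal sanity check, not part of the original flow; if it reports no device, revisit the runtime setting above."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Optional sanity check: list the GPU attached to this Colab runtime.\n",
        "# If this command fails, change the runtime type as described in Step 1.\n",
        "!nvidia-smi"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },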
    {
      "cell_type": "markdown",
      "source": [
        "**Step 2.** Clone the repo, install dependencies, and check PyTorch and the GPU"
      ],
      "metadata": {
        "id": "K8prsdf8u8Mv"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!git clone https://github.com/Seeed-Studio/yolov5-swift  # clone\n",
        "%cd yolov5-swift\n",
        "%pip install -qr requirements.txt  # install dependencies\n",
        "\n",
        "import torch\n",
        "import os\n",
        "from google.colab import files\n",
        "print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))"
      ],
      "metadata": {
        "id": "ZfnLgOjgu8ng"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Step 3.** Set up the environment"
      ],
      "metadata": {
        "id": "xPagIdHSaY6n"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Tell the Roboflow downloader where to place the dataset\n",
        "os.environ[\"DATASET_DIRECTORY\"] = \"/content/datasets\""
      ],
      "metadata": {
        "id": "C32Eow5zafe_"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Step 4.** Copy and paste the code snippet displayed by Roboflow into the code cell below"
      ],
      "metadata": {
        "id": "fbUvo-BXefbu"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "<div align=center><img width=500 src=\"https://files.seeedstudio.com/wiki/YOLOV5/81.png\"/></div>"
      ],
      "metadata": {
        "id": "fmJjFIEpfF44"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Copy and paste the code here, making sure it follows the same format as below.\n",
        "\n",
        "#!pip install roboflow\n",
        "#from roboflow import Roboflow\n",
        "#rf = Roboflow(api_key=\"YOUR API KEY HERE\")\n",
        "#project = rf.workspace().project(\"YOUR PROJECT\")\n",
        "#dataset = project.version(\"YOUR VERSION\").download(\"yolov5\")"
      ],
      "metadata": {
        "id": "2su7XslvY4S1"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# This is the YAML file Roboflow wrote for us that we're loading into this notebook with our data\n",
        "%cat {dataset.location}/data.yaml"
      ],
      "metadata": {
        "id": "Fg4hIoyzE-Qe"
      },
      "execution_count": null,
      "outputs": []
    },
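    {
      "cell_type": "markdown",
      "source": [
        "*(Optional, added example)* Beyond printing the raw file, a minimal sketch like the one below parses `data.yaml` with PyYAML to show the class count (`nc`) and class names the training run will use. This assumes the standard Roboflow YOLOv5 export layout."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Optional: parse the dataset config to inspect the classes (added example)\n",
        "import yaml\n",
        "\n",
        "with open(os.path.join(dataset.location, \"data.yaml\")) as f:\n",
        "    data_cfg = yaml.safe_load(f)\n",
        "\n",
        "print(\"number of classes:\", data_cfg.get(\"nc\"))\n",
        "print(\"class names:\", data_cfg.get(\"names\"))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },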
    {
      "cell_type": "markdown",
      "source": [
        "**Step 5.** Download a pre-trained model suitable for our training"
      ],
      "metadata": {
        "id": "bX1U12sNFRM5"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget https://github.com/Seeed-Studio/yolov5-swift/releases/download/v0.1.0-alpha/yolov5n6-xiao.pt"
      ],
      "metadata": {
        "id": "Db3JVwsPFp5R"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Step 6.** Start training"
      ],
      "metadata": {
        "id": "YDBfSV8exwbe"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Here, we are able to pass a number of arguments:\n",
        "- **img:** define the input image size\n",
        "- **batch:** determine the batch size\n",
        "- **epochs:** define the number of training epochs\n",
        "- **data:** set the path to our YAML file\n",
        "- **cfg:** specify our model configuration\n",
        "- **weights:** specify a custom path to weights\n",
        "- **name:** name of the results folder\n",
        "- **nosave:** only save the final checkpoint\n",
        "- **cache:** cache images for faster training"
      ],
      "metadata": {
        "id": "I6_oEXQqG8B6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!python3 train.py --img 192 --batch 64 --epochs 100 --data {dataset.location}/data.yaml --cfg yolov5n6-xiao.yaml --weights yolov5n6-xiao.pt --name yolov5n6_results --cache"
      ],
      "metadata": {
        "id": "aCzjlGRSzs_F"
      },
      "execution_count": null,
      "outputs": []
    },
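    {
      "cell_type": "markdown",
      "source": [
        "*(Optional, added example)* YOLOv5's `train.py` writes training curves to `results.png` inside the run directory. The sketch below displays that image, assuming the run kept the `--name yolov5n6_results` used above and that the yolov5-swift fork preserves this behavior."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Optional: display the training curves written by train.py (added example)\n",
        "from IPython.display import Image, display\n",
        "\n",
        "display(Image(filename=\"runs/train/yolov5n6_results/results.png\", width=800))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },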
    {
      "cell_type": "markdown",
      "source": [
        "**Step 7.** Export TensorFlow Lite file"
      ],
      "metadata": {
        "id": "gpmLzJJ6M6_m"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!python3 export.py --data {dataset.location}/data.yaml --weights runs/train/yolov5n6_results/weights/best.pt --imgsz 192 --int8 --include tflite"
      ],
      "metadata": {
        "id": "zY8GOqDKM41s"
      },
      "execution_count": null,
      "outputs": []
    },
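    {
      "cell_type": "markdown",
      "source": [
        "*(Optional, added example)* To verify that the int8 export succeeded before converting it, a minimal check is to load the `.tflite` file with the TensorFlow Lite interpreter and print its input details. The path assumes the run name used above."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Optional: verify the exported TFLite model loads, and inspect its input (added example)\n",
        "import tensorflow as tf\n",
        "\n",
        "interpreter = tf.lite.Interpreter(\n",
        "    model_path=\"runs/train/yolov5n6_results/weights/best-int8.tflite\")\n",
        "interpreter.allocate_tensors()\n",
        "inp = interpreter.get_input_details()[0]\n",
        "print(\"input shape:\", inp[\"shape\"], \"dtype:\", inp[\"dtype\"])"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },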
    {
      "cell_type": "markdown",
      "source": [
        "**Step 8.** Convert TensorFlow Lite to UF2 file"
      ],
      "metadata": {
        "id": "zfbD2N2eMkV3"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "UF2 is a file format developed by Microsoft. Seeed uses this format to convert .tflite to .uf2, allowing tflite files to be stored on the AIoT devices launched by Seeed. Currently, Seeed's devices support up to 4 models, and each model (.tflite) must be less than 1 MB.\n",
        "\n",
        "You can specify the index at which the model is placed with `-t`.\n",
        "\n",
        "For example:\n",
        "\n",
        "- `-t 1`: index 1\n",
        "- `-t 2`: index 2"
      ],
      "metadata": {
        "id": "rtpBb33HkSC7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Place the model at index 1\n",
        "!python3 uf2conv.py -f GROVEAI -t 1 -c runs/train/yolov5n6_results/weights/best-int8.tflite -o model-1.uf2\n",
        "%cp model-1.uf2 ../"
      ],
      "metadata": {
        "id": "2f3-VHKGNRTw"
      },
      "execution_count": null,
      "outputs": []
    },
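    {
      "cell_type": "markdown",
      "source": [
        "*(Optional, added example)* To place a second model at index 2 instead, the same command works with `-t 2` and a different output name; the commented-out cell below simply illustrates the `-t` flag described above."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Example only: place a model at index 2 (uncomment to use)\n",
        "# !python3 uf2conv.py -f GROVEAI -t 2 -c runs/train/yolov5n6_results/weights/best-int8.tflite -o model-2.uf2"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },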
    {
      "cell_type": "markdown",
      "source": [
        "**Step 9.** Download the trained model file"
      ],
      "metadata": {
        "id": "S8MXi72tNKJE"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "files.download(\"/content/model-1.uf2\")"
      ],
      "metadata": {
        "id": "61orx1ttMJoU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The above is the file that we will load into the SenseCAP A1101 / Grove - Vision AI Module to perform inference!"
      ],
      "metadata": {
        "id": "CHDAOmaYNfc1"
      }
    }
  ]
} |