Commit efa46b2a authored by Yasiru Deshan IT19251110

Merge branch 'IT19251110' into 'master'

It19251110

See merge request !4
parents 94f7561f 5922e7cf
/tfod
/Tensorflow
/.ipynb_checkpoints
/ipynb_checkpoints
.ipynb_checkpoints
\ No newline at end of file
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting opencv-python\n",
" Using cached opencv_python-4.7.0.68-cp37-abi3-win_amd64.whl (38.2 MB)\n",
"Collecting numpy>=1.17.0\n",
" Using cached numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB)\n",
"Installing collected packages: numpy, opencv-python\n",
"Successfully installed numpy-1.24.1 opencv-python-4.7.0.68\n"
]
}
],
"source": [
"!pip install opencv-python"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Import opencv\n",
"import cv2 \n",
"\n",
"# Import uuid\n",
"import uuid\n",
"\n",
"# Import Operating System\n",
"import os\n",
"\n",
"# Import time\n",
"import time"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Define Images to Collect"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"labels = ['thumbsup', 'thumbsdown', 'thankyou', 'livelong']\n",
"number_imgs = 5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. Setup Folders "
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"IMAGES_PATH = os.path.join('Tensorflow', 'workspace', 'images', 'collectedimages')"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(IMAGES_PATH):\n",
" if os.name == 'posix':\n",
" !mkdir -p {IMAGES_PATH}\n",
" if os.name == 'nt':\n",
" !mkdir {IMAGES_PATH}\n",
"for label in labels:\n",
" path = os.path.join(IMAGES_PATH, label)\n",
" if not os.path.exists(path):\n",
" !mkdir {path}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4. Capture Images"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting images for thumbsup\n",
"Collecting image 0\n",
"Collecting image 1\n",
"Collecting image 2\n",
"Collecting image 3\n",
"Collecting image 4\n"
]
}
],
"source": [
"for label in labels:\n",
" cap = cv2.VideoCapture(0)\n",
" print('Collecting images for {}'.format(label))\n",
" time.sleep(5)\n",
" for imgnum in range(number_imgs):\n",
" print('Collecting image {}'.format(imgnum))\n",
" ret, frame = cap.read()\n",
" imgname = os.path.join(IMAGES_PATH,label,label+'.'+'{}.jpg'.format(str(uuid.uuid1())))\n",
" cv2.imwrite(imgname, frame)\n",
" cv2.imshow('frame', frame)\n",
" time.sleep(2)\n",
"\n",
" if cv2.waitKey(1) & 0xFF == ord('q'):\n",
" break\n",
"cap.release()\n",
"cv2.destroyAllWindows()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 5. Image Labelling"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting pyqt5\n",
" Using cached PyQt5-5.15.4-cp36.cp37.cp38.cp39-none-win_amd64.whl (6.8 MB)\n",
"Collecting lxml\n",
" Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)\n",
"Collecting PyQt5-sip<13,>=12.8\n",
" Using cached PyQt5_sip-12.8.1-cp37-cp37m-win_amd64.whl (62 kB)\n",
"Collecting PyQt5-Qt5>=5.15\n",
" Using cached PyQt5_Qt5-5.15.2-py3-none-win_amd64.whl (50.1 MB)\n",
"Installing collected packages: PyQt5-sip, PyQt5-Qt5, pyqt5, lxml\n",
"Successfully installed PyQt5-Qt5-5.15.2 PyQt5-sip-12.8.1 lxml-4.6.3 pyqt5-5.15.4\n"
]
}
],
"source": [
"!pip install --upgrade pyqt5 lxml"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"LABELIMG_PATH = os.path.join('Tensorflow', 'labelimg')"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Cloning into 'Tensorflow\\labelimg'...\n"
]
}
],
"source": [
"if not os.path.exists(LABELIMG_PATH):\n",
" !mkdir {LABELIMG_PATH}\n",
" !git clone https://github.com/tzutalin/labelImg {LABELIMG_PATH}"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"if os.name == 'posix':\n",
" !make qt5py3\n",
"if os.name =='nt':\n",
" !cd {LABELIMG_PATH} && pyrcc5 -o libs/resources.py resources.qrc"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Image:D:\\YouTube\\OD\\TFODCourse\\Tensorflow\\workspace\\images\\collectedimages\\thumbsup\\thumbsup.6a706a36-940f-11eb-b4eb-5cf3709bbcc6.jpg -> Annotation:D:/YouTube/OD/TFODCourse/Tensorflow/workspace/images/collectedimages/thumbsup/thumbsup.6a706a36-940f-11eb-b4eb-5cf3709bbcc6.xml\n",
"Image:D:\\YouTube\\OD\\TFODCourse\\Tensorflow\\workspace\\images\\collectedimages\\thumbsup\\thumbsup.6ba4d864-940f-11eb-8c74-5cf3709bbcc6.jpg -> Annotation:D:/YouTube/OD/TFODCourse/Tensorflow/workspace/images/collectedimages/thumbsup/thumbsup.6ba4d864-940f-11eb-8c74-5cf3709bbcc6.xml\n",
"Image:D:\\YouTube\\OD\\TFODCourse\\Tensorflow\\workspace\\images\\collectedimages\\thumbsup\\thumbsup.6cd9c8e2-940f-11eb-b901-5cf3709bbcc6.jpg -> Annotation:D:/YouTube/OD/TFODCourse/Tensorflow/workspace/images/collectedimages/thumbsup/thumbsup.6cd9c8e2-940f-11eb-b901-5cf3709bbcc6.xml\n",
"Image:D:\\YouTube\\OD\\TFODCourse\\Tensorflow\\workspace\\images\\collectedimages\\thumbsup\\thumbsup.6e0f5bc0-940f-11eb-8d18-5cf3709bbcc6.jpg -> Annotation:D:/YouTube/OD/TFODCourse/Tensorflow/workspace/images/collectedimages/thumbsup/thumbsup.6e0f5bc0-940f-11eb-8d18-5cf3709bbcc6.xml\n",
"Image:D:\\YouTube\\OD\\TFODCourse\\Tensorflow\\workspace\\images\\collectedimages\\thumbsup\\thumbsup.693a5158-940f-11eb-8752-5cf3709bbcc6.jpg -> Annotation:D:/YouTube/OD/TFODCourse/Tensorflow/workspace/images/collectedimages/thumbsup/thumbsup.693a5158-940f-11eb-8752-5cf3709bbcc6.xml\n"
]
}
],
"source": [
"!cd {LABELIMG_PATH} && python labelImg.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 6. Move them into a Training and Testing Partition"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# OPTIONAL - 7. Compress them for Colab Training"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"TRAIN_PATH = os.path.join('Tensorflow', 'workspace', 'images', 'train')\n",
"TEST_PATH = os.path.join('Tensorflow', 'workspace', 'images', 'test')\n",
"ARCHIVE_PATH = os.path.join('Tensorflow', 'workspace', 'images', 'archive.tar.gz')"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"!tar -czf {ARCHIVE_PATH} {TRAIN_PATH} {TEST_PATH}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "tfod",
"language": "python",
"name": "tfod"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
<b>Error:</b> No module named ‘xxxxxx’<br/>
<b>Solution:</b> Install that module
<pre>!pip install xxxxxx</pre>
<i>Example:</i><br/>
Error: No module named typeguard<br/>
Solution: pip install typeguard # note that the module name will not always match the package name
<b>Error:</b> AttributeError: module 'sip' has no attribute 'setapi'<br/>
<b>Solution:</b> Downgrade matplotlib to version 3.2 by running the following command
<pre>!pip install matplotlib==3.2</pre>
<b>Error:</b> ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject<br/>
<b>Solution:</b> Reinstall pycocotools
<pre>pip uninstall pycocotools -y
pip install pycocotools</pre>
<b>Error:</b> ValueError: 'images' must have either 3 or 4 dimensions.<br/>
<b>Solution:</b> Restart your Jupyter Notebook kernel, as the webcam is unavailable. If you are working from images instead, this normally means the image name or path is incorrect.
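A quick way to confirm the webcam is the cause (a minimal check, not part of the original guide) is to verify that cap.read() actually returns a frame before running detection:
<pre>
import cv2

cap = cv2.VideoCapture(0)        # default webcam; change the index if you have more than one
ret, frame = cap.read()
if not ret or frame is None:
    # No frame returned - free the webcam or restart the kernel before detecting
    raise RuntimeError('Webcam frame not captured')
print(frame.shape)               # a valid frame has 3 dimensions, e.g. (480, 640, 3)
cap.release()
</pre>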
<b>Error:</b> error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'<br/>
<b>Solution:</b> Uninstall opencv-python-headless and reinstall opencv-python
<pre>
pip uninstall opencv-python-headless -y
pip install opencv-python --upgrade
</pre>
<b>Error:</b> When running the generate_tfrecord.py script you receive an error like the following:
<pre>
File "Tensorflow\scripts\generate_tfrecord.py", line 132, in create_tf_example
    classes.append(class_text_to_int(row['class']))
File "Tensorflow\scripts\generate_tfrecord.py", line 101, in class_text_to_int
    return label_map_dict[row_label]
KeyError: 'ThumbsDown' # YOUR LABEL HERE
</pre>
<br/>
<b>Solution:</b> This is likely because of mismatches between your annotations and your label map. Ensure that the label names from your annotations match the label map exactly; note that matching is case sensitive.
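For reference, label_map.pbtxt entries use the standard Tensorflow Object Detection format, and each name must match the class written into your XML annotations character for character (the labels below are placeholders):
<pre>
item {
    name: 'ThumbsUp'
    id: 1
}
item {
    name: 'ThumbsDown'
    id: 2
}
</pre>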
<b>Error:</b> When running the training script from the command line, you get a module error, e.g. ModuleNotFoundError: No module named 'cv2'
<br/>
<b>Solution:</b> Remember that you need to activate your virtual environment at the command line in order to use the packages you have installed in it.
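For example, using the tfod environment created in Step 3 of the README:
<pre>
source tfod/bin/activate      # Linux
.\tfod\Scripts\activate       # Windows
</pre>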
<b>Error:</b> When training, only the CPU is used and the GPU is ignored.
<br/>
<b>Solution:</b> Ensure you have matching CUDA and cuDNN versions installed for your Tensorflow version. Windows: https://www.tensorflow.org/install/source_windows, Linux/macOS: https://www.tensorflow.org/install/source
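A quick way to check whether Tensorflow can see the GPU at all (a minimal check, not part of the original guide):
<pre>
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</pre>
An empty list means your CUDA/cuDNN installation is not being picked up by Tensorflow.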
<b>Error:</b> CUBLAS_STATUS_ALLOC_FAILED or CUDNN_STATUS_ALLOC_FAILED <br/>
<b>Solution:</b> This is because the available VRAM on your machine is completely consumed and there is no more memory available to train. Quit all of your Python programs and stop your Jupyter Notebook server to free up the VRAM and run the command again.
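An alternative mitigation, not from the original guide, is to let Tensorflow allocate GPU memory on demand rather than reserving all of it up front. Add this before building the model:
<pre>
import tensorflow as tf

# Allow GPU memory to grow on demand instead of pre-allocating all VRAM
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
</pre>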
Template
<b>Error:</b> <br/>
<b>Solution:</b>
<pre></pre>
\ No newline at end of file
# Tensorflow Object Detection Walkthrough
<p>This set of Notebooks provides a complete set of code to be able to train and leverage your own custom object detection model using the Tensorflow Object Detection API. This accompanies the Tensorflow Object Detection course on my <a href="https://www.youtube.com/c/nicholasrenotte">YouTube channel</a>.
<img src="https://i.imgur.com/H3tUyKM.png">
## Steps
<br />
<b>Step 1.</b> Clone this repository: https://github.com/nicknochnack/TFODCourse
<br/><br/>
<b>Step 2.</b> Create a new virtual environment
<pre>
python -m venv tfod
</pre>
<br/>
<b>Step 3.</b> Activate your virtual environment
<pre>
source tfod/bin/activate # Linux
.\tfod\Scripts\activate # Windows
</pre>
<br/>
<b>Step 4.</b> Install dependencies and add virtual environment to the Python Kernel
<pre>
python -m pip install --upgrade pip
pip install ipykernel
python -m ipykernel install --user --name=tfod
</pre>
<br/>
<b>Step 5.</b> Collect images using the Notebook <a href="https://github.com/nicknochnack/TFODCourse/blob/main/1.%20Image%20Collection.ipynb">1. Image Collection.ipynb</a> - ensure you change the kernel to the virtual environment as shown below
<img src="https://i.imgur.com/8yac6Xl.png">
<br/>
<b>Step 6.</b> Manually divide the collected images into two folders, train and test, so that all images and their annotations are split between the following two folders (an optional helper script is sketched after the paths below). <br/>
\TFODCourse\Tensorflow\workspace\images\train<br />
\TFODCourse\Tensorflow\workspace\images\test
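If you prefer to script this split rather than move files by hand, the following is a minimal sketch (not part of the original course) that assumes the default folder layout from the Image Collection notebook and an 80/20 train/test split:
<pre>
import os, random, shutil

IMAGES_PATH = os.path.join('Tensorflow', 'workspace', 'images', 'collectedimages')
TRAIN_PATH  = os.path.join('Tensorflow', 'workspace', 'images', 'train')
TEST_PATH   = os.path.join('Tensorflow', 'workspace', 'images', 'test')

os.makedirs(TRAIN_PATH, exist_ok=True)
os.makedirs(TEST_PATH, exist_ok=True)

for label in os.listdir(IMAGES_PATH):
    folder = os.path.join(IMAGES_PATH, label)
    images = [f for f in os.listdir(folder) if f.endswith('.jpg')]
    random.shuffle(images)
    split = int(0.8 * len(images))                 # 80% train, 20% test
    for i, img in enumerate(images):
        dest = TRAIN_PATH if i < split else TEST_PATH
        # Move each image together with its matching PASCAL VOC annotation
        for f in (img, img.replace('.jpg', '.xml')):
            src = os.path.join(folder, f)
            if os.path.exists(src):
                shutil.move(src, os.path.join(dest, f))
</pre>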
<br/><br/>
<b>Step 7.</b> Begin training process by opening <a href="https://github.com/nicknochnack/TFODCourse/blob/main/2.%20Training%20and%20Detection.ipynb">2. Training and Detection.ipynb</a>, this notebook will walk you through installing Tensorflow Object Detection, making detections, saving and exporting your model.
<br /><br/>
<b>Step 8.</b> During this process the Notebook will install Tensorflow Object Detection. You should ideally receive a notification indicating that the API has installed successfully at Step 8 with the last line stating OK.
<img src="https://i.imgur.com/FSQFo16.png">
If not, resolve installation errors by referring to the <a href="https://github.com/nicknochnack/TFODCourse/blob/main/README.md">Error Guide.md</a> in this folder.
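If you want to re-run the installation check manually, the notebook uses the standard Tensorflow Object Detection verification script (assuming the models repository was cloned to Tensorflow/models, as the training notebook does):
<pre>
python Tensorflow/models/research/object_detection/builders/model_builder_tf2_test.py
</pre>
The last line of its output should read OK.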
<br /> <br/>
<b>Step 9.</b> Once you get to Step 6 (Train the model) inside the notebook, you may choose to train the model from within the notebook. I have noticed, however, that training inside a separate terminal on a Windows machine lets you see live loss metrics (the general shape of the training command is shown below the screenshot).
<img src="https://i.imgur.com/K0wLO57.png">
<br />
<b>Step 10.</b> You can optionally evaluate your model inside of Tensorboard. Once the model has been trained and you have run the evaluation command under Step 7, navigate to the evaluation folder for your trained model, e.g.
<pre> cd Tensorflow/workspace/models/my_ssd_mobnet/eval</pre>
and open Tensorboard with the following command
<pre>tensorboard --logdir=. </pre>
Tensorboard will be accessible through your browser and you will be able to see metrics including mAP (mean Average Precision) and Recall.
<br />
\ No newline at end of file