"WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 3 of 3). These functions will not be directly callable after loading.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: ./models/model\\assets\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: ./models/model\\assets\n"
"In this section, I define some constants used throughout the code. IMG_SIZE represents the desired size of the input images, BATCH_SIZE determines the number of samples processed in each training batch, and EPOCHS specifies the number of times the model will iterate over the entire dataset during training. CLASSES is a list of class names extracted from the directory structure, and NUM_CLASSES represents the total number of classes in the dataset."
]
]
},
},
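"\n",
"A minimal sketch of what these constants might look like; the exact values and the dataset path are assumptions rather than values taken from this notebook:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"# Assumed values and dataset path, for illustration only\n",
"IMG_SIZE = (224, 224)      # target height and width for the input images\n",
"BATCH_SIZE = 32            # samples processed in each training batch\n",
"EPOCHS = 10                # passes over the full dataset\n",
"DATASET_DIR = './dataset'  # hypothetical dataset root\n",
"\n",
"# Class names are taken from the directory structure\n",
"CLASSES = sorted(os.listdir(DATASET_DIR))\n",
"NUM_CLASSES = len(CLASSES)\n",
"```"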
]
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 13,
"execution_count": 7,
"id": "16176bf6",
"id": "16176bf6",
"metadata": {},
"metadata": {},
"outputs": [
"outputs": [
...
@@ -40,7 +57,7 @@
...
@@ -40,7 +57,7 @@
" 'Uhh']"
" 'Uhh']"
]
]
},
},
"execution_count": 13,
"execution_count": 7,
"metadata": {},
"metadata": {},
"output_type": "execute_result"
"output_type": "execute_result"
}
}
...
@@ -57,7 +74,7 @@
...
@@ -57,7 +74,7 @@
},
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 8,
"id": "8f7b1301",
"id": "8f7b1301",
"metadata": {},
"metadata": {},
"outputs": [],
"outputs": [],
...
@@ -81,9 +98,19 @@
...
@@ -81,9 +98,19 @@
"}"
"}"
]
]
},
},
{
"cell_type": "markdown",
"id": "3d2af75d",
"metadata": {},
"source": [
"### Load Dataset\n",
"\n",
"In this section, I define a function load_dataset() to load the images and labels from the dataset directory. The function iterates over the classes and images, reads and preprocesses each image, and stores the data and corresponding labels. The dataset path is provided as an argument to the function. After loading the dataset, I split it into training and validation sets using the train_test_split() function from sklearn. The split is done with a test size of 20% and a random state of 42."
"### Define the Model Architecture and Compile\n",
"\n",
"In this section, I define the model architecture using the Sequential API from TensorFlow's Keras API. The model consists of a series of convolutional and pooling layers, followed by flatten and dense layers. The convolutional layers extract features from the input images, and the dense layers perform classification based on these extracted features. The model is compiled with the Adam optimizer, categorical cross-entropy loss function, and accuracy as the evaluation metric."
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 11,
"id": "d44f7806",
"id": "d44f7806",
"metadata": {},
"metadata": {},
"outputs": [],
"outputs": [],
...
@@ -142,9 +179,19 @@
...
@@ -142,9 +179,19 @@
"])\n"
"])\n"
]
]
},
},
{
"cell_type": "markdown",
"id": "ab7b7e82",
"metadata": {},
"source": [
"### Train the Model\n",
"\n",
"In this section, I train the model using the fit() function. It takes the training data and labels as inputs and trains the model for the specified number of epochs and batch size. The validation data and labels are provided for evaluation during training. The model learns from the training data to minimize the defined loss function and improve its accuracy."