Jathursini perinpanathan / AI BASED MULTI-LANGUAGE CHATBOT SYSTEM

Commit ae02e196, authored Apr 27, 2022 by Jathursini perinpanathan
chatbot

parent ea8e2fc4
Showing 1 changed file with 53 additions and 0 deletions.
chat.py  0 → 100644  (+53, −0)
import random
import json

import torch

from model import NeuralNet
from nltk_utils import bag_of_words, tokenize

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with open('intents.json', 'r') as json_data:
    intents = json.load(json_data)

FILE = "data.pth"
data = torch.load(FILE)

input_size = data["input_size"]
hidden_size = data["hidden_size"]
output_size = data["output_size"]
all_words = data['all_words']
tags = data['tags']
model_state = data["model_state"]

model = NeuralNet(input_size, hidden_size, output_size).to(device)
model.load_state_dict(model_state)
model.eval()

bot_name = "Sam"
print("Let's chat! (type 'quit' to exit)")
while True:
    # sentence = "do you use credit cards?"
    sentence = input("You: ")
    if sentence == "quit":
        break

    sentence = tokenize(sentence)
    X = bag_of_words(sentence, all_words)
    X = X.reshape(1, X.shape[0])
    X = torch.from_numpy(X).to(device)

    output = model(X)
    _, predicted = torch.max(output, dim=1)

    tag = tags[predicted.item()]

    probs = torch.softmax(output, dim=1)
    prob = probs[0][predicted.item()]
    if prob.item() > 0.75:
        for intent in intents['intents']:
            if tag == intent["tag"]:
                print(f"{bot_name}: {random.choice(intent['responses'])}")
    else:
        print(f"{bot_name}: I do not understand...")
\ No newline at end of file
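Note: chat.py imports NeuralNet from model and bag_of_words / tokenize from nltk_utils, but neither of those files is part of this commit. The sketch below is only an assumption of what those helpers look like, inferred from how chat.py calls them (bag_of_words must return a NumPy array, since torch.from_numpy is applied to it, and NeuralNet must accept input_size, hidden_size and output_size and return raw logits, since chat.py applies softmax itself). The actual files in this project may differ.

# Assumed contents of nltk_utils.py (not part of this commit)
import numpy as np
import nltk
from nltk.stem.porter import PorterStemmer

# nltk.download('punkt') may be required once before word_tokenize can run.
stemmer = PorterStemmer()

def tokenize(sentence):
    # Split a sentence into word and punctuation tokens.
    return nltk.word_tokenize(sentence)

def stem(word):
    # Reduce a word to its lower-cased stem.
    return stemmer.stem(word.lower())

def bag_of_words(tokenized_sentence, words):
    # Float32 vector with 1.0 at each vocabulary position whose word
    # (after stemming) appears in the sentence, 0.0 elsewhere.
    sentence_words = [stem(w) for w in tokenized_sentence]
    bag = np.zeros(len(words), dtype=np.float32)
    for idx, w in enumerate(words):
        if w in sentence_words:
            bag[idx] = 1.0
    return bag

# Assumed contents of model.py (not part of this commit)
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.l2 = nn.Linear(hidden_size, hidden_size)
        self.l3 = nn.Linear(hidden_size, num_classes)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Returns raw logits; chat.py applies torch.softmax to the output.
        out = self.relu(self.l1(x))
        out = self.relu(self.l2(out))
        return self.l3(out)

Under these assumptions, the sizes stored in data.pth (input_size, hidden_size, output_size) fully determine the network shape, which is why chat.py can rebuild the model and load model_state without importing the training script.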