Smart E- Learn Tracer
Commit 6f2965d2, authored May 15, 2023 by Niyas Inshaf
Add new file
parent 1616ce20
Showing 1 changed file with 48 additions and 0 deletions

Final_Answer VView  0 → 100644  (+48, -0)
!pip install transformers

from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load the GPT-Neo model and tokenizer
model_name = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Set the pad token id so generation does not warn about missing padding
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
model.config.use_cache = False

# Take user input
text = input("Please enter a piece of text: ")

# Generate a summary of the text using the GPT-Neo model
inputs = tokenizer.encode(text, return_tensors='pt')
summary_ids = model.generate(inputs, max_length=100, num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Generate structured questions and multiple-choice options based on the summary
questions = []
for sentence in summary.split("."):
    if sentence.strip() != "":
        # Fill-in-the-blank approach: mask one random token in the sentence
        sentence_tokens = tokenizer.tokenize(sentence)
        if len(sentence_tokens) < 2:
            continue  # skip sentences too short to blank out a token
        blank_index = random.choice(range(1, len(sentence_tokens)))
        # Capture the correct answer span before masking it out
        correct_span = sentence_tokens[blank_index - 1:blank_index + 2]
        sentence_tokens[blank_index] = "[MASK]"
        sentence = tokenizer.convert_tokens_to_string(sentence_tokens)

        # Generate question and multiple-choice options
        question = sentence.replace("[MASK]", "______")
        # Sample three distractor continuations (do_sample=True so the three differ)
        options = [
            tokenizer.decode(
                model.generate(inputs, max_length=50, num_beams=4,
                               do_sample=True, early_stopping=True)[0],
                skip_special_tokens=True,
            ).strip()
            for _ in range(3)
        ]
        # Add the correct (previously masked) span as the fourth option
        options.append(tokenizer.convert_tokens_to_string(correct_span).strip())
        random.shuffle(options)

        # Add question and options to list
        questions.append((question, options))

# Print questions and options
print("\nGenerated Questions:")
for i, (question, options) in enumerate(questions):
    print(f"\n{i+1}. {question}")
    for j, option in enumerate(options):
        print(f" {'ABCD'[j]}. {option}")