From YouTube: Training & Deploying AI Models with GitLab and Vertex AI
Description
Walk with us as we train, deploy, and test a predictive model using GitLab's robust CI/CD pipeline and Google's powerful machine learning platform, Vertex AI.
You'll see firsthand how seamlessly these tools work together in a modern data science workflow. We also share valuable insights into deploying and testing the model via a REST API endpoint, making it an ideal watch for developers and business users alike. This demo showcases a practical use case of advanced AI technology in the banking sector.
Find more valuable content on GitLab, Vertex AI, and other AI technologies on our channel.
Project repository: https://gitlab.com/gitlab-com/alliances/google/sandbox-projects/demos/vertex-ai
Some prerequisites first. The first would be a Google Cloud project. Second would be a service account on Google Cloud with these permissions: AI Platform Admin, Service Account User, Storage Admin, Storage Object Admin, and Vertex AI Administrator. And finally, you need your GitLab project, where your CI/CD and source code will be hosted.
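For reference, granting those roles to a service account is typically done with gcloud. This sketch only prints the commands rather than running them, so it is safe to execute as-is; the project ID and service-account email are placeholders, and the role IDs are the usual mappings for the permission names above:

```shell
#!/bin/sh
# Placeholders -- substitute your own project and service account.
PROJECT_ID="my-gcp-project"
SA_EMAIL="vertex-demo@${PROJECT_ID}.iam.gserviceaccount.com"

# The five roles from the prerequisites.
ROLES="roles/ml.admin roles/iam.serviceAccountUser roles/storage.admin roles/storage.objectAdmin roles/aiplatform.admin"

# Print the binding commands; drop the echo to actually apply them.
for ROLE in $ROLES; do
  echo "gcloud projects add-iam-policy-binding $PROJECT_ID" \
       "--member=serviceAccount:$SA_EMAIL --role=$ROLE"
done
```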
Fourth would be to train the model. This will be custom training via a Python script that will be executed and then deployed onto Google Cloud's Vertex AI. And finally, we will test this endpoint once it's deployed onto Google Cloud. Now, on to the demo.
The first step is to set up the service account credentials for Google Cloud in your GitLab project. To do that, go to Settings, then CI/CD, and enter the credentials JSON value as an environment variable. You should encode this using Base64 to obfuscate the JSON value of the service account. I've put it right here as a variable called GOOGLE_CREDENTIALS, and as you can see, it's obfuscated and Base64-encoded. So that is the first step: you've set up the GitLab project to be able to communicate with your Google Cloud project.
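The encoding round trip can be sketched in Python. The key dictionary here is a hypothetical stand-in for the downloaded service-account JSON file, and GOOGLE_CREDENTIALS is the variable name used in this demo:

```python
import base64
import json

# Hypothetical stand-in for the service-account key JSON downloaded
# from Google Cloud IAM.
key = {"type": "service_account", "project_id": "my-gcp-project"}
raw = json.dumps(key).encode("utf-8")

# Base64-encode the JSON for the GitLab CI/CD variable (GOOGLE_CREDENTIALS).
encoded = base64.b64encode(raw).decode("ascii")
print(encoded)

# Inside a pipeline job, decode it back before authenticating to Google Cloud.
decoded = json.loads(base64.b64decode(encoded))
assert decoded == key
```

Note that Base64 is obfuscation, not encryption; the GitLab variable should still be marked masked (and ideally protected) in the CI/CD settings.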
The next step is to upload your dataset to Vertex AI in Google Cloud. On this screen, you can see that I've already uploaded my current dataset, but the steps for doing that are simply to create the dataset using the button here at the top. You'll be asked to enter a name. Since we're building a credit card fraud detector using tabular data from transactions that happened a decade ago, you should select the appropriate type: tabular data, meant for regression or classification. Because we are going to predict whether a transaction is fraudulent or not, classification is the objective. Once you hit Create, you'll be asked where to source the data: you can upload it directly, select it from Cloud Storage if you've already made it available there, or use BigQuery. Previously, I uploaded this directly from my local computer, but you could also use something from Cloud Storage; you just need to enter the location.
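The same console flow can also be done programmatically with the Vertex AI SDK. A minimal sketch, assuming the google-cloud-aiplatform package; the bucket path and display name are hypothetical:

```python
def create_fraud_dataset(project: str, location: str = "us-central1"):
    """Create a tabular dataset in Vertex AI from a CSV in Cloud Storage."""
    # Imported inside the function so this sketch loads without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    return aiplatform.TabularDataset.create(
        display_name="credit-card-fraud",
        gcs_source=["gs://my-bucket/creditcard.csv"],  # hypothetical path
    )
```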
A
Okay
next
step
is
to
head
back
to
the
gitlab
project
and
plan
and
write
the
CI
CD
pipeline.
So
gitlab
is
very
important
in
the
step,
because
this
is
where
the
deployment
and
training
will
be
kicked
off
and
run
by
or
runners
in
your
project.
Now,
let's
look
at
the
pipeline,
so
I'll
zoom
in
a
bit,
so
there
are
three
stages
for
this
pipeline,
so
the
first
stage
is
the
test
stage
where
I've
included
a
great
deal
of
the
templates
used
for
scanning
and
security
you
can
get.
This ensures code quality in the training and deployment Python code that we'll be using, which is key to ensuring that the quality and security are there. The next stage is the training stage, where the Python code that trains the model will be executed.
Now, it's very straightforward: it's just a simple installation of the prerequisite Python libraries in the container, after which a new artifact called model.pickle will be generated and uploaded to Vertex AI. Once that model is uploaded and included in the Model Registry, it's ready for deployment via an endpoint, and that will be executed by the last stage. For this stage, we'll be using a specific container for deep learning that Google provides, geared toward the library we'll be using, which is scikit-learn.
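A rough sketch of what such a .gitlab-ci.yml can look like; the job names, image tags, and script paths are assumptions, not the demo's exact configuration:

```yaml
include:
  # GitLab's scanning/security templates run in the test stage.
  - template: Security/SAST.gitlab-ci.yml

stages: [test, train, deploy]

train-model:
  stage: train
  image: python:3.10
  script:
    - pip install -r requirements.txt
    - python train.py            # writes model.pickle and uploads it to Vertex AI
  artifacts:
    paths: [model.pickle]

deploy-model:
  stage: deploy
  # One of Google's deep learning containers, geared for scikit-learn.
  image: gcr.io/deeplearning-platform-release/sklearn-cpu
  script:
    - python deploy.py           # creates the endpoint and deploys the model
```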
So these are the stages: we'll be testing, training, and deploying. Now, let's take a quick look at the code that will train the model in Vertex AI. I'm not going to go into much detail on this; you can take a look at the code in the repo via the link I'll be providing. But essentially, the code will connect to Google Cloud using the credentials that we entered at the beginning.
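The training script isn't walked through in the video, but its general shape can be sketched as follows. This is a minimal sketch, assuming pandas, scikit-learn, and the google-cloud-aiplatform SDK; the file names, label column, display name, and container URI are assumptions, not the demo's exact code:

```python
import pickle


def train(csv_path: str = "creditcard.csv") -> str:
    """Train a fraud classifier and write it out as model.pickle."""
    # Imports live inside the function so the sketch loads without these
    # libraries installed.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    df = pd.read_csv(csv_path)
    X, y = df.drop(columns=["Class"]), df["Class"]  # "Class": fraud label (0/1)
    model = RandomForestClassifier().fit(X, y)
    with open("model.pickle", "wb") as f:
        pickle.dump(model, f)
    return "model.pickle"


def upload(artifact_dir: str, project: str) -> None:
    """Register the pickled model in the Vertex AI Model Registry."""
    from google.cloud import aiplatform

    aiplatform.init(project=project, location="us-central1")
    aiplatform.Model.upload(
        display_name="fraud-detector",
        artifact_uri=artifact_dir,  # e.g. a gs:// folder holding the artifact
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )
```

One caveat: Vertex AI's prebuilt scikit-learn serving containers expect the artifact to be named model.pkl or model.joblib, so a model.pickle file may need renaming before upload.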
The final piece of code that will be executed in this demo application is a deployment script in Python. It's very straightforward: it will connect to the Google Cloud project using the credentials I created earlier, and from there it will programmatically create an endpoint and deploy the model. Once that's done, people can test and try out the API, make requests, and send JSON payloads to test the model's predictions.
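A minimal sketch of what such a deployment script can look like with the Vertex AI SDK; the display name, machine type, and function shape are assumptions:

```python
def deploy(model_id: str, project: str, location: str = "us-central1"):
    """Create an endpoint and deploy a registered model to it."""
    # Imported inside the function so the sketch loads without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint.create(display_name="fraud-endpoint")
    model = aiplatform.Model(model_name=model_id)
    model.deploy(endpoint=endpoint, machine_type="n1-standard-2")
    return endpoint
```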
So here's what the pipeline looks like once the execution is done. All three stages that I described were completed: test, train, and deploy. What this means is that we should expect an endpoint deployed on Vertex AI, based on the model we trained earlier using the custom Python training. Heading to Vertex AI now,
when I click into the version ID, I'm able to test this endpoint using a built-in feature of Vertex AI, without the use of Postman or some other external tool. I plug the JSON request in here to test, click Predict, and this is the expected response, where a value of 0 or 1 will be shown, based on whether the model predicts the transaction is fraudulent or not.
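The same test can also be made outside the console. A sketch of the request with the SDK, where the feature values are made up stand-ins for a real transaction row:

```python
import json

# Hypothetical instance: one row of transaction features, in training-column order.
instance = [0.1, -1.3, 0.5, 200.0]
payload = json.dumps({"instances": [instance]})  # the JSON body shown in the console


def predict(endpoint_id: str, project: str, location: str = "us-central1"):
    """Query the deployed endpoint; predictions come back as 0 or 1."""
    # Imported inside the function so the sketch loads without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint(endpoint_name=endpoint_id)
    return endpoint.predict(instances=[instance]).predictions
```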
So there you have it.