From YouTube: Incubation Engineering MLOps - Update 2022-01-16
Description
Current state of MLflow Integration
- Request for testing: https://forms.gle/pALYkCFbz9jZmfXU7
- Feedback Issue: https://gitlab.com/gitlab-org/gitlab/-/issues/381660
- This Update: https://gitlab.com/gitlab-org/incubation-engineering/mlops/meta/-/issues/65
- All Updates: https://gitlab.com/gitlab-org/incubation-engineering/mlops/meta/-/issues/16
Hello everyone, and welcome to another update for MLOps here at GitLab. Today we're going to talk quickly about the progress on MLflow, or more specifically on experiment tracking within GitLab, and a little bit about Jupyter as well, because why not. Just a reminder of what we're doing here: we've been working for the past few months on enabling experiment tracking for GitLab.
What experiment tracking means is finding a way to allow data scientists to keep track of which hyperparameters generated which model candidates within GitLab, and how this could be used across the GitLab platform. Or, put better: how could the GitLab platform help data scientists track these experiments?
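To make the idea concrete, here is a toy sketch of what experiment tracking does; this is an illustration only, not GitLab's or MLflow's implementation. Each model candidate records the hyperparameters that produced it and the metrics it achieved, so runs can be listed and compared later:

```python
# Toy sketch of experiment tracking (hypothetical class, not a real API):
# each "candidate" stores the hyperparameters that produced it plus the
# resulting metrics, so runs can be compared after the fact.

class ExperimentTracker:
    def __init__(self, name):
        self.name = name
        self.candidates = []

    def log_candidate(self, hyperparameters, metrics):
        """Record one run: the hyperparameters used and the metrics achieved."""
        self.candidates.append({"params": hyperparameters, "metrics": metrics})

    def best(self, metric):
        """Return the candidate with the highest value for the given metric."""
        return max(self.candidates, key=lambda c: c["metrics"][metric])


tracker = ExperimentTracker("my-model")
tracker.log_candidate({"learning_rate": 0.1, "depth": 3}, {"accuracy": 0.82})
tracker.log_candidate({"learning_rate": 0.01, "depth": 5}, {"accuracy": 0.91})
print(tracker.best("accuracy")["params"])  # {'learning_rate': 0.01, 'depth': 5}
```

The point of putting this inside GitLab is that the candidate list lives next to the code and merge requests that produced it, rather than in a separate tool.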
Over the past few weeks we made a lot of improvements to the UX and UI. We've been iterating on the internal testing that started in December: we have data scientists here at GitLab, and we asked them to test this feature.
They already use MLflow, so the only thing they had to do was swap the tracking URI to GitLab, and then give us feedback on what still needs to be done. Based on that feedback, we've added a few different things. For example, we added search, so now there's search over here. We also added a display for the name of an experiment, of a run, if there is one. Let me just show this one over here.
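"Swapping the URI" boils down to pointing the standard MLflow client at a GitLab endpoint instead of a self-hosted MLflow server. A minimal sketch of what that could look like follows; the host, project ID, token, and the exact endpoint path are all placeholders and assumptions here, since the integration was still in incubation at the time of this update:

```python
import os

# All values below are hypothetical placeholders; in particular, the
# "/api/v4/projects/<id>/ml/mlflow" path is an assumption about the endpoint.
project_id = 12345
tracking_uri = f"https://gitlab.example.com/api/v4/projects/{project_id}/ml/mlflow"

# The MLflow client reads these environment variables, so existing training
# code needs no changes beyond setting them before it runs.
os.environ["MLFLOW_TRACKING_URI"] = tracking_uri
os.environ["MLFLOW_TRACKING_TOKEN"] = "<project-access-token>"

print(tracking_uri)
```

With those variables set, the data scientists' existing `mlflow` logging calls are the ones that produce the runs shown in the UI here.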
So if there is a name, we display it, along with the creation time and the user that created it. When we go into the details of that candidate, we are also able to see the metadata for that candidate. For example, this one over here was created by MLflow itself, using the reference client, and here you can see all the metadata that was generated during the creation of that candidate, or that run. All of this came from internal testing, and we are iterating further on it, testing out new things.
A bunch of MRs emerged from this. Additionally, we took a little bit of time to work on Jupyter notebook diffs. There's a big low-hanging fruit there: there is a size limit for diffs, 512 kilobytes, and any Jupyter diff tends to go over that. So what we're trying to do here is enable setting a different size limit for Jupyter diffs. We've already written the code for this.
So, what's next? Validate with further internal testing. We made a lot of changes, a lot of new features, and a lot of new feature requests came from our colleagues. After validating those, we can move on to external testing, either closed or open; I'm not entirely sure how that's going to go. But we are at a point where we've introduced all of the major features and fixed what was needed, so as a feature it already works. As experiment tracking, it works.
What we need to do now is go beyond that: how can we leverage the GitLab platform both to improve the experience of using experiment tracking, and also to let experiment tracking inform the other steps of the DevOps-for-data-science workflow, the MLOps workflow? So yeah, this is where we are at now. Thanks for watching.