Description
This talk will be given in person at the OSS Summit 2023.
What is MLOps, and why is it a thing in the first place?
What are the unique challenges faced when developing features powered by machine learning?
How is GitLab building the MLOps platform?
What is LLMOps?
I am Brazilian, I'm located in the Netherlands, and before GitLab I was a data scientist, and also a software engineer, an Android engineer, a front-end engineer, and a few other things.

A quick overview of the talk. First, I'll start with the definitions: what is MLOps in itself? Then I'm going to go through a few examples of communication issues between data scientists, the platform teams, and the rest of the company, to explain why MLOps is there. Third, I'm going to talk about why developing with machine learning is different. Then I'll give an overview of how we are tackling this problem at GitLab, and finally a quick word about LLMOps.
In Portuguese we say "let's name the cows", so let's name things: what is DevOps, and what is MLOps? DevOps, short story, is people working together to reduce inefficiencies and to deploy high-quality software that actually solves business problems.

So it's a set of processes and tooling that we implemented over time to help us deploy genuinely good software, software that users can use and that is useful to them. And MLOps is similar; it's the same idea. It's people wanting to develop processes and tooling to make it easy to deploy meaningful software. The only difference is that this software includes machine-learning-powered features, so all, some, or a small number of the features are powered by machine learning.

That is the only difference, so the goal is the same. But if it were the same thing, I wouldn't be here talking to you, right? So why does MLOps exist in the first place?
I'm going to show some examples of conversations that I had in the past. I was a data scientist at a large company, with about 300 data scientists and 3,000 developers, and part of the data scientists' work was to set up data pipelines.

We used a tool called Oozie, which is a pipeline scheduler for the Hadoop ecosystem, from before Airflow was a thing. It worked for us. It wasn't perfect, but our workflow was: find a similar pipeline that worked, copy-paste the definition, make the changes we needed, push it, and it worked. It handled dependencies; we had this huge DAG of pipelines. It wasn't the best, but it worked. At some point the platform team decided that it wasn't good enough: users needed some specific setup, it was hard to debug, it didn't really accept the formats we were used to working with, it would take time to migrate, and so on. Long story short, that conversation happened five years ago. I spoke with former colleagues last month, and they're still using the old tool. All of the effort that was put into the new tooling went to waste, because they didn't account for the user base.
Another one also happened to me. I was working at a smaller company and we had just deployed part of our MLOps stack with Kubeflow, and we were working on deploying models, or improving model deployment, and realized that there was no model registry.

There was no way to version those models, so we approached the platform team and said we needed this, and they just said it shouldn't be necessary: since Kubeflow was already set up, and Kubeflow is "end-to-end", whatever I needed must not be necessary, because if it were, an end-to-end tool would already have it.

That was very frustrating to deal with. Another one, and this is a rite of passage for almost all data scientists dealing with a platform team, especially when a team is starting out at a new company, is convincing the platform team that data scientists need access to production data in the dev environment. That kind of breaks all the connections, all the learning: "What do you mean, production data in the dev environment?"
You might have noticed a few of the themes here, but there are two main reasons why DevOps is failing machine learning. First, it fails to understand the who: it tries to force the software engineering workflow onto data science, when data science is different and data scientists are different. Second, it fails to understand the what: creating software with machine learning is different from creating software without machine learning.

There are fundamental differences that we'll go into in a minute. So we went through the "what is" and the examples; now, why is machine learning different? The first part is the who, so let's start with the who. Data scientists are not software engineers. Let's start with that: they are not software engineers. They care about different things. They have different training.
The workflow that works for software engineers might not necessarily work for data scientists, and sometimes what happens is that we have a hammer and we pretend everything is a nail, but what we have here is a screw. First of all, data scientists actually comprise different personas, even within their own group. You have machine learning engineers, who are software engineers specialized towards machine learning. You have researchers, people more connected to academia; even when they are embedded in companies, they act more as a research team.

You have data scientists, or decision scientists, who are data scientists more focused on helping the business make decisions, so A/B testing, measuring impact, and things like that. Each one of them acts differently, with a different workflow and different tooling.

Second, code is not really a craft for data scientists. Of course, some will like coding, but for software engineers, code is their craft; for data scientists, machine learning is their craft, data analysis is their craft, and code is just a means to get there.

They don't care so much about writing the best code or about the newest tooling or whatever. They just want something that works for them. They have a very low tolerance for new tooling, or for setting up new tooling; they prefer to use what already exists. This is partly why there's essentially one language, Python, with some people using R, and why the libraries are consolidating: the data science ecosystem has consolidated into a very small number of libraries. Data scientists prefer to consolidate rather than spread out and create new tooling.

And another point is that data scientists have very, very different backgrounds. They are the most diverse group of people I've worked with in the past, in terms of background. Most of them are not computer scientists. They come from philosophy, from music, from biology, from geology, from physics.

In a lot of cases the computer science training just isn't there, so even things that software engineers consider basic, like Git, might be new to someone who comes from academia. So there is a different set of complexities that data scientists and software engineers care about.
The second point of why machine learning is different is the what. When you build regular software, you code your logic, which takes some input data and outputs some behavior.

So the logic of the developer is explicit in the code. When you do a code review, you don't really just look at the code; you look at the logic that the code implements, right? That's regular software development: input data goes through some code that carries the logic the software engineer intended to implement, and it outputs behavior. Machine learning is different. Everything starts with some data that you want to extract some logic from.

You don't know whether the logic exists in the data. The logic is made apparent by patterns, and you write code to extract those patterns from the data. These patterns may or may not reflect a logic that actually exists there, and the extracted patterns are returned as a model, which carries some logic, which will do some logic.

Even if there is no logic in the training data, the code will find something, right? Then you pass input data to this model, which finally leads to your behavior. So if you compare it to the previous picture, it's like a second-order process: instead of the logic being explicit in the code, the logic is implicit in the training data, and the code extracts these patterns into the model.
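To make that contrast concrete, here is a minimal sketch (my own, not from the talk) of the two shapes of development: an explicit rule written by hand versus a pattern learned from data with scikit-learn. The loan scenario, numbers, and threshold are invented for illustration, and scikit-learn is assumed to be installed.

```python
from sklearn.linear_model import LogisticRegression

# Regular software: the logic is explicit in the code.
def approve_loan(income_k: float, debt_k: float) -> bool:
    # A human wrote this rule; a reviewer can read it directly.
    return income_k - debt_k > 20

# Machine learning: the logic is implicit in the training data.
# This code does not contain the rule; it extracts a pattern.
X_train = [[50, 10], [30, 25], [80, 5], [20, 18]]  # income, debt (thousands)
y_train = [1, 0, 1, 0]  # past outcomes: 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

# The behavior now comes from the model, not from readable code.
print(approve_loan(40, 15))            # explicit logic
print(model.predict([[40, 15]])[0])    # learned (implicit) logic
```

Reviewing the first function means reading its logic; "reviewing" the second means inspecting training data and measuring the model, which is the second-order step the talk describes.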
This leads to a lot of challenges. First of all, extracting the patterns from the data can be very expensive: the pipelines for machine learning are very, very expensive. Some are simple, but some can take days; you need GPUs, you need more powerful hardware. The artifacts are larger too: some machine learning models are larger than 10, 20, 30 gigabytes. So the pipelines are very different.

The code that you write to extract the patterns will extract some pattern, but it might be completely useless. It might be the wrong pattern, because the logic is not there, or the logic is there but the patterns don't reflect it. Either way it will extract some pattern and create a model, and you only know whether that model is useful or not when you put it in production. That's the only way to know for sure whether a machine learning model is doing what it's supposed to do.

You have some metrics that you can use for sanity checks, but you only really know whether it's better or not when it goes to production.

Also, the patterns that you learn from data, and the training data itself, can get stale. For example, if you have a recommender system, the right recommendations might change; the users might want different things now. So you have to keep retraining the model even if the code stays the same, which is why versioning is so common and so important.
The development environment requires production data. This is what I mentioned before. Because the code extracts patterns, that is what the code does, you cannot extract a useful pattern from local toy data. You could take the production data and create some synthetic data from it, but either way production data is part of the pipeline: you need production data, or something that looks like production data, to create the model in the dev environment.
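As one illustration of the "something that looks like production data" option, here is a minimal sketch (mine, not from the talk) that fits simple per-column statistics on a production-like table and samples synthetic rows from them; real approaches need to be far more careful about column correlations and privacy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a numeric production table (rows x features).
production = rng.normal(loc=[100.0, 5.0], scale=[20.0, 1.5], size=(10_000, 2))

# Fit simple per-column statistics...
mean, std = production.mean(axis=0), production.std(axis=0)

# ...and sample synthetic rows with matching marginal distributions.
# Note: this ignores correlations between columns and offers no formal
# privacy guarantee; it only illustrates the idea.
synthetic = rng.normal(loc=mean, scale=std, size=(10_000, 2))

print(mean, synthetic.mean(axis=0))  # the marginals should roughly match
```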
And since training is not deterministic, if you create a model that has, I don't know, a five percent or one percent error rate or something like that, each model that you create is different and will produce different results, which might break your tests. Tests are harder to implement properly with machine learning.
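One common workaround, again my own sketch rather than anything GitLab-specific, is to test against metric thresholds rather than exact outputs, so a slightly different but equally good retrained model still passes:

```python
# test_model_quality.py -- run with pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_model_meets_accuracy_floor():
    # Synthetic stand-in for a held-out evaluation set.
    X, y = make_classification(n_samples=1_000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    # Assert a floor, not an exact value: retrained models differ slightly.
    assert acc >= 0.80
```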
So all of these problems arise from machine learning being different, both on the who and on the what. And how is GitLab approaching this? In DevOps, a while ago, and it's still going on, what happened is that we started realizing, okay, we need to improve our efficiency, and a bunch of different point-solution tools appeared, each one solving one specific problem, and they didn't communicate with each other very well.

Then we moved into creating standardized APIs between these tools, but the customer was still responsible for gluing all of this tooling together, which was very error-prone, with a lot of security issues. Then came the digital duct tape, or what I call the stitchers: vendors that would just connect all of these different tools and write the glue code for you. But each one of these different tools still had its own specific language, visual language, and design language, and you had to learn all of them separately.

Some used different words than others for the same thing, and it was just inefficient. Right now, MLOps is just getting to the standardized toolchains; it's not even there yet. Some vendors are already working on the digital duct tape and connecting different tooling, but we are not even really at standardized toolchains.
So how is GitLab going about this? At GitLab we are tackling this the same way we are working on DevOps: the MLOps platform. I will just read this slide and then explain. The GitLab ML platform is a single application, powered by a cohesive user interface, agnostic of SaaS or self-managed deployment. It is built on a single code base with a unified data store that allows organizations to resolve the inefficiencies and vulnerabilities of an unreliable DIY toolchain. What does that mean?

We created a single tool that encompasses this whole development process, and what we are doing with MLOps is looking at it the same way: where in Create is MLOps lacking? Where in Plan are we lacking for MLOps? Where in Secure, where in Govern? What are the features missing across the whole MLOps landscape? We're not focusing on one specific spot here and there; we are looking at the whole thing and seeing, at each step, what is missing right now. So what we want to build for MLOps is what we built for DevOps: a platform.
A GitLab-native experience: the same place where you work with the software, where the software engineer, the product manager, and everyone else works, we want the data scientists to be there as well. We want GitLab to be useful to data scientists with minimal setup. The data scientist doesn't need to set up anything, and they don't need to ask a DevOps or platform engineer to set up something new. If it is in GitLab, it is available to them, either on SaaS on gitlab.com or on their self-managed instance.

Connected across the platform is an important point. It's not about vertical features; we don't want to build MLOps as a vertical thing in GitLab, something separate. No, it has to be integrated across the features that are already there, with the same language, so you don't need to learn multiple tools. You learn the same tool for your issues, for your merge requests, for your package registry, for your CI, and for your machine learning development. Everything is together. And third and finally, it's open source and we are building in the open.

Everything that we build is available while I'm working on it; users come in and give feedback, and I share updates very frequently on the state of things, even if they're not ready, just to check in with the users on whether the vision is on course, whether the features are what they want, and so on and so forth. It's a really fun way to build. But I said a bunch of words; what have we actually done so far?
The first feature that we built was code review for Jupyter notebooks. Like I mentioned before, Jupyter notebooks are a very weird file type: a notebook is actually JSON, but it embeds code, images as base64, and HTML.

It almost feels like it should be a binary format, but it's not; it's a JSON file with all of this inside. Data scientists write code and push it to a Git repository, but they cannot review that code, because it's Jupyter. So what did we do? We implemented a native diff between these notebooks, so if you push a Jupyter notebook to GitLab, you get a diff between before and after on the commit.

It shows the images, it shows the code, it shows the markdown. It shows everything, and you can discuss both the input and the output in GitLab itself, with zero setup by the users. You just need to push your notebook and it will work. This was already released in 14.5.
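To see why raw line diffs of notebooks are so unreadable, you can inspect the JSON yourself. A small sketch (the file name is hypothetical):

```python
import json

# A notebook is just JSON: cells with source code or markdown, plus
# outputs that can embed base64 images and HTML.
with open("analysis.ipynb") as f:
    nb = json.load(f)

for cell in nb["cells"]:
    outputs = cell.get("outputs", [])
    kinds = {k for out in outputs for k in out.get("data", {})}
    print(cell["cell_type"], "->", kinds or "no rich output")
    # e.g. "code -> {'image/png', 'text/html'}": a one-pixel change in a
    # plot rewrites a huge base64 blob, which is why line diffs fall apart.
```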
Another thing we built recently is model experiments. When you're training a machine learning model, since training is non-deterministic, every change in parameters gives you a different model with a different performance, by whatever metric you're measuring performance.

Even while data scientists are still developing, before it's even on CI, when they are developing locally, they want to keep track of the different models they're creating, the different versions, and the artifacts that they have. A common solution for this is Excel; data scientists really do use Excel for that. An open source tool for this is MLflow, and this is where GitLab starts to do things differently.

A lot of vendors will just provide a managed instance of MLflow to customers as a product: they take MLflow, which is open source, and offer it as a hosted service. Those are what we call the stitchers. GitLab is not doing that. What we did with model experiments is build it directly into the GitLab code base. We rebuilt this feature, so it's in the same code base.

It follows the same coding standards and the same security processes as the rest of GitLab, and by building this into GitLab first, you get the same UI as the rest of the tool, so you don't need to learn a new UI, and it's very easy for us to have this connected to the CI pipelines: if you create your model from a CI pipeline, we will pick it up and display all of the information about the CI pipeline directly on the model. The same applies if it is created from an MR.

Imagine that you create an MR with a change that triggers the training of different models, because there was a change in code, in data, or in parameters: you will see that that MR created these different models. And this is built on top of the package registry, so when you save a model candidate into GitLab, you are able to store the model itself in GitLab. You don't need to set up a bucket or S3.
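For illustration, storing an arbitrary model file in a GitLab project can be sketched with the generic package registry API; a rough example using `requests`, where the project ID, token, package name, and file name are all placeholders, and where the model experiments feature wires this storage up for you rather than requiring manual calls like these:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"   # placeholder project ID
TOKEN = "glpat-..."    # placeholder access token

# Upload a trained model file as a generic package (name/version/file).
with open("model.pkl", "rb") as f:
    r = requests.put(
        f"{GITLAB}/projects/{PROJECT_ID}/packages/generic/"
        "my-model/0.1.0/model.pkl",
        headers={"PRIVATE-TOKEN": TOKEN},
        data=f,
    )
r.raise_for_status()
print(r.json())  # package file metadata on success
```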
Everything is already in GitLab, without any setup by the user. This is already available both on gitlab.com and on self-managed, so the users don't need to install anything new, they don't need to go and understand a new tool, and they don't need to ask the platform engineer either; it's already there.

User management is also through GitLab. With a lot of tooling in this space you need to connect your LDAP service or whatever; this just uses GitLab's built-in user management, projects, and permissions. And to make it easier for data scientists, because we are aiming for minimal changes to their code, as little as possible, so that they can adopt it in one shot, we also provide compatibility with the MLflow client, since, like I mentioned, MLflow is a common MLOps tool for this.
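In practice that compatibility means pointing the stock MLflow client at GitLab instead of at an MLflow server. A minimal sketch based on my understanding of the integration; the URL, project ID, and token are placeholders, so check the GitLab docs for the exact endpoint for your version:

```python
import os
import mlflow

# Point the unmodified MLflow client at a GitLab project instead of
# an MLflow tracking server (values below are placeholders).
os.environ["MLFLOW_TRACKING_URI"] = (
    "https://gitlab.com/api/v4/projects/12345/ml/mlflow"
)
os.environ["MLFLOW_TRACKING_TOKEN"] = "glpat-..."

# Existing MLflow code keeps working; runs show up as model
# experiment candidates in the GitLab UI.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
```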
What we're working on now is the model registry. The model experiments feature is kind of like the scratch pad, and the model registry is where you save the models that go into production. So we are working to understand where the package registry is not enough, and what we need to implement so that this becomes truly useful for data scientists: larger model sizes, keeping better track of model metadata, and managing the many different versions you train of the same model.

For example, a lot of the software engineering use cases of the package registry are agnostic of what is in production: they just provide versions, and something else manages what's deployed. Data science is a little bit different in that data scientists use the model registry to toggle what's in production, so they want to be able to track what's deployed or not directly from the model registry. This is in progress.

If you're interested in this, follow the epic to give feedback or to ask for specific features. This is very, very exciting. So those are the three main features.
We also deployed GPU runners, so your CI pipelines can use GPUs as well: if you want to train machine learning models on GitLab, you can use the GPU runners.
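A quick way to confirm that a job actually landed on a GPU-enabled runner is a sanity-check script at the start of the pipeline; a sketch assuming PyTorch is present in the job image:

```python
# check_gpu.py -- fail fast if the CI job has no usable GPU.
import sys
import torch

if not torch.cuda.is_available():
    sys.exit("No CUDA device visible; is the job on a GPU runner?")

print(f"Found {torch.cuda.device_count()} GPU(s): "
      f"{torch.cuda.get_device_name(0)}")
```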
And the final aspect here: I'm just going to give a quick mention of LLMOps, which is the new thing.

Remember how I mentioned that developing with ML is one step, like a second order, above software engineering, because it's indirect: your logic is not really in the code, it's extracted as patterns. LLMOps makes it even worse. Why? Because it adds an additional step. Before, you had training data and you had code, the code would output a model, and the model would receive input data.

Now you have an additional step in between: you have code that extracts patterns from really, really large training data, which generates an LLM, which is the model. This model will this time receive a prompt, and this prompt plus the LLM is what I'm calling here an app: it is the instructions for how the LLM should read input data. And then you feed in the input data, from the user or from another system.
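A tiny sketch of that layering, with the prompt as an artifact in its own right, separate from both the model and the user input. The prompt text and version tag are invented for illustration, and `call_llm` stands in for whatever client your LLM vendor provides:

```python
# The prompt carries logic, so it should be versioned and evaluated
# like code or models are.
PROMPT_VERSION = "summarize-v2"  # hypothetical version tag
PROMPT_TEMPLATE = (
    "You are a support assistant. Summarize the ticket below "
    "in two sentences and label its urgency.\n\nTicket:\n{ticket}"
)

def build_app_input(ticket_text: str) -> str:
    # app = prompt + LLM; the user's data is filled in at the last step.
    return PROMPT_TEMPLATE.format(ticket=ticket_text)

def call_llm(text: str) -> str:
    raise NotImplementedError("stand-in for a vendor LLM client")

# response = call_llm(build_app_input("Printer on floor 3 is down..."))
```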
So there are many levels of indirection here; the logic is split across all of them, and it's going to be really hard to manage. There are many challenges that will appear that we haven't had to consider before. For example: what's in the training data? A lot of companies won't be the ones building the LLM themselves, because it's very expensive, so you're going to have a few large companies building the LLMs, and you need to know what's in the data and what patterns the code extracted.

You need to measure different LLMs, and you need to be able to switch between different LLMs. The prompts themselves are artifacts, so they need to be versioned and they need to be measured. A lot of people are going to want to change these prompts, like the product manager or the UX writer, people who are not really comfortable interacting with the code. And then a change there generates another prompt, and the prompt plus the LLM generates another app. How does that change? How will it work with the input data?

It might be that by changing the prompt, or the LLM, you break the app, and you need to work through that. There are other things, like agents, which I didn't even mention here, that allow you to connect traditional software with LLMs, and vice versa, allow LLMs to use traditional software. All of this is going to get really crazy, and there are a lot of challenges we will need to work through: the same way we worked to build efficiencies for DevOps, and the same way we are working to build efficiencies for MLOps, we will have to work to build new efficiencies for LLMOps.
The existing DevOps solutions are not enough, because they fail to understand the who, the people creating these machine learning models, the data scientists, and the what, that machine learning is different from software engineering. Even the existing MLOps solutions are not enough, because they are either point solutions, each fixing its own specific problem, or they are stitchers: they glue a lot of different tools together but don't create a unified UX across the tooling. And we at GitLab are working to change this by becoming the MLOps platform.

That's what I had to share today. Again, my name is Eduardo; you can follow me on Twitter or on LinkedIn. And yeah, thank you.