From YouTube: CDF - SIG MLOps Meeting - 2021-04-22
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B
Thank you. Yeah, I've been missing home a bit; it's cherry blossom season back there. I'm from Tokyo originally.
B
And wow, is that your background, or is it the real background?
B
Yeah, I did have a look at the 2021 draft version. It's quite a lot of things to digest, but I mainly wanted to get to know what you do, sort of from the meetings. I think there's a meeting twice every month, every other week on Thursdays?
A
So for the roadmap meetings we've switched to once a month at the moment, just because we're fairly early in the year. And then we aim to do an update as we go through the year.
A
So right now we're still relatively up to date, and there's not much going on in terms of new changes. But as we ramp into trying to add new stuff to the document and generally tidy things up, we may go back to the twice-a-month schedule again.
A
Right now the focus has really been on trying to capture a record of what we think the challenges really are in the space, and then starting to look at what needs to be done to manage that longer term.
A
Obviously we're in a very new space and everyone's making it up as they go along, but that will mature, and we want to try and make sure it matures into something that is going to be viable in the medium to long term. Otherwise we're going to have to get a lot of people to back up and go in a different direction, and it can take decades to get that to happen.
A
So a lot of this is about trying to get in front of it: understanding where the whole discipline needs to be, and what tooling we might need, those sorts of challenges. Oh, hi there.
A
Yeah, we've just been chatting over the general role of the roadmap. That's our agenda document; if you could just sign in on the document, you'll see, near the top of the agenda, this month's session and a list of attendees. It just gives me a feel for who's involved and what input we're getting.
A
So, just to get everybody up to date in terms of conferences: we are probably going to be doing an MLOps SIG birds-of-a-feather session, which will just be a sort of joint feedback session where everybody can get involved, ask questions and share information.
A
I've also got a list of upcoming conferences that I need to apply to, so I'm going to fire off a bunch of proposals; the latest ones run to November, so we're starting to get a few opportunities in there to communicate. Now, if anyone wants to get involved in the cdCon session, I can include you in that.
C
I'd love to help. I think I can help create some slides to help you sharpen it.
A
It's not really going to be a presentation; it's more of a panel chat. So it would be a matter of being able to attend in real time, and then we'd just have a sort of off-the-cuff discussion about some of the work that we're doing, hopefully prompt some questions from the attendees, and then get into more of a discussion.
A
Is there a date? There's a broad date for the two days of the conference, which I can share with you. I'm still waiting to find out what time slot we might have within that; I've asked for mid-to-late afternoon, UK time, because that seems to map reasonably well to other time zones.
A
Right. Other than that, probably the big news this week is the leaking of the proposed EU regulations on AI. It's quite concerning; it really would not be possible to be compliant with the regulations that they're proposing.
A
So, for example, there's a requirement for everyone working with the software to be trained to understand exactly how the system makes decisions. So for deep learning you're expected to be able to know and understand a neural network, which is clearly not possible.
C
My take on that is that there are basically three points related to it. The first is that I think we, as a subgroup of the CD Foundation and the Linux Foundation, should try to participate in some official meetings related to the regulation, as experts in the field, and give our expertise there. I think it would also be good for public relations.
C
That's point A. Point B: I don't see any way this can be anywhere close to final, because, as you said, there are a lot of concerning problems with this document. And point C: I think in some ways it will push companies to adopt machine learning tools or platforms, because otherwise it will be extremely expensive to implement things like data tracking and data management. So maybe we can try, you know, to juice their lemon.
A
Responding to your first point: I will raise that with the CDF committee to get their feeling on whether it's appropriate for us to get involved at that sort of level.
A
I think the approach will be interesting on this one, because the drivers for this regulation may not be entirely what we expect. This regulation may be more focused on trying to deliberately block American and Asian companies from being able to compete in Europe, rather than explicitly doing what it says on the tin. If that's the case, there will be a rush to push it through, and then there will be a lot of publicity about it.
A
Oh, welcome, I see we've got Artem joining us. Feel free to join in the conversation; we're very friendly, we don't bite.
B
I'm a first-timer as well. In terms of our concern as the MLOps special interest group towards these kinds of regulations: is there a way we can express it in a more direct manner, rather than just as a kind of "here are our concerns and recommendations"?
A
So, to this point, the group has really been focused on a more technical audience, and on documenting, if you like, an end-to-end requirement set which we can communicate to anyone who's working on building solutions in that space, so that they're up to speed with the full scope of what the problem domain looks like, rather than getting narrowly focused on the bit of the problem that they can see right now and then potentially getting drawn into a blind alley because they were unaware of something that was about to come and bite them.
A
The idea was that that would be very beneficial in terms of supporting teams who were building tooling for DevOps and MLOps, because it would help to steer people away from a number of blind alleys which exist in the current domain. But it's not an ideal artifact for communicating with a non-technical audience in a way that will be easily understood, so it would be quite a lot of work for us to shape it into that.
A
I don't know if anyone else is attending the ML conference that's going on at the moment, the apply() conference with Tecton and Feast, but there have been a lot of talks there where people have been proposing solutions, and we already know that the solutions they're proposing don't work, because of the requirements that we've gathered in the roadmap.
A
So it's a little bit concerning, in that there's a lot of work going on worldwide at the moment, and lots of people are dedicating a lot of time to doing things, and actually they still don't know what they don't know in terms of what problems they're going to run into when they try to implement that. It's quite important for us to find as many ways as possible to get the roadmap circulated more widely this year.
C
Can you give us an example of what you're talking about? The Feast implementation? Or are you talking about something maybe more specific, like pushing to promote Feast and Tecton?
A
So, for example, there was a discussion yesterday about what a third generation of machine learning system might look like, and really what it was describing was some of the pipeline-based machine learning solutions that were already built last year, using some of the practices that we've put in the roadmap.
A
So people are investing time in doing something that actually already has a good generic solution out there, which they could just download and use as open source. There is definitely a big communication problem in the space at the moment around making sure everybody has access to a full picture of what the problem space looks like.
C
I'm not sure what the way is to try to solve that. If you're referring to the concept of feature stores, I think it's only a partial solution.
A
So yeah, obviously that particular event will be heavily focused on feature stores, because that's the nature of that event, but I think there's a bigger picture here, in that how people see the problem depends very much on which discipline they've come from. People who've come from a data science background tend to see machine learning as a database problem, and they tend to build solutions that work like databases.
A
The thing you're trying to build is the product, not the machine learning. So if you're going to succeed commercially, the focus has always got to be on the product and how you manage and maintain that asset across its life cycle. And the reality is that when you look at one of these products, it will be promoted as an AI product.
A
But when you break down the overall product itself, about five percent of the overall effort goes into the machine learning bit, and the other 95 percent is in managing the rest of the product. That includes a lot of conventional software assets, because a model on its own can't talk to anything.
A
So your product is going to have a user interface, it's going to have integrations, it's going to be connected to things, it's going to be managing data. Machine learning assets never exist in isolation in the real world; they're always just another asset as part of an overarching product.
A
The path that a lot of companies have gone down in the ML space at the moment is to treat the machine learning assets as if they exist in isolation and deploy them as an atomic unit through a dedicated system that only does machine learning. That then leads to a situation where you have a lot of challenges.
A
Trying to cost-effectively manage your overall asset, because you've got one big chunk of it, which will be all of the web-facing stuff, the UI-facing stuff, the customer-facing stuff and the integration stuff, which can all be deployed with DevOps and will be part of a CI/CD system.
A
You can work on a very fast cadence there, doing a release every few hours if you need to. And then you've got this huge monolith, which is your machine learning stuff, which acts a lot more like a single database server instance than a distributed, component-based model like the rest of your architecture.
C
So, I don't know if you had time to read the article I published a few days ago, but I totally agree with what you said. I think there is a very big problem deploying models to production, and that's because when you do want to deploy models to production, you can't just, you know, click a button and it's in production.
C
You need to work closely with different roles in the organization, and together you need to build this productization overhead. So let's say I'm an engineer and you're a data scientist. Now you need me to have time to sit with you, to understand your problem, to have multiple meetings, to plan a new design for productization, to write some code, to do QA.
C
We need somehow to mediate between these two layers, and the technical solution I propose is to split the initiative into model development and data development, in a similar manner to front end and back end. So there would be a tight relationship between these two processes, but they are two different processes.
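One way to picture the split being proposed here is a data-development process that publishes immutable, versioned dataset artifacts, and a model-development process that consumes them only by pinned version, with the version string acting as the contract between the two pipelines, much like a front end consuming a versioned back-end API. This is a minimal illustrative sketch; all names (`DatasetRegistry`, `train_model`) are hypothetical, not from any real tool discussed in the meeting.

```python
class DatasetRegistry:
    """Data-development side: owns dataset snapshots and their versions."""
    def __init__(self):
        self._versions = []

    def publish(self, records):
        # Freeze an immutable snapshot and hand back its version string.
        self._versions.append(list(records))
        return f"v{len(self._versions)}"          # e.g. "v1", "v2", ...

    def fetch(self, version):
        return self._versions[int(version[1:]) - 1]

def train_model(registry, dataset_version):
    """Model-development side: trains only against a pinned dataset version."""
    records = registry.fetch(dataset_version)
    # Toy "model": the mean label, recorded together with its data lineage.
    mean_label = sum(r["label"] for r in records) / len(records)
    return {"prediction": mean_label, "trained_on": dataset_version}

registry = DatasetRegistry()
v1 = registry.publish([{"label": 1}, {"label": 3}])
model = train_model(registry, v1)
```

Because the model records which dataset version it was trained on, either side can evolve independently as long as the version contract holds.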
A
Yeah, so let's split out the technical architecture from the conceptual approach to managing an asset, because those are actually different things.
A
Well, I think there's actually a deeper problem here, in that right now the phrase MLOps is being used to describe something which is unrelated to DevOps, and that is misleading people. People are picking up ways of working within MLOps without understanding what the purpose of DevOps was in the first place. So within the CDF we've got a communication role there, which very much falls to us: to say, look, DevOps exists because of these problems that you will face when you try to manage your asset in a commercial sense, and DevOps as a practice addresses them.
A
We're currently working on a best practice guide within the CDF which spells out a lot of the fundamental drivers for DevOps, and hopefully that will help to communicate what some of these challenges are.
C
However, that might not be enough, because if we take a look at DevOps: let's say you need to deploy a complex application where a user uploads an image and the image should be uploaded to a server. If you need to manage that, from a DevOps perspective it's very complex, because you need to handle persistent storage and all of this stuff.
C
So I think that, in order to overcome the challenges of deploying machine learning, or AI in general, we need to solve these two problems: how to simplify the technical solution, and how to simplify the way to operationalize these new technologies.
A
Yeah-
and
this
is
where
I
think
we
have
to
be
quite
careful,
because
many
of
the
proposed
simplifications
actually
come
as
a
result
of
you
know,
discarding
certain
things
and
saying
right,
we'll
simplify
by
not
doing
this
stuff,
but
the
stuff,
that's
being
discarded,
is
actually
essential
to
addressing
the
overall
problem
domain.
And
so
you
get
simplified
over
simplifications.
A
That
then
paint
you
into
a
corner
and
you
then
get
a
product
that
can
do
certain
things
really
easily,
but
can't
actually
do
all
that
it
needs
to
do
in
order
to
be
a
viable
product,
and
that's
that's
pretty
much
a
good
description
of
where
the
ml
ops,
tooling
market
is
right.
Now,
in
that
a
lot
of
people
have
made
assumptions
about
what
they
can
simplify
out
and
have
built
tools
to
optimize
for
those
scenarios.
A
So this is where there's a lot of hidden complexity, and what we need to do is provide ways to consistently manage that complexity so people can understand it. We can't magically make the complexity go away; instead of pretending it doesn't exist, we have to build tools that label it and standardize it, so it becomes easier to understand. In many ways we're looking at what's effectively a parallel to containerization.
A
We went from deploying things onto physical computers, which works, but it wastes a lot of resources, and every computer you deploy gets configured slightly differently, so there's no consistency and a lot of complexity.
A
So then we went: oh well, actually we're not using most of the resources on these computers we've already got, so why don't we split them down into virtual machines and deploy into those, so that they can share the physical resources of one machine? That gets you a bit more compute efficiency.
A
But then everyone was still building their VMs by hand and doing it differently, so every VM running on the machine was a bit different. That made the VMs hard to manage, because you never quite knew how one had been set up and how you needed to maintain it. So then we went to containerization, which says, right...
A
Well,
let's
just
create
this
idea
of
a
completely
virtual
set
of
computing
resources
which
spread
across
lots
of
physical
machines,
but
which
all
have
the
same
configuration
and
the
same
way
of
setting
them
up,
and
then
it
doesn't
matter
how
many
resources
you
need.
A
Somebody
will
plug
some
more
hardware
in
in
the
back
end,
then
your
application
will
just
spread
onto
that
new
hardware,
so
that
gave
you
a
level
of
abstraction
that
made
it
simpler
to
conceptually
work,
but
at
the
same
time
it
created
this
massive
amount
of
complexity
within
the
infrastructure,
where
you
actually
have
to
be
able
to
manage
all
of
these
things
in
a
consistent
way
using
consistent,
tooling,
and
so
the
complexity
still
exists.
A
This is where we're sitting with a lot of the ML problems right now. Yes, you can build bespoke solutions for doing MLOps to solve your problem in the short term, but if we want to be able to do MLOps cost-effectively, like we do with other types of software, then we need a lot of standardized tooling that all plugs together in a consistent way, that takes the complexity away from some teams and buries it in a separate layer, so that we're not all getting dragged into all of the complexity.
B
Yeah, that...
A
Actually... oh, sorry.
B
Sorry
go:
go,
oh
yeah,
okay,
I'll
I'll,
be
quick!
So
that's
quite
interesting
point
and
I
was
reading
the
the
roma
as
well.
B
Around
kind
of
mlops
has
to
be
a
discipline
that
is
language
and
framework
and
platform,
infrastructure,
agnostic
and
that's
kind
of
an
interesting
and
in
the
point
that
you
raised
terry
about
kind
of
how
to
ex
increase
exposure
to
this
road
mlaps
roadmaps
and
all
of
that
sort
of
what
ml
ops
has
to
be
kind
of
point
towards
things
like
kubernetes,
which
I
think
is
quite
well
aligned
in
terms
of
it
is
quite
an
agnostic
platform
for
allowing
sort
of
deployment
of
containerized
application,
and
so
is
there
kind
of
a
scope
with
for
this
group
to
kind
of
either
endorse
or
collaborate
such
sort
of
technologies
such
as
kubernetes,
in
a
way
that
sort
of
we
recommend
or
and
then
vice
versa.
B
So on the Kubernetes side, if they agree with our objectives and recommendations, they could point users towards this recommendation, or make some kind of mention of the work that's been put into this group.
A
Yeah, that's a good question. What we're trying to do is communicate with teams who are building solutions in this space and get them to understand the implications of the problems that are in the roadmap, so that they start to come up with technical solutions that actually address the full scope of the problem domain rather than just part of it. Now, Kubernetes is a potential approach to solving some of these problems.
A
With
my
other
hat
on,
I
lead
the
ml
ops
work
within
the
jenkins
x,
ci
cd
solution,
and
we
have
a
an
ops
component
which
actually
does
use
kubernetes
to
to
do
all
of
the
machine,
learning,
training
and
deployment
so
yeah.
There
are
some
solutions
out
there
already
that
are
going
down
that
path
and
they
don't
actually
need
to
be
directly
supported
by
kubernetes.
A
So
the
jenkins
x
solution
is
leveraging
the
other
techdon
with
a
k
rather
than
a
c,
which
is
a
standard
pipeline
component
for
kubernetes.
A
And
so
we
just
we
just
turn
machine
learning
assets
into
things
that
can
align
with
a
standard
tecton
pipeline.
And
then
we
just
use
standard,
build
capabilities
to
distribute
the
the
trainings
and
and
to
to
run
the
model
inferencing.
A
So
together
that
that
stuff
is
going
on,
but
I
think
we're
we
still
got
a
bit
of
a
void
between
the
teams
who
are
build
focused
and
the
teams
who
are
ml
focused
and
there's
still
work
to
be
done.
To
get
everybody
understanding.
The
full
scope
of
the
problems
that
they're
facing.
B
A
So
artem
back
to
you.
E
Yeah, I had a maybe stupid question, because I'm partially not in the context of the SIG MLOps roadmap, sorry for that. If it is...
A
There are no stupid questions; there are just good questions.
E
All
right
all
right,
so
my
question
is
from
from
the
perspective
of
infrastructure
and,
as
you
were,
telling
the
history
of
like
progress
from
virtual
machines
to
to
containers
how
is
amal
and
envelopes
so
different
from
already
solved
problem
for
software
engineering.
Is
it
like
large
amount
of
data,
or
so
from
my
from
my
point
of
view,
all
the
complexity
is
most
of
the
complexities
concerned
on
the
ml
layer,
not
on
the
infrastructure
layer.
A
So
you've
got
some
code,
you
build
the
code,
you
test
it,
you
deploy
it
somewhere.
You
run
some
integration
tests
on
it
and
then
you
switch.
It
live
that
that's
a
solved
problem
and
there
are
lots
of
good
solutions
out
there
in
in
that
space.
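The "solved problem" sequence described here can be sketched in a few lines: build, unit-test, deploy to staging, integration-test, then switch live. This is a toy illustration only; the stage names and the in-memory "environment" are assumptions standing in for a real CI/CD system.

```python
def run_pipeline(source):
    """Run the classic build -> test -> deploy -> integrate -> go-live flow."""
    log = []           # record of which stages ran, in order
    environment = {}   # stand-in for deployment targets

    def build(src):
        log.append("build")
        return {"artifact": src.upper()}          # stand-in for compilation

    def unit_test(artifact):
        log.append("test")
        assert artifact["artifact"], "empty artifact"

    def deploy(artifact):
        log.append("deploy")
        environment["staging"] = artifact

    def integration_test():
        log.append("integration")
        assert "staging" in environment

    def switch_live():
        log.append("switch")
        environment["live"] = environment["staging"]

    artifact = build(source)
    unit_test(artifact)
    deploy(artifact)
    integration_test()
    switch_live()
    return environment, log

env, stages = run_pipeline("app-v1")
```

The point of the speaker's remark is that every stage here has mature off-the-shelf tooling; the open question in MLOps is how models and data fit into the same flow.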
E
But as far as I understand, this is still related to the problem of managing resources. So imagine we have a super-huge, super-flexible database that allows us to fixate immutable slices, let's say commits of data, subsets of data; train on them, version them, change them, maybe delete some rows from them, delete a user's data, trace back. So if we had this fully managed data solution: first question, is the problem solved? And second, are there any other infrastructure problems?
A
So
so,
yes,
that
that's
one
of
the
the
technical
solutions
that
that
we
suggest
is
needed
in
the
roadmap,
so
you
know
that
partial
bits
of
that
solution
exist
today,
but
there's
nothing.
You
can
go
to
off
the
shelf
and
just
say
right
here
is
a
a
a
data
lake
that
is
completely
versionable
that
integrates
into
a
cicd
system
in
such
a
way
that
you've
got
full
traceability.
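The core of the "versionable data lake with full traceability" idea is that every change to the data becomes an immutable, content-addressed commit linked to its parent, so a CI/CD system can record exactly which data version produced an artifact and walk the whole history for an audit. A minimal sketch, with all names (`DataLake`, `commit`, `checkout`) hypothetical rather than taken from any existing product:

```python
import hashlib
import json

class DataLake:
    """Immutable, content-addressed data commits with parent links."""
    def __init__(self):
        self._commits = {}   # commit id -> {"parent": ..., "data": ...}
        self.head = None

    def commit(self, data):
        # The id is derived from the content plus its parent, git-style,
        # so the same history always yields the same identifiers.
        payload = json.dumps({"parent": self.head, "data": data},
                             sort_keys=True).encode()
        commit_id = hashlib.sha256(payload).hexdigest()[:12]
        self._commits[commit_id] = {"parent": self.head, "data": data}
        self.head = commit_id
        return commit_id

    def checkout(self, commit_id):
        return self._commits[commit_id]["data"]

    def history(self):
        # Walk parent links from HEAD back to the first commit.
        chain, cursor = [], self.head
        while cursor is not None:
            chain.append(cursor)
            cursor = self._commits[cursor]["parent"]
        return chain

lake = DataLake()
c1 = lake.commit([1, 2, 3])
c2 = lake.commit([1, 2, 3, 4])
```

A build could then stamp `c2` into a model's metadata, giving the traceability the speaker says is missing from off-the-shelf tooling today.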
A
So
that's
a
a
product
that
somebody
needs
to
build.
That
is
a
critical
dependency
for
deploying
any
high-risk
ai
system
within
europe
as
an
example.
A
So
yeah,
but
that's
this
is
the
flip
side
of
the
road
map.
Is
that
actually
it's
a
long
list
of
problems
that
people
need
to
be
working
on?
That
will
actually
have
high
value,
because
if
you
build
one
of
these
things
and
sell
it,
you'll
make
a
lot
of
money
because
there's
a
captive
market.
A
This
is
one
way
of
working
out
what
to
work
on
next,
because
we're
we're
making
some
some
some
pretty
strong
predictions
about
what
the
market
is
going
to
look
like
over
the
next
five
years
and
where
there
are
commercial
opportunities
to
be
had.
A
And
in
in
our
answer
to
the
second
part
of
your
question,
yeah,
there
are
other
aspects
to
this
that
the
will
need
to
be
solved
there.
There
are
lots
of
governance
challenges
that
sit
within
this
space,
and
the
expectation
of
regulators
is
a
long
way
away
from,
what's
practically
possible
right
now,
with
the
tools
that
we
have.
A
A
Now
there
are.
There
are
some
tools
out
there
for
for
building
your
own
bias,
testing
approaches,
but
really
what
we
need
is
a
standard
component
that
says
is.
That
is
the
types
of
bias
that
we
expect
to
encounter
in
the
customer
domain
that
we
are
working
in
apply
those
generically
to
each
model
we
produce
and
give
us
a
score
for
that
model,
and
then
you
just
report
those
scores
as
as
part
of
the
quality
metrics
for
the
models
you're
generating.
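A tiny sketch of the standard component being described: declare the bias checks expected in your customer domain once, apply them generically to each model's predictions, and emit scores that can be reported alongside other quality metrics. The single check shown (a simple demographic-parity gap over binary predictions) and all names here are illustrative assumptions, not a proposal from the roadmap itself.

```python
def demographic_parity_gap(predictions, groups):
    """Gap in positive-prediction rate between the best- and worst-served group."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

def score_model(predictions, groups, checks):
    """Apply every registered bias check and return one score per check."""
    return {name: check(predictions, groups) for name, check in checks.items()}

# Domain-level declaration of which biases we expect to encounter.
CHECKS = {"demographic_parity": demographic_parity_gap}

# Toy binary predictions for two groups "a" and "b".
preds = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
report = score_model(preds, groups, CHECKS)
```

In the workflow the speaker outlines, `report` would simply be attached to each model's quality metrics, the same way test coverage is reported for conventional code.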
E
Can I have one more question, please?
A
Yeah, of course, carry on.
E
What do you mean by the standard way? Is it the kind that goes standard-first and then implementation, or is it something easily pluggable into a widespread platform such as Kubernetes, for example?
A
Really,
what
we're
looking
at
is
collaborative
standards
for
for
plugability,
rather
than
four
more
standards,
for
this
is
how
you
shall
work.
So
the
idea
is
that
we
we
want
to
encourage
a
level
playing
field,
but
with
lots
of
opportunity
for
for
lots
of
vendors.
A
Yeah, I wouldn't worry about that one too much; nobody's been attending it for months.
A
There's another US time zone slot later on today, but that one was being used for another purpose and has wound down, so it doesn't have much attendance right now.
A
If
we
get
more
collaborators
coming
in
from
from
that
time
zone,
then
I'll
I'll
continue
to
run
both
sessions,
but
but
right
now
this
is
the
one.
That's
that's
driving
most
of
the
work.
A
Okay,
well
thanks
everyone,
it's
been
great
to
have
your
involvement
and
hope
to
see
you
again.
Thank.
A
Feel
free
to
reach
out
to
me
offline,
and
you
know
any
questions
or
anything
you
want
to
contribute.
Just
you
know
find
me
a
message
join,
join
the
slack.