From YouTube: CDF SIG MLOps Meeting 2020-07-02b
B
Moving on: I was just on an LF AI... so there is this Open Source Summit going on right now, the Linux Foundation one, and I was presenting at the LF AI mini summit there. There is quite a bit of interest around the MLOps topic, so I want to... Sometimes, you know, maybe we should cross-pollinate, I don't know. Sometimes I wonder whether this makes sense in the CD Foundation, or whether it should be in something like the LF AI Foundation, because...
C
On the CI/CD side of things, it's clear from the conversations I've been having that we actually need to get that group understanding the needs of MLOps, and starting to think about how they can extend the capabilities of solutions in those spaces. So there's definitely a benefit to having that conversation running.
A
I don't have anything specific. I'm working on a machine learning team at Proofpoint, which is an enterprise cybersecurity company, and we're working on a new environment for doing machine learning inference. So we've been looking around at different resources, and I saw this meeting as an available place to come and hear about MLOps. Sorry.
A
We're on AWS, but we're, yes, just kind of in the early design stages, figuring out exactly what we want to do. Most of our products are internal-facing, so our goal is mostly just to expose a simple REST API to the rest of the company, basically for doing product enhancements for features that our company offers in other places. That's kind of what we're working on.
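As a concrete sketch of the "simple internal REST API in front of a model" pattern described here: the endpoint path, the toy scoring function, and every name below are hypothetical stand-ins for illustration, not Proofpoint's actual service.

```python
# Minimal sketch of an internal inference endpoint: POST a JSON body of
# features to /predict, get a JSON score back. The "model" is a trivial
# stand-in (sum of feature values); a real service would load a trained model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def score(features):
    # Stand-in for a real model: sum the feature values as a "score".
    return {"score": sum(features)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        result = score(body["features"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


def main(port=8080):
    # In production this would sit behind the company's internal load
    # balancer and auth; here it just serves on localhost until interrupted.
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

A caller elsewhere in the company would then just `POST {"features": [...]}` to the service, which is the kind of internal-facing contract being described.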
A
As I said, yeah, we are looking at using SageMaker for serving models and for doing a couple of other things, but we're still trying to figure things out, especially since we have somewhat unusual security requirements in terms of customer-confidential data and that kind of thing. We're just evaluating different options for things we might do.
B
So, you know, one of our users was essentially in a similar situation to yours, but he migrated: he moved off AWS and moved the entire deployment in-house, leveraging KFServing. Part of the reasons were the same, and I think cost was also a factor at play, because they host this dungeon game, which is hugely popular, and at that level they need a lot of GPU capability, so the cost was becoming a barrier. Plus, in general, they needed...
B
Oh, I can just give you... so this is the stack for KFServing. If you are interested in it, you know, you can reach out. It's built on top of Kubernetes, as is everything in Kubeflow, but we leveraged Knative and Istio under the covers, for a variety of reasons. I think there are two key things that were important, which were initially the drivers. One was Knative.
B
Knative brings the serverless capability of scale-to-zero, and gives you autoscaling based on request-based queuing. Then Istio gives you much more control over doing canary rollouts, A/B testing, pinned rollouts and so on. So those were the reasons for basing it on that stack. And then there's having a common stack for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX Runtime, and NVIDIA's Triton server.
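The Knative-plus-Istio combination just described is what lets a KFServing `InferenceService` declare a canary rollout in its manifest. As a rough sketch from memory of the v1alpha2 API of that era (field names are believed correct for that version but unverified against any specific release, and the model name and storage paths are hypothetical), expressed as a Python dict standing in for the YAML:

```python
# Illustrative shape of a KFServing v1alpha2 InferenceService with a canary:
# traffic is split between the default predictor and a canary revision, and
# Knative can scale either revision to zero when it receives no requests.
inference_service = {
    "apiVersion": "serving.kubeflow.org/v1alpha2",
    "kind": "InferenceService",
    "metadata": {"name": "my-model"},  # hypothetical name
    "spec": {
        "default": {
            "predictor": {
                # hypothetical storage path for the current model version
                "tensorflow": {"storageUri": "s3://models/my-model/v1"}
            }
        },
        # Istio-backed routing sends this share of requests to the canary.
        "canaryTrafficPercent": 10,
        "canary": {
            "predictor": {
                # hypothetical storage path for the candidate model version
                "tensorflow": {"storageUri": "s3://models/my-model/v2"}
            }
        },
    },
}

# The default revision keeps the remaining share of the traffic.
default_share = 100 - inference_service["spec"]["canaryTrafficPercent"]
```

Promoting the canary is then just a matter of raising `canaryTrafficPercent` and eventually swapping the canary spec into `default`, which is the "much more control over rollouts" being referred to.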
B
And it provides pluggable interfaces for not only prediction. I mean, prediction is one of the key things, but a lot of the time you are doing pre-processing and post-processing, after getting the user input and before giving the user the output. Then there's explainability, and then advanced capabilities which are being integrated, like drift detection and anomaly detection. So yeah, I think it's pretty promising, you know.
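The pre- and post-processing hooks mentioned here correspond to KFServing's "transformer" component that wraps a predictor. A minimal stand-in sketch, using a plain class rather than the actual KFServing SDK base class, with invented lowercasing and labeling logic purely for illustration:

```python
class SentimentTransformer:
    """Stand-in for a KFServing-style transformer wrapping a predictor."""

    def preprocess(self, request):
        # Runs after receiving the user input, before prediction:
        # e.g. normalize raw text instances into what the model expects.
        return {"instances": [text.lower() for text in request["instances"]]}

    def postprocess(self, scores):
        # Runs after prediction, before returning the user output:
        # e.g. turn raw model scores into human-readable labels.
        labels = ["positive" if s >= 0.5 else "negative" for s in scores]
        return {"predictions": labels}


t = SentimentTransformer()
t.preprocess({"instances": ["Hello World"]})  # lowercased instances
t.postprocess([0.9, 0.2])                     # labels instead of raw scores
```

In the real system the predictor call sits between these two hooks, and explainers and drift/anomaly detectors plug in through analogous interfaces.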
B
Okay, sorry, I was saying that that's where, I think, for the sort of domain expertise you're looking for, for all the users of MLOps, probably the LF AI Foundation is maybe the right one to actually try and reach out to. So I think it would be wise to figure out if there can be some cross-pollination between those two groups, because I believe in the CD Foundation the users are here for a particular reason.
B
What we can do is, you know, just make it the one before it, and we can switch from this time. Let's just have that one, and that's it. I mean, if people really need to watch, they can attend even if the time is a bit inconvenient. To me, that probably makes more sense, rather than having two calls.
C
I think this is one way. Clearly you've got a group working on things in this space, so we don't want to disrupt that, but it would be nice if we could grow this group in the US, and there seems to be more activity going on in the opposite timezone. But again, a lot of it is just...
B
That becomes, like... I mean, typically, a lot of the evening meetings we have in the Pacific time zone are whenever we need to interact with, for example, the China side of the house. We have a lot of 5:00 or 5:30 Pacific calls, which get very well attended from the Chinese or Asia-Pacific side. So I think if we can find a sweet spot there, one that is a bit inconvenient for both sides, it can be one call.
B
It would also eliminate confusion, because, to me... at least from my perspective, Terry, there was this whole specific project which I was driving, and the point of what I wanted to get out of it is: my first phase is complete, the project is ready, and then, you know, if there are not enough technical discussions currently there, or people interested in technical discussions here, I would rather join...
C
We're just starting to really promote this group. The first of the roadmap announcements went out about a week ago now, so the expectation is that we'll be promoting this on the conference circuit and trying to build up more of a collaborative working environment. So I think we're still in the early days on this at the moment.
B
Yeah, yeah, makes sense, makes sense. We can, and obviously there will be a lot of things, and I tend to use this call whenever I need to get some discussion going with the counterparts on the Tekton side or the Google side, within the Kubeflow or Tekton community. So that's how this call serves me. I...
B
I make sure that if I need those folks together, I ping them beforehand: hey, let's sync up, this is a time which is blocked. So yeah, I think that's precisely what I would be looking for, and if there is more interest coming from a Kubeflow perspective, I can definitely keep on doing more deep dives there.
C
Yeah, I think we're at a stage now where we've done the sort of boring but relatively straightforward work on the roadmap, and now we're getting into the more challenging issues, where actually we might benefit from setting a subject area in advance, circulating an agenda, and just doing a call for contributions on that area. Yeah.
B
In there, like, you know, there's this whole Kubeflow Pipelines on Tekton project, plus the whole Trusted AI umbrella, and then I'm leading the IBM and Red Hat data and AI open source alignment, so there are many balls up in the air, and they tend to be very wide, broad, general-purpose topics. So I think that's where you can possibly set the agendas and so on, and if I do have certain requests, like, okay, I need maybe a 15-minute block for a discussion of, let's say, the TensorFlow Extended and KFP-Tekton integration.