From YouTube: CDF SIG MLOps Meeting 2020-06-04a
A: With these roadmaps — when all this started, I didn't really... I don't really know the background of it. So is there a backstory for this? Because I know this SIG was going on for a little bit before. Like, what prompted starting this — do all SIGs have some sort of a roadmap thing like this? What's the backstory?
A: All right, cool. So yeah, I'll keep working down that list so that makes it a bit clearer. A few other people have joined — Cara's joined; I don't know if Cara's joined this one before, but she's been on the other one. Okay, and Esther. Oh.
A: I'll share it here — everyone's got the agenda there. I did see an interesting article I pasted at the top of that agenda, which seemed topical. It was on an AI-based medical algorithm favouring white people without race being part of the training set. So I thought that was the sort of thing — and I know it's something Terry talks about a lot. There was no ill intent at all, but, you know, the correlations and stuff were all unfortunate, and they've corrected it now.
A: I thought that was one of the things talked about in the roadmap — the importance of bias and awareness of bias and things like that. I'm sure it's not surprising to Terry, but this was kind of surprising when you look at it at that level. Of course it could do that, and it makes sense: if there are systemic things going on, a historically trained model would probably encode them. So I guess the way you would counter that would be...
A: You'd have some — well, I guess almost like test-driven development — you'd want some criteria or some analysis you do on any sort of model, or even if it was handwritten code, to check it wasn't doing things that would challenge... I guess people assumed it would just be fair, being based on external data. I thought that was an interesting link to include, because it's just evidence of things like that.
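The "test-driven development" idea mentioned here could be sketched roughly like this — a paired test that scores copies of one input differing only in a sensitive attribute and checks the outputs agree. All of the function and field names below are invented for illustration; this is one minimal instance of the idea, not a complete fairness test (in particular it would not catch bias carried by proxy features).

```python
def risk_score(patient):
    # Stand-in for any scoring function under test (learned or hand-written).
    # It ignores race, which is exactly the property the check below verifies.
    return 0.5 * patient["chronic_conditions"] + 0.1 * patient["age"] / 10

def check_paired_fairness(score_fn, base, attribute, values, tolerance=1e-6):
    """Score copies of `base` that differ only in `attribute`;
    return True when all scores agree within `tolerance`."""
    scores = []
    for v in values:
        case = dict(base, **{attribute: v})  # same input, one attribute swapped
        scores.append(score_fn(case))
    return max(scores) - min(scores) <= tolerance

patient = {"age": 60, "chronic_conditions": 3, "race": "unspecified"}
assert check_paired_fairness(risk_score, patient, "race", ["a", "b", "c"])
```

A check like this could run in CI against every candidate model, the same way a unit test would.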
A: So, I just thought that was interesting. Looking also at... Lastly, there was a bit of a discussion about long-running training and checkpointing, and I left a note to talk a bit more about it, but I don't really have much to say. Yeah, that seems probably a bit further into the weeds than we want to get.
B: So I think that's probably something that deserves a section in the technology requirements. We should spell out the fact that there is an issue there: we're dealing with long-running processes, and if those processes fail, are we forcing people to restart from the beginning, or are we enabling continuity by recovering back to a previous known-good step?
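The recovery pattern being described can be sketched minimally: a long-running loop that persists its state every N steps, so that after a failure it resumes from the last known-good step rather than from the beginning. This assumes nothing about any real training framework; the file name, state shape, and "training step" are all placeholders.

```python
import json, os

CKPT = "train_state.json"

def save_checkpoint(step, weights, path=CKPT):
    # Persist the known-good recovery point.
    with open(path, "w") as f:
        json.dump({"step": step, "weights": weights}, f)

def load_checkpoint(path=CKPT):
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
        return state["step"], state["weights"]
    return 0, [0.0]

def train(total_steps=100, save_every=10):
    step, weights = load_checkpoint()
    while step < total_steps:
        weights = [w + 0.01 for w in weights]   # stand-in for one training step
        step += 1
        if step % save_every == 0:
            save_checkpoint(step, weights)
    return step, weights

step, weights = train(total_steps=20, save_every=5)
```

If the process dies mid-run, calling `train()` again picks up from the last saved step instead of step zero.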
A: Part of that might be a little bit just the way it is. If you're going to be training — you know, learning an algorithm as opposed to writing it — then you've got to let the machine do the work. But as a person that needs to sort of focus and avoid switching context, it's quite jarring, so that's probably worth calling out.
B: ...an entire data center dedicated to running that one training event. If you look at the granularity of the equipment that you're buying at that stage — if you look at the Nvidia rigs, they have a supercomputer offering where a single deliverable chunk is just over a thousand GPUs, which takes up like eight or ten full racks in a data center, and you can then start stacking those together to give you enough compute capacity to operate at that pace.
A: So another thing that we didn't talk about, which I'd like your view on, was this library from David from Microsoft. I didn't know whether that was kind of relevant to checkpointing. I think the idea was to have sort of standard metadata formats or, yeah, model formats — so when you configure a training run or some set of things, it writes it out to disk in a standard format. Did you see it as relevant to that, or is it a bit off to the side?
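The "write the configuration out to disk in a standard format" idea could look something like the sketch below. The schema here is invented purely for illustration — it is not taken from the Microsoft library being discussed — but it shows the shape of the idea: a tool-neutral record that any other tool in the pipeline could read back.

```python
import json

def write_run_metadata(path, *, model_name, framework, hyperparameters, dataset):
    # A hypothetical standard record for one configured training run.
    record = {
        "schema_version": "0.1",   # lets readers evolve with the format
        "model_name": model_name,
        "framework": framework,
        "hyperparameters": hyperparameters,
        "dataset": dataset,
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2, sort_keys=True)
    return record

meta = write_run_metadata(
    "run_metadata.json",
    model_name="loan-approval",
    framework="sklearn",
    hyperparameters={"max_depth": 4, "n_estimators": 100},
    dataset="applications-2020-05",
)
```

Because the record is plain JSON with a version field, a checkpointing tool, a deployment tool, and an audit tool could all consume the same file without sharing code.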
A: But the way they work is quite different — looking for clusters of things, it'll maybe be dealing with more regular streaming data; there's no discrete "okay, we've stopped training now" when you're deploying or A/B-ing the model or checkpointing the data. And reinforcement learning, I guess, is a more complex thing again, but maybe it's a little bit more niche in this setting — mostly that's in robotics or games, from my understanding. But how much of this roadmap has to sort of spread to those other areas, I think?
B: It probably helps to think about this as three different strategies on a continuum. You've got the type of supervised learning that we've been looking at today, which is kind of equivalent to human academic training, in that you do it once at the beginning of your career, and then it's over — you just apply it from then on.
B: So that's definitely a compile-time activity. Then you've got sort of sequence-based re-learning, which is a little akin to the human sleep strategy: you get exposed to a bunch of new training data every day, then you shut down overnight, learn from the new data, and that modifies your strategy for the following day.
A: Yeah — that's sort of what I'm doing, but for myself, right. I'm trying to think of the things I know that fall outside of MLOps. So did you say the second one, the one more akin to day-to-day human learning — was that unsupervised, or...?
B: You're more likely to find people working collaboratively on shared solutions in the first case, because it doesn't impinge directly on their intellectual property, but it does accelerate their ability to generate intellectual property. Whereas in the third case, the continuous learning is intrinsic — the way you do continuous learning forms part of your intellectual property, so people are more likely to be building dedicated platforms to do that.
A: What if you were using, maybe, something that could be a supervised technology and you wrap things around it — so you, you know, A/B it and other things, and continually tweak the hyperparameters in your own special-sauce way — and it was always learning, but it was still checkpointing, you know, once an hour, or once a day, or once a week?
B: You could extend the CI/CD environment to make it into something that would allow you to do that, but my expectation would be that that would be more intrinsic to a particular application, and so there would be fewer opportunities for people to share their approaches in that space, because that's where the secret sauce is going to sit, I mean.
A: My understanding is that with things like that, underneath there is some level of sharing. Like, an anomaly-detection neural network still uses a lot of the same technology as supervised learning — they just structure the layers differently, where they compare the input and output and see whether it's expected or not.
A: So there is some sharing going on there, I guess. Another type of continuous learning would be things like anomaly detection — Amazon have Random Cut Forest in Kinesis; you can use that with a few lines of SQL in one of their apps, and there's no iterative thing to that. It's just the window of data, and it's only modelling on that. And there are other things like that. So I guess, to me, that fairly clearly doesn't fit in.
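The "no training phase, just modelling on the window" shape being described can be illustrated with a toy detector. To be clear, this is not Amazon's Random Cut Forest algorithm — it's a simple z-score over a sliding window — but it has the same operational character: nothing to checkpoint, because the "model" is rebuilt from the current window on every observation.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=20, threshold=3.0):
    recent = deque(maxlen=window)   # the only state: the current data window

    def observe(x):
        anomalous = False
        if len(recent) >= 2:
            m, s = mean(recent), stdev(recent)
            # Flag points more than `threshold` standard deviations out.
            anomalous = s > 0 and abs(x - m) / s > threshold
        recent.append(x)
        return anomalous

    return observe

detect = make_detector(window=10)
stream = [10.0] * 10 + [10.2, 50.0, 10.1]
flags = [detect(x) for x in stream]   # only the 50.0 spike is flagged
```

Because the detector holds nothing but its window, restarting it just means replaying a window's worth of recent data — which is why, as noted, checkpointing "fairly clearly doesn't fit" here.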
A: And given it's using the same, or a lot of the same, algorithms underneath — that's my understanding from sort of researching the area; a lot of it's very similar. There might be some secret sauce out there, but they're still using a lot of the same techniques. So I guess an analogy from traditional server software, for example, would be: you can mutate your server in place. You can update things on the fly — sure, there's technology to do that; you can hot-swap things in Java.
A: You could do all sorts of things — Smalltalk; there are so many technologies out there to do that. Or you do something that's more rigorous, like, you know, canary rollout of new versions of services or features, where you cut the traffic over progressively and track error rates, or A/B test it, or add the ability to roll back.
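The progressive cutover being described can be sketched as follows — a hypothetical rollout loop that routes a growing fraction of traffic to the new version and rolls back automatically if its error rate exceeds a budget. The function names and the error budget are invented for illustration.

```python
import random

def canary_rollout(old_fn, new_fn, requests, steps=(0.05, 0.25, 0.5, 1.0),
                   max_error_rate=0.01, seed=0):
    rng = random.Random(seed)
    for fraction in steps:               # cut the traffic over progressively
        errors = served = 0
        for req in requests:
            if rng.random() < fraction:  # this request goes to the canary
                served += 1
                try:
                    new_fn(req)
                except Exception:
                    errors += 1          # track the canary's error rate
            else:
                old_fn(req)
        if served and errors / served > max_error_rate:
            return "rolled back at %d%%" % int(fraction * 100)
    return "promoted"

def serve_ok(req):
    return req

def serve_broken(req):
    raise ValueError(req)

print(canary_rollout(serve_ok, serve_ok, range(200)))  # promoted
```

The same loop works whether `new_fn` is a new code version or a newly trained model, which is the point of the analogy.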
A: Pretty much everything has moved in that direction — having that ability to turn things on and off and flip back and forward, lowering the risk — in other fields. So I guess what you're getting at is: in the machine learning field, maybe that analogy still holds. We've learned this lesson elsewhere — like, you don't concurrently SSH into five shelves of HP servers and update things.
B: Yeah, and I think this is going to be an interesting one, because if we build the type of infrastructure that the roadmap is suggesting, then you're actually creating the possibility to do evolutionary modelling. You're creating multiple instances of similar models in parallel, having them compete against each other in a sandbox, and then canary-releasing or A/B releasing the more successful models — which would be a much safer and more effective strategy than betting the farm on something.
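The evolutionary idea can be sketched minimally: spawn variants of a model's parameters, score them against each other in a sandbox, and promote only the best performer each generation. The fitness function and parameter shape below are toy stand-ins invented for illustration.

```python
import random

def evaluate(params, data):
    # Toy fitness: how well a single weight predicts y = 2 * x
    # (higher is better; zero would be a perfect fit).
    return -sum((params["w"] * x - 2 * x) ** 2 for x in data)

def evolve(base, data, generations=20, pop=8, sigma=0.3, seed=1):
    rng = random.Random(seed)
    champion = base
    for _ in range(generations):
        # The champion competes against randomly perturbed variants...
        variants = [champion] + [
            {"w": champion["w"] + rng.gauss(0, sigma)} for _ in range(pop - 1)
        ]
        # ...and the sandbox winner becomes this generation's release candidate.
        champion = max(variants, key=lambda p: evaluate(p, data))
    return champion

best = evolve({"w": 0.0}, data=[1, 2, 3])
```

In the scenario described, `evaluate` would be replaced by real sandbox or canary metrics, and "promotion" would be the progressive release of the winning model.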
A: So from your experience, when people say "explainability", what are they really looking for? If you imagine it wasn't a model, just, you know, a bit of code that's continuously delivered, with decisions made by it to approve a home loan or not, and people ask, "well, why did it decide to approve it or not?" — someone's written those rules into code, and it's probably spread across multiple services. So what do people mean by explainability, I think?
A: So if you're using training data for explainability, then you potentially have privacy issues, yeah. Because if you say, "well, I can explain how I came to this decision — here's my set of training data, here's my hundred parameters and all of that; you can go and reproduce it, get some graphs, analyse it" — to do that, you're basically giving away, you know, the ground rules.
A: The training data would be one way. Another: some algorithms can print out weights, for a specific decision, of what inputs affected it. I think for a lot of people that's a fairly human sort of thing to do — not all algorithms support that, but I've seen some that do. Because from a human point of view, you know, you've got a bunch of features that drove the decision.
A: You might not know everything else that went on in the neural network, or all the training that made it get that way, but you can understand: okay, it was because of these three factors — okay, that maybe makes sense to me; or this other one was approved because of these five other factors. That's how a human would explain it. Like, you go, "why did you make that decision? Why did you reject that PR?"
A: You could enumerate a few things — it'd just be a few things that we'd say — and then the other person would go, "yeah, that explained it to me, that's enough." It seems very, very subjective, but I guess just showing the main features that drove the thing would be a less privacy-infringing way.
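For a simple linear scorer, the "print out the weights that drove this decision" idea looks like the sketch below: for one application, list each feature's contribution (weight times value), ranked by influence — something a person can be shown without revealing any training data. The names and numbers are invented.

```python
def explain_decision(weights, features, threshold=0.5):
    # Per-feature contribution to this specific decision.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "declined"
    # Rank by absolute influence -- the few factors a human would cite.
    top = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, top

weights = {"income": 0.4, "existing_debt": -0.6, "years_employed": 0.2}
applicant = {"income": 1.8, "existing_debt": 0.5, "years_employed": 1.0}
decision, factors = explain_decision(weights, applicant)
```

For non-linear models the contributions aren't a simple product like this, but the output shape — "approved, mainly because of these three factors" — is the same kind of explanation being discussed.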
B: I think it's to be expected — you know, we're always more cautious when it comes to new things. But the challenge comes in the fact that if we haven't worked out explainability for natural neural networks, then intrinsically we don't understand how to do that for the artificial ones we're creating either, yeah.
A: Yeah, it reminds me of stories in the past where companies would go through a stage of taking a business process that was human-driven and theoretically rigorous and making it digital — you know, using a rules engine, or just methodically hand-coding it — and then things wouldn't work: there were inconsistencies and illogical states things could get into, and they would blame the software and the technology. But really it was fundamentally flawed all along; it's just that the standard was higher.
B: The classic example of that: I've worked on many government projects implementing public policy in technology, and when you get down into the detail, you usually discover that the policy itself is internally inconsistent. So you cannot apply the policy without violating the rules of physics, and thus you end up in a situation where humans appear to be running the policy, but actually they're selectively choosing to ignore it in order to get things through the process.
B: And so this is what we're going to see as we move into more machine-learning-driven decision making: the machines reconciling things in the same way that the humans do, but not necessarily in a way that actually aligns to the policy you thought you were implementing. Your model is of the behaviour, not the intent. And this is why we have the problem with bias and fairness and things like that — because you're actually modelling what people do, not what you wanted people to do.
C: So this is my first time on this call. I don't know if you've discussed this already, but that idea of bias — the way it was explained to me, as a way to handle it, is you always have, you know, a human in the loop. You're relying on that human quite a bit, but you do need that human checking to make sure that your model is not diverging, or it's not giving you undesirable side effects.
A: You mean in the loop as in, like, the CI/CD sort of loop? Yeah. Not necessarily for an individual decision — although for some business processes that's probably reasonable, I'd say — but if it's a self-driving car, then for every decision, that would mean it's not autonomous, no?
C: No, no — I'm thinking more about bias. Although even with self-driving cars, you probably ultimately want a human to have some power in that car anyway. But with a system that's producing bias, you need to be able to have humans watching it and saying, "shut it off — it's not working, it's making mistakes." That needs to be acknowledged as part of the process: you need that check. And I was just wondering, have we addressed that in how bias will be handled? Especially if you're doing this very evolutionary...
C: ...where we have continuous progressive deployments of different parts of a whole model, and then how they work together, how all those decisions are being made, and how it comes together in the ultimate functionality that you have — are we addressing how the different checks will be in place for the different parts?
A: That's like, in a self-driving car — in a Tesla you've got to have your hands, you know, on the wheel, or you know, the lane-assist ones. There was talk of Volvo having one where, if you're on a certain motorway, it would take control and then legally take liability for it under certain criteria — but that hasn't happened yet. But yeah, I think so — they need the [unclear].
A: There are customers and users out there who, when they first built their pipeline, would have a dozen — you know, not checkpoints, but input stages for humans. They'd start with a dozen. This was a bank, a European bank. They started with a dozen because that's kind of how they used to work, but they put it all in one pipeline that was all visible, reportable and so on, and then they got it down to three.
A: Over a ten-to-twelve-month period — and they were pretty proud of that, and actually used it as a measure of progress towards continuousness. They originally thought they needed the humans in the loop at those steps, but then they realised that what they were doing they could automate away, and put checks and guards in. And I wonder if it's not a similar sort of thing here, where maybe people will want that for comfort and for liability reasons.
A: But in time, if we can address the bias issue — which is probably the most serious one — the solution doesn't necessarily mean always having a human around. Not that you want to take over from the humans, but, like that example I brought up at the start, that healthcare issue — that's something where, had they thought about it, they could have...
B: ...in many cases, the bias actually comes back to how you're structuring the training data. So if you're running a process to validate that you have a representative set of training data, and a representative set of test data, then you're heading off most of those risks before you even run the training.
B: So the challenge typically comes in those places where the available training data has insufficient information to represent the customer population — you're only training on part of the problem that you're trying to solve, and then it doesn't behave the way you expect when you deploy it into a larger population set.
B: It might be, you know, a manual sign-off to say that you've done an audit on the data and you consider it to be sufficiently broad to meet your requirements; or it might be that you're retaining a set of bias-related test data, which doesn't exist in your training set, and then you're using that to measure the response of the model to known scenarios to detect bias and unfairness.
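The "retained bias-related test data" idea could be sketched as an automated release gate: hold a small set of known scenarios out of the training data, measure the model's approval rate per group, and fail the release when the rates diverge too far. The model, scenario fields, and gap threshold below are invented stand-ins.

```python
def approval_rate(model, scenarios):
    approved = sum(1 for s in scenarios if model(s))
    return approved / len(scenarios)

def bias_gate(model, scenarios_by_group, max_gap=0.1):
    """Pass only when approval rates across groups stay within `max_gap`."""
    rates = {g: approval_rate(model, s) for g, s in scenarios_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

# A stand-in model that ignores group membership entirely.
fair_model = lambda s: s["income"] > 50

scenarios = {
    "group_a": [{"income": 40}, {"income": 60}, {"income": 70}],
    "group_b": [{"income": 45}, {"income": 55}, {"income": 80}],
}
passed, rates = bias_gate(fair_model, scenarios)
```

A gate like this is the kind of step that, as noted, can run automatically in the pipeline once a reasonable profile for the test data has been agreed.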
B: So typically there's a lot of this that can be automated, as long as you're setting up a reasonable profile for the test data that you're applying. And I think what we will see is an evolution of some common standards for what bias assurance should look like — we're also already seeing a number of independent assurance companies offering this as a service.
A: All right, well, I think we've got some stuff to follow up on for the next fortnight. I'll get a few more sections added in.
A: And then, just so we've got the record...
A: And if you had more thoughts on bias, Cara — because it is an important sort of issue — then can you either shoot them to me or paste them in somewhere. If we need a whole section on it, because it will come up, that might be worthwhile. Like that thing I talked about at the start — it was interesting, because I'd always thought about what an automated check for bias would be.
A: Oh, you could look at your features and highlight very unbalanced categorical features, and that would hint at where bias is. But in that example at the start, race was never a feature of that training data set, yet it learnt the systemic racism in that system without it being in the data at all. So no automated thing that I know of would have picked it up.
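The automated hint being described — scan categorical features and flag any where one value dominates — can be sketched like this. As noted in the discussion, a check like this would not catch bias carried by proxy features that aren't in the data at all; the field names and threshold are invented for illustration.

```python
from collections import Counter

def unbalanced_features(rows, max_share=0.9):
    """Flag categorical features where one value exceeds `max_share`
    of all rows -- a place where bias can hide."""
    flagged = {}
    for name in rows[0]:
        counts = Counter(row[name] for row in rows)
        top_value, top_count = counts.most_common(1)[0]
        share = top_count / len(rows)
        if share > max_share:
            flagged[name] = (top_value, share)
    return flagged

rows = [{"region": "north", "plan": "basic"} for _ in range(95)]
rows += [{"region": "south", "plan": "basic"} for _ in range(5)]
flags = unbalanced_features(rows)   # both features are heavily unbalanced
```

In practice this would be one of several cheap data-profiling checks run before training, alongside representativeness validation of the training and test sets.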
A: All right, well, thanks everyone for coming. I've got some good notes and I'll put them in the doc. As always, share anything in the chat room, and we'll talk again next week. I've got some content to add to that and some adjustments to make, so I'll have a look at it, probably early next week. Are we good? Right.