From YouTube: CNCF SIG Runtime 2020-07-16
D: It's not too bad because I'm not in a heavily populated area, but I'm still staying safe and staying away from things as much as possible. Yeah.
A: All right, cool. I think we have enough people, so yeah, welcome everybody. We have you, Michael, today, so thank you for deciding to present. I guess you're going to be talking about the resource interface.
E: I'm not sure; we're waiting for him. I see Zoom released a dedicated standalone screen thing for Zoom calls recently. The Facebook one seems to work really well; the only problem is that at the moment it doesn't do Zoom, which is a pity.
D: We have different workload requirements: you have batch workloads, latency-sensitive workloads, customers have their own SLAs and SLOs, or you have different classes of workloads, like P1 critical that always needs to run, all the way down to where batch would probably be classified as something like P3, things like that.
D: What socket is my GPU connected on? What sockets are my network cards on? In large deployments, these are the kinds of things you have to think about at the end of the day, so this creates a large matrix; there are a lot of things to consider. And there are some current solutions: the kubelet today has the CPU manager, and there are a few KEPs already outstanding with the community proposing how we improve this, and how we start adding NUMA support to it.
D: But when I was researching this, there's a lot of weird UX. So if you're a Guaranteed pod and your requests equal your limits, then you get cpusets and you're scheduled on dedicated cores, and I don't think that's a very friendly UX: it's kind of hidden away, you have to know the right knobs to turn, things like that, and it's off by default. And then there's the Topology Manager, which basically only the CPU manager and device manager take advantage of, via hint providers.
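The rule Michael is describing can be sketched in a few lines of Go. This is a simplified illustration of the eligibility check, not the kubelet's actual code; the type and function names are hypothetical.

```go
package main

import "fmt"

// Resources is a simplified stand-in for a container's CPU request and
// limit, expressed in millicores as Kubernetes does internally.
type Resources struct {
	CPURequestMilli int64
	CPULimitMilli   int64
}

// eligibleForExclusiveCores mirrors the behavior described in the talk:
// requests must equal limits (Guaranteed QoS) and the CPU quantity must
// be a whole number of cores before the static CPU manager policy will
// pin the container to dedicated cpusets.
func eligibleForExclusiveCores(r Resources) bool {
	if r.CPURequestMilli != r.CPULimitMilli {
		return false // not Guaranteed QoS for CPU
	}
	return r.CPULimitMilli > 0 && r.CPULimitMilli%1000 == 0
}

func main() {
	fmt.Println(eligibleForExclusiveCores(Resources{2000, 2000})) // whole cores: pinned
	fmt.Println(eligibleForExclusiveCores(Resources{1500, 1500})) // fractional: shared pool
	fmt.Println(eligibleForExclusiveCores(Resources{1000, 2000})) // requests != limits: shared pool
}
```

This is exactly the "hidden knob" being criticized: nothing in the pod spec says "pin me", the behavior just falls out of how requests and limits happen to be written.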
D: So there's another solution from Intel: they have the Intel CPU Manager for Kubernetes. There's a CLI tool called CMK which does the low-level work of allocating pools and then placing workloads within those pools, depending on different things, and picking which CPUs go into which pool.
D: And then they have the CRI Resource Manager that builds on top of CMK for use within Kube, and one thing with that is they have to hijack the entire CRI socket, and they also have some API extensions for this. The CRI is a really big interface when all you want to deal with is scheduling containers on specific cores. So overall, QoS is hard, there are lots of users pulling on you, and at scale it's hard to solve this for everyone.
D: You can compose various plugins together, and there's no controversy that I've seen within the design of CNI; we've all basically accepted it and use it within Kube and other container projects.
D: So I came up with NRI, because CRI was already taken. I'd rather it be named the container resource interface, but we already have the container runtime interface, so NRI is the best I could come up with right now. In designing this, I don't think the kubelet is the right abstraction for it. We have the kubelet, and then we have CRIs, and CRIs are very low level: they know how to interact with the system, whether you're on Linux or Windows.
D: It's hard to tell who's responsible for resource management right now, with CPU manager and Topology Manager up there, while at the CRI level we have very robust ways to hook into the actual lifecycle of a container on a system.
D: And then we have this pause in between, and this is where CNI comes in and where I'm proposing NRI comes in: it can take the existing setup, you can modify the resources, add additional things, and then we start the container, which is the user's process. In designing this I'm taking a lot of inspiration from CNI. We have kind of a global system config, and you have a list of these plugins that you can compose together and chain.
D: Plugins can have specific configuration. So for this confine plugin, we have system-reserved cores, where we say: when you're dealing with topology and scheduling these workloads on cores, I need zero and one to be reserved for the system, so don't touch those, and you have the rest. And then, to enable a good ecosystem to be built around this, like CNI has done, you need skeleton code to make it easy for people to build these plugins and not get in their way, so I've worked on packages for that as a plugin developer.
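The system-reserved-cores idea reduces to carving a reserved set out of the machine's CPUs before any plugin hands cores to workloads. A minimal sketch, with a hypothetical helper name, assuming cores 0 and 1 are reserved as in the example config:

```go
package main

import "fmt"

// availableCores returns the CPUs a plugin may allocate to workloads
// once the system-reserved set is carved off. This helper is
// illustrative, not part of any real NRI API.
func availableCores(total int, reserved map[int]bool) []int {
	var free []int
	for cpu := 0; cpu < total; cpu++ {
		if !reserved[cpu] {
			free = append(free, cpu)
		}
	}
	return free
}

func main() {
	// Cores 0 and 1 stay with the system, matching the confine
	// plugin's systemReservedCores example from the talk.
	reserved := map[int]bool{0: true, 1: true}
	fmt.Println(availableCores(6, reserved)) // [2 3 4 5]
}
```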
D: In the create step, you would invoke the NRI plugins; at deletion, you would invoke the deletion handlers; and then we start the container. So it's very robust: we have explicit places to inject these into the lifecycle of the container, and it's not bleeding over into other people's functionality in the stack.
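Those two explicit injection points suggest a small plugin interface. The shape below is a sketch of the model described in the talk, not the actual NRI specification; the names and the error-stops-the-chain policy are assumptions.

```go
package main

import "fmt"

// State is a stand-in for whatever container description the runtime
// would hand to plugins (IDs, cgroup paths, resource spec, and so on).
type State struct {
	ID string
}

// Plugin captures the two lifecycle hooks: one invoked at container
// create, one at delete.
type Plugin interface {
	Create(s *State) error
	Delete(s *State) error
}

// logger is a trivial plugin that just records the events it sees.
type logger struct{}

func (logger) Create(s *State) error { fmt.Println("create", s.ID); return nil }
func (logger) Delete(s *State) error { fmt.Println("delete", s.ID); return nil }

// runCreate invokes each configured plugin in order at the create
// step, stopping at the first error, mirroring CNI's chained model.
func runCreate(s *State, plugins []Plugin) error {
	for _, p := range plugins {
		if err := p.Create(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := runCreate(&State{ID: "ctr0"}, []Plugin{logger{}}); err != nil {
		panic(err)
	}
}
```

Because the hooks sit at fixed points in the container lifecycle, the runtime's own responsibilities stay untouched, which is the "not bleeding over" property being claimed.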
D: So we kind of build a dynamic node topology: it dynamically places workloads on the system based on the QoS class. We have NUMA support, so if your latency-sensitive service says "I need to be on a specific NUMA node" or "I need to reserve the entire NUMA node," it can do that, and it will steal that node away from the batch workloads and return it whenever that workload's done. So with these plugins, there's no need to wait for longer Kube release cycles for updates to CPU manager or Topology Manager.
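The steal-and-return behavior for NUMA nodes can be modeled as a tiny allocator. This is a toy illustration of the policy just described, with invented names, not real plugin code:

```go
package main

import "fmt"

// numaAllocator tracks which NUMA nodes are currently available to
// batch workloads. Latency-sensitive work can claim a whole node,
// evicting it from the batch pool, and return it when finished.
type numaAllocator struct {
	batchPool map[int]bool
}

// reserve pulls a node out of the batch pool for exclusive use.
func (a *numaAllocator) reserve(node int) bool {
	if !a.batchPool[node] {
		return false // already claimed or unknown
	}
	delete(a.batchPool, node)
	return true
}

// release returns a node to the batch pool once the workload is done.
func (a *numaAllocator) release(node int) { a.batchPool[node] = true }

func main() {
	a := &numaAllocator{batchPool: map[int]bool{0: true, 1: true}}
	fmt.Println(a.reserve(1)) // true: node 1 stolen from batch
	fmt.Println(a.reserve(1)) // false: already reserved
	a.release(1)
	fmt.Println(a.reserve(1)) // true: available again after release
}
```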
D
You
have
a
community
being
built
up
of
all
these
plug-ins
with
nri.
You
just
update
them
as
you
need
to,
and
you're
not
tied
to
a
cube
release
cycle,
as
you
would
be
with
cpu
manager.
You
can
kind
of
chain
all
these
together.
You
can
make
plugins
that
do
one
thing
and
do
them
well,
and
it
keeps
your
code
simpler
and
more
robust
and
things
we
care
about
at
the
infrastructure
layer
and
like
if,
if
a
specific
plug-in
doesn't
work
for
you
then
fork
it
change
it
make
your
own
or
build
more
plugins.
D: The first step is to present this and start getting some feedback. I'll have a formal spec up, hopefully today, within the containerd project, because that's where I have my default implementation and hooks into ctr and the CRI for Kube, and then I'll expand out to different SIGs and things like that.
D: They don't work side by side, because they would start to conflict with each other, or at this stage it would be able to override the kubelet. So it's best to have CPU manager off, which right now it is by default, but yeah.
D: And when I was looking at the Intel CPU manager work, it seems like they could easily plug the CMK tool into this NRI interface and do their existing work, and then they wouldn't have to implement the surface area of CRI to hijack into this; they'd have a very specific API for doing it. So I think it aligns well with the work that's being done there.
A: Cool. Yeah, is there a mechanism where you also get feedback from the systems? So basically some monitoring that says "okay, I'm kind of full, or I'm kind of overloaded; can you move this stuff away from me onto some other node," or something like that?
D: Yeah, so my general idea was that the Kube scheduler will still handle placing workloads on the node, and whether it decides to oversubscribe a node or not is a high-level scheduling decision. At this level it's hard to provide feedback back up the stack: we could always kill a container, but then the kubelet wouldn't know why we killed it.
D: Yeah, I am working on a way to chain plugins together so they can get feedback within the chain, where they know who executed before them and what changes they made. But yeah, that would be where vendor-specific implementations, like plugins, can be made for devices where you need to do something more powerful.
B: They would still have a device plugin and everything else; you're just looking at where you would place the actual container or the pod processes, with the context of where your device is and the topology too, is that right? So it wouldn't be a device plugin, but based on using a device, you would have wiser placement of the workload. Is that kind of the idea?
E: First of all, thanks for a really interesting presentation; you've raised some, what I think are, very fundamental and very interesting questions. I must preface this by saying I'm not super familiar with the kubelet and the various interfaces down at that level, CNI, CRI, and the stuff you guys are working on, but it seems clear to me that a lot of that stuff developed organically, and, if I read between the lines of what you're saying, not all of it is ideal.
E: If we were to start with a blank sheet of paper, we might architect things differently, and I don't mean this to be offensive to anybody who has been involved in the current architecture. But it seems to me it might be useful to actually sketch out what we think a reference architecture for these kinds of things might be, and look at where we came from and where we'd like to go. This doesn't necessarily need to be Kubernetes-specific.
E
This
can
be
sort
of
from
a
point
of
view
of
like
if
you're
going
to
do
container
orchestration
in
a
cloud
native
way.
You
know
these
are
the
kinds
of
things
you
run
into,
and
this
is
what
we
think
is
a
good
reference
architecture
and
have
you
know
the
various
different
implementations
of
these
things
at
least
have
a
kind
of
a
guiding
light
as
to
what
we
think
a
good
approach
is.
Does.
D: So you make compromises and you have growing pains, things like that, but I think interfaces stand the test of time, while implementations get refactored, and you can always break things out. CPU manager could start in the kubelet now, and later we could say: actually, this needs to be factored out behind an interface.
E: Yeah, that makes sense. Is anyone else on the call intimately involved in, for example, SIG Node, and does anybody know where the status of coming up with a grand vision for the node interfaces stands? Because the other problem, and this is completely anecdotal from looking from the outside, is that it looks like a lot of these interfaces have developed somewhat in isolation from each other.
E
So
you
know
the
networking
people
that
cni
and
the
the
container
people
did
cri
and-
and
it's
not
clear
that
that
anybody
with
a
holistic
vision
across
all
of
these
things
necessarily
kind
of
weighed
in
and
that
that
may
be
wrong.
But
that's
the
impression
I
get-
and
I
was
you
know,
I
know
dawn
very
well
and
I
was
in
the
google
kubernetes
team
who
sort
of
started
all
the
stuff
great
backside.
So
he's
a
very
competent
engineers.
A: Yeah, it sounds good to me. I mean, the next question that I have is: is there a group of people interested in starting sort of a working group to address some of these issues? And Quinton, you brought up the idea of maybe bringing some of these interfaces together, so people can have that communication across the different teams and agree on certain standards.
A: What I've seen in the past is that a lot of the teams actually work on their own, like you mentioned, and a lot of these interfaces come up and they're different in different ways. In this case you're following the CNI approach, which is great, I think, but there are some other cases where some of the other teams are actually working independently, and it may not be the best experience for some of the end users. So yeah.
A
So
the
idea
here
is
just
to
to
to
make
the
exp
the
user
experience
more
more
together
and
and
so
that
people
you
know
make
you
know,
make
better
decisions
on
how
to
want
to
use
the
tools
and
and.
E: And yeah, I think, before we start any group to do any brainstorming in that regard, I would definitely want to catch up with the people. I think the de facto place where these conversations happen, or did in the past anyway, was SIG Node, the Kubernetes SIG, so we should definitely go and chat to whoever runs the show there and just get a sort of state of play from them. I don't want us to create some other...
C: I'm sorry, I thought we already created the working group for a container device interface, or is it still in process?
B: There's a resource management working group as well, because there's a lot of discussion around things like CRI-RM and Topology Manager, because I don't think anybody's happy with it as is. In SIG Node, because that was consuming a lot of time, they created a separate working group specifically focused on resource management.
A: Yeah, yeah, that's another feasible option, I think. But what Quinton brings up also makes sense, which is: talk to some of the other teams, in SIG Node and elsewhere. We don't want to say that we're going to be working on this when some other people might already be talking about doing something similar. Yeah.
A: Some work is to be done reaching out to some of those groups, and if somebody on the call today is in one of those groups, like Quinton mentioned, then they can reach out to some of those other working groups.
E: Yeah, I think it needs to be someone with a good, sound technical understanding of the issues at that level. I don't have that understanding, and I would definitely not want to be the person leading this technical coordination role, but if anyone is on the call, or knows of anyone, who would be interested in doing that kind of coordination... Actually, we have some tech leads in this group, Diane and others; they seem to have dropped off, but yeah.
E: We have networking vendors and container vendors and all sorts of people involved, so we have to be very careful not to create a seventeenth committee to try and standardize these things if the people involved don't actually want to standardize them the way we do. So yeah, that's my two cents.
E
One
I
think,
rather
than
necessarily
force
people
to
make
decisions.
Now,
let's,
let's
put
that
on
our
to-do
list
of
finding
the
right
person
for
that
role
and
and
put
a
sort
of,
I
mean
it's
going
to
take
a
while
to
get
all
of
this
stuff
together.
This
is
kind
of
like
a
multi-quarter
project.
I
think-
and
it's
going
to
be
very
important
to
start
with
the
right
people.
We've
got
enough
examples
of
attempts
in
this
direction
that
haven't
gone
anywhere.
A: Makes sense. Any other comments? Anything you'd like to talk about related to this?
E: Sounds like that's it. Thanks for a very thought-provoking talk, Michael, and thanks for getting all the interesting presenters together, Ricardo.