SIG Runtime Container Orchestrated Device Working Group Meeting 2020-10-13
C: Sasha, how many kids do you have? Just one? This one.
E: There are two parts, right: one is the deleting of images that haven't been touched; the other, more important one is the new restrictions on how many pulls you can do an hour. I mean, they're not really gonna go nuclear on November 1st — at least not according to, you know, our buddy Mr. Cormack — but that's what it currently says. It says, on that date, if you pull 100 images in six hours, you're out: no more images until after the six hours, if you're doing it anonymously. If you log in on a free account, you get 200 pulls. And, you know, for a developer that's fine, but for people who do installs at customer sites...
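The quota described here — 100 anonymous pulls per six hours, 200 on a free authenticated account — behaves like a sliding six-hour window. A minimal model of that accounting, for illustration only; this is not Docker Hub's actual enforcement code:

```python
from collections import deque

# Illustrative sliding-window model of the pull limits discussed above:
# 100 pulls per 6 hours anonymously, 200 with a free account.
WINDOW_SECONDS = 6 * 60 * 60

class PullLimiter:
    def __init__(self, authenticated: bool):
        self.limit = 200 if authenticated else 100
        self.pulls = deque()  # timestamps of counted pulls

    def try_pull(self, now: float) -> bool:
        # Drop pulls that have aged out of the six-hour window.
        while self.pulls and now - self.pulls[0] >= WINDOW_SECONDS:
            self.pulls.popleft()
        if len(self.pulls) >= self.limit:
            return False  # rate limited until the window slides
        self.pulls.append(now)
        return True
```

In this model, an anonymous client that bursts 100 pulls is blocked until the window slides past those timestamps — which is why an install that pulls many images at a customer site trips the limit long before a single developer would.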
B: Yeah, it's kind of the way they do it: they charge it back to the user instead of, like, the image owner. There should be an option for the image owner to, like...
E: There is, Mike, but they're not making it public. Yeah — so they do have, they've got two whitelisting programs. One — I don't have the details — is on the client side, and the other is on the source side. If you have an organization, you know, with a bunch of repos, you can pay them for the resources — storage and downloads. They'll do some estimate on the amount you've got and give you a number: "we're still working that out."
A: Well, I know that our guys already hit some limits in the testing clusters.
C: My p2p peering service won't work anymore.
E: Yeah, that's definitely the solution, Mike. It's just that there are certain scenarios where they've gotten used to using latest tags, and Docker only counts the — they don't care how big the image is on the client side. They're just counting the number of manifests, and saying: okay, that's one pull for one image, no matter how many manifests you want. And then, of course, we're just using that for verification, validation of the hash —
E: — you know, blobs that we have stored in our cache. But, you know, they don't care.
E: So if you do latest, or default, or pull-always — just to check the, you know, authentication and verification — then yeah, that's a pull, right.
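The counting rule described here — a "pull" is a manifest fetch, regardless of blob size or cache state — can be made concrete with a toy model. The request tuples below are invented for illustration; this is not the registry's real API surface:

```python
# Toy model of the counting rule described above: only manifest
# requests are billed as pulls; blob fetches are not counted, so a
# cache that re-verifies "latest" by fetching the manifest still pays.
def count_pulls(requests):
    """requests: iterable of (kind, ref) tuples, kind in {'manifest', 'blob'}."""
    return sum(1 for kind, _ in requests if kind == "manifest")

# A cold pull with 30 layer blobs costs one pull; a fully cached node
# that merely re-checks two "latest" tags still incurs two pulls.
cold_pull = [("manifest", "app:latest")] + [("blob", "sha256:%d" % i) for i in range(30)]
recheck = [("manifest", "app:latest"), ("manifest", "base:latest")]
```

This is why pull-always / default-tag verification burns quota even when every blob is already local, as the speaker notes.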
E: You can use mirrors, but, you know, then how often do you go back to the source to make sure that, you know, the original root public key is still the current version?
C: All right, I think we're still waiting for Mrunal, but we can probably start. Okay.
C: Improvements, PRs: the two items are the next concrete steps for CDI and NRI, and the second item is just some discussions around the panel for the COD working group at KubeCon — namely, questions we'd like to ask and answers we'd like to, I mean, talk about. So we can probably start with the next concrete steps for CDI and NRI, because we wanted to have that discussion with Michael Crosby in the room.
C: So I think maybe Sasha mentioned that he had talked to you, Michael Crosby. Could you maybe get us up to speed on what the plan is, or what the ideas are that you have for the NRI? And I think there were some mentions about sharing libraries between multiple projects. Is that a good introduction, or could you maybe help us — just get us up to speed?
E: Yeah, I think the other thing that's been asked a couple of times is to what degree we'll be extending the NRI to support, you know, additional cases that the Intel guys, for example, had a need for in NRI — for example, hooks at different stages.
B: I can speak to that. So far, the initial design of NRI was around resource management — specifically a need that I had for CPU and NUMA, and then expanding out to things like hugepage support, L3 cache, things like that. Over the past month or so we've been working on a few different plugins to support that, and so far the lifecycle hooks that we have now — the create, start, delete and update calls — have all worked out.
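The CNI-style, lifecycle-hook design described here could be sketched roughly as follows: a plugin that receives a JSON-serializable state at each lifecycle event and returns a possibly adjusted state. The event names and fields are illustrative, not the actual NRI schema:

```python
import json

# Sketch of a lifecycle plugin in the spirit of the NRI design described
# above: the runtime invokes the plugin at each lifecycle event (create,
# start, delete, update) with the container state; the plugin returns an
# adjusted state. Fields here are invented for illustration only.
def handle(event: str, state: dict) -> dict:
    state = json.loads(json.dumps(state))  # work on a copy, as if decoded from the wire
    if event in ("create", "update"):
        # e.g. pin the container to a NUMA-local CPU set, and re-apply
        # the placement on update so a resource update doesn't clobber it
        state.setdefault("linux", {})["cpuset"] = "0-3"
    # "start" and "delete" need no spec changes in this sketch
    return state
```

In the real design the runtime would feed the state to an external plugin binary or service rather than an in-process function; the point here is only the shape of the per-event contract.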
B: But so far, from my point of view, I think it's flexible enough in terms of supporting this. We've also discussed some ways where we can share some underlying code for the plugins. I think the API is pretty generic and straightforward; it's collaborating a lot on the plugins and how those are built.
C: All right. So maybe — at least one of the things we wanted to do with CDI is this part that you're mentioning, where we have a pre-create hook and we change a spec. One of the ideas here — at least one of the thoughts, or the reasoning behind changing the spec —
C: — is that, for example, when you're adding a device and you're informing containerd or Podman about the fact that there is a device, then when, for example, a user makes a call to the update path, we don't need to have a hook in the update path. Because the way it currently works is that when you're updating the resources, if you've added a device in the create path without informing your runtime, you also need to, I mean, have a hook in the update path to re-add that device.
C: But even there, you'd be in this position where, if your process was reading from that device node, and the update path removes the device from the cgroups and you re-add it as part of your hook, your process is going to lose the read access — or read-write, or whatever permission access it has to the device — during that brief amount of time. So.
A: I'm saying — which update path? You know, the spec right now is not able to modify devices, so you can't do it. But it does? It does modify?
C: The devices — I mean, we have a longstanding bug because of the fact that we add devices in a container. For example, with the CPU manager: the CPU manager goes in and just calls the updates on the CPUs, and the container loses all cgroup permissions to read from the /dev/nvidia nodes.
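The bug described here follows from the devices cgroup being an allow-list that updates rewrite wholesale: a CPU-only update that serializes an empty device list drops device access, while one that leaves devices unspecified would not. A simplified model of that behavior — not runc's or the kubelet's actual code:

```python
# Simplified model of the pitfall described above: the devices cgroup is
# an allow-list that updates rewrite wholesale, so an update that only
# means to change CPUs, but carries an empty device list, drops access.
def apply_update(cgroup: dict, update: dict) -> dict:
    new = dict(cgroup)
    for key, value in update.items():
        if value is None:
            continue  # leave unspecified controllers untouched
        new[key] = value  # every specified key replaces the old value
    return new

running = {"cpuset": "0-7", "devices_allow": ["c 195:* rwm"]}  # /dev/nvidia*
# A CPU-manager-style update that serializes a defaulted-empty device list:
bad = apply_update(running, {"cpuset": "0-3", "devices_allow": []})
# The same update expressed as "devices unspecified" keeps access:
good = apply_update(running, {"cpuset": "0-3", "devices_allow": None})
```

The `bad` case loses the `/dev/nvidia*` rule exactly as described in the meeting: the process keeps its open file descriptor semantics only briefly, and new access is denied until something re-adds the rule.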
C: So at least — I mean, that's one of the reasonings we had behind changing the spec: that once the changes have been sent to the container runtime, the hook is no longer — or at least the plugin is no longer — on the hook to get in there and intercept every update call. The other one is that there's, in a way, a simplifying factor to just having a JSON file.
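The "just a JSON file" idea can be sketched as a static spec that maps a device name to the edits a runtime would apply to the container. The field names below are invented for illustration — the CDI schema was still being designed at the time of this meeting:

```python
import json

# Illustrative "device described by a static JSON file" sketch, in the
# spirit of the CDI idea discussed above. Field names are made up; they
# are not the finalized CDI schema.
spec = {
    "kind": "vendor.example.com/gpu",
    "devices": [
        {
            "name": "gpu0",
            "containerEdits": {
                "deviceNodes": [{"path": "/dev/vendor-gpu0"}],
                "mounts": [{"hostPath": "/usr/lib/vendor",
                            "containerPath": "/usr/lib/vendor"}],
                "env": ["VENDOR_VISIBLE_DEVICES=0"],
            },
        }
    ],
}

def lookup(spec: dict, name: str) -> dict:
    """Resolve a device name to the edits a runtime would apply to the OCI spec."""
    for dev in spec["devices"]:
        if dev["name"] == name:
            return dev["containerEdits"]
    raise KeyError(name)
```

Because the file is static, the runtime can apply the same edits on create and re-apply them on update without calling back into a plugin — which is the simplifying factor mentioned above.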
B: Yeah, I think the API of how NRI is — it's very flexible, so it would allow you to do a lot of that. It's just making sure we have the correct hooks in the CRIs. But yeah, I think it's totally doable. When I started looking into a pre-create hook, it would be more of the contract between NRI and CRI, because we'd have to, like, accept the current spec on standard in.
C: Okay. So maybe — I think one of the questions that at least I had was: if we feel like the NRI is the right way forward, the first question is, is the NRI something that we think we'll be able to see in not just containerd but other runtimes? And the second question is: if that is the case, how do we take advantage of that? How do we, concretely —
C: What are the expectations? Like, where do we start making pull requests? What's the step that we can actually start acting on, and help build and design this idea on top of the NRI?
B: Yeah, I think Mrunal would be the best to talk about whether CRI-O would be interested in supporting this. The way I'm building it — I'm hoping, the goal is for it to be generic, just like CNI is, and a lot of the domain-specific implementation happens in the plugins, where we're just providing hooks within our CRI implementation. So, yeah.
G: I think it does make sense, but I think I need to understand the bigger picture. Like, in the case of CDI, we understand that we'll look at something that's sent over the CRI and decide that, okay, we need to make these modifications to the spec. So how are we integrating with the other items in the spec? Like, when are we deciding that they're calling into the NRI? Are we mapping from the CRI, or do we expect changes to the kubelet that call into the NRI?
A: So we can react on pods, we can react on individual containers. We can, for example — for the statistics — use what is reported by the runtime for CPU cycles, to actually trigger some of the updates.
A: From my side, also, to what Michael said: the mechanisms which are, like, blueprinted from the CNI plugins — where the executable is executed as a hook — potentially might be okay, but on a highly loaded system it's quite an expensive operation.
A: So, ideally, what we're expecting to have is potentially some gRPC mechanism, or some other mechanism, where we can integrate closely with our runtimes — both privately and continuously, with high efficiency — passing this data back and forth.
E: Yeah, I think Michael's NRI sort of solves all that, right; we just have to make some decisions. You know, Alex — you're modifying the spec before the container runtime receives it. In the CRI space, we like to store what we receive from kubelet, right, in our own little storage. So we'd like, I think, to store it first before you make your modifications. If we need to store it again after you make the modifications, that's fine. That's fair.
A: Well, we don't have any specific problems with how it's structured. And actually, maybe even having, well, some level of debug which says, like: this is what we received from the CRI socket, and this is what the internal plugins modified it to — it might be a good solution. No problem with that.
A: Our task, in our design, is that we need to be able to get from the runtime the whole state of the system — like all pods, all the containers — just because of how our algorithm works.
B: Okay, yeah — that was one gap that we noticed: we want at least one change in the kubelet, to provide us the entire pod spec instead of feeding us containers one by one. Because the information's there; it just doesn't give the CRI the whole pod spec, where we can make better, more efficient decisions when we know the entire workload.
A: Michael, you probably missed it — Antti started to draft a KEP about it; it's linked in the meeting minutes below. And now it's partially linked to our discussion, what we had in this "CRI going to GA" — or better, the discussion about generalizing how we communicate resources down from kubelet.
E: Michael, when you say the whole pod specification — there's a lot of good stuff there, and it's not just the pod spec. It's also the other objects that are related to the pod spec that Google uses, you know, to make decisions and set up the pod information that's passed on the pod run request. There's also a bunch of other stuff that happens before they do a pod run — you know, insofar as doing image caching and loading, querying the state of the node, that sort of stuff. Kubelet keeps that information —
E: — you know, pretty tight to the vest. But I think they have a thing in kubelet called, you know, a container runtime manager, and I think there was an original intent in kubelet for that to be the CRI. Right now it's just got a lot of stuff that we need access to. So we probably need to extend the pod specification, and other objects that kubelet uses, to be distributed through the CRI — v2, or something to that effect.
E: Right, yeah — and hopefully, under the common process, we're moving the information the container runtimes need to manage these containers for the pods downstream.
G: Yeah, this may also come up — I don't know who attended the whole sidecar discussion, where there was a proposal to instead have explicit dependencies, like on the startup order and shutdown order, like systemd.
E: Pretty interesting. I especially liked when they started talking about, you know, a graph, you know, with priorities for the sequence.
A: So one of the problems with that — which I think Derek mentioned, and we see in the discussion — is that it blows up the responsibility of, for example, the priority of creation, or the killing of offending workloads. So he doesn't want to remove it from the kubelet.
H: But Sasha — so this OOM adjustment: yes, that's fine that the kubelet is deciding that. But the problem is that the kubelet is currently hiding, from the runtime and everybody who is behind the CRI interface, how much memory it has promised the containers that they can use. So, basically, the memory, yes.
A: Yeah, that is part of what both Michael and Antti wrote in the proposal.
A: So when some condition on the node triggers it and kubelet needs to start the killing, it will evict first — or start killing the containers for, like, normal workloads — and system workloads will be, like, the last priority. So that part stays in kubelet; but how it's run, how it's killed, is in our runtimes. In my —
A: So let me say something — well, correct me if I'm wrong — the current CDI can be implemented in an NRI approach. So, like, in case we hook into this pre-create state, we can get the container from the CRI, inject whatever we need, and then pass it to the runtime to execute.
A: Well, it's doable, but then it covers only the case for Kubernetes and the CRI. Our initial idea was that it's also applicable for the Docker command line and for the Podman command line.
C: How do you see — so let me ask this; this is something that I might have not completely understood. Is the NRI integrated into the CRI shim, or is it integrated into containerd directly?
C: So, one other thing — at least this is what Alex is mentioning. One of the things we're really hoping is that CDI is something that eventually pops up to Docker's UI, maybe the containerd CLI. At the end of the day, what we're really looking for is, first and foremost, being able to do `docker` or `podman run --device my-super-device`, and then something like `ubuntu`, and then my vendor tool.
C: If that makes sense — that's what we're looking to do. Because at the end of the day, what we're hoping — at least the mental model that we have for something like Kubernetes — is that, instead of what Kubernetes is doing today, which is `docker`/`podman run -v my-volume` (or my binary), `--device`, `-e`, etc. —
C: — we really want Kubernetes to be doing something like this. That's the mental model that we're going for, rather than the current mental model, which is this. And so, for us, the first objective here is really to surface this through the CLI, and then, with this surfaced, surface this back to Kubernetes.
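The contrast drawn here — a single `--device vendor-name` flag versus the hand-assembled `-v`/`--device`/`-e` flags assembled today — is essentially name resolution. A hypothetical sketch, where the device name and its expansion are invented for illustration:

```python
# Hypothetical expansion table: what a CDI-style "--device <name>" could
# resolve to, versus the low-level flags Kubernetes hand-assembles today.
CATALOG = {
    "vendor.example.com/gpu=0": [
        "--device", "/dev/vendor-gpu0",
        "-v", "/usr/lib/vendor:/usr/lib/vendor",
        "-e", "VENDOR_VISIBLE_DEVICES=0",
    ],
}

def expand(args):
    """Rewrite a docker/podman-style argument list, resolving device names."""
    out, it = [], iter(args)
    for arg in it:
        if arg == "--device":
            target = next(it)
            if target in CATALOG:
                out.extend(CATALOG[target])  # CDI-style name -> low-level flags
            else:
                out.extend(["--device", target])  # plain /dev path passes through
        else:
            out.append(arg)
    return out
```

The user (or Kubernetes) would write `run --device vendor.example.com/gpu=0 ubuntu`, and the runtime — not the caller — would own the expansion into device nodes, mounts and environment variables.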
B: Because of the way you get information in — like, you have to have hooks in the CRI and, like, the Docker daemon part — I don't think there's a way to get the best of both worlds, where you get all of the pod spec for Kube-specific runs and are able to do advanced resource placement and topology, with just the generic, container-by-container running the way Docker and such is today.
C: Okay, so what you're saying — if I were to rephrase — is that we are going to need to change a bit how the NRI is called in the containerd project, and then the containerd shim project — sorry, the CRI shim?
B: So the CRI shim stays the same, because we provide the additional pod information in the NRI invokes. But to add this to Docker or another client, you'd have to add the NRI hooks to that specifically, because there's not a generic way to shove this inside containerd core and know a pod run versus a specific container invoked from Docker.
G: I think I'll probably need to look at a deeper demo or something, to figure out how we can integrate it. But it looks like, at the CRI level, it should be fine. I think the thing then is, like: the CRI works for CRI-O, but it doesn't work for —
E: Podman, isn't it. I think the Podman case is probably a little special, right, where it's trying to run a container for a pod, but it's more just running one container, right; it's not integrated with Kubernetes or anything like that.
C: Cool, all right, gotcha. Just to make sure — I mean, do these notes make sense? Am I writing things that —
E: All right, yeah. I guess, Alexander, the question would be to you, right, on your side — or, you know, others like you: do you require additional pod spec details on the containers that are run at that layer, or can you operate just on a single pod?
A: Yeah, for us the pod is also needed, to get that information — because we are supporting a feature where containers have affinity and anti-affinity. So, for example, if you have, like, a database and a consumer within one pod, and they need to be located close together, we need to have the information that these containers are actually part of one pod, right.
E: ...how to make your code work via a plugin.
A: Oh, our code will be quite simple to adapt — like, the whole policy engine can be detached into a separate library. The only thing that will be different is that, instead of this proxy object, we will be getting calls from some gRPC service, or whatever else we come up with. But nothing much changes.
A: Well, Michael — generally, CDI in the long term is supposed to also support pod-level devices, applicable especially for, like, RDMA kinds of devices, where you have shared memory between multiple containers within one pod. So the current CDI will just inject the same device twice into multiple containers — well, as many containers, as many devices.
C: All right, I do want to time-box this a bit, so that we have maybe 15 minutes to just prepare some questions for the KubeCon panel.
C: Was there anything else we wanted to talk about for NRI and CDI? Were there any topics —
C: I do want to time-box it, since we have the KubeCon panel discussion. Is — no?
C: Definitely. Do you want to talk about it, or do you want to also present it in the next meeting, so that we have some kind of formal review and people will actually — yeah, like, we can look at it before that meeting, so that at the end of the next meeting we actually say: yes or no, it makes sense, it does not make sense — or: more review next week.
C: All right, Michael, thanks for joining us today. I think, like — we definitely — we'll probably have, at least — yeah, at least I have a better idea of what needs to be done, and I think, of where we're going with this.
C: All right — and I think that's it. KubeCon panel discussion: at least some of the discussion we were having is that the format is more of a Q&A, since it's a panel. We might present some slides, but it really should be reduced to maybe a small charter slide, a small roadmap slide, and — if there's a need for an architecture diagram, that might be a good place — but it should be at most three or four slides, not that much.
C: The real format — at least from what I've seen from other panels online — is that a moderator, or a speaker, or maybe speakers, take turns asking each other questions, and the other speakers just answer them.
C: So, I don't know — my general idea here is just: let's list some of the questions that we think actually make sense in a Q&A, let's list some of the answers that we would like to see to these questions, and that's it. Also keep in mind that we do have 45 minutes, but we should keep 10-15 minutes at the end for online questions. Does that make sense? All right, let me write down some of the questions that the people here want to see.
C: I'm also presenting at the SIG Runtime session with Ricardo, where I'll be presenting CDI. I'm happy to be a moderator, but if we think that it makes more sense to just, like, have a moderator, or a designated person for each question, that also makes sense to me. I —
A: We need to introduce the people — just, like, saying: now you introduce yourself, now you introduce yourself, and so on — and then probably, like, a short statement from each one, like "I am from XYZ" and what [they do]. And maybe, again, the first two questions — like the charter and the architecture — are the first things which need to be asked, and afterwards we can shuffle to, like, more discussion between all participants. But at least, like, the first few minutes to set up the tone for the discussion — we need it.
C: Okay, who wants to be a moderator? Let's start with this one. I think we have Mike and Alex; Mrunal has already left.
C: All right, and let's go — is there anyone in the meeting who wishes to, or who thinks he or she has, a list of questions?
A: So I think, for most of — well, at least one of the first questions I would ask everybody in this forum is: we are representatives from different companies and from different areas.
A: So, like — I know you are a device manufacturer; we are a hardware manufacturer — together, like, both device and, like, resource management things; Mike and Mrunal are from the runtimes world. So the first question would be: what is this CDI for you, for your area? What do you expect, and what problems are you trying to solve with it?
E: Well, I mean — "what do you expect from CDI in the..." — but it's the COD working group, so at some level we need to talk about the, you know: what are the various projects that are involved, that the COD working group is trying to encompass, right? And what is this, right?
C: That's definitely a possibility. As a moderator, I can spin up, like, one or two slides — I mean, we already have at least one, right — so that the audience can ask questions from the picture, right, yeah. I think I shared that. Let me — give me a quick second — I've got the slides, actually, in my —
A: Actually, what I'm thinking is — if it will be easier for you, we can, like, share the role of moderator, especially for these introduction slides.
C: Like that — here's the slide deck that I have, because I created a few slides as part of SIG Runtime's TOC charter presentation — like, when there was a technical oversight meeting. What —
A: One of the key questions we need to ask in the beginning is: why was it formed under SIG Runtime and the CNCF, and not under Kubernetes?
E: And — "not under Kubernetes" — it's probably "not under Kubernetes SIG Node".
F: I agree with the questions so far. I'm also thinking: do we want to talk about NRI at all, or are we just going to do CDI for the panel? That's —
C: That's really a great question. I think we just talked about it for an hour, and we forgot about it.
C: Yep — runtime hooks. We might, actually — now that I'm thinking about it — because I did a presentation a month ago in this working group that's called the HPC Advisory Council, and I talked to a few runtime maintainers: Singularity, Sarus.
C: Specialized runtimes is — I think it's important that we talk about the fact that we're not really just focused on these two Kubernetes runtimes, but also on the more specialized runtimes, and that there is conversation with them.
E: So, like, you're talking about Kata —
C: Kata Containers — I mean, Kata Containers is definitely another runtime. We'll have to talk to Firecracker, that kind of stuff, yeah.
A: Yeah, by the way, speaking of VM-based runtimes: sooner or later we will need to figure out how to inject devices there. Because if we are injecting only at container start time, it might already be too late — the VM is already created, and you can't hot-plug the device. So it's going back to a discussion —
E: Yeah, a lot of that NRI code's gonna have to live in the shims. It looks — we'll get to that detail later, but —
C: So, if this is 45 minutes, then we have, let's say, 15 minutes for external questions. That leaves us with 30 minutes, which is maybe six —
C: — questions for the audience. So I would say at least two things. As you're saying, we want to have more involvement from runtimes, and so "what about other, specialized runtimes" is probably one of these questions. The other one that I think is important is that we want to share some of the thoughts with some of these other SIGs — so, other people from Kubernetes — just, not just sharing.
C: So I think — and to me, we need to make sure that there's a bit of nuance in our thoughts and answers, in that we probably need to be on the side of: here are some of the ideas that we have; they might not always fit with the SIGs, or maybe we haven't talked with all the runtimes, and it's possible that — I mean, you see where I'm going, isn't it: not prescribing an idea, but rather "here's the result of our work, and it's possible that not everything is done."
C: So, yeah — maybe then, taking a step back: in these first three slides that I'll be presenting, I can talk about why this is important, but it's — it's —
C: Important, that's right, yeah. So it's not just something that I say immediately as part of the presentation; it's also something that both Alex and I talk about, right, yeah. So maybe this should not be CDI-specific, right — it's "what do you expect from...", and then maybe one of the answers could be from CDI, and then from the COD working group. Right, yep.
C: Yeah — but that's an actual, very important point, because that was actually also one of the questions we were asking ourselves — not in the previous meeting, but the meeting a month ago — which is: there's a lot of initiatives here. We're trying to close on CDI, but at the same time there's a lot of ideas that we want to be talking about. So maybe that's something that we can probably — or that we need — a slide out of, and maybe that's something you can present, Alex, because you have all these problems.
C: Thank you very much, by the way — I think that was — that's really an important point: roadmap and use cases. Use cases and roadmaps are going to be the big — or at least the big two — things that we should be spending time on.
C: Okay, so maybe the last one: are we just getting more involvement and sharing the ideas, or are there things that we want to be getting out of this panel, more?
A: So: we have a set of ideas, we have some drafts of implementations, we have the background use cases for why it's needed. What we don't have is any representatives from, like, other runtimes. So we're covering all the, likely, major ones — CRI-O and containerd. We need people from the HPC world; we need people from, like, small runtimes; we need VM-based runtimes — to provide information on what kinds of challenges with devices we have.
A: Users of the CRI: we have non-Kubernetes users; we have people with very strange devices who potentially, like, brought them to a plugin and are somehow satisfied — but maybe also not really satisfied, just silent.
A: Maybe we need those people to speak up. We need to have information about more complex configurations; we need to have information about more complex devices, if we have —
C: — or something like that. I think it's really important that we present the mental model of CDI at some point, if that makes sense.
C: This is what I was saying to Michael: this idea of `docker run --device` rather than `docker run -v`. I think, to me, it's really important that we talk about this, because it makes sense in everyone's head. This idea of saying `docker run --device` versus `docker run -v` is something that really should be hitting people with: "hey, that actually seems like a really important idea."
C: Yeah, an introduction to CDI, in a way — it's something that needs to happen here. Because we're really talking about use cases and roadmap, but at some point we really need some kind of introduction to CDI, and then maybe that's when we start explaining why we're in SIG Runtime and outside of a Kubernetes SIG.
C: All right — that's at least some pretty good amount of work. We should definitely try to think — at least try to, maybe, like, expand on these offline — and then, from there, we probably need to fix the recording session. And then, hopefully, we'll only need one take — but we all know we'll probably need at least two.
E: There needs to be a transition. This is the Kubernetes SIG — I mean, you know, Kubernetes group, right. It's CNCF, but it's also — you know, it's mostly KubeCon, right.
A: Yeah, but another question is — well, kubectl is fine, but you need to have a device plugin which actually will be exposing, or backing up, that device. So even if we — let's imagine today we implemented CDI. Great — we don't have a path in kubelet which will pass that information down.
C: — the audience, in a way. All right, so let me see. I'll try to set up the Doodle ASAP, and then feel free, all, to come back to this document and rewrite some of the questions. We'll probably have to assign some of these questions — if we only have six questions to ask and we have five speakers, that's going to be fun.
E: There you go — hello. You, Urvashi, Mike Brown... I don't know. I don't know — you, Urvashi.
C: Yeah, sounds good — the schedule. And then let me find it in the schedule, just to make sure "COD" is the keyword I'm looking for.
C: So, actually, since we're all here, let's — let's very quickly — so this is the "what" question. Who asks this, or who answers this, the "what" question?
A: So the first question is: what are we doing, and why is it important? The second question is: how much is it different from existing solutions? So that's practically this mental setup that Mrunal mentioned, yeah, right — the "why". Yes.
C: So the introduction to CDI should either be Mrunal, or Mike, or Urvashi. All right — I volunteer Urvashi.
C: All right, now we need the questions to be answered from —
A: The next thing is, like, what is actually covered by it — so NRI, CDI, things all together. So: why CNCF?
E: And you guys can do the same thing when you're talking about your — you know, when you're showing the initial charts, you can say, you know, what the scope is for you, right, and how you are involved, right. And then Alex can talk about that as well — you know, he can introduce himself. And then — or did you want to introduce people? Nope.
A: Right — I have a problem: in the Russian language we have long and loaded sentences, so when I'm speaking in English I'm trying to do the same.
C: I'm just pulling your leg. As for the answers to these questions: I think everyone should try to write two or three bullet points. Feel free to ask these questions in the Slack chat, or in Slack direct messages, if you kind of wonder whether there are points that we're missing, or maybe —
C: Definitely. So, a quick thought on this: these slides are talking points for us; we shouldn't be showing these slides, if that makes sense.
C: Yep. Okay, I'll set up the Doodle — or finalize the Doodle — and bug the people who did not answer; I think it's you, Alex, shame on you. And, yeah, I think I'll repost the link to this slide in our direct messages, and that's it. Thank you, everyone, for your time.
F: No, I'm — yeah, I was just letting Mike finish. My question was: is one person going to be asking all the questions, or should we field the question after we answer our question?
C: Yep — I'll be asking you the questions, so you will have a card.
A: By the way, if you're using Zoom to record — and if you feel that you want to add something — we can use this raise-hand thing, so at least the moderator can give you a chance to answer. Also — actually, I don't know whether this raised hand is going to be visible in the recording or not, but —
C
No,
unless
we're,
unless
I
don't
think
so,
but
I
okay-
I
I
I
do
think
that
for
the
q,
a
section
where
it's
not
recorded,
so
we
it's
and
it's
not
on
zoom.
So
that's.
C: Okay, let's ask Nancy, and then figure out — I mean, we need to ask her to figure out what the technical details are to do that. Anyways.
C
Let
me
see
if
I
can,
where
is
that.