From YouTube: GMT 2018-01-11 Containerization WG
Description
Agenda and Notes:
https://docs.google.com/document/d/1z55a7tLZFoRWVuUxz1FZwgxkHeugtc2nHR89skFXSpU/edit
A: All right, I think let's get started. This is the first working group meeting of 2018. This meeting will be recorded. So today I plan to give you guys an overview of the upcoming CSI support. I think a doc has been committed already, so if we go to the master branch, there's a doc, csi.md, and all the information about CSI is documented there.
A: So the reason we want to do CSI: first of all, for those of you who are not familiar with CSI, CSI is a specification that defines a set of APIs between container orchestration systems and storage vendors. It's a gRPC-based service, and each storage vendor will have to implement those gRPC services, and the container orchestration system, like Mesos or Kubernetes, is going to use this interface to call out to the storage vendor's code and do proper volume provisioning and mounting.
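The CSI specification the speaker describes defines these per-vendor services as gRPC/protobuf interfaces. A heavily abbreviated sketch, with request/response messages elided (the RPC names are real ones from the spec, but this is not the full service definition):

```proto
// Abbreviated sketch of the CSI gRPC services (not the complete spec).
service Controller {
  // Provision/deprovision volumes; can run on any node in the cluster.
  rpc CreateVolume (CreateVolumeRequest) returns (CreateVolumeResponse);
  rpc DeleteVolume (DeleteVolumeRequest) returns (DeleteVolumeResponse);
  rpc ListVolumes (ListVolumesRequest) returns (ListVolumesResponse);
  rpc GetCapacity (GetCapacityRequest) returns (GetCapacityResponse);
}
service Node {
  // Mount/unmount volumes; must run on the node where the volume is used.
  rpc NodePublishVolume (NodePublishVolumeRequest) returns (NodePublishVolumeResponse);
  rpc NodeUnpublishVolume (NodeUnpublishVolumeRequest) returns (NodeUnpublishVolumeResponse);
}
```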
A: So that's something that we want to improve. I think external volume support in Mesos is extremely poor compared to things like Kubernetes. The current support is based on the Docker volume driver interface, which has a bunch of limitations, especially around idempotency: the Docker volume driver interface is mostly not idempotent, and it's really hard to make it correct in a distributed system. And also we don't have a way to model external volumes as resources.
A: So right now, external volume support using DVDI is essentially a low bar: we don't do any fair sharing or quota control on those persistent volumes, which is not the spirit of Mesos. So these are the motivations why we want to do CSI. The interface is a collaboration between container orchestration systems like Kubernetes, Docker, Cloud Foundry, and us, so the decision to use it was pretty natural, so that we can plug in with any vendor.
A: So this is kind of the high-level architecture. Yeah, feel free to interrupt me if you have any questions. So this is the high-level architecture of the CSI support. There are two major things that we add to Mesos. One is called a resource provider: SLRP here means storage local resource provider, and ERP means storage external resource provider. For now we haven't added that second part yet, but the local resource provider has been added. So the local resource provider will talk to the agent.
A: There's a component inside the agent called the resource provider manager, and each resource provider on that agent will talk to the manager via an HTTP API. We have a public API for that, so in theory you can write your own resource provider if you want, but we do provide a default implementation for storage, called the storage local resource provider, that talks to CSI plugins using the gRPC protocol defined in the CSI spec. And the plugin itself is a container.
A: Basically, it's a long-running service, and we do have a component inside the storage local resource provider to make sure that container is always running, using the standalone container primitives that we recently introduced as well. The plugin itself has two major services: one is called the controller service and one is called the node service.
A: Note the difference between the node service and the controller service: the controller service can be run anywhere in the cluster. Think about, for example, building an EBS plugin: you don't have to run the controller service on the particular node where you want to use the volume; you can run it anywhere. For example, for a create-volume call, you can just hit the EBS API to create the volume; you don't have to run that plugin on a very specific node. But the node service has to run on the node itself.
A: If you then launch a task using that volume, that would translate into a mount API call on the node service, yeah. As I mentioned, we do introduce a first-class storage local resource provider. Just keep in mind that the resource provider abstraction is not just built for storage; it can be used for other things as well. For example, you can customize network bandwidth, you can customize memory or storage, things like this. You can use that interface to manage your custom resources.
A: So this is a general interface to allow people to customize the providing part of resources. And also, as I mentioned, all the CSI plugins are launched as standalone containers, essentially using the same containerizer that we have inside Mesos to launch those CSI containers. They're long-running, and we do have a component inside the storage local resource provider to make sure that container is always running.
A: Any questions so far? No? Okay. So for the API, we add two more disk source types. If you look at the disk info source right now, it only has PATH and MOUNT. PATH means the disk resource can be split into multiple pieces, and MOUNT essentially means it has all-or-nothing semantics, so we cannot split that disk into multiple pieces. Then we introduce two more types as part of the CSI support: one is called BLOCK and one is called RAW.
A: The intention for BLOCK is to support raw block devices directly. For example, some databases want to use raw block devices; they don't want any file system installed on that block device. So that type is for that use case. And then RAW is kind of interesting, because RAW is a type that the framework cannot directly use: you cannot just say, hey, launch a task with it.
A: Using RAW resources directly doesn't make sense, because you have to convert the RAW resources into either PATH, MOUNT, or BLOCK. So yeah, I'm going to talk about RAW resources a little more, because I think they're a little complicated to understand, but basically the rule of thumb here is: a framework cannot use RAW resources to launch tasks. You cannot specify a task with a resource that is RAW. You have to convert it into one of those three types to be able to use it.
A: If you look at the CSI spec, each volume has an ID that's generated by the plugin, so that ID will be the same as the ID here. And also, in the CSI spec, each volume has attributes, which are just key-value pairs of string-to-string type. So this field will map to the volume attributes in CSI.
A: So these are two optional fields that we add to the disk source to map to CSI. Essentially, this is a way to expose storage-plugin-specific information to the framework. Just imagine some CSI plugin sets its metadata to some very special things; that information will be propagated to the framework, and the framework can make decisions based on it.
A: Yeah, I think the doc has a link to the corresponding CSI concepts. Okay. So now there are two basic concepts that you may want to understand. The first is called a storage pool. Let me give you one example of a storage pool: say you have a bunch of EBS storage space that you haven't carved volumes out of yet, but you know you have, say, one terabyte of space that you can carve out. Or you have a local LVM volume group: you have a bunch of space, and you haven't created a logical volume from that volume group yet. So you basically have a bunch of disk space that you can create volumes from; we call that a storage pool, and the size of the storage pool is reported by the CSI plugin using the GetCapacity interface.
A: So right now, each storage pool must have a profile defined; I'm going to talk about profiles later on. The other type of disk is called a pre-existing disk. So there might be some cases...
A: The reason it's a RAW disk is because how the framework wants to use that disk hasn't been decided yet at that moment. It can be used as a MOUNT disk, it can be used as a PATH disk, or it can be used as a BLOCK disk. Basically, we don't want to decide that for the framework; we want the framework to decide how to use that disk.
A: So that's the reason we send out those pre-existing disks as RAW resources as well, but with an ID, so they cannot be carved into smaller pieces. The way you get those pre-existing disks in CSI is through the ListVolumes interface. And then we add new offer operations to do the conversion. For example, once you receive RAW resources, you can reply with a CREATE_VOLUME or a CREATE_BLOCK operation: CREATE_VOLUME will convert the RAW resources into either MOUNT or PATH.
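In the v1 scheduler API of that era, such a reply might look roughly like the sketch below. The CREATE_VOLUME operation with `source` and `target_type` fields follows the experimental Mesos 1.5 protos as I understand them; the `source` here is abbreviated and would really carry the full RAW disk resource from the offer:

```json
{
  "type": "CREATE_VOLUME",
  "create_volume": {
    "source": {
      "name": "disk",
      "type": "SCALAR",
      "scalar": { "value": 2048 }
    },
    "target_type": "MOUNT"
  }
}
```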
A: We do have some upcoming support for getting operation feedback, because right now the difficulty with those operations is that there's no way to get feedback telling you whether the operation was successful. But in the storage case, it's pretty important to get the feedback, because if that operation fails, you do want to know why it failed and what the error message was. You want to surface that information up to the framework so that the framework can decide what to do next. So we do have some ongoing work to support operation feedback.
A: I think most of the code is being wired into the master and agent; what's left is exposing it to frameworks. I think some people are working on that operation feedback. Before that lands, there are some tricks to make it work, either using reservation labels or using the operator API, but both of them are not ideal. So essentially we're going to wait for the explicit operation feedback to solve that problem correctly. Okay. So another thing we add here for CSI support is a concept called profile.
A: Think of a profile as an indirection to a set of storage-vendor-specific parameters. There were two ways to solve that problem. One way is for the framework to make decisions based on those vendor-specific parameters directly, which we believe is not a great idea, because each vendor might have different parameters and it's very hard to write a generic framework to handle that. So what we chose to do instead is provide an indirection in Mesos as well. Any questions? Okay, so yeah.
A: So a Mesos profile is kind of an indirection mechanism. Each profile is just a name, a string, and that name will map to a bunch of vendor-specific parameters on the disk resources. Each profile can be a simple string like "fast", "slow", or "gold", and the framework will make decisions based on the profile name rather than those vendor-specific parameters, so that the framework can be general enough to handle all different types of storage vendors.
A: The vendor-specific parameters are abstracted away, and the operator has to keep the mapping from the profile name to the bunch of vendor-specific parameters. So the field introduced inside the disk source is an optional string called profile, indicating the profile for that disk resource. It can be "fast", "slow", "gold", or whatever name is defined by the operator. Then, to customize the mapping between a profile name and the actual volume parameters...
A: ...we build a module interface called the disk profile adaptor module. The user has to implement a single method to translate a profile into profile info, which is a CSI volume capability plus create parameters. So that's how you customize the mapping in your cluster. But we do provide a default implementation which allows you to just define a single JSON file that you host on an HTTP server, or on S3, or locally as a file, and in it you define a profile matrix. The module that we built in open source will fetch that matrix and then send out resources based on those profiles; it will set the profile appropriately according to those volume capability parameters. There are some flags you can toggle to control the behavior of that module, and this is how you load the module.
A: Right, yeah. So the storage local resource provider, for those of you who joined late, is a component that we build inside Mesos to support storage. To use it you have to enable gRPC: when you're building Mesos you have to configure with the enable-gRPC option to make sure that Mesos has gRPC support, and for all the storage features you have to turn on this flag. I guess we don't have to go over this right now.
A: Yeah, I'm going to just show you the demo. Okay, so I think that's pretty much it; let me show you the demo. Okay, can you guys see my screen? So I decided to use DC/OS because it's easier for me to make it work, I mean to deploy. So I started a DC/OS cluster with that feature flag on, and this is the state endpoint of the agent that turned on this feature. As you can see, there's one more capability that we need to turn on for this feature, called RESOURCE_PROVIDER. You have to use the new agent features flag to turn on this optional feature. By default, this feature is off, because it's an experimental feature and it has implications for the old persistent volume flows; I don't want to change too much in those. So we need to bake it for one release before we make this a default feature.
A: So if you look at the reserved resources field, one row is for the storage test. That's the resource here; it's a RAW resource. That resource is actually from a CSI plugin called the test CSI plugin that we built in the Mesos repo. It's a very simple CSI plugin, basically carving out directories from a parent directory to mimic creating volumes, and it has a fake capacity that you can configure on the command line.
A: Okay, let me make it bigger. So on the agent, if you go to... where is it... okay, so we do have a directory for resource provider configs; that's a new flag. If you go to the agent flags...
A: Yeah, so there's a new flag called resource provider config dir. That's the directory where you put all the configuration for the resource providers: you can drop JSON files into that directory, and the Mesos agent has general logic, provided by default, to load local resource providers from it. So this is that directory; we do have two resource providers I want to enable. One is called test CSI... so there's a test plugin.
A: If you look at the configuration of that resource provider, you specify the type of the resource provider and the name, and you have some default reservations: the resources sent out by this resource provider will have a default reservation. But you can leave that field empty, and then they would be unreserved resources. Then, for storage resource providers specifically, you need to specify the plugin type and name, and then the plugin binaries, like what's the container for that plugin. As I mentioned...
A: ...each CSI plugin is a gRPC service; it's long-running. So the test CSI plugin is something we built in the Mesos repo. For the test CSI plugin you can specify a capacity; right now I specified two gigs. And you have a working directory: it basically carves volumes from that working directory by creating subdirectories. And that's the binary for that plugin; you can just launch it, and that plugin provides both the controller service and the node service from the CSI spec.
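A resource provider config like the one shown in the demo might look roughly like this. The field names follow the Mesos 1.5 docs as I recall them, but the role, plugin name, paths, and flag values below are illustrative, not taken from the demo:

```json
{
  "type": "org.apache.mesos.rp.local.storage",
  "name": "test_slrp",
  "default_reservations": [
    { "type": "DYNAMIC", "role": "storage-role" }
  ],
  "storage": {
    "plugin": {
      "type": "org.apache.mesos.csi.test",
      "name": "slrp_test",
      "containers": [
        {
          "services": [ "CONTROLLER_SERVICE", "NODE_SERVICE" ],
          "command": {
            "shell": false,
            "value": "/path/to/test-csi-plugin",
            "arguments": [
              "test-csi-plugin",
              "--available_capacity=2GB",
              "--work_dir=workdir"
            ]
          }
        }
      ]
    }
  }
}
```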
A: So that's basically a simple configuration for the storage local resource provider. You just place the JSON file into that directory, and then the Mesos agent will load that storage local resource provider and start sending out the resources reported by that CSI plugin. As you can see here, it's two gigs, with a default reservation, and there's a profile here. I'm going to explain that profile in a second; the mapping is actually here.
A: So yeah, this is DC/OS, so we load a bunch of modules. But if you look at this one, the URI disk profile adaptor module, that's a new module we add. Basically you need to enable that module, and the module parameter has a URL that points to a JSON file. If you open that JSON file, that's your profile matrix.
A: So that's your profile matrix. I define an example profile with its volume capabilities and its parameters, which in this case are nothing, since it's a test plugin, so nothing is fine. And if we go back to the state endpoint, it shows the disk has the example profile with this size. If you had two profiles, it would show two disk resources with different profiles.
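The profile matrix file fetched by the default URI disk profile adaptor module might look roughly like this. The profile name is illustrative, and the capability fields mirror the CSI VolumeCapability message; treat this as a sketch of the format rather than the exact schema:

```json
{
  "profile_matrix": {
    "example-profile": {
      "volume_capabilities": {
        "mount": {},
        "access_mode": { "mode": "SINGLE_NODE_WRITER" }
      },
      "create_parameters": {}
    }
  }
}
```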
A: So there's another plugin for CSI devices. This plugin is pretty dumb: basically it will do an lsblk on the box and report those block devices as pre-existing volumes. It's pretty much the same configuration; just the binary is different. So if you go to the state endpoint, you will see there are RAW resources with a given ID, and metadata with all the information about the disk resources. These are basically the output from lsblk, like the attributes: removable, type, minor.
A: So you can see all this information about the block devices on the box using a simple CSI plugin. Those RAW resources cannot be used directly; you have to convert them into MOUNT or PATH or BLOCK to be able to use them. Now, since it's a DC/OS cluster, what I'm going to do is start a framework.
A: So there's a framework connecting to the master, and there's a role that the framework registers under. It's a very dumb framework: basically it receives RAW resources and replies with a CREATE_VOLUME to convert that RAW disk resource into a MOUNT disk, so that the framework can launch a task using it. Do you run this service in the cluster? Yeah. Okay, it's running. So if you go to this framework and look at the logs... you can go to... oh, I see, you just click the sandbox, yeah.
A: If you look at the Mesos log of that framework, once it's up, it will be offered the RAW disk resources, and then it will reply with a CREATE_VOLUME, so it will convert the RAW resource into a MOUNT resource. It's a pretty dumb framework. And if you go to the state endpoint and refresh it again, you will actually see that...
A: ...there's the unreserved resource; I think the framework will also unreserve it, so the framework can receive that resource, yeah. So that resource still shows the example profile, and it has become a MOUNT disk with a bunch of metadata attached, which is very plugin-specific. It does have a provider ID indicating which resource provider this volume came from, and there's some more information about where that volume is; it's a directory. So again, the framework can launch a task using those resources. We could launch a task, but I don't think this demo framework is configured to launch the task. So that's pretty much it. Any questions so far?
A: No? Okay. By the way, I mentioned how to load a local resource provider: one way is to drop a config file under that config directory. Another way is to use the operator API, so we do build some operator APIs to allow you to dynamically add a resource provider remotely. A question?
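The agent operator API calls for this, as I recall them from the Mesos docs of that era, are ADD_RESOURCE_PROVIDER_CONFIG, UPDATE_RESOURCE_PROVIDER_CONFIG, and REMOVE_RESOURCE_PROVIDER_CONFIG, POSTed to the agent's /api/v1 endpoint. A rough, abbreviated sketch of a request body (the `info` payload is the same JSON as the config file; the storage section is truncated here):

```json
{
  "type": "ADD_RESOURCE_PROVIDER_CONFIG",
  "add_resource_provider_config": {
    "info": {
      "type": "org.apache.mesos.rp.local.storage",
      "name": "test_slrp",
      "storage": {
        "plugin": {
          "type": "org.apache.mesos.csi.test",
          "name": "slrp_test"
        }
      }
    }
  }
}
```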
B: [inaudible]

A: We do have an update call for the SLRP: if the type and name match, then you can update the container. So basically that's how you update, for example, when your CSI plugin needs to be updated; that's how you update a CSI plugin to a different Docker container version. Does that answer your question? Thank you. Any other questions? Yeah, and by the way, we do have authorization for that, so you can set up an ACL to prevent people from hitting that API...
A: ...if you don't want them to. And as I mentioned, okay, one thing I want to show you here: all these CSI plugins are running as standalone containers, so we extended the existing GET_CONTAINERS operator API endpoint to show standalone containers as well. So basically this is the curl command: you specify the type GET_CONTAINERS, and then you can say show standalone true, and there's another flag for nested containers, so you can show nested containers, and you can show both.
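The request body for the call being demoed would look roughly like this, POSTed to the agent's v1 operator API endpoint (/api/v1 on the agent; field names per the v1 GetContainers call):

```json
{
  "type": "GET_CONTAINERS",
  "get_containers": {
    "show_nested": false,
    "show_standalone": true
  }
}
```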
A: So in this case, I just want to show standalone containers. You hit that API, and we can see the output. Since I ran multiple things here, I'll start from the top, yeah.
So this one is the framework itself. Okay, and by the way, show standalone means it shows standalone containers, but it also shows the normal Mesos containers for executors. So this is a standalone container that we created for the devices plugin. I have two plugins, so there are two CSI standalone containers: one is called CSI devices, and you can see the stats for it, and then there's another standalone container for the test CSI plugin, and you can see the stats for that plugin. So it's the same containerizer handling those standalone containers, similar to executor containers or nested containers.
B: [inaudible]

A: Which ones came from a framework? So right now we don't have a... yeah, there's no executor ID; they only have a container ID. So that's one way you can tell, but it's not very explicit, so maybe one thing that we can do is make it more explicit whether it is a standalone container or a normal task container. Thank you. Another?
B: [inaudible]

A: Yeah, we can look at that if you want to. Okay, thanks. Any other questions? Okay, so I think all the pieces have been committed in 1.5, and 1.5 is, let me see, hopefully releasing this week. There are some bugs to fix for some other stuff not related to this, and we need to wait for that to land, and there's one more bug that we want to fix for this. Once that's landed, then we're going to cut 1.5, and then we'll have this experimental feature.
A
You
can
use
we're
gonna
improve
that
in
the
future,
especially
supporting
global
resources.
As
you
can
see
here,
only
local
resource
provider
is
supported
or
right
now
which
allow
you
to
do
things
like
LVM,
but
the
ultimate
goals
on
achieve
is
trying
to
support
things
like
EBS,
which
allow
wanting
to
be
moved
between
nodes,
not
tied
to
your
particular
nodes.
So
that's
the
eventual
goal
one
achieve,
but
all
the
stuff
we
build
so
far.
It
can
be
reused
for
external
providers
and
external
plugins
so
and
stay
tuned.
B: Question slash feedback: so LVM has been extensively mentioned in this document. Yeah, it's really hard, even for me, to connect all the dots end to end to understand how to launch something with an LVM volume. Maybe a blog post or a tutorial, something like that, to show people how that works. Yeah.
A: LVM CSI, yeah, so there's an LVM plugin in open source; it's a CSI plugin, so yeah. So we probably want to write an end-to-end tutorial, like documentation, to walk you through how you enable LVM and how you set up those things to make it work. That's a good suggestion that we should definitely do.
B: [inaudible]

A: So I think you joined the meeting late, but yeah, I mentioned earlier that the resource provider interface is not limited to storage. Storage is just one implementation of the resource provider interface. We can potentially use the same interface for other things like network bandwidth, and all sorts of resources that you want to customize. Also, yeah, I think the interface itself is probably not that straightforward, so I think one thing that we may want to do is, for example, for special types of resources...
B
You
mentioned
device
then,
for
example,
for
the
case
of
GPU,
since
there's
already
in
vanilla
vessels
you
so
right
now,
the
resource
writing
part
is
kind
of
because
provided
at
metals,
but
yet
to
runtime
isolation.
Runtime
mechanism
is
provided
by
the
binary
isolator.
You
know
in
the
future
architecture
I'm
mentioning.
What's
the
relationship
between
the
isolator
and
the
RP
yeah.
A: So that's a good question. The isolator is useful for isolating resources; it's kind of orthogonal to resource providers, but they're related: the resource provider is responsible for providing resources, and you probably want to have a corresponding isolator which is responsible for isolating those resources on a single box. So I view these two as a pair that you have to deploy simultaneously, to make sure that the providing part is customized by the resource provider interface and the isolation part is customized by the isolator, so...
B: [inaudible]

A: Yeah, yes, if you see the state endpoint, those are regular resources that are still subject to our resource allocation, quota, and fair sharing. Thank you. Thank you very much, yeah. So that's the whole purpose: we don't want to make an exception. Everything is modeled in Mesos as a resource, and it's subject to the same fair sharing, quota control, and reservation mechanisms.
A: So that's a good question. I think it's explained in the doc, but it's a little confusing, because of the unfortunate case that we use CREATE for the existing API, when that one should really be called "create persistent volume". Mm-hmm. So CREATE_VOLUME basically converts a RAW resource into a MOUNT disk or a PATH disk, and CREATE, create persistent volume, converts a MOUNT or PATH disk into a persistent volume. A volume can be either persistent or ephemeral in the future.
A: So in order to use that resource... I mean, if you want the data in that volume to persist across tasks, then you have to create a persistent volume the same way as you do right now. To create the persistent volume, the source is just whatever new disk resource you received, you see, so...
A: Okay, anything else? Okay, so I think there are fifteen minutes left; I don't know if we have time for this, so maybe we can leave it to next time. So next time, what I plan to do is some planning for the 1.6 release. I just want to get a sense of what features people want to work on, and what's the progress for each of those. I think it's here; I'm not sure if it's accurate, but yeah, let's do some grooming and planning for this current release at the next working group meeting, to decide who can help on what things and how much time each person has, so that we can plan which features can ship in the next release as a group. Does that make sense?
A
Okay,
I
think
that's
pretty
much
it
for
today's
meeting
on
thanks
for
Kenny
I'll
post,
the
video
to
the
youtubes
and
link
to
the
notes
and
and
Happy
New
Year
I
think
that's
the
first
meeting
we
have
in
2018
and
hope
you
enjoy
on
the
vacation
and
let's
do
some
bouncing
suffering
this
year.
Thanks
guys
see
you
guys,
bye,
bye,
right.