From YouTube: Mesos Containerization WG 08242017
Agenda and notes:
https://docs.google.com/document/d/1z55a7tLZFoRWVuUxz1FZwgxkHeugtc2nHR89skFXSpU/edit?usp=sharing
B: So the concept behind standalone containers is that we want to be able to launch containers directly through an agent API, rather than launching a framework which connects to the master, which then has to talk to the agent, which then has to launch a container that contains an executor, and then hand that executor your task to launch as a command-line thing underneath. Standalone containers bypass most of this and just provision a container image and then launch a command-line task underneath it, and that's the logic behind them.
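As a rough illustration of what "launching directly through an agent API" could look like, here is a sketch that builds a JSON request body in the style of the agent operator API. The call name and field layout here are assumptions for illustration, not the settled API shape:

```python
import json

def launch_standalone_container(container_id, image, command):
    """Build a hypothetical LAUNCH_CONTAINER request body for the agent
    operator API (field names are illustrative, not the real protobufs)."""
    return json.dumps({
        "type": "LAUNCH_CONTAINER",
        "launch_container": {
            "container_id": {"value": container_id},
            "container": {"type": "MESOS", "mesos": {"image": image}},
            "command": {"value": command},
        },
    })

body = launch_standalone_container("proxy-1", "alpine", "sleep 1000")
```

The point of the sketch is the shape of the flow: one direct HTTP call to the agent, with no framework, master, or executor in between.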
B: So, the resources: right now, this isn't actually implemented yet, but the goal for our Minimum Viable Product is to have these resources just not be tracked anywhere. We would assume that when you start the agent, you will reserve some amount of resources for admin tasks, and the resources that the standalone containers use will just come from that reserved set.
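A minimal sketch of that accounting model (purely illustrative; the MVP described above does no tracking at all, and nothing like this class exists in Mesos):

```python
class ReservedPool:
    """Toy model of a fixed reserve that standalone containers draw from.
    The agent would advertise its total resources minus this reserve."""

    def __init__(self, cpus, mem):
        self.free = {"cpus": cpus, "mem": mem}

    def allocate(self, cpus, mem):
        # Fail the launch if the reserve cannot cover the request.
        if cpus > self.free["cpus"] or mem > self.free["mem"]:
            raise RuntimeError("insufficient reserved resources")
        self.free["cpus"] -= cpus
        self.free["mem"] -= mem

pool = ReservedPool(cpus=2.0, mem=1024)
pool.allocate(cpus=0.5, mem=256)
```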
B: The goal for later on is to use the same interface that we're going to add for resource providers. The agent itself will be a resource provider of CPUs, memory, GPUs, and storage, and it'll have that interface. It'll need to update the number of resources that the agent is advertising to the master as standalone containers are launched and destroyed.
B: So these aren't tasks, but they are still containers, so you will be able to launch nested containers in them, and nested container sessions, as long as we support it, or as long as there is enough leeway in the validation within the existing APIs to do so. I think there should be, yeah. That's the only thing you'd really need.
B: Alright, so, as mentioned, standalone containers have no executors and no tasks, and this also means that they don't have health checks, kill policies, or status updates. These three things are implemented in the executor, and because we don't have an executor for standalone containers, these won't be supported right away. We might consider adding them later on, because they might be needed, but for the MVP they're not part of it. Besides that, standalone containers share the exact same semantics as other Mesos containers; they use the Mesos containerizer.
B: So far we'll just have two capabilities, which are launch nested container and launch standalone container. The semantics behind each: a nested container has a container ID with a parent, and a standalone container will just have more blank fields in the arguments that get passed into the isolator.
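A rough sketch of the distinction just described (the field names mirror the ContainerID message, but this is an illustration built from the discussion, not the actual protobuf):

```python
def make_container_id(value, parent=None):
    """Build a ContainerID-like dict: a nested container carries a
    parent ID, a standalone container does not."""
    cid = {"value": value}
    if parent is not None:
        cid["parent"] = parent
    return cid

# A standalone container launched directly on the agent: no parent.
standalone = make_container_id("csi-plugin")

# A nested container launched inside an existing container "pod-1".
nested = make_container_id("debug-shell", parent=make_container_id("pod-1"))
```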
B: Well, when you launch a container directly on the agent, it is always going to be a standalone container; I'm not talking about nested containers here. So we're going to have one call for launching standalone containers and then one for launching nested containers. Yeah.
C: I think that's the interesting point, but I think we actually already have launch nested container, and I don't think we can change the name of that API; that API has to be there for backwards compatibility. So I don't think it makes sense to hijack that API to support launching a standalone container. We don't have a choice but to add a new API for that. Okay.
B: Today, standalone containers are designed for the operator to use rather than for frameworks and tools, so the launch standalone container call will be more of an idempotent launch, whereas with launch nested container, if you give it the same ID, it'll fail. When you launch the same ID as a standalone container, it will succeed.
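One way to think about the idempotency contract being debated here, as a toy model (the real agent behavior was still being settled in this meeting; in particular, whether the comparison is by ID only or by the full request is exactly the open question raised next):

```python
class Agent:
    """Toy launch endpoint with per-ID idempotency: an identical repeat
    is a successful no-op, a repeated ID with a different config is
    rejected.  This stricter variant is one possible answer, not the
    decided Mesos semantics."""

    def __init__(self):
        self.launched = {}  # container ID -> request config

    def launch_standalone(self, cid, config):
        if cid in self.launched:
            return "OK" if self.launched[cid] == config else "CONFLICT"
        self.launched[cid] = config
        return "OK"

agent = Agent()
first = agent.launch_standalone("c1", {"cmd": "sleep 100"})
repeat = agent.launch_standalone("c1", {"cmd": "sleep 100"})
changed = agent.launch_standalone("c1", {"cmd": "sleep 200"})
```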
C: So just one comment on the idempotency. You're saying the launch standalone container call is idempotent based on the ID, or based on all the fields? For example, if I use the same ID, but on the next call I specify a different command or a different container image, what are the semantics there?
C: So, James, I think if you look at Docker: there's the Docker daemon, which provides an HTTP API, and the Docker client, for example `docker run`, talks to that HTTP API. And in a systemd unit you can specify a `docker run` invocation. I think it's essentially the same model, yeah.
B: So this can be used for development, but it's primarily aimed at running additional components alongside the agent. I mean, there's existing demand for things like admin tasks in Mesos, where at the start of an agent you would launch one or two extra things, maybe a proxy to agent endpoints, some extra security features, or a database, alongside it.
B: This would be roughly the same thing, where we would launch a CSI plugin that is actually offering the storage resources of the agent. In our newer model of resource providers, agents will need to have the CSI plugin launched alongside the agent, outside of the resource offer cycle.
C: I think so. I wouldn't call it a framework, because it's more like a very dumb init system: it'll just monitor the health of the process and relaunch it if it fails. It's a very simple way of doing monitoring, a supervisor for a given container.
C: The reason, I mean, take the CSI use case as an example: the other alternative is that we just talk directly to the Docker daemon for those CSI plugins, but we don't want to do that, because we don't want to be strictly tied to the Docker daemon and all its APIs. I don't want to tie us straight to the Docker daemon; providing an abstraction layer at the agent makes more sense to me.
C: For /containers we have to maintain backwards compatibility. I don't think we should show these containers in /containers; we only show top-level Mesos containers, to be backwards compatible. So either we add some parameter to the request, allowing the operator to specify "hey, I want to see all containers, including nested containers and standalone containers", or otherwise we have to use a different endpoint for that, just for the sake of backwards compatibility.
B: Okay, where were we?
B: So, continuing on the API itself: you've mentioned that the launch command for standalone containers is idempotent, primarily so that you can make the same call twice and expect both to succeed. And, as mentioned, we have resources that won't be accounted for in the master. At this moment I'm going to leave a giant TODO in the code where this stuff is happening.
B: We're going to do that because, when you restart the agent, containers will still be recovered normally, but at this moment we're not saving the other fields; we're not checkpointing them to disk at all. So at that point we just have the container ID for a standalone container, and if you try to relaunch it, we have to be consistent, right? Okay, yeah. So we're not checkpointing these things at the moment, because it's extra code and we're not sure if it's necessary yet, but it's certainly doable to checkpoint all these other fields.
B: So non-compressible or non-shareable resources are going to be a bit more challenging for a standalone container to use, mostly because these resources will still be shared with the master; they'll still be advertised to the master. So it's really easy to get a collision if a framework or some other task launches through the normal path.
C: You could have two containers trying to simultaneously access the same volume, which might break, depending on the code. So yeah, as was suggested, compressible resources are okay; for non-compressible resources we may need to disallow them initially, or maybe just say that resources here in standalone containers are only for CPU and memory, and maybe disk, because those are the ones we care about, and the rest we don't care about for now.
C: That's good feedback; we should think about that more, especially for disks and volumes.
C: But that's kind of a little different from how wait is paired with it, because I think there you have two components involved. With the first wait you don't know if it was successful or not, and then you call a second wait. What if the first wait did succeed, and then we kind of GC those artifacts; then the second wait is going to fail. I don't know; from the caller's side it's very awkward, because then the caller doesn't have a way to get the exit code, right?
C: Here's the issue for the caller: say the caller launches the standalone container, then the container terminates, and then I call the first wait, and somehow the agent crashes before the response of that wait returns to the client. Okay, now the agent restarts and recovers, and then it thinks that it already processed the wait, so that container might be GC'd, but the client doesn't know the result.
C: I think the way we handle the nested container is that we have an explicit remove, which is kind of like another call to indicate that it's safe to remove the other artifacts that are associated with a container, including the exit code. So if you don't know the result of a wait, you should not call remove, but if you do know the result of the wait, then you can safely call remove to remove all these artifacts that are associated with the container, yeah.
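A client-side sketch of the launch / wait / remove protocol described here, on the happy path (a toy in-memory agent; the real calls are HTTP requests against the agent operator API, and the whole point of the preceding exchange is what happens when the wait response is lost):

```python
class ToyAgent:
    """Minimal model of the lifecycle: artifacts (including the exit
    code) survive until the caller explicitly removes them."""

    def __init__(self):
        self.exit_codes = {}

    def launch(self, cid):
        # Pretend the container runs and exits immediately.
        self.exit_codes[cid] = 0

    def wait(self, cid):
        # Returns the exit code; artifacts are NOT cleaned up here.
        return self.exit_codes[cid]

    def remove(self, cid):
        # Only safe once the caller has observed the wait result.
        del self.exit_codes[cid]

toy = ToyAgent()
toy.launch("job")
code = toy.wait("job")   # the caller learns the result first...
toy.remove("job")        # ...then signals that the artifacts can go
```

The design choice under discussion is exactly this split: remove is a separate call so the exit code cannot be garbage-collected before the caller has seen it.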
G: So I have a quick question: why aren't there executors in these containers?
G: Yeah, because I guess it wasn't clear to me originally that the agent had to be running for these things to launch. If the agent is running, then you have a full agent stack on the host, so you have the executors. It seems like you have almost everything you need to make these basically the same as regular containers, in which case, if they are the same as regular containers, then that seems like a good thing.
G: It seems like if you're running, say, something like a CSI service, maybe you do want that to be health checked, right? Because what if this thing never becomes healthy, like you want it to be? I think the concept of health checks makes as much sense in the standalone container world as it does in the framework-managed container world, yeah.
C: It's just a matter of reusing some of the code in Mesos, yeah. I think that's a good point; we might want to think about how to do that. Maybe you could have like a...
B: Okay, besides that: at some point, if we have a service within Mesos itself that launches standalone containers, we're going to have some sort of wrapper in there. But if we have, say, an internal component inside Mesos that launches standalone containers, these will still go through the operator API, just like any other caller, so they'll have the same restrictions as everyone else.
G: Right, okay, so I guess the motivation for this was: we run our tasks with host networking, and we allocate ports, and we expect user tasks to use the ports that they're allocated. Every now and then we have problems, or a user task has problems, because it doesn't follow that contract. As you know, it's not always straightforward to do; people forget, and things mostly seem to work, and so you'll have tasks which take a port and start listening on a port that they haven't been allocated.
G: They don't have resources for it, and everything will sort of work, and then they'll get some kind of weird problems and health check failures that they don't understand, and someone has to debug that. And from a security point of view, if you have tasks for which there's no technical restriction on the ports they can listen to, then there are lots of possibilities where tasks at different trust levels are able to intercept traffic from other tasks, and lots of other badness.
G: I looked at this security head-on, looking into how various tools work, and the answers are not that great, frankly. Pretty much what we want to do is make sure that each container only listens on ports for which it holds resources. There's no real good kernel API for this.
G: Basically, what you have to do is find all the processes in the cgroup, find all the sockets of all the processes in the cgroup, and then match those sockets up with the set of listening sockets, which we obtain from netlink. So it looks like an O(n²) thing, because you end up scanning the file descriptors of every process, and so that's what we're doing.
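The matching step James describes can be sketched like this (a pure-Python toy with injected data standing in for /proc and netlink; the real isolator walks each process's file-descriptor table and queries sock_diag for the listening sockets):

```python
def ports_listened_by_cgroup(cgroup_pids, fd_inodes, listening):
    """Match socket inodes held by a cgroup's processes against the set
    of listening sockets, as the port-accounting approach described
    above does.  The inputs stand in for /proc and netlink data:
      cgroup_pids: pids in the container's cgroup
      fd_inodes:   pid -> set of socket inodes open in that pid
      listening:   socket inode -> listening TCP port
    """
    ports = set()
    for pid in cgroup_pids:
        for inode in fd_inodes.get(pid, ()):
            if inode in listening:
                ports.add(listening[inode])
    return ports

ports = ports_listened_by_cgroup(
    cgroup_pids=[101, 102],
    fd_inodes={101: {5000}, 102: {5001, 5002}},
    listening={5000: 8080, 5002: 9090, 7777: 22},
)
```

The nested scan over every pid's descriptors is exactly the inefficiency being complained about, which is what motivates the kernel patch discussed below.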
G: The contract... sorry, there's actually an extra remark: there's a configuration knob, which somewhat complicates things. In general, my view is that we should be strict, and processes should only be allowed to listen on ports for which they hold resources. One wrinkle is making sure libprocess doesn't break this rule: any time we have a built-in executor that uses libprocess, it opens a port and listens on it. So we have a mode where this check is restricted, controlled by an agent flag.
G: The flag is check_agent_port_range_only on the agent. If you enable this option, then we only fail checks for ports that containers are listening on within the range of ports that the agent publishes as its resources. What that means in practice is that any container can listen on any port it wants, as long as it's not within the agent's port range.
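A toy version of that policy check (illustrative only; the real isolator compares the observed listening ports against the container's allocated `ports` resource, with the range-only flag limiting the check to the agent's advertised range):

```python
def violations(listening, allocated, agent_range, range_only=True):
    """Return the listening ports a container is not entitled to.
      listening:   ports the container is observed listening on
      allocated:   ports the container holds as resources
      agent_range: (lo, hi) port range the agent advertises
      range_only:  mimic check_agent_port_range_only by ignoring
                   ports outside the agent's advertised range
    """
    lo, hi = agent_range
    bad = set()
    for port in listening:
        if range_only and not (lo <= port <= hi):
            continue  # outside the managed range: tolerated
        if port not in allocated:
            bad.add(port)
    return bad

# 31005 is inside the agent range but unallocated -> a violation;
# 9090 is outside the range, so the relaxed mode tolerates it.
bad = violations(listening={31005, 9090}, allocated={31000},
                 agent_range=(31000, 32000))
```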
G: So if you have, say, the executor's HTTP API, that doesn't listen on allocated ports; it just talks to the agent. We really don't want Mesos containers listening on arbitrary ports; that's generally a bad idea, and we want to give strong feedback to users when they get it wrong. Okay.
G: So, actually, the algorithm as I described it is fairly inefficient, but there is a kernel patch here, so thank you to the Verizon kernel guys, and this is currently in net-dev for the next kernel. What this does is provide the cgroup class ID on the netlink API. So that means we'll be able to make this same mechanism work efficiently, and the way that works is: you apply the cgroups/net_cls isolator and you apply this network ports isolator.
G: Then the network ports isolator uses the netlink interface to obtain the listening sockets, and each listening socket will be tagged with its class ID. Then we can use the class ID to find the cgroup, and from the cgroup we know the container and its resources straight away.
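With a per-socket class ID available from the netlink query, the O(n²) descriptor scan collapses to a dictionary lookup. A toy with injected data (`class_id` here stands in for the net_cls classid the kernel patch exposes; the names are mine, not the isolator's):

```python
def ports_by_container(listening_sockets, classid_to_container):
    """Group listening ports by container using the net_cls class ID
    attached to each socket, instead of scanning every process's fds.
      listening_sockets:    list of (port, class_id) pairs, as would
                            come back from the sock_diag query
      classid_to_container: class ID -> container name
    """
    result = {}
    for port, class_id in listening_sockets:
        container = classid_to_container.get(class_id)
        if container is None:
            continue  # class ID 0 / unknown: fall back to the old path
        result.setdefault(container, set()).add(port)
    return result

by_container = ports_by_container(
    [(8080, 0x10001), (9090, 0x10002), (22, 0)],
    {0x10001: "web", 0x10002: "metrics"},
)
```

Note the zero class ID case: as G says next, sockets that were never annotated come back as zero, and those have to take the old slow path.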
G: We'll be able to do that. Basically, I only need to do a patch to move us over to getting this through the API, but what will happen is we'll get the socket information back from the sock_diag netlink interface, and it will either contain the right class ID, if the sockets have been annotated, or it'll be zero. So if it's zero, then we have to fall back to the old way.
C: Got it. Sorry, I have another question. So, without this kernel patch, you're going to do polling on the socket diagnostics information? In my experience this kind of thing can be very slow, especially if you have a container that has like a million connections. Is the polling interval something I can tune? How do you make... I mean...
G: So that's kind of it; it's been going through review, and he's been reviewing it very quickly.
G: One problem that he pointed out last night is that the ports resources in the isolator's update method are attributed to the executor's container, whereas we really need to track them on each nested container. So we need to do a bit more thinking and experimenting about how to solve that.
G: The update callback from the containerizer aggregates all the resources and applies them to the executor's container. So when you have a tree of nested containers, all the ports for all the nested containers that were active at that point would then be attributed to the executor container. What we want is for each port to be... we need...
C: I think the tricky part here is that right now we don't support a nested container specifying resources; that's kind of a limitation at the moment. The reason we didn't do that initially is because for some of the resources it's more complicated. For example, for CPUs: if you do allow a nested container to specify CPUs, how do you structure the cgroup CPU hierarchy? Do you need to subtract the CPU shares from the top-level container or not? We don't have those answers, and also, I think, so...
C: It could not exceed the resources given to the executor; if the executor requests those resources, I think that sounds fine, and tasks within that executor's container can request resources. But the fact that you'd want to deal with this nesting structure for cgroups like CPU and memory, that's going to be complicated and we don't know the answer, so it won't be there yet, but it would be great to see what we can do.
C: Okay, James, do you have anything else at all?
C: Yeah, I think let's do that discussion at the next working group meeting. Does that make sense? I think we're about out of time, so let's do that discussion next week... sorry, next time, in two weeks, and I encourage folks to do some background reading on that patch.
C: James, it would be great if you could provide some background on that same topic next time, because I don't think many people understand what it is for; most people don't have the context. So it'd be great to have some background on what that actually is and what it's trying to do; that would be really useful. Okay.