From YouTube: Kubernetes SIG Architecture Meeting 20180524
Description
Agenda/notes: http://bit.ly/sig-architecture
Chat transcript: https://drive.google.com/file/d/1Gf7uNtEHxL6GwQqU03dTL6YXelK8gOTc/view?usp=sharing
A
So, just quickly, now that we're recording and more people are here: on meeting times, I guess we should potentially send out a Doodle or something with a bunch of meeting times. But the two times that we discussed previously, which are again being proposed now, would be Monday at 10:00 a.m. US Pacific time, which is after SIG Apps, and then later on Thursdays, after the community meeting, at 11:00 a.m. Pacific time. Tuesdays and Wednesdays have large numbers of SIG meetings already overlapping, and Friday, of course, is Friday evening in Europe, so we don't generally do community meetings on Fridays, and afternoons are out for similar reasons on every other day. So there's not a lot of latitude. There are some SIGs that actually schedule over the community meeting; I don't want to do that.
D
If I understand it right, we have a whole lot of people who attend this, or want to attend this, who are in the US Pacific time zone, and in the US Pacific time zone, for lots of folks, commutes make it just a difficult time. At the same time, we want to have the meeting accessible to other parts of the world, like Europe, so trying to find a time that works across so many time zones is just difficult, yeah.
E
I'll talk for a little bit, and we also have Brendan and we have Bob. So basically, we've been making circles around SIG Node, and then I think everyone kind of decided that we should come into SIG Architecture. So yeah, we decided to come here and talk to you guys about Virtual Kubelet and what we're doing in the space and why we think it's important.
E
The biggest thing: I'm also going to link a document, so we can just use it as a reference for the future. Let me just put that in the chat. There we go. But our biggest goal, and I'll start out with the goal: our goal is basically to create a nodeless working group, ideally under SIG Architecture, and we want to talk about what nodeless means in Kubernetes, and the fact of just not having any VMs, so that users don't have to manage those VMs.
E
The way that we're doing it is through models like pods as a service, and that's basically the consumption model we're using to enable these sorts of nodeless scenarios. The scenario that we're going towards right now is hybrid: having both VMs in your cluster and services within your cluster that are able to spin out for burst or spillover, and things like that.
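The "pods as a service" model being described can be sketched in miniature: a virtual node advertises effectively unbounded capacity to the cluster and forwards pod lifecycle calls to a container-as-a-service backend instead of a local container runtime. The following is a toy Python sketch of that idea, not the actual Virtual Kubelet code; every class and method name here is illustrative.

```python
# Toy sketch of the "virtual node" idea: the cluster sees a node,
# but pod lifecycle calls are forwarded to a container-as-a-service
# (CaaS) backend instead of a local container runtime.
# All names here are illustrative, not the real Virtual Kubelet API.

class FakeCaaSBackend:
    """Stands in for a cloud container-as-a-service API."""
    def __init__(self):
        self.running = {}

    def create_container_group(self, name, spec):
        self.running[name] = spec

    def delete_container_group(self, name):
        self.running.pop(name, None)


class VirtualNode:
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend

    def capacity(self):
        # Advertise effectively unbounded capacity so the scheduler
        # treats this as one "big node".
        return {"cpu": 10_000, "memory_gib": 100_000, "pods": 10_000}

    def create_pod(self, pod_name, pod_spec):
        # No VM and no kubelet on a machine: just a translation
        # of the pod into a backend call.
        self.backend.create_container_group(pod_name, pod_spec)

    def delete_pod(self, pod_name):
        self.backend.delete_container_group(pod_name)


backend = FakeCaaSBackend()
node = VirtualNode("virtual-kubelet-0", backend)
node.create_pod("web-1", {"image": "nginx"})
```

The point of the sketch is the shape of the translation: the user manages pods, and whatever machine (or non-machine) runs them is the backend's concern.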
E
But there have been a lot of customer stories, and we've done a lot of customer investigation, and we've also realized that there's been an evolution: about ten years ago we were all using on-premise VMs, and people had to manage those and plan for capacity. Then you went into IaaS, and you went into the cloud with that.
E
You
now
get
like
Kol
and
people
are
now
interacting
with
VMs
in
the
cloud,
but
people
are
still
having
to
manage
those
VMs
and
today
we're
the
container
space,
we're
all
creating
and
developing
within
containers.
So
why
are
we
making
users
go
down
an
extra
layer
and
have
to
manage
that
extra
layer,
but
now
we're
able
to
abstract
that
layer
with
these
containers
and
service
models?
So
that's
really
the
biggest.
E
Basically, we have our own architecture meetings every week, and we have a bunch of specs; I'll send over the drive. We have a bunch of things we've been working through, like storage and networking and things like that. So we're trying to hone in on what all of these things mean in this world. So that's a broad overview.
F
Sorry, is that bad? Is there a better... yeah, I was hiding my headset. Sorry, I am commuting. What I actually want people to get is: I think containers as a service and pods as a service are here, and it's going to be part of the cloud; it's already in the clouds. So I think, fundamentally, there's going to have to be orchestration for it, and I want that orchestration to be Kubernetes, and if we don't sort this out, some other orchestration will develop, because it's just a natural tendency.
F
The other thing that I would say is that I think this investigation has actually exposed some architectural decisions that we made, in terms of binding pods really tightly to nodes, and that's the awkwardness that we see here. And that's the thing that I think is interesting: ultimately, to see if we really still believe that that is something fundamental, or if it was just a mistake that was made because of where we were at, and maybe.
H
That's a good lead-in, Brendan, which is the idea that effectively we have kind of two perspectives. We have one that containers are tied to the operating system, and another perspective, which is that a container is a unit of software being run and the details are kind of hidden from it. Those are two different abstractions.
F
Because I think saying that a container is tied to an operating system, like a generic operating system, is fine, but it doesn't need to be. It can be the concept of an operating system, not a literal machine with an operating system, if that makes sense. The understanding that it's the Red Hat kernel version X that's underneath my container loses relevance and importance.
F
What I'm actually arguing, what I'm actually saying, is that it is a little bit architected for where we were. For example, we say nodeName in the pod spec; nodeName in the pod spec is what indicates that a pod has been scheduled, and that's probably not right. It probably should be something like "scheduled": whether the pod has been scheduled or not. It should be a boolean, not a string that implies it. And then another thing is exec, right.
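The nodeName point can be shown concretely: in the pod spec, a non-empty nodeName is what marks a pod as scheduled, which quietly assumes the target is a named machine. The toy sketch below contrasts that with the explicit boolean being suggested; the dict shapes and the "scheduled" field name are illustrative, not actual API objects.

```python
# Today: "has this pod been scheduled?" is inferred from a string field.
def is_scheduled_today(pod):
    # In the real API this is spec.nodeName; a non-empty value means
    # the scheduler has bound the pod to a node.
    return pod.get("nodeName", "") != ""


# Suggested alternative: an explicit boolean, decoupled from the idea
# that the scheduling target must be a literal named machine.
# (Hypothetical field name, for illustration only.)
def is_scheduled_proposed(pod):
    return pod.get("scheduled", False)


pod_a = {"name": "web-1", "nodeName": "node-7"}
pod_b = {"name": "web-2", "scheduled": True}  # no node identity needed
```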
F
So the API server assumes that, in order to exec inside a container, it needs to go talk to a kubelet running on a node, and so it doesn't really have any concept of being able to say, well, the URL for the exec API is here. It assumes: oh, I can look up the node for the pod, and then I can go and call the exec API on the node IP address. There are just places where we bake things in a little tight.
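The exec example can be made concrete. The baked-in behavior described is roughly: resolve the pod to its node, then dial the kubelet's exec endpoint on that node's IP; there is no per-pod "here is my exec URL" indirection. A toy sketch of the two shapes follows; the URL formats and the "execEndpoint" field are illustrative, not the exact kubelet API.

```python
# Baked-in assumption: exec always goes through the kubelet on the
# pod's node, addressed by node IP. (Illustrative URL shape.)
def exec_url_today(pods, nodes, pod_name, container):
    node_name = pods[pod_name]["nodeName"]   # look up the pod's node...
    node_ip = nodes[node_name]["ip"]         # ...then dial its kubelet
    return f"https://{node_ip}:10250/exec/{pod_name}/{container}"


# A looser contract would let whatever runs the pod advertise its own
# exec endpoint, with no node lookup at all. (Hypothetical field.)
def exec_url_proposed(pods, pod_name, container):
    base = pods[pod_name]["execEndpoint"]
    return f"{base}/{pod_name}/{container}"


pods = {"web-1": {"nodeName": "node-7",
                  "execEndpoint": "https://caas.example.com/exec"}}
nodes = {"node-7": {"ip": "10.0.0.7"}}
```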
F
It's deeper than that, because the trouble with it is there's all this stuff like sandboxing, which is really about isolation, whereas this isn't really about isolation. This is about the fact that the surface that you're running on is no longer visible to you, like aspects of the surface that you're running on.
A
Yeah, so I actually went and looked at the Virtual Kubelet repo, and I saw Brendan's original demo way back when, and had some feedback which doesn't look like it was actually implemented. But my high-level concern is that the project is super ambitious. It's trying to implement Kubernetes on top of incompatible container services, and it's reimplementing the kubelet, the container runtime, kube-proxy, pod networking, storage, the scheduler, and even some workload APIs are mentioned in the issues. That's a huge, massive effort, and it looks like you're about 1% down the path of implementing all that functionality.
F
Yeah
I
think
the
question
really
is:
there's.
There
are
three
options
for
this
implementation.
You
know
we
don't
go
into
this
lightly.
I
think
that
yeah,
the
three
options
are
what
we
did
the
virtual
cute
list.
Are
you
have
a
big
note?
There
is
what
Joe
is
proposed
as
like
the
shrink-wrap
option,
which
is
whenever
there's
a
container
to
be
run.
You
create
a
note
around
it
right.
So,
instead
of
having
one
infinite,
you
happy
you
have
an
infinite
number
of
perfectly
sized
nodes
for
one
for
each
or
the
third.
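The shrink-wrap option is mechanical enough to sketch: for each pod submitted, synthesize a node whose capacity exactly matches the pod's resource requests, bind the pod to it, and delete the node when the pod goes away. A toy Python sketch, with illustrative field names:

```python
# "Shrink-wrap" sketch: one perfectly sized synthetic node per pod,
# instead of one big virtual node holding many pods.
def shrink_wrap_node(pod):
    req = pod["requests"]
    return {
        "name": f"node-for-{pod['name']}",
        # Capacity is exactly the pod's requests: nothing else fits
        # on this node, and nothing is left over.
        "capacity": {"cpu": req["cpu"],
                     "memory_gib": req["memory_gib"],
                     "pods": 1},
    }


pod = {"name": "web-1", "requests": {"cpu": 2, "memory_gib": 4}}
node = shrink_wrap_node(pod)
```

As the discussion notes, the catch is scheduling: a node that appears only after the pod exists inverts the normal schedule-onto-existing-nodes flow, which is why this option implies a new scheduler.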
F
So the thing is, Brian, to go back to what I said in the beginning: I think we have a choice, and the choice is, do we think that container as a service is an important thing to integrate with?
F
I mean, this is containers as a service: the other three public clouds have this right now, defined this way. It's not going to change, and I think the surface area is going to look the same no matter what. This is how Hyper implemented it; this is how anybody who implements it is going to implement it. And so I think it's important for us to explore this.
C
Right, so just to hang on here, we've got a lot of people who want to speak, and Erica von Bulow actually had a question first that I want to get to, which is: if there are all these architectural discrepancies, why use pods as opposed to a new resource type? Do you want to speak to that real quick, and then we can go to other questions?
J
One of my concerns is that I don't see how this is going to interact well with the current way that we handle storage. I feel like it's really constraining the types of workloads that are actually applicable to run on Virtual Kubelet, like stateful workloads or high-performance batch workloads, for instance. I don't see how that works in this model.
F
I mean, different implementations of container as a service will have different kinds of volumes that they can mount, just like different clouds have different types of volumes that can be mounted in different clusters. But at the high level, in terms of PVs and PVCs, does it fit? Yes.
G
I just wanted to express a perspective, and also why I think it's so important that we be doing this work inside Kubernetes, and I do agree that, in the short run, this approach is going to have some constraints.
G
We have a lot of customers who are very interested in using Kubernetes but accessing the Fargate service, and doing that from the perspective of using Kubernetes. Even in the Fargate case, there are limitations to the way the service works with regards to storage, but that doesn't mean that it isn't useful. So this is maybe the 20/80 thing that's being stated there. So I think the question is: would you be able to pass total Kubernetes conformance out of the gate?
G
So this doesn't feel to me, out of the gate, like the long-term solution, but we really need a place where we can work together collaboratively, make some progress on this, and work in parallel on what the longer-term architectural solution is. So, you know, I think one important thing here, certainly from the AWS perspective, Brian's comments about Werner's opinions notwithstanding: there have been a lot of people who have suggested to me, more or less directly, that our involvement in Kubernetes is just some sort of stunt to try to hold us over to where something like Fargate is worthwhile; people have suggested to me that we're just going to kind of hollow out Kubernetes and try to replace it with Fargate. And it's really important for everyone to understand that.
G
I think what we have to do is make some progress on that. With EKS, we've been pretty open about our intentions, which is to run an upstream service. I think we need to see how the upstream community adopts this before we make a decision about whether we're going to add that as a separate product, yeah.
H
Originally it was really easy: does Docker support it or not? And now we're in the world of having to talk to all the container runtime implementations to make sure they support it, and that's been mostly an easy problem, because everybody is kind of based on runc, and most of the things people have wanted to add have been abstractions that work at the runc level, not at the container runtime level, like passing it through. I wanted to draw the analogy to VM workloads on Kube as well.
H
So, like the KubeVirt team and a few others who said, hey, we want to build new APIs on top of Kube: they have to re-implement StatefulSets, but they don't have to re-implement Services or autoscaling or three or four others. When you think about adding this new abstraction, there's a hypothetical abstraction where either a pod doesn't completely work or there's a new type of pod. When you think about that abstraction, the moment it exists:
H
If it's under a pod, everybody has to go talk to the various people who have those kubelet implementations or whatever. If it's above the pod, then you have to do extra work to cover the same use cases. Somebody's going to pay the cost either way. It sounds like we're having the under-or-over discussion. Well,.
F
I
think
I
would
also
say
just
it
to
Ryan
point.
There
are
places
that
you
have
to
re-implement.
It's
true
right,
like
counting
files.
Does
the
cubelet
kml
files?
If
we
have
a
you
know
pile
mountain
damage,
it
was
distracted
way
and
they
do.
But
on
the
other
hand,
as
please,
we
don't
have
reemployment
services
great
like
because
we
are
integrating
with
existence
to
put
in
saunders
many
things.
Just
one
sector,
percents,
just
work
services
just
work
right
and
as.
F
You
create
a
service,
it's
an
internal
load,
balancer
service,
so
like
an
IP
table
service,
but
an
internal
load.
Balancing
service
does
well
right
and
that's
the
that's.
The
point
is
that
the
cube
proxy
implementation
itself
is
not
a
core
part
of
through
the
next
few
blocks.
It
was
just
an
implementation
of
services.
You
can
rep
it
out.
It
was
built
to
be
with
my.
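That kube-proxy point, that Services are an abstraction and kube-proxy is just one replaceable implementation of it, can be sketched as one interface with two backends: one that programs local iptables-style rules, and one that delegates to a cloud's internal load balancer. Everything below is an illustrative sketch; the class names and return values are invented.

```python
# Services are an abstraction; kube-proxy is one implementation of it.
# Sketch: the same Service object can be realized by different backends.

class IptablesProxy:
    """Stands in for kube-proxy programming local NAT rules."""
    def __init__(self):
        self.rules = []

    def realize(self, service):
        # One rule per backend endpoint, keyed on the cluster VIP.
        for ep in service["endpoints"]:
            self.rules.append((service["cluster_ip"], service["port"], ep))
        return f'iptables:{service["cluster_ip"]}:{service["port"]}'


class CloudLBBackend:
    """Stands in for an internal load balancer provided by a cloud."""
    def realize(self, service):
        # No node-local rules at all; the cloud owns the data path.
        return f'ilb://{service["name"]}'


service = {"name": "web", "cluster_ip": "10.96.0.10", "port": 80,
           "endpoints": ["10.4.0.5", "10.4.1.9"]}

vip_a = IptablesProxy().realize(service)
vip_b = CloudLBBackend().realize(service)
```

The Service object stays the same either way; only the realization differs, which is the sense in which kube-proxy is swappable.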
K
Hey guys, so I just wanted to provide a VMware perspective on this, because we've been looking at this for quite a while, and it might be helpful. We've been doing containers as a service for about three years now, and we create containers as VMs, which relates to the sandboxing discussions that have been happening in SIG Node.
K
So, in that respect, what we're looking at does overlap quite a bit with the sandboxing model, but I think there are potentially two use cases here that Virtual Kubelet is serving. What I would say is: I'm not sure that presenting this abstract resource as a literal node is the right long-term solution.
K
I think that's why we were really having some good discussions about how that shows up in a cluster and how it's represented, because I think there are some issues. For example, if a virtual node presents multiple failure domains, what does that mean for DaemonSets? What does that mean for affinity? How does that work? So there are some important architectural questions, I think, that are going to result there. That's why it's a longer discussion we need to keep having.
D
But what I'm starting to hear is: now we've got something new, which is containers as a service. It's container services, so we don't have to worry about managing; that's been pushed further down the stack to somebody else. And so now the discussion is: how can we make Kubernetes, and its orchestration and declarative management of workloads, work with these things?
D
So
we
don't
have
to
care
about
them
anymore,
because
there's
no
services
that
offer
this
and
it's
it's
a
problem
because
we
just
didn't
have
this
before,
and
so
we
designed
our
architecture
differently
and
the
order
here
with
the
virtual
couplet
is
it's
a
way
of
working
where
we
don't
have
an
extension
point
in
kubernetes
to
work.
So
it's
trying
to
solve
by
working
around
the
problem
today,
but
there's
a
desire
to
say
how
do
we
natively
make
this
work
with
kubernetes,
so
these
kinds
of
services
have
that
API?
D
You
know
you
don't
have
to
work
around
it
because
kubernetes
can
natively
work
around
those
points,
and
so
how
do
we
make
that
work
can
happen?
And
so
I
understand
that
there's
problems
with
the
current
path
forward.
But
that's
because
we
don't
have
an
extension
point,
but
one
of
the
things
we
keep
talking
about
in
cig
architecture
is
adding
these
extension
points.
D
So
the
ecosystem
in
the
community
can
add
to
kubernetes,
so
the
API
works
and
everything
works,
but
we
create
these
extension
points
for
cloud
provider,
storage
providers,
everything
else-
and
this
seems
like
a
place
where
figuring
out
how
we
add
that
extension
point.
So
we
can
still
work
with
VMs
out
of
the
box
and
hardware
out
of
the
box,
but
coming
up
with
a
way
to
work
with
these
other
kinds
of
Burlison
fireman's.
That,
maybe
is
an
API
extension
point
for
people
might
be
a
good
way
to
go.
D
In
fact,
this
seems
like
the
kind
of
thing
that
is
right
fit
for
an
interesting
working
group,
because
it
has
cross
sig
responsibilities
and
maybe
it
would
make
the
idea
of
the
virtual
couplet
in
its
current
form,
have
to
go
away,
because
now
we've
got
the
extension
points
to
do
it
and
in
the
meantime
they
can
work
on
a
solution
to
solving
by
working
around
our
lack
of
extensibility.
I,
totally
agree
with
that.
Yeah.
C
I
think
that's
the
right
direction.
My
my
concern
is
that
so
this
represents
a
pretty
massive
effort
to
get
this
implemented
in
the
right
way.
And
meanwhile
we
have
some
really
serious
project
threats
like
the
technical
debt
that's
baked
in
and
the
fact
that
it's
still
a
lot
of
things
are
tightly
coupled
the
mono
repo
and
those
things
it's
like.
Where
do
we
want
to
spend
our
effort
and
our
time
are
we
gonna
I?
Don't.
F
Think
well,
I,
don't
think!
That's
a
I,
don't
think
a
fair
question
right,
because
we
have
a
lot
of
people
who
are
interested
in
this.
No
I
I
get
that
I
mean
resources
are
fungible
in
that
way
and
and
we
we
invest
elsewhere
right,
like
I
mean,
like
my
team's,
invest
in
things
other
than
this
too
right.
So
I
think
you
know
everybody
pays
their
due
diligence
in
terms
of
the
community,
but
but
I
don't
think
we
should
make
the
decision
based
on
like
oh
there's,
no.
C
It's
more
just
that
there's
I
mean
we
have
a
limited
number
of
high-level
reviewers.
You
have
a
limited
number
of
time.
You
know
that
can
be
invested
in
this
I.
Just
I
just
want
to
be
cognizant
of
what,
when
we
choose
to
do
something,
if
we're
actually
choosing
to
not
do
something
as
a
result
of
that
and
just
have
a
discussion
area
or.
B
It
can
I
make
a
suggestion
that
before
I
am
sure
there
are
many
obstacles
to
building
virtual
cubelet
and
supporting
it
and
distracting
resources
from
other
things
onto
it.
But
I
still
don't
have
an
understanding
of
what
you're
actually
trying
to
solve
here
so
from
brandon
that
that
we
want
to
expose
non
culinary's
containers
services
to
look
like
kubernetes.
That
seems
to
be
one
one
key
requirement,
but
beyond
that,
all
of
the
other
things
I've
be
orthogonal
to
virtual
cubelet.
For
example.
I
don't
want
to
have
to
manage
my
nodes.
H
Yes, sorry. I think the issue of "do I want to pay for it or not" is a product issue, not a project issue, and I think that's cool: if people want to make a product that is pay-for-what-you-use, go do so. I'm not sure that it needs technical infrastructure for that.
F
It
I
think
it's
different
than
that,
though,
actually
because
just
the
fundamental
characteristics
are
different,
I
can
spin
up
a
container
in
you
know,
milliseconds
to
seconds
I
know
one
can
spit
up
at
the
end
right
and.
F
I can actually charge you pay-for-what-you-use costs with container as a service, and I really can't with VMs, because it's reserved capacity.
F
But from a customer perspective, and maybe this is ultimately a path towards multi-tenant Kubernetes, Kubernetes is not multi-tenant safe right now, and so I want to be working on this now. And as Matt said, maybe the ultimate path of this is that we decide, you know what, multi-tenant Kubernetes is the right way to solve these problems, and you're right.
G
For example, you can even think about how to use this to bridge from, say, Kubernetes to Mesos, just to pick a random one, where you have a different system that's actually going and executing the containers underneath. And while I've got the mic: we do actually have a second, kind of related project in Kubernetes presently, which is Kubemark, and I talked to Tim a little bit about this at KubeCon.
G
One
of
the
things
that
I
would
want
to
see
come
out
of.
This
is
a
built
into
the
virtual
couplet
that
would
let
us
deprecated
COO
mark
and
have
a
more
fully
baked
note
implementation
with
a
test
driver
underneath
so
that
we
can
support
scalability
tests
without
actually
you
know
against
the
control
plane
without
actually
running
containers
underneath-
and
this
would
be
a
pretty
pretty
good
approach
for
that.
Okay,
I
I'm
done
now.
D
This
I'm
kind
of
feeling
that
maybe
there's
a
question
at
the
core
of
what
kubernetes
is
here
right?
Is
it
containers
as
a
service,
or
is
it
orchestration
on
top
of
containers
as
a
service,
or
is
it
both
I'm,
not
sure
we
all
assume
the
same
thing
just
given
some
of
the
comments
I've
heard
in
the
conversation,
but
that
does
speak
to
what
the
heart
of
it
is.
D
If
it's
orchestration
on
top
of
containers
as
a
service,
then
this
this
API
layer
to
talk
to
containers
as
a
service
make
sense,
including
to
providing
our
own
or
the
environment.
Isn't
there
if
it's
containers
as
a
serpent's
itself,
then
that's
why
it
would
be
seen
as
competing
against
that
some
of
these
other
services.
So
there's
kind
of
that
that
philosophical
question
of
who
we
are
here.
M
So, just from a logistical perspective, there's clearly enough inertia; it's just a matter of which SIG would sponsor the working group and what it would live underneath. Now, I think right now it's very early goings, and there might be some contention on how we disentangle some pieces of the API and try to create new primitives, and that's okay. And if folks want to create a working group, I don't see any problem with that; that's up to them and their resources and how they utilize them.
M
We are bikeshedding a lot on some details, and I think it's too early for us to even ascertain all of the changes that could be made. So I'm just wondering what the intent of some of this conversation is, because it seems like folks want to create a working group, and from the steering committee perspective, there's nothing preventing folks from creating working groups so long as it's sponsored by a SIG.
H
Gonna
I
was
gonna,
say
like
to
summarize
Jason's
point
earlier
that
Jason's
point
earlier,
like
what
the
ask
is
and
Brendon
tell
me
if
you
think
this
is
incorrect.
The
ask
is
to
put
aside
time
at
a
high-level
architectural
view,
to
consider
large
fundamental
abstraction
changes
to
something
that's
fairly
important
for
the
project,
as
well
as
an
understanding
that
some
of
the
abstractions
that
we
have
that
work,
because
we
don't
bother
like
we
don't
support
two
different
types
of
notes.
H
We
just
support,
notes
or
a
single
cubelet
that
we
spend
additional
time
allowing
or
in
designs
considering
the
possibility
that
that
abstraction
has
change
your
examples
around
exact
and
blogs,
and
all
that
those
are
costs
to
everyone.
Everyone
has
to
enforce
and
deal
with
those
and
the
potential
payoff
is
that
it
provides
value
to
these
users
on
these
platforms,
which
isn't
necessarily
a
bad
thing,
because
if
people
on
those
platforms
that
cost
I
think
the
flip
side
of
it
misses
to
like
Tim
was
saying,
is
the
the
pushback
or
the
question
would
be?
H
Does
everybody
have
to
sign
up
for
that
cost?
If
we
create
a
working
group,
because
the
working
group
hasn't
said
trying
to
be
deliberate,
upfront
like
changing
fundamental
abstractions
costs,
a
cost
to
everything
else,
therefore,
is
the
end
with
value
for
all
users
of
kubernetes
sufficient
to
justify
paying
that
cost.
I.
F
Think
that's
a
totally
reasonable
supposition
and
summarizing
of
what
brass
I
would
say
that
I
guess
the
degree
to
which
we
move
forward
quickly
or
slowly
will
probably
depend
how
valuable
right
like
we're
on
a
path
right
now
where
we
can
move
forward
and
if,
at
some
point
we
say
you
know
what
we
have
to
fundamentally
read
Ryoga
Tech
the
way
log
exec
work.
Well,
that's
gonna,
be
you
know,
go
through
the
future
proposal,
like
any
other
feature
proposal
and
people
will
prioritize
it
depending
on
how
valuable
they
think
it
is.
F
It
will
either
go
fast
or
slow
and
that's
just
kind
of
how
the
project
works.
So
I'm
not
I,
feel
like
the
project
can
figure
out
for
itself
how
much
or
little
effort
they
want
to
put
into
it.
So
I,
don't
think
it
will
cause
a
lot
of
extra
pain
and
and
I
guess
all
I'm
really
I
guess
what
we're
really
looking.
F
What
I'm
looking
for
more
than
anything
else
is
that
we
say
this
is
the
working
group
for
these
kinds
of
issues
and
and
if
anybody
is
interested
in
these
kinds
of
issues,
that's
the
work.
You
know
once
we
do
once
we
design
a
working
group.
I
don't
want
to
be
in
a
situation
where
we
have
two
or
three,
the
unparallel
initiatives
doing
similar
issue
things
right,
and
so
that's
that
would
be
the
other
part
RIA.
E
The
project
as
a
whole
is
moving
forward
right
now,
so
that's
already
a
given
and
it's
already
happening
we're
already
implementing
it
where
customers
are
using
it
things
like
that,
so
the
project,
like
the
feature,
specs
the
everything
else
like
that's
happening,
so
we
just
want
a
place
to
bring
those
up
and
talk
about
it
in
a
forum
where
we
all
can
like
meet
and
talk
about
it.
Basically,
I.
C
Think
that
makes
a
ton
of
sense
and
we
definitely
need
to
have
a
really
constructive
discussion
around
this,
because
I
think
the
really
dangerous
scenario
we
could
bind
up
is
a
ton
of
work
goes
into
something
there's
a
lot
of
popular
movement
around
some
corollary
project
like
this
and
and
then
there's
just
customer
churn
because
they
don't
know
how
to
get
it
supported
or
where
to
implement
it.
Whatnot
so
definitely
discussing
this
makes
a
lot
of
sense.
C
I,
just
I
really
want
to
understand
what
we're
not
doing
as
a
result
or
how
we,
how
we
don't
implement
this
in
a
way
that
actually
distracts
or
causes
us
to
end
to
lose
faith
or
have
trust
undermined
in
our
customers,
because
we're
not
focusing
on
on
actual
things
that
they
need.
So
if
that
winds
up
being
the
case
that
this
is
a
high
customer
need,
and
we
need
to
adopt
it
and
do
these
things
and
then
that
makes
sense,
I
mean
we'll
figure
it
out,
but
we
just
need
to
be
really
cognizant
of
that.
H
You
you
already
have
mailing
lists
and
you're
having
discussions
and
you
have
a
home
for
the
project
right,
so
I'm,
not
sure
why
that
needs
to
change
or
what
value
you
would
get
out
of
that.
But,
aside
from
that,
I
mean
I'm.
A
big
fan
of
the
idea
here,
but
not
the
design
I
would
love
to
see
us
talk
about
it.
What
I'm
worried
about
is,
if
we
seed
a
conversation
with
a
half-done
implementation,
that
it
becomes
sort
of
impossible
to
really
consider
alternative
models
so.
F
I'd
love
to
have
that
discussion
and
I
find
in
some
ways
we
were
gonna.
Have
that
discussion
here.
I
think
I've
stayed
at
the
three
designs.
The
three
designs
that
I
see
possible.
One
is
the
current
one
that
we
have
to
is
the
shrink
wrapped
shrink,
wrapped
containers
and
three
is
that
it's
a
totally
new
API
object
and
they
all
had
trade-offs
and
they're
all
imperfect
right.
If
it's
a
totally
new
API
object,
then
things
like
services,
don't
work
and
we
had
to
do
a
lot
of
work
to
re-implement
services.
F
Shrink-wrapped means basically somebody submits a pod, and dynamically you create a node that is exactly the right size. You create a fake node again, but instead of having a fake node that contains lots of containers, you have a fake node that contains exactly one.
F
But
the
trouble
with
that
is
you
can't
you
have
to
implement
a
new
scheduler?
Well,
you
know
so
you
know
I'm
all
over
the
problems
of
ritual
cubelet,
like
you
have
all
of
the
problems
with
virtual
cubelet
and
you
have
to
implement
a
new
scheduler
I.
H
That's an easy way to get a lot of the small stuff lined up, and that's more like SIG negotiating: going to a SIG and being like, hey, this feature could overlap, do you want to work together, we both get something out of it. I feel like that sort of horse-trading is appropriate when it's more mechanical work and abstractions that just exist because they exist, like exec.
D
So
you
know
somebody
brought
up
that
this
is
the
kind
of
thing
that
maybe
Clayton
you
said
not
necessarily
here.
We
should
have
this
conversation,
but
I
think
there's
two
big
reasons
to
bring
this
into
the
community
one
is
it
allows
people
from
many
different
companies
to
be
involved
and
not
just
one
company
to
have
it
as
their
project
and
that's
one
of
the
big
reasons
for
you
know
bringing
things
into
the
community
and
then
it
lets
us
look
at
better
ways
to
design
this
and
maybe
to
alter
kubernetes
to
make
it
easier.
D
So
it's
less
complicated
and
easier
to
do,
because,
if
you're
doing
it
on
the
outside
you're
stuck
with
the
constraints
of
the
community,
it's
harder
to
move
it.
But
if
we
intentionally
work
together
inside
the
community
to
make
this
easier,
it
becomes
easier
to
have
a
simple
design
and,
of
course,
with
all
this
benefit.
You
get
more
neat
ideas
like
the
discussion
around
scheduler
and
writing
a
scheduler
to
solve
this,
and
maybe
somebody
can
go
play
with
it
by
bringing
it
into
the
community.
N
Thanks, Erica. I was just trying to understand if you're agreeing that a working group is required, and whether we can have Virtual Kubelet as one of the options but actually discuss other ways of solving the nodeless problem, and not have the discussion here, because I don't think it's going to be productive.
F
I want to be clear, and I think this is true of all working groups, which is to say: if people say, hey, I'm too busy, we can't make that decision, we'll wait till the next week; every project moves forward at the speed at which it's a priority. I think what we're mostly asking for is that this be a working group, so that everybody in the Kubernetes community who is interested in this idea has a place where they will work together.
F
I don't think anyone's gone and done that in those working groups. I've read a lot of their docs, and actually, Jess Frazelle has participated in the multi-tenancy one; she's not directly in the Virtual Kubelet project, but she's part of it. I think I understand both those projects reasonably well.
A
I think there's a substantial amount of overlap. The multi-tenancy working group, for example, is trying to define some security profiles. So if you think from the perspective of an unprivileged user in a multi-tenant cluster, one of the things they're not going to have access to is shared resources, like nodes. If you look at OpenShift, for example, unprivileged users cannot access nodes, and they cannot necessarily apply things like node selectors, because they can't even know what the labels on those nodes are, right?
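To make that concrete, a minimal sketch of the kind of RBAC grant involved. The role name is hypothetical; the point is that in a locked-down multi-tenant cluster, tenant users are simply not bound to a role like this, so they cannot read nodes or their labels:

```yaml
# Hypothetical ClusterRole granting read access to nodes (a cluster-scoped
# resource). In a multi-tenant cluster, unprivileged tenant users would NOT
# be bound to a role like this, so "kubectl get nodes" is denied for them.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader   # hypothetical name
rules:
- apiGroups: [""]     # core API group
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```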
F
Exactly, but we've already said, I mean, the Virtual Kubelet from the get-go is an explicit gesture, right? It's intended that you don't accidentally land on the Virtual Kubelet, and that's the design for the foreseeable future: for people's apps to do work there, they have to explicitly say, "Hey, my app is compatible with this and I understand the restrictions."
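That explicit opt-in shows up in how Virtual Kubelet registers itself: the virtual node carries a NoSchedule taint, and a pod only lands there if it tolerates that taint. A minimal sketch, assuming the project's default taint key (real deployments may configure a provider-specific value):

```yaml
# Sketch of a pod that explicitly opts in to running on a Virtual Kubelet
# node. Because the virtual node is registered with a NoSchedule taint,
# pods without this toleration can never land on it accidentally.
apiVersion: v1
kind: Pod
metadata:
  name: opt-in-example    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
  tolerations:
  - key: virtual-kubelet.io/provider   # assumed default taint key
    operator: Exists
    effect: NoSchedule
```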
K
This is where there is some overlap with the sandbox question, right? Because already in the sandboxing discussion, people are saying: okay, well, "privileged", what does that mean in the sandbox? What does it mean to have access to the control plane from a sandbox? Some of these questions are already being asked, and I think that's where some of that overlap is today.
N
I just wanted to convey that what we have done is a detailed design discussion, and I'm happy to write a full document to show you what works and what doesn't work, just so you can look at it, if that allows us to positively move forward and discuss what the right design is. We would be happy to put the work into it. All we ask is: can you guys be involved with us, so we can actually move in the right direction? And maybe the dynamic approach Tim was mentioning could be the right design.
G
I think the issue here is that when we're asking the steering committee folks for this level of involvement with this, we're back to Brian's concern about a zero-sum game. I think what we're really asking for here is some space for us to go and work on this, and, you know, maybe we do have to find the right SIG to support it. I'm hearing some suggestion from Brian that multi-tenancy might be the right one.
G
I would actually propose that the new provider SIG might be the better spot, since a lot of the interest here seems to be getting driven from the provider side. But my feeling is that we need to go and work on this more before we start asking for SIG Architecture time directly.
F
So yeah, I think basically what I'm interested in is two things. One is an acknowledgement that this is a problem that is worth solving, and that serverless container architectures should be integrated with Kubernetes at some point, somehow. And then the other is a backstop for the working group, which is to say, you know, everybody has lots of things that they're doing.
I
Really, what I'm looking for: I think from SIG Node's perspective, we have engaged and said we want to understand why it could not work underneath the CRI, and I think that's why the sandboxing discussion is heavily pertinent, because the sandbox scopes the surface area of the pod spec down to something much more palatable.
F
I mean, Tim Allclair's proposal does propose changes to the CRI API, but I have to admit I haven't looked at the details of this proposal. The fundamental disconnect has to do with things like the CRI API saying things like "create container", and that's just not something that works in one of these new worlds. So.
I
The CRI is still fluid, not fixed, right? So if there were changes that needed to be made to drive a use case, it's worth discussing. I think the tension had been whether virtualizing a node was the right model, versus a node presenting virtual resources, or a pod exposing maybe a subset of the API that a runtime could support, by saying, you know, "I only support that which is defined as a sandbox pod." But I think that's from when you all first engaged SIG Node months ago.
F
Sure, and I'm happy to do that, but I think it's gonna be really messy, honestly, because you're gonna end up with something that looks a lot like the Virtual Kubelet anyway if you go down this road. Because if I want to have a cluster that has no nodes in it that can access serverless container infrastructure, like, I'm gonna have to have...
F
The trouble with it is that, like, mounting a file system, for example, happens outside of the CRI, right? And so, like, well, we're clearly gonna want to mount file systems, so how do we make that work? There are a lot of challenges. I've looked at it, I mean, honestly, I looked at the CRI and I was like, "I think this is the right way to do it," and then it just didn't work. So, hey.