From YouTube: SIG Architecture 20181018
Description
A: If you want to follow along, it's at http://bit.ly/sig-architecture, and it's also in the chat window. We're going to start off with Andrew, to talk about cloud provider dependencies. It looks like there's a little bit of back and forth in the chat and the agenda on this, so let's go ahead and reconcile those comments as we have this discussion, and try to time-box this to about ten minutes. If that works — sure.
B: Why I wanted to talk about it in this meeting is that we have kind of a three-phase approach for this. The first phase is to move all the provider code into Kubernetes staging. The point of this is that we can then sync all the providers into their respective external repos, while also being able to build all the in-tree binaries — you know, kube-controller-manager, kube-apiserver, kubelet — with all the provider libraries at the same time. We're trying to do this in-tree-to-out-of-tree move as a smooth transition, because we don't want to break anyone's production cluster, right? Part of doing that staging work requires us not to depend on any of the internal libraries in kubernetes/kubernetes, and we're finding that there are a lot of dependencies in there. Pretty much what we've been doing so far is listing out all the dependencies and trying to figure out where we can put these generic utils libraries — some place that's not kubernetes/kubernetes, but also done in a clean way that everyone agrees is okay going forward. Right now we've mostly just been putting things in kubernetes/utils, and pretty much what I'm asking is: are we okay with that going forward, or are there alternatives that we haven't thought of yet?
C: I could sort of have made an argument that it should be its own repo, because it's really a useful library on its own, but I don't know if it's worth the energy to go through that. So I'm fine to keep approving PRs against utils, knowing that I reserve the right to question APIs that move into utils — it's sort of a promotion in scope, and so everything that moves in there gets another look: the doc comments, the test cases, and the APIs overall. But I'm happy with that. Overall, we should also be open to moving things into new top-level repos if that is appropriate; I have no problems with it, it's just, you know, a little bit of work to do. And honestly, if somebody wanted to chase down nsenter and move it to a top-level repo, I would be okay with that. Yeah.
D: I think, in terms of user experience, a top-level repo is the better user experience. As a Go developer, end-user-targeted repos with strict compatibility guarantees are the best user experience. For things that are like one file with three functions, then maybe a home in utils is all that's warranted. But yeah.
D: I challenge that, like — should we be putting stuff in repos if we're not going to sign up for that? As someone who's writing Go code and importing things and using them and putting them together, trying to make something that operates and keep it operating, it's a huge pain when my various dependencies break. Yeah.
C: I sort of object to the de facto Go assumption that all libraries are valid for import. I know it's part of the ecosystem, so I can't really fight that, but I don't think that you should be peeking under my covers unless I tell you you can. I think Go got it ass-backwards, but that is what we've got. But maybe —
E: — more to push some of this functionality that's currently in-tree out of tree. This is a pattern we've been establishing, yes, and the trend that we want to support is moving those shared dependencies out of kubernetes/kubernetes. Where it belongs in the junk-in-the-trunk utils repo, we should do that. Are we renaming it "junk in the trunk"? Yeah — every utils directory or library is a junk-in-the-trunk. Well, I prefer "kitchen sink." That's fine, but where it makes sense and it might be useful beyond just the Kubernetes project, maybe that deserves more scrutiny. Well —
F: I was going to say — I mean, I think, for the things we're talking about, what we're specifically doing is taking them out from, as you put it, under the covers, and making them public. So we should be asking that question: is this something that we want to be relatively public, and that we want to support? If we don't want to do that, then we should be questioning whether we should be allowing this at all. That's it.
G: I'm going to argue with you, Tim, a little bit here, because I think you're imagining there are three levels: there's a k/k dependency, there's a Kubernetes-ecosystem dependency, and then there's, like, awesome Go. And I would actually argue that that middle one doesn't exist — it's either, you know, in a /internal directory in k/k or the moral equivalent, or it's awesome Go, and there's really not much in between.
C: So fine — you were at that position with [inaudible], right, yeah. I mean, I'm just going to pick on nsenter for a minute, because it was written in a way that was — well, let's just say it's not the best library I've ever seen. It is expedient and it does what it needs to do, but I don't think anybody's put a lot of design thought into what the awesomest UX around nsenter would be.
B: Okay, so, just so I have an actionable item on my end: it seems like we're okay to keep going with kubernetes/utils, and if we try to add something to utils that we feel should be broken into its own repo, then that can be proposed in the PR, and then we can keep doing work to break things out where it makes sense. Yeah. What we are blocked on is that we don't have someone who is a top-level approver — a kind of single point of contact for this project.
H: And so, if we want to slow down, maybe it's time to have some real conversations about how we get there and what we should do. We don't have to do it today, but I think it's a worthy agenda item for us to talk about: how do we get to a saner place for that ecosystem? Because it's already there, and it's annoyed with us, and they don't necessarily have easy-to-work-with alternatives — or they're creating them, and we don't even want them to. So —
C: As suggested — Walter also raised his hand here. It's not like we get that many reviews on this, and honestly, I'll walk back my previous statement and say: fine, if we're treating these repos as really supported things, then yeah, let's apply some real rigor around API design and test coverage and other things, and, you know, maybe a longer-term — medium-term — plan is to actually start semantically versioning these things. Yeah.
I: Sharing my screen now. So, just to recap: I said I would do this last week, and this is happening. I am running the proposed 1.13 feature set through SIG Architecture prior to feature freeze. I'm also going to do the same thing prior to code freeze, so we can objectively decide whether or not these things look like they are actually contributing to the reliability and stability of Kubernetes, as that is one of our proposed goals, and that is one of the goals of the release theme. I'm sure this is probably unreadable — I apologize for that.
I: This is linked in the meeting notes if you want to follow along there. What I've done here is just cluster up the features according to my machine-learning brain, and I've added comments to the things that I think we should talk about. So, backing up to a high-level theme: I think Azure is one of the sets — there are some miscellaneous Azure improvements. There are some miscellaneous improvements to CRDs. There are a lot of features related to CSI; there is a push for this phrase "CSI is going to GA," but it's unclear to me what that means. There's a push to make kubeadm go to GA. There's some miscellaneous kubectl stuff, like plugins and such. There also seem to be some features or enhancements related to the interaction between kubectl and the server side — things like server-side apply, API-server dry run, server-side printing for kubectl. There are a couple of miscellaneous things that I didn't know where to place — things like dynamic audit configuration, and finally deprecating etcd2.
I: Switching to CoreDNS as the default, and ephemeral containers, formerly known as debug containers. There are a couple of miscellaneous node things, like supporting node-level user namespaces or RunAsGroup. There are miscellaneous scheduling things, like taint-based eviction, co-scheduling, or topology-aware volume scheduling. And then there is support for Windows Server containers going to GA.
I: So that's just the high-level summary. Let's dive into a couple of the things I wanted to ask this group specifically. For CRDs, it's kind of unclear to me whether or not there's a real push or driving force behind improving webhook conversion for CRDs, or the CRD installation mechanism. The CRD installation mechanism seemed to be something that CSI was depending upon, but that they had some sort of workaround for. I see David has his hand up — I will defer. Yeah.
I: My gut tells me that neither of these things would really affect or alter the core stability of Kubernetes. I think my main question was whether anybody had a sense that this could affect the stability of the other features that I just mentioned, but hearing nothing really objectionable there, I will move on. So, CSI going to GA: the only thing out of these 13 storage features that's listed as GA is the out-of-tree CSI plugins going to GA.
I: There are a whole bunch of other things — like inline persistent volumes, block storage support, in-tree migration to out-of-tree — and none of that is GA; it's alpha, and some of it's not even beta. So, like, I guess maybe I should have gotten Saad to show up here. Maybe the Sunnyvale folks ought to understand this, or maybe somebody here already knows: am I phrasing this incorrectly when I say "CSI is going to GA"?
I: Folks, I was pinged — I saw it. I'm just trying to sort of ascertain, maybe at a messaging level: when we talk about CSI going GA, it seems like the only real feature issue here that's going to stable is out-of-tree CSI plugins. It might confuse users if there was a message of "CSI is going GA" when it turns out there's no support for inline volumes — that this is just persistent volumes only — and there's no clear path of migration from in-tree to out-of-tree. So, yeah.
O: So, we had a lot of discussion within the SIG about this, and what we decided was that we need to decouple the various components of CSI. CSI is a massive, massive project — we started it in Q4 of last year, at least the core of it — and what we decided was that the core feature, for actually being able to create an external driver, that functionality is ready to go GA. What that means, by the time that we mark it as GA, is that anybody that wants to write an external driver can now do so against a stable API. So, the two things that you mentioned: one was inline volume support. Inline volume support is being added for two reasons. One is backwards compatibility, for the migration from the in-tree drivers to the CSI drivers.
O: So it's for backward-compatibility reasons, and then the second is for ephemeral drivers — ephemeral local drivers. Ideally, we want CSI to be usable long-term to replace, or to write, drivers similar to the drivers that we have for, say, configMap volumes or emptyDir volumes, things like that. So that is a longer-term goal; it's not a requirement for the initial release. For the initial release we want remote persistent volumes, so for that we're okay decoupling those two things. And I think there was a third one that you mentioned.
I: I think I've gotten what I need, which is — you know, part of this is a review to make sure we're not potentially breaking any existing functionality. It sounds like you're more moving external functionality to stable, and that we're just going to have to be careful about the messaging of this from a release perspective: this isn't "CSI," okay — this is a very specific piece of a very specific section of CSI, right.
O: So, CSI the protocol is going to go to 1.0 this quarter, and the implementation on the Kubernetes side for remote persistent volumes is going to go to GA. So that means you can write remote persistent volumes against a stable API. When we do the communication — the blog post — everything will have a list of the things that are still pending, and that will include ephemeral volumes, and we'll talk about the migration plans for the in-tree volumes as well. Okay.
I: Thanks, Saad. Cool — trying to move ahead quickly, since I am time-boxed here. The next one that kind of caught my eye was adding ephemeral containers. This was formerly known as debug containers; it's been kind of around for a while. The thing that's tripping me up is that this seems to be about changing the core API, and I wasn't sure if anybody from SIG Architecture was familiar with this, or whether it had gone through an API review.
I: Great, sounds good. Moving on — like I said, the thing that tripped me up was that it sounded like changing the v1 API, and that seems weird. Okay, dropping support for etcd2: I would like to call this done when we've actually gotten the entire tree of etcd out of the vendor directory. We can't do that right now, because the migrator still has dependencies on it. I don't think this is at risk — I'm just giving a heads-up that it might get punted to the next quarter.
I: The other question, similar to that: there's a feature around co-scheduling, formerly known as gang scheduling. Most of this work is planned to happen out of tree, in the kube-arbitrator project, but there is apparently some in-tree stuff that has to happen — maybe API-related. Has anybody here taken a look at that or reviewed it?
I: That was Jordan. Okay, sorry, my ears are not working.

L: That's okay — I've got the Brady Bunch view. So, just a heads-up: taint-based eviction on nodes is going to beta, so that means it's flipping from being gated behind a feature flag to being turned on by default. We have concerns that this will have scalability problems; they were raised last release — that's why it didn't go to beta last release — so we're keeping a close eye on this and may ask for it to be pulled back prior to code freeze.
I: So, that's the major time-boxed one, because I wanted to get to Windows Server containers going to GA. This has a lot of implications when it comes to conformance, and maybe I want to hand it over to Patrick Lang to just kind of walk through this, because I think you have the most context. But basically, there is a litany of tests where we need to decide whether they still make sense in the context of conformance, whether we're okay with those tests being skipped for Windows containers specifically, and so on and so forth. Okay.
N: Thank you. And also, we have Michael Michael on the call, who's the other co-chair for SIG Windows. Okay, good, all right. So, basically, over the last release we've been working on a number of changes to support being able to run conformance tests on Windows. Some of them are as simple as, you know, rebuilding some of the containers that are needed, so they work on Windows.
N: Things like, you know, the basic kitten web server and things like that are going to be recompiled, so that when the e2e tests go to start that container, they work. But in order to swap between OSes, we've already got a PR that lets you pick which repositories are going to be used for those images, because right now all the ones that the e2e tests depend on are not multi-arch images. So we basically have a set of substitute ones that are there, and for the majority of the test cases that approach has worked.
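The repository-substitution idea can be sketched roughly as follows. This is a hypothetical illustration only — the registry names, the image list, and the function are made up for this sketch, not the actual PR's flag or code:

```python
# Hypothetical sketch: resolving e2e test images from a per-OS repository,
# standing in for the PR that lets you pick which repositories are used.
# Registry names and the substituted-image set below are assumptions.

DEFAULT_REGISTRY = "gcr.io/kubernetes-e2e-test-images"   # not multi-arch
WINDOWS_REGISTRY = "example.io/windows-e2e-substitutes"  # hypothetical substitute repo

# Images the e2e suite depends on that have been rebuilt for Windows.
SUBSTITUTED_FOR_WINDOWS = {"kitten", "webserver"}

def resolve_image(name: str, tag: str, target_os: str) -> str:
    """Return the full image reference to pull for the target node OS."""
    if target_os == "windows" and name in SUBSTITUTED_FOR_WINDOWS:
        return f"{WINDOWS_REGISTRY}/{name}:{tag}"
    return f"{DEFAULT_REGISTRY}/{name}:{tag}"
```

The point of the sketch is that the test code itself stays unchanged; only the image lookup is parameterized by the configured repository.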
N: We basically need some way to either skip, or decide that we want to have, conformance tests that intentionally test OS-specific behavior — you know, with some if-def-style statements: do this if it's Windows, do this if it's Linux, do something else if it's BSD — and, you know, I'm sure the list could go on from there. So I guess kind of a meta question is: what is the right approach to handle this?
N: There's one PR that's open that proposes reusing the distro flag. It's already used by some of the other test cases to do things like, you know, include or exclude tests based on the Linux distro, and so we've got a PR open that will basically skip these tests I mentioned if the distro is set to the value "windows." So that's kind of one approach. I think at this point I'd like to turn it over for questions.
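The skip-by-distro approach can be sketched like this. This is a hedged illustration — the helper name, the flag value, and the test names are assumptions for the sketch, not the real e2e framework API:

```python
# Hypothetical sketch of skipping OS-specific tests based on a distro flag,
# in the spirit of the PR described above. Test identifiers are illustrative.

# Tests that exercise Linux-specific behavior: env var semantics,
# tmpfs-backed emptyDir, POSIX file-system permissions, and so on.
LINUX_ONLY_TESTS = {
    "emptydir-medium-memory",   # assumes a tmpfs-style memory-backed filesystem
    "fsgroup-permissions",      # assumes POSIX permission bits
}

def should_skip(test_name: str, node_os_distro: str) -> bool:
    """Skip Linux-only tests when the suite targets Windows nodes."""
    return node_os_distro == "windows" and test_name in LINUX_ONLY_TESTS
```

One design question this raises, echoed in the discussion that follows: a skip list answers "can this suite run here?", but it does not answer whether a skipped test should still count toward conformance.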
E: So, I can speak to the conformance process quickly, to distinguish between what is the brainstorming discussion here and what is the process for conformance. The process is: you can submit pull requests to change the conformance tests — that changes the list of, sort of, gold accepted conformance tests — and there is an approver process for submitting that PR. I think Clayton and Brian are in the owners files there at this point. So that is the mechanics of changing the conformance tests. The second mechanism is when you submit conformance test results.
E: There is a space to put in how you achieved those results, so that others can reproduce them, and so there is a mechanism for someone to use reasonable judgment to say: yes, this makes sense, to run these conformance tests. So I think it's useful to preflight and get guidance in this discussion, but as it goes forward, that is what's actually written into the terms and conditions of the conformance program. And finally, yeah — whatever the outcome, it may not be "conformant Kubernetes," but that might be messaged with a caveat.
I: There's not adequate time to do that, so — correct. And also, this is really tricky; I don't know how to deal with this, because generally our rule is: if there's any skip of anything, for any reason, it shouldn't be considered a conformance test, because it can't work the same way everywhere. Some of these seem to have to do with the way environment variables work; some of these have to do with the way file-system permissions work.
K: Yeah, I just wanted to clarify — I haven't been involved in detail in the conformance process, but there has been discussion about multiple layers of conformance, and that would seem to address this problem. So, you know, if you want to run your program on any Kubernetes cluster, then it can only use the features that are part of the base-level conformance. If you know you're running on a Windows Kubernetes cluster, then, as long as it's conforming with that suite, you can run it on those ones, but not on all Kubernetes clusters — and vice versa for Linux. Do we have a plan, or actually a structure, for that yet? — No, we've avoided that discussion — that program, or badges — as long as possible. I mean, it sort of seems inevitable that we have to have that, and it seems like that's the way out of all these kinds of problems. Well, not —
P: Yeah, I was just curious, because Aaron was talking about, you know — is it a matter of just running OS-specific versions of Linux commands? I apologize, I haven't had a chance to fully read the PR yet, but does it talk about all the different types of variants in there? Like, how many of them fall into that category he mentioned, versus something that you just can't abstract away — so we have a good sense of how big the problem is for each of the categories?
N: I don't have the category list in front of me, but I could basically add a top-level bullet point for each one. Like, another one I just realized: there's a conformance test that also depends on, you know, tmpfs, which is again very Linux-specific — like, I don't see why the same test case needs to run, you know, two different times to test the file system. So, things like that. Okay.
C: Tim here — a bunch of thoughts; I'll try to keep it short. I'm very wary of trying to produce an abstraction across all of these things; I think that way lies madness, and also it would be such a huge set of API changes that I don't think it's a good starting point. I want to be careful that we don't conflate the idea of a profile, which has sort of functional and semantic meaning, with the architectural differences, which I think are pretty fundamental to the understanding.
C: What I don't have in my head is a clear picture of where the architectural differences make semantic differences, or make semantic conversions impossible. So the tmpfs example is a great one: I'm assuming that Windows has an equivalent of the idea of a memory-backed file system, it just doesn't call it tmpfs. If the answer is no, that's a much deeper question — that's a pretty important feature that a lot of people and other subsystems use. Can we make Kubernetes work without such a thing? So —
C: That makes sense — maybe it's time to re-examine conformance profiles. So, for instance, there's an open issue where people are saying we can't skip the DaemonSet tests for non-destructive updates in single-node clusters, but that test doesn't make any sense for anything that's not multi-node.
C: If you support DaemonSet and you don't support non-destructive rolling updates, you're really out of conformance, so I don't want to get rid of that — in the same way, like, I wouldn't want to get rid of the tmpfs testing, for the reasons Tim just said. But, you know, maybe that doesn't make you completely non-conforming; a profile would help there.
E: To respond to a couple of points: the intention and goal of the conformance program is to make some guarantees to end users about the portability of workloads and the consistency of the behavior of Kubernetes as a system. Where there are dependencies on implementation details and specific operating systems, that could just be a gap in the conformance tests today, and as long as the API surface remains consistent and workloads are made portable across them, that is in the spirit of the conformance program.
E: Have that debate initially in the SIG — or working group, I guess it is — and come up with a proposal, and then bring a unified message from that group: "we've considered this, and this is the direction we think we should go." That would become a proposal to SIG Architecture. Again, that's going back to more of the process and how to make the distinction. Okay.
M: I love having the last word. I have two thoughts about this. One is that, rather than getting down into the minutiae, I think the right approach — and this is sort of what was said — is to really focus on the principle of least surprise. Right? Like, if I am running a Windows container and I am specifying environment variables, the fact that they behave slightly differently than if I were running a Linux container and specifying environment variables — I don't think that's going to surprise me, right? Because, like, I understand that it's a Windows container. And I think that we cannot — we should not — have the goal that it looks and feels exactly identical between the two systems, because everybody understands that they're different. So I think that we should really focus on, effectively: is this test minimizing surprise for the end user, or is the test just validating a behavior that is OS-specific? That's point one. Two: in the spirit of making progress towards GA —
M: It may be that the most effective way to do this is to say: hey, you know what, we're going to mark hybrid clusters as conformant, and we're going to defer conformance for Windows-only clusters, and the modifications that we will make to the tests are simply to add labels so that, you know, they force those containers that are Linux-specific onto Linux hosts. I'd sort of split the problem into: can we produce a conforming hybrid cluster, and then, can you later produce a conformant Windows-only cluster?
E: I think that sounds like it's purely additive — the cluster itself passes the conformance tests; in that way, that seems reasonable, and then you're adding additional functionality to it. I don't think there's any constraint on that that I remember from the terms of service, so that seems like a good path.
C: Can I ask one related, but not exactly the same, question? — If you make it quick. — Is there, or can there be, a group put together to try to work through the places where the API exposes things that don't work on one architecture or the other, and try to come up with a proposal to reconcile that? I really hate having parts of the API where we say "you just don't use that on Windows; it doesn't work at all," I think.
C: Least surprise — I would love to see that analysis done. Like, I'm surprised to learn that there isn't a tmpfs sort of abstraction in Windows; that's part of our API surface area. Is it the case that if somebody specifies a volume with medium Memory, it just will never work on Windows, or is there a different implementation? Just to pick on one example — and we don't need to answer here — but I would love to see somebody go through it, sort of, piece by piece.
A: So, let's move on — this is definitely a good talk; a lot of good things here that we should queue up for future discussions. So, Daniel, do you want to hit feature gates real quick and then hang out?
D: The question is basically: does a feature gate prevent new usages of the feature, or does it also disable existing usages of the feature? So the scenario that we're thinking of is: I enable the feature gate; some of my cluster users use it; I discover some problem, which may or may not be related to the usage of the feature; I want to disable the feature gate again. Those objects or resources, or whatever it is, that started using the feature — should they stop using the feature, or should they continue using it?
L: I think there are basically three states. There is using a feature gate to prevent data and code related to a new feature: that is, "we're working on this, it's alpha, it's not stable yet, we're not going to enable it in supported clusters, we're not going to allow data in, and we're not going to exercise any code related to this feature." That's a very easy state to reason about — nothing in the system is exercising this feature.
L: Then there is the fully enabled state: we're going to let the fields be set, we're going to let controllers and code make use of those fields — and that's pretty easy to understand, and if you only roll forward, then that works. The issue is when you want to roll back, and you want to turn off the code that's dealing with this feature because it's causing problems of some sort, but you still have data set in the system. And so that's this sort of third, untested path. In other words, there's —
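The three states, and why the roll-back path is the awkward one, can be modeled in a small sketch. This is purely illustrative — the gate name, field name, and storage model are hypothetical, not real Kubernetes API machinery:

```python
# Hypothetical sketch of feature-gate semantics: disabling the gate stops
# new usages, but data written while it was enabled is still in storage.

store = {}  # stands in for etcd: object name -> object dict

def create(obj: dict, gates: dict) -> dict:
    """Admit an object, dropping the gated field when the gate is off."""
    if not gates.get("NewField", False) and "newField" in obj:
        # Gate off: new usages of the feature are snipped on write.
        obj = {k: v for k, v in obj.items() if k != "newField"}
    store[obj["name"]] = obj
    return obj

# Fully enabled: data gets in.
create({"name": "a", "newField": 1}, {"NewField": True})

# Gate turned off again: new writes can't use the feature...
create({"name": "b", "newField": 1}, {"NewField": False})

# ...but object "a" still carries the field. Controllers that no longer
# exercise the feature must decide whether to honor, ignore, or strip it —
# the untested third state described above.
```

The sketch shows why "disable the gate" alone is not a rollback: it changes future writes, not the schema state already persisted.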
C: — been a schema transition, yeah. What would you actually do to roll that back? Like, if you've changed the object in the underlying storage, are you going to implement logic that tries to mutate the object again under the hood and roll back the change? This is —
L: On the feature itself, I'm less concerned about preserving the exercise of this new feature, because the feature gate got turned off — the goal of the administrator is to disable this code, to make it stop. I am concerned about that field-snipping leaving invalid objects — say, a union type where you have a new storage type, like CSI, and you submit a pod, or a PV or PVC —
E: I'll just give feedback that, as the operator of many clusters, my goal in using feature gates is to distinguish between the binary rollout and the feature enablement, and if that feature goes wrong, I would like to be able to roll it back. And it doesn't seem that either the way we're proposing to turn the feature on and off, or the binary rollback, solves the problem — so I don't quite see the value of what's proposed, given my own use case. That's all.
C: I think it's just a reality of roll-forward/roll-back. Ken and I had a discussion on this just yesterday — the evolution of versioned APIs, and whether there's actually any sort of change at all that is completely forward- and backwards-compatible safe. And the answer is no, that's just not the case — I mean, we feel that more and more. There are some, but micro-versioning isn't workable, I guess. So —
C: We can bump mine to next week. My action item from last week was to survey all the open KEPs and bring them to this group for some high-level review. There are 52 of them. From that, I spent a good chunk of time this morning reading over just the subjects and sort of looking at what some of them were. It was really hard not to get dragged into reviewing each of these things, because some of these are really interesting ideas.
C: So the more obvious ask: please go through and look at these things, and if you think we're not going to do them, please suggest that we close them; and if we are going to do them, please put your feedback in and get people unstuck — it's super valuable. There's a ton of really interesting proposals out there that I didn't even know were in flight. And the other question is: what is blocking us from moving KEPs to a separate repo? It's incredibly painful to find all the KEPs.
Q: The thing that's blocking it is that I don't want to do a bunch of gymnastics to slam in the tooling code as well as the content. So I'd like to get a bunch of KEPs merged first, and then extract all the KEPs into their final home alongside the tooling, once the tooling is ready for that. So —
C: What if we just moved things now and cleaned up the junk ones? If people really care about them, they would reopen them in the repository anyway. I would like to say: if there's any way that we can separate that, and take things out of the blocking path, it would be really beneficial to us and our ability to find, pay attention to, and help steer these ideas. And then the last point I'll note is, of all the KEPs —
C: Tim, would that have helped with your review, do you think? — Yes, although the preponderance of the ones I looked at would not have fallen into that. I have personally told people who've written KEPs: I don't think you need a KEP for this; if this is entirely within your SIG, go do whatever your SIG does. Yeah.
A: Thank you so much, because it has been me, and me alone, doing all of this management now for weeks and weeks and weeks. This is one of those many very unglamorous, time-consuming, horrible things you get to do as a chair — you get the benefit of being a chair, which means you get to do all this work. Moving to the next topic: I would love for all of us to step up and be chairs.
A: I would love for every member of the SIG to be a chair, actually, so that we can take care of these things. I have a list of things in the charter and the agenda that are sort of the housekeeping things that happen every single week: all of our project boards — there are four of them — need to be curated; all the KEPs need to be reviewed; we need to proactively reach out to people with reviews that need to be done in the SIG and get them on the agenda.
A: We need to make sure the agenda is filled in — all these things. You don't have to be an official chair to do any of this. Just make sure that you leave a trail of breadcrumbs, in terms of assigning KEPs to yourself, or whatever it is that you need to do to let people know that you're doing it. But, you know, there's been some consternation about —
A: — you know, affiliation, who's with what company, etc., to be a chair — and honestly, it's just whoever does the job. I don't care if you're a cat: if that's an extremely talented cat that can do this, you'll be a chair. Let's spread this load out so it's not so onerous, and make it really what it should be, which is a sort of servant-leadership position. So please, everybody, pitch in — it would be super helpful.
A: And lastly, review the charter. Can I say one more thing about the charter, too? It's very stripped down, and that's intentional. The steering committee has worked very hard to create a fungible set of values and practices across all the SIGs, so that we're not dealing with a bunch of variations that make it hard for people to contribute and interact with SIGs. So there is a value placed on trying to follow existing norms, and the draft that's out there right now —
A: — the charter — is very much in alignment with that. And it's something that will evolve over time; it is not cast in stone. So let's get an initial charter in place, which is better than nothing, and then we'll improve on it over time. You know, let's use kaizen in our own work — let's do this the right way and just get stuff, you know, moving forward.
G: I hear what you say, but, like, you know, we've already seen other SIGs where they've just picked chairs without consulting the rest of the SIG. We won't do that — chairs should not choose chairs — but that's the way the docs are written right now. So I think that's something I want to take up at the steering-committee level, in terms of the template docs, but I think, you know, SIG Architecture should at least model fixing that particular loophole.
D: The general point — or the general version of Joe's point — is that governance needs to have a self-correction mechanism: if it goes off the rails, something needs to fix it. But there's one additional point, beyond that, that you have to think about, which is that the self-correction mechanism also needs to not be an attack vector.
I: Oh yeah — I think the existing language talks about a supermajority of members, but "member" is qualified as chair, tech lead, or subproject owner. If you turn that into just "a member of the SIG," that'd be great, but then that means we have to figure out how we establish SIG membership. But I think that's less of an attack vector than supermajority — I don't know. Well —