From YouTube: Kubernetes WG LTS 20190219
Description
https://git.k8s.io/community/wg-lts
Meeting agenda / minutes:
https://docs.google.com/document/d/1J2CJ-q9WlvCnIVkoEo9tAo19h08kOgUJAS3HxaSMsLA/edit?ts=5bda357d#bookmark=id.udhx5g1vxxls
A
All right, I'm going to go ahead and get things started. This is the February 19th, 2019 meeting of the working group talking about the support stance of Kubernetes and exploring that problem space. This is a community public meeting that's being recorded; we'll post it to YouTube, and we ask that everybody adhere to our code of conduct. I've just pasted the agenda doc link into the Zoom. I'm guessing most of you have seen it since you're here, because that's where we have our Zoom meeting link, but just for reference.
A
We've got a couple of things on the agenda today: talking about Tim St. Clair's initial brainstorming on a potential KEP around stable versus devel, then Justin Santa Barbara talking about dependencies, and then Noah to talk about the survey. So we'll go ahead and start with Tim St. Clair, and maybe give 15 minutes or so to talk about this. Okay.
B
So I linked to the formation of a KEP and am starting to highlight some bits, and as I talk, feel free (everyone should have edit rights, whoever has the link): if you see aspects of the doc that are missing, feel free to amend or add them as I start to talk about them. But the goal of this conversation is to outline what I've talked about many times before: a devel/stable series. And I'll give you some context on the history around this and why.
B
So there are a couple of key constraints that exist today. One of the key constraints is that users and administrators of Kubernetes have a desire for a longer-term support life cycle, and currently that's not really supported; they're forced to upgrade every nine months. That's one of the key constraints.

Within a devel series, or stream, if you want to call it that, you basically have a monthly cadence: as long as it meets the minimum bar, you release every single month, all the artifacts are published through a devel channel, and as part of that monthly release process you give notifications to a certain set of VIP or MVP people that you care about, who test your bits at regular intervals. The purpose of having these sorts of VIP customers or consumers is that you give them some benefits in the community: you give them speaking engagements as part of KubeCon, you talk to them early and often about the new features that are coming down the pike, and you basically incentivize them to want to install and beat on these different new features that are coming. So every month or so, once it meets the minimum criteria, it's always released.
B
The purpose of this, too, is to force automation, because if we release this often and it's not a completely automated system, you'll be backed up against people. Then, at the end of a 12-month cycle or year timeframe (the timeframe could be determined by the release team, but ideally after a year) you start a new stable series. All it is, is a rebase: you start your stable series and go through a hardening window.
B
The purpose of the hardening window is basically to ensure that all the feature gates are twiddled properly, the state of features for a given release across the entire devel state (alpha, beta, whatever), and you also verify that your migration path is clean from stable series to stable series. Usually, in my history with other systems, this has taken on the order of a month or two on average, in part because there were requirements as part of hardening.
B
For example, you had to have your docs in place; you weren't allowed to release until all of these other criteria were met. And once that occurred, you'd have the new stable series. So then there's often a question: how many different stable series would you support? Historically, with the other systems that were part of the grid where we did this, it was a two-year window, so you would have two stable series in support at any moment in time.
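The overlapping-series arithmetic described above (a new stable series roughly every year, each supported for about two years, so two series are in support at any moment) can be sketched in a few lines. The cadence numbers are just the ones mentioned in the discussion, not a settled policy:

```python
from datetime import date, timedelta

SERIES_INTERVAL_MONTHS = 12   # a new stable series roughly every year
SUPPORT_MONTHS = 24           # each series supported for about two years

def months_after(start: date, months: int) -> date:
    """Approximate month arithmetic (30.44-day months is close enough for a sketch)."""
    return start + timedelta(days=round(months * 30.44))

def series_in_support(first_cut: date, today: date) -> list[int]:
    """Return the indices of the stable series still in support on `today`."""
    supported = []
    n = 0
    while months_after(first_cut, n * SERIES_INTERVAL_MONTHS) <= today:
        cut = months_after(first_cut, n * SERIES_INTERVAL_MONTHS)
        if months_after(cut, SUPPORT_MONTHS) >= today:
            supported.append(n)
        n += 1
    return supported

# With yearly cuts and two-year support, exactly two series overlap
# once the second series exists.
print(series_in_support(date(2019, 1, 1), date(2021, 6, 1)))
```

With these parameters, any date after the first year sees exactly two overlapping series, matching the two-in-support model described for the grid systems.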
B
The one thing that we had in the past was criteria for which bugs were allowed to be pushed through a stable series. We don't currently have this; it's very subjective as it exists today. But basically only CVEs, breaking things, or known regressions were allowed in as bug fixes as part of a stable series. Everything else had to go through devel; the train kept on running.
B
So if a person missed a series, well, they're just going to have to wait a while before they can get the fixes for those things. And you had to go through a process when taking a bug fix, to make sure that it was clear and clean and well documented as part of the release errata. We do some of this today, but it's not necessarily the clearest process in the world, and sometimes people circumvent it in different areas.
C
I have a question, if no one else does. I was just wanting to understand the practical deltas of that arrow that you've been pointing at a lot, the one that goes from the devel series, yeah, that arrow there. So basically, if I understand correctly, you're transferring the devel stream to the stable series, or whatever we call it; you're simply creating a branch, and it's exactly the master branch at that point. The two are identical, and then you're applying what I think you estimated to be a month of hardening.
C
The same argument goes for those monthly releases at the top there. I mean, are those monthly releases actually materially different from what's at head at any point in time? Devel is head, pretty much. So the only difference is you have these monthly releases of head, which are somehow different from just picking head at random; these are somehow blessed monthly releases of head, right?
B
Yes. You have to pass certain criteria: a minimum bar must be met in order for you to actually do the release, so it has to pass a series of tests. You could have a bunch of commits in between here that actually break something. Also, another stipulation that was kind of overlooked, and which is going to be written down, is actually listed in the doc.
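As a minimal sketch of the "minimum bar" idea, a monthly devel release would be cut only when a defined set of signals is green. The job names here are hypothetical, not actual Kubernetes CI jobs:

```python
# Hypothetical gate: cut the monthly devel release only when every
# required signal is green. Job names are illustrative placeholders.
REQUIRED_SIGNALS = ["build", "unit", "e2e-conformance", "upgrade"]

def can_cut_release(signal_status: dict[str, bool]) -> bool:
    """A release is allowed only if every required signal reports green.

    Missing signals count as red: no data is not a pass.
    """
    return all(signal_status.get(job, False) for job in REQUIRED_SIGNALS)

status = {"build": True, "unit": True, "e2e-conformance": True, "upgrade": False}
print(can_cut_release(status))   # a single red signal blocks the release
```

The point of the gate is exactly the distinction raised in the question: a blessed monthly release is not a random commit from head but one where the bar was actually checked.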
C
Okay. I guess my overall feedback is that it feels like there isn't actually a big material difference between the essentially three, or four in fact, different kinds of releases that we've identified. So there's the stuff that gets into head, which goes through all the CI, the blocking jobs and everything else, so there's a barrier there. Then there's a not-very-much-bigger barrier, by the sound of things, to getting into the monthly release, because in practice that's basically going to be whatever head looks like that month.
B
But you also disable things: you basically reset your feature gates, right? The purpose of hardening is that you set the criteria for the features for a given release, as well as making sure that documentation is in place for that release cycle. So it is different: the configuration could be vastly different from any devel configuration.
B
So the whole purpose is to get feedback signal early for features that are under development, because currently the only feedback signal we have is from developers themselves, which is notoriously bad. We release a feature and then we kind of do this promotion cycle without necessarily having direct feedback from initial customers, clients, or consumers. Would we get better feedback, or would we only end up getting feedback from early adopters? It depends how you structure the VIP program for this.
B
Condor was one of those key pieces, the layer that was part of the tool chain, and we had this process that evolved over 15-plus years. Condor was installed on thousands of machines across the entire Open Science Grid, and we needed a way to signal to those folks that they could test out new features and federate workloads to those new versions, to get signal back to the developers. But for their core clusters, they could always rely on the stable series. So that gives a little more context or background on where this comes from.
D
Speaking as someone who actually does use it, I'm more of an end-user than a developer. The thing that I like about this is that it makes the distinction clear between what devel and stable are; you don't need to read much documentation to understand what's going on. I think the problem that I see is that, for Kubernetes, there are so many different surfaces that have versioning that, I mean...
D
Maybe this is something from your previous experience, but there are so many different things that have versioning, a whole bunch of them; there are sort of the deprecation policies around object types and stuff like that. It feels like that will be difficult to map into this sort of design.
B
So I actually patently disagree with that, but that's okay, we can talk about it. The whole purpose of this is to determine and get feedback on whether the APIs that you're creating are sane, right? And also, if you're developing against Kubernetes proper, you get an early-warning feedback signal along the way as to whether or not you should actually promote an API. So, on how API promotion currently works, and Jordan's on the call...
B
...like, we tried to rush promotion of CSI this cycle, right, and there was no early warning signal along the way of "this makes sense" or "this is the right thing to do" based upon feedback. This was just done arbitrarily by developers, because they wanted to get into a known state for consumption. If the consumption model were ahead of time, where a person could actually use it before they bought into what this new API looks like...
B
...they would be able to shape the state and understand the warts that exist with the feature, and promotion would occur only after you've gotten the signal back. So you could actually say: this feature clearly is not baked, given the feedback we got from the wild.
D
Okay, yeah, sure, that sounds quite reasonable. It was more just, I mean, we currently use, and in the past have used, a whole bunch of stuff when it was alpha or beta, and we've been okay with changing stuff as we used it, because we wanted to use it straight away. We actually haven't had that much trouble with the API schema version changes in general. But yeah, so I'm not, like, hard against it or anything; I was just asking.
E
There were two points that I wanted to mention. One: if the stable version always has pretty drastically different configuration from the dev version, then, well, some people look at dev as, or their expectation around dev is, "I run dev, and if it's good, then I should be good on the next stable version," and this approach wouldn't necessarily validate that. If you're running dev, that might have no relation to the next snapshot stable version that comes out.
E
I guess that's currently true as well; it's just a matter of good communication around what we mean by this. Yeah, I agree with the point about being able to change or add or remove APIs, especially in more drastic ways, and kind of having this dev stream where we have more freedom. I guess it sounded like, if we're going to make bigger changes, we need more separation than just config on/off. Like, we still have issues once we ship a stable version that has types with these fields and structs.
E
We still can't change the structure of those field names. So some of the issues around the way we promote APIs and things are mechanical, given the way objects are encoded and decoded and so on. Even if we left something in dev for two years and never enabled it in stable, if there were stable versions that had those types with that structure, we couldn't change the structure of those field names. So some of it is mechanical.
B
So it applies pretty generically. It's very analogous to the idea of a Fedora/RHEL series, where you're constantly checking out and testing the latest versions of what you're trying to get out in the wild, and RHEL is the downstream model: these are more tested, validated configurations that we have some level of guarantees on that we're giving you.
F
Like you, I've worked on Linux; I also worked at Mozilla for a while, and my experience is that having defined developer, alpha, whatever-it-is streams doesn't really change who tests the development versions. It's still your developers and your redistributor customers who do pretty much all of the actual testing; it's very hard to get end-users to pay any attention to them. I'm not saying it would be any worse than the current situation, because we have that situation right now with alpha features, but my experience with other projects is that it also wouldn't be any better.
B
It's all based upon incentivization: who is incentivized to use devel things and test them? So a lot of the time you're totally correct. There are sort of high-value targets: the developers have a tendency to use devel, but certain customers who bleed on the edge, or who want certain features, also have devel in their environment. Without incentives, though, most people will still be using stable, because they want a stable target.
B
The whole premise of getting value out of devel is entirely based upon actually having people consume devel, and incentivizing that structure so that you get the early warning signal that helps to shape what should be in the next stable series. Without that, this model is essentially worthless, to be honest.
A
On Jordan's comment, I'm wondering if implicit here is some slight difference, and whether that becomes a problem, between what alpha and beta mean on devel and what the same code means once it's branched or rebased into the stable series. Once it's there, people could use it in devel; they're supposed to, and the things are meant to be under development and changing. But that same code, having been pulled into stable at that sort of forking point, the downward arrow: is it going to explicitly be unsupported there?
B
And yes, that's true, but that's an expectation thing that has to be documented. Part of the hardening phase should be documenting what features exist and what the differentiation is: why was this thing enabled, why was that thing not enabled, what things got promoted from alpha to beta. It's very analogous to what we do, but I would actually say stricter. We kind of YOLO it in many ways; in fact, we totally YOLO it. We try to do our best guess.
B
That's what's there today, but I think as part of formalizing the hardening phase, you don't release until you've actually had people dot the i's and cross the t's, because there are many scenarios today where we release and the documentation is wrong, the feature state is wrong, the state of things is wrong. This has happened across a number of releases.
A
So last year we made a shift from saying that beta could change, to saying that you aren't changing the APIs: beta is supposed to be stable, maybe not in terms of bugs (maybe there are still bugs), but in terms of APIs. So would this practically mean that if something had moved over to the stable series as a beta symbol, foo, and then it needed to change, we would throw away a whole set of code and duplicate it because somebody had started using it? Like, would we really go that strict?
A
Stable, yeah. So, in terms of APIs: symbols and APIs in alpha are implicitly changing, right? And beta, they used to change, but Tim Hockin sent an email last year saying that because we want people to start using them, we're going to give a stability statement on those APIs: we will not change them once they hit beta level.
B
Yeah, pretty much, for the first phase; there might be bug fixes that get merged through. Once you actually go through the hardening phase, would you actually do upgrade tests? There are no guarantees for upgrade tests in this scenario; the bar is lower. As part of the devel series, you can feel free to break things, and that's okay. So if you are in this state and you want to get signal, you could have the signal be red for a period of time.
B
So you can't even consume head. You can consume pieces of head, but you can't actually deploy live clusters of head, not without a lot of work. Today, all the artifacts aren't continuously published from head, right? So if you go to the master branch, you can't actually... well, you could build a cluster, but it's a lot of sweat and work.
C
Maybe I need to clarify my understanding. Do we or do we not, I hope we do, actually run a bunch of tests, some of which, at least on a fairly regular basis (as in daily), involve building a cluster that actually works and that you can run tests against? I believe we do; Justin's nodding. So that, to me, constitutes a working cluster. I mean, it's not broken in the sense that it doesn't come up, or that you can't run a few hundred tests against it.
B
There are layers to this onion. There are PR-blocking jobs, which give you a certain level of signal. Then there are periodics, and then there are upgrade jobs, which are periodic but only really checked once you get closer to the alpha/beta release category. So today the periodics only run, and you only get signal from them, after a PR has been merged, and oftentimes...
E
True, though those do get attention. There is a small set of them that gets higher levels of attention, and then a long tail that gets very little attention. It seems like for the upgrade tests, when those are red, issues do get opened and do get resolved. Right now, for the upgrade tests, the most recent green run was today. So I'm not disputing that they are flaky, and I'm not disputing that there is potential for breakage, but they do gate the alpha releases, and issues do get raised.
A
Sorry, some of these are less periodic, and you have things that can stay red for a couple of weeks quite commonly. So the really simple statement that Quinton made, that at any time, more or less, it's releasable: for something you care about, that's quite likely not true for somebody out there. If you're doing something simple, it's probably true most of the time, within a few weeks, yeah.
E
That seems to me to converge toward the description of the devel branch, which has a period on the order of weeks, month-ish. There may be some series of commits in the middle where stuff was broken or had a side effect we didn't anticipate, but we do have some level of signal that, at a three-week or one-month, order-of-weeks period, we can say: yeah, the upgrade job is green, the periodics are green, at least the ones we consider release-blocking.
A
The really nice thing that this potentially does, by shortening that window, is get us closer to the goal that Quinton, I think, was looking for: that master is always releasable. One of the things that people have said shortening the cycle does is force you to care sooner, because you have less stabilization time. It's not "yeah, next time, in the stabilization period"; it's much more immediate.
E
One of the questions I would have is: how is this different from the existing CI job that enables all alpha features, plus our alpha release cadence? If the alpha CI job that enables all alpha features is green, and our periodics are green, and we cut an alpha release, maybe more regularly, maybe every three weeks or every month, how is the devel branch different from that, I think?
I
The question, I guess, was: in some ways devel, or at least the monthly or quarterly releases off of devel, will have the characteristics of upgradability between versions that releases have now. What are we willing to say about upgradability on snapshots of stable, or between stable generations? A strict...
B
Sometimes, I would say, it's artificial; it becomes a forcing function, for sure. I think the one thing it does is that the hardening phase that is part of becoming stable forces you to really think about what promotion means, versus forcing the promotion based upon signal from the wild. Because right now we've promoted a bunch of APIs, and are in the process of trying to promote more APIs, based upon developer perspective, very early in the life cycle.
B
If you look at the sheer number of knobs that exist on the API server, that's from an obscene number of features. So that level of feedback, to understand "this feature doesn't work for me" or "it's not exactly the shape or orientation I expect, to make it useful", that feedback, and having those signals, is useful. But again, this whole premise is entirely predicated on incentivizing people to use devel and having all the artifacts there.
B
The devel series just goes on and continues regardless of stable; it just keeps on truckin' based upon whatever the needs are. So if a person, a team, or a SIG wants to do features, they do it regardless of the time window of stable. I think the one benefit here is that there's no lockdown: master, or devel, keeps on truckin' regardless of everything else, and the branch then goes through a hardening phase.
A
To put this in maybe a little more explicit terms: I think what I'm hearing Noah describe, I've definitely seen in the past in the Linux world. Having worked for a couple of hardware vendors, there was pressure to get enablement code into the Linux kernel ahead of a given RHEL release that was going to pick up a given kernel version. So you had a push to get early code in, often even before hardware availability, so that it was there, and then you would debug it later.
A
I think that's a normal industry thing that happens, and the answer that I would give to Noah is that at that point you need maintainers on those core projects who understand that this happens, who push against it, who have clear, established quality criteria, and who really are prepared to say no in the face of somebody who really wants the answer to be yes. And that somebody is somebody with a lot of dollars relative to the project, so the situation can be very awkward.
B
To give some context here: in the past, in the grid landscape, it was very common to have sites with multiple clusters, devel and stable series clusters, because developers trying to target the next series wanted access to the features in order to program against them and get ready for the next stable series. That cadence gave developers time to vet their development of a given feature, or their leveraging of a given feature, time to understand it and to write their code against it.
B
So when the stable series hit, their project, which was dependent upon the core, could then be released along with the core. If I'm a downstream developer of some kind, creating a new feature based upon the latest CRDs, and I want to make sure that everything is compatible and working properly, I'm constantly using devel on an actual devel cluster to get it up to snuff as part of getting ready for the stable series.
A
So this has been a good conversation. I want to make sure that we have time to talk about the couple of other things on the agenda. We've got sort of a set of notes here in the meeting agenda, and the sketch of an outline in the Google Doc that Tim St. Clair had linked, and we can work on filling in more details there, but keep the conversation going a little asynchronously then.
G
How do we, as a project, try to offer that? What does it mean to have a... like, we weren't even really hitting a nine-month support window right now if there is no way to continue running that cluster for more than, say, three to six months, because your dependencies are no longer supported and are no longer getting fix patches. I think it's a question of: what are we really saying?
E
So there are two ways you can use Docker, right: you can point the kubelet directly at it, and it uses it via the dockershim, or you can point the kubelet at a CRI socket, and that socket implementation can talk to any container runtime; Docker could be one of them, right? Yes. Which version was...
F
You know, even at this point, when we're looking at something like Docker: yes, 90% of users are using Docker as their container runtime, but it's not the only container runtime we support, and even for Docker, as Jordan pointed out, there are two different ways to install it. So I think, for this kind of support window, we need to carve out an exception where, if we have a dependency that breaks, even for a purported LTS release, we will either break compatibility on the...
A
I think a portion of the answer... like, I agree with you, Josh, that it's probably not possible to make a hard guarantee that all of these are going to work for the set time, just because there are so many, and they all have their own schedules, or not; there are things that we depend on that don't even have formal support systems. But for certain key components like this...
A
...I think the right solution is kind of already happening as a result of the CVE: the folks in testing and infra were discussing, "hey, for 1.14, which Docker should we be on, so that its support stance, which is public, will actually run through 1.14's support lifetime?" So for some things you can have visibility into that and make a choice: okay, we should push this component forward so that we're testing with something that will have the right longevity.
B
Part of why we give the e2e test suite out to the wild, part of why we publish these artifacts, is so that people can take their configuration, whatever incantation and mixture they've chosen to create a cluster, and run the same set of tests that we run for a given release. And if it doesn't pass them, you know, that's up to them, and that's where the vendors have to come in and step up, right?
B
The problem with both Docker and the cloud providers is that they aren't completely decoupled yet. I know Dims is currently working on the dockershim part for the kubelet, but there's also the cloud provider extraction, which has taken eons. So I don't know if it's a promise now, or some political promise, but in an ideal world we'd actually have a timeline for when that actually comes to fruition.
G
But it does seem like we should have some notion that it is possible to have a Kubernetes cluster that will continue to survive without relying on some vendor, even if that down-scopes what Kubernetes is, which we sort of are doing when we say we're expecting out-of-tree providers, for example. I very much like the idea of taking dependencies and requiring them to be supported for longer than the Kubernetes support window.
F
Or, as a contrast, I like the idea that, if we're looking at an LTS release, the LTS applies only to defined APIs, CSI, CRI, the cloud provider interface, and that any in-tree dependencies are not included. It would give a real incentive to the people who have not been putting a lot of effort into extracting things to actually give more thought to it.
G
That would be good. We could also just, in future, make sure that we have forks of our core dependencies. We are gradually getting there anyway: we used to have a bunch of external dependencies; now I think we will shortly have CRI under our control, etcd under our control. There's not a lot left. I think we just need to embrace that and say we will have a working set of dependencies that we will support for the support window. If you're talking about the...
F
That's maybe the question: should we? And my answer to that is no.
B
It depends upon what the contract is, one of the claims that you're giving with a release; the requirements will feed down from whatever we state. So if the requirement is that we should support some stable/LTS version for a time window of X, then we actually need to be able to back that up with tangible, real progress and an ability to do so. Otherwise, it's a false promise, right?
D
I suppose, then, the things around CRI are like an implementation detail of how we deliver the contract that we have said we're going to deliver. Maybe that makes it easier for the project, because there's less change surface to manage, and then you're giving a clear demarcation point of where the change surface exists, I think.
B
The answer to solving this, so we're not in the world of backporting everything forever, is for providers to beef up testing automation as much as humanly possible, because they should be incentivized to do so. Without that, somebody in the core is going to have to maintain backports, and who exactly is that going to be?
G
I think we should have a set of providers create a conformant Kubernetes cluster that has a support window compatible with whatever support window we want to do. You need an existence proof, in other words. I think if there is no way to create a Kubernetes cluster that lasts more than three months before it is completely insecure, then it's a little bit of a lie to say that it lasts even nine months.
F
But again, we can't be responsible for the providers, right? You can't build a cluster without having a CNI plugin, and having storage, and that sort of thing. But imagine that we choose Weave as our CNI plugin for a demonstration cluster, and then something happens to the vendor and they stop making releases.
F
Are we going to take over their work? I don't think so. (Well, you might.) Exactly, and I disagree: I think what we're actually supporting is the API; the provider is incidental, and we might have a demonstration cluster, but we reserve the right to change the providers in that demonstration cluster at any time, I think.
A
We get to the point of saying that what we do in our core tests is a reference implementation, or a reference set of implementations; there's going to be some mutability within that, but that's, in those words, saying "this is a reference implementation for our core tests." We're not saying that we make a distribution that is sort of the pure, pristine upstream that one could solely use without a vendor in the mix.
M
Part of the problem there is that there is a wall of technical debt in the testing infrastructure, so any new test you want to add is difficult, and there's currently refactoring all over the place. Also, you cannot test everything: basically, the landscape of cloud is so complex that the testing matrix is impossible.
E
I don't think the answer is us taking over maintainership; I agree with Tim. The answer is more rigorous testing and better information about what versions of Kubernetes these things support. So if you're trying to choose a provider and you care about long-term support, it would be useful to be able to see: oh well, this provider actually certifies compliance back through the last six versions of Kubernetes, and they have a sliding window of a year and a half that they test.
C
This can also affect the Kubernetes posture regarding dependencies. I mean, there are similar discussions happening in the CNCF at the moment, for example, and the only reason I mention this is, you know, to take a concrete example: currently Kubernetes relies on etcd, and we could view that as an external dependency; we don't care if etcd is totally vulnerable, breaks, or whatever, we kind of don't have any responsibility. But fortunately etcd is also part of the CNCF, and the CNCF has graduation requirements.
C
So it might not solve the problem 100%, in that the Kubernetes project itself may not take full responsibility for patching every CVE in etcd, but kind of transitively we can either be more confident or less confident in the things we depend on, and that may influence things like architectural decisions.
A
As you integrate open source, you constantly have that demand to figure out whether something you've chosen has viability for the long term, and the same applies beyond the CNCF. Maybe there's a stronger statement there, but everything that's in our vendor directory has that same potential.
M
So Josh brought up a very good point: at some point, we don't have a clear definition of what our external dependencies are, and we don't have different qualifications for the different external dependencies. etcd, obviously, is a core one, but something like Flannel, no; I mean, we can be broken with a certain version of Flannel, but we still do not define what version of Flannel is supported for a release. So I think we should make...
J
Yeah, I think one way the support statement could go is: we support a version of CRI, CSI, and CNI, and we have tested with a particular version of Docker, etcd, or a CSI driver. That would help us not claim support for a particular version of Docker, only CRI, and when situations like this happen, like Justin mentioned, maybe a vendor would come forward and say, "I'll fix that Docker version," so that users can continue to use that particular version of Docker with their deployments.
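A support statement of that shape, distinguishing interfaces the project supports from components a release was merely tested with, might be encoded like this. All version numbers below are placeholders for illustration, not a real Kubernetes compatibility matrix:

```python
# Hypothetical per-release support statement: interfaces we *support*
# versus concrete components we merely *tested with*. Versions are
# placeholders, not an actual tested combination.
SUPPORT_STATEMENT = {
    "1.14": {
        "supported_interfaces": {"CRI": "v1alpha2", "CSI": "1.0", "CNI": "0.7"},
        "tested_with": {"docker": "18.09", "etcd": "3.3", "flannel": "0.11"},
    },
}

def claim(release: str, component: str) -> str:
    """State what, if anything, a release claims about a component."""
    entry = SUPPORT_STATEMENT[release]
    if component in entry["supported_interfaces"]:
        return f"supported interface ({entry['supported_interfaces'][component]})"
    if component in entry["tested_with"]:
        return f"tested with {entry['tested_with'][component]}, not supported"
    return "no statement"

print(claim("1.14", "CRI"))
print(claim("1.14", "docker"))
```

The useful property is that the claim for an interface (CRI) and the claim for one implementation of it (a Docker version) come out differently, which is exactly the distinction being proposed.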
A
At that point, you have what Justin was saying, I guess: the existence proof at a point in time. But you still have the question of the existence proof: can you keep something integrated, and does it stay secure, for more than a month? Well, we're at the top of the hour. I'm going to bump the survey stuff and see if we can discuss it asynchronously. A really interesting conversation, and exactly what we're going to drive with this working group. So thank you all, I think.
A
I think so too. So, barring any objections, we'll do it in Slack. If you care passionately, pay attention there and object if you see something that needs objecting to, or thumbs-up likewise if it looks good, and we'll get it out in the next couple of weeks. Thank you all for joining, and good conversation today.