From YouTube: Kubernetes Community Meeting 20170727
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Highlights this week:
1.8 release updates, SIG Contributor Experience and SIG AWS updates, and then a discussion on how to approach deprecating etcd2 in favor of etcd3.
A: Today is July 27th, 2017, and this is the Kubernetes community meeting, meaning this is a recorded and publicly available recording, so don't say anything you wouldn't want your grandmother to hear, alright? We don't have a demo today, so we'll head straight to our release updates. Jaice, take it away.
D: Okay, great. So yeah, it's pretty early in the release, but I just want to notify everybody that the release team is now staffed, which is super exciting, and there's a PR out in the features repo right now that has the final version of the release team data, with everybody's name in it, and I want to give a big thanks to everybody.

D: Make sure your feature one-liners convey the flavor and feel of what's coming, because they're going to be the basis for a lot of the marketing communication that happens around the release. We'd love to give a lot of insight into the things in each release, and that one-liner is key to that activity. And as always, he has stepped up to manage the features process.
D: So it's much better and easier for us to track down what needs to be done. A couple of other things: we have two new roles, or actually they're not new, they're really just trying to extrapolate out some of the things that happen every release. The marketing coordinator role is essentially so that we can have a point of contact between SIG PM and the items in the features repo, and also with external marketing folks. For example, CoreOS has people focused on marketing there, and if anybody has questions, the marketing coordinator is going to be that point of contact, to make sure that everybody has what they need to do the promotion and publicity on what's happening in the release. In 1.8 that's not necessarily a huge component, because it's not a super feature-heavy release, but in 1.9 this role will be really critical and a key point in terms of the release activities, especially as things go live.

D: Basically the same thing goes for release notes coordination: the release notes coordinator role is just to make sure that all the i's are dotted and the t's are crossed as far as completeness in release documentation, and that we have a good draft to review ahead of the release, and all that good stuff. So, any questions on those so far?
D: We also have a Kanban board for the release, and my goal is to provide a single view of the release at any given moment. So if you're curious what milestones are coming up, what activities are happening in the release, or whether there are issues or blockers that have come up, those will show up on that board, and we'll basically be tracking there.

D: For me, this is how I manage my work from day to day, and I find it very useful, and I'm hoping this view is useful for the community at large as well. This is something I think could be tremendously valuable, especially as we have a lot more breaking out of repositories and distribution of work, because this will be a single point to unify all those release activities, and also blockers, and also opportunities for integration. So hopefully that will stay updated; I anticipate it will.
D: It's a little tricky, because it means that there will be issues in the sig-release repo that might not normally be seen as traditional issues; they'll really be intended to be cards on the Kanban board. So I will see how that works, and hopefully this will give value to the community. I believe that is it for 1.8. Any questions?
G: This is Garrett, and I want to go over some of the things we've done over the last few months with ContribX. There's been some work done on the automation side, on the policy side, and on documentation. Those were some of our goals for 1.7, so we tried to make sure that the bot commands were documented and more easily discoverable.
G: We also had a PR to try to speed up PR review latency by pinging inactive reviewers and then suggesting additional reviewers who can get your PR moving. We're not going to reassign anybody — there was some pushback on, you know, taking a PR review away from somebody — instead we're just suggesting other people who can also review, using Blunderbuss and the OWNERS files.
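For readers who haven't seen one, here is a minimal sketch of the kind of per-directory OWNERS file Blunderbuss draws from; the usernames are hypothetical placeholders, and repos may carry additional fields beyond these two lists.

```shell
# Minimal sketch of a Kubernetes-style OWNERS file (usernames are
# hypothetical). Blunderbuss suggests reviewers from the "reviewers"
# list; the approval bot gates merges on the "approvers" list.
cat > /tmp/OWNERS.example <<'EOF'
reviewers:
  - example-reviewer-1
  - example-reviewer-2
approvers:
  - example-approver
EOF
cat /tmp/OWNERS.example
```

Because the file sits per-directory, review suggestions can be scoped to the part of the tree a PR actually touches.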
G: We also had a couple of people work on putting together an issue triage guideline, covering things like getting issues routed to SIGs and closing support issues; I linked to the PR there if you want more information. Finally, we had a couple of people update the issue templates — multiple templates, actually. The issue template in kubernetes/kubernetes we tried to simplify, because it was getting deleted frequently, and the features repo template is now a lot shorter and simpler as well. That brings me to upcoming work.
G: The most important thing for the quarter, I think, is going to be metrics. When we talk about some of these policies — updating the issue template, for example, or pinging inactive reviewers, or requiring an associated issue — we don't really have much of an idea of how any of that is impacting the project's velocity and health. I know there have been a number of efforts to pull out metrics: there's a tool called Velodrome in the test-infra repo, Victor has a dashboard, and there's some information in contributor experience as well.
G: We're talking with a third-party company called Sappho that claims to have ideas for dashboards and visualizations, but the goal is really to make it so that anyone can come up with metrics, propose them, and get them created, and to make sure it's an open process where it's easy to learn how to add your own metrics. So that's one of the things we're working on. The other theme is consistency: we're looking at standardizing the labels across all the repos, and there's a PR out for that right now.
G: It kind of got stuck, and we're pushing it through a little bit more. These aren't all the labels from the kubernetes/kubernetes repo, but there is a subset that should be available throughout the project: for example, the SIG labels, the kind labels, lgtm, probably approved, and a few others. So there's some work being done there, and we're trying to get the bots turned on in more of the repos.
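To make the label standardization concrete, here is an illustrative mapping of the bot comment commands to the labels they apply; the exact commands available in a given repo depend on which bot plugins are enabled there, so treat this table as a sketch rather than a complete reference.

```shell
# Illustrative bot comment commands and the standardized labels they
# apply. Commands are posted as GitHub comments on an issue or PR.
labels=$(cat <<'EOF'
/sig aws    -> sig/aws
/kind bug   -> kind/bug
/lgtm       -> lgtm
/approve    -> approved
EOF
)
printf '%s\n' "$labels"
```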
G: We're starting with the community repo right now, and we're probably also going to need to define the process for turning the bots on everywhere, plus additional documentation. And then, as Jaice mentioned earlier, it's up to the release team to define and continue supporting some of the release policies. If we're going to move forward with Tom's proposal, which it sounds like we are, we're going to help with some of the automation surrounding the automatic generation of release notes.
G: I think along those lines, there was an issue in test-infra around making these things configurable, so that the bots can work in other repos a little more cleanly. So we're going to try to make the bots, or at least the approver bot, a little bit more configurable and useful. That's kind of what we have going right now for ContribX.
H: Today we have big news: we have a new SIG co-lead, from CoreOS. So welcome to him — he's on the call. Welcome! So, in AWS land, the cloud provider is actually relatively stable, which I think is a good thing. I.e., there's not a lot of code churn going on — I'm not trying to imply anything about its reliability, but it is actually really good.
H: I think we've made a lot of improvements, for example on EBS volumes, which have been a bit of a problem area for the past couple of releases; those are looking reasonably good in 1.7 — still a couple of issues, but looking pretty good. And there is some work going on around tagging of resources, so that when Kubernetes creates a resource in AWS, it is tagged appropriately. But I think the more interesting thing is this.
H: The majority of the work on AWS is actually happening in other repos now, rather than in the kubernetes/kubernetes repo. There's some good stuff happening in the external-dns project around, you know, how to configure Route 53 for DNS names, and I think there are two ALB ingress controllers, so we don't necessarily have to do everything through ELBs in core Kubernetes, and I think we are doing well on that front. I guess the weakness is that, of course, we have two ALB ingress controllers.
H: We would ideally like to have one. kops is another example. But the major thing that I personally think is the biggest thing to fix is the IAM issue. In other words, our pods currently typically inherit the IAM permissions of the node. There is a solution called kube2iam, which most of the community seems to have adopted; we've blessed that in the SIG, I guess, and there's also the identity working group.
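As a sketch of the approach being discussed, kube2iam lets a pod request a narrower IAM role than the node's via an annotation. The manifest below is a hypothetical fragment: the pod name, image, and role ARN are placeholders, not anything from the meeting.

```shell
# Hypothetical pod manifest fragment showing how kube2iam scopes IAM
# permissions per pod via an annotation (role ARN is a placeholder).
pod_manifest='apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/example-app-role
spec:
  containers:
  - name: app
    image: example/app:latest'
printf '%s\n' "$pod_manifest"
```

The kube2iam daemon intercepts the pod's calls to the EC2 metadata endpoint and serves credentials for the annotated role instead of the node's role.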
H: That's spinning up now, and hopefully we can get kube2iam fitting into that framework and have a really nice integrated solution that is sort of a first-class Kubernetes citizen. I think that's all I've got. Oh — we have meetings every other Friday; I don't think it's this Friday, but the other Friday. And we have a Slack channel, and you're welcome to come hang out if you use Kubernetes on AWS, or even if you don't. That's all I got.
B: For those of you who haven't had time to read the email, the case I'm making is that our documented support policy says we only support three minor versions. So when we release Kubernetes 1.8, we drop support for Kubernetes 1.5. Kubernetes 1.5 was the last release to default to etcd2 in new clusters; Kubernetes 1.6 was the first release to default to etcd3 in new clusters. As you may be aware, the release team uses upgrade tests as part of its go/no-go signal for whether or not to cut a release.
B: So when we say that the Kubernetes project supports etcd2, some of you may be surprised to learn that previous releases have been cut with decreasing levels of coverage of that etcd2 functionality, and I'd like us to come out and call it for what it is: it's been about a year or so since etcd2 stopped being the default configuration, and it's time for us to drop it. So, part of the problem here:
We as a project don't have an official policy for what our deprecation or support policy looks like for the versions of third-party components upon which we depend. This includes etcd, but it could also apply to things like CNI plugins or the particular container engine we're running. I tried to raise this issue at SIG Architecture, asking for some guidance on, perhaps, what's the broad policy we want for third-party dependencies.
B: This is ordinarily the sort of thing I would raise to the steering committee, but as we don't have a steering committee as yet, I'd like to raise this issue to the community and decide whether or not we can find a SIG to own this, or, if we decide against that, maybe a working group to do it. Because when I raised it at SIG Architecture, there were a couple of raised eyebrows. I think Justin articulated a point just a couple of minutes ago on that thread pretty well: you know, there's a big checklist of things somebody who's very dependent on etcd2 might like to see done to gain confidence that we have fully thought through the etcd2-to-etcd3 upgrade path. And if you look at our upgrade tests, for example, they haven't been passing for quite some time, and there's a fantastic checklist that Justin proposed that I have seen no action or input on.
H: Hi, I can jump in and give you a little bit more background. When we first adopted etcd3, there was a migration approach that was recommended. We discovered, sort of late in the day, that there were some snafus with that in some scenarios, like HA; kops, for example, decided not to adopt etcd3 until those have been resolved.
H: They have not been, and they're not under testing yet, so we have not made any progress on adoption of etcd3. The sort of quagmire we're in is this: we figure at some stage there will be an automated upgrade solution, and we already have two scenarios, etcd2 and etcd3; if we then have an etcd3-manual and an etcd3-automated upgrade, we have like three scenarios. So that's why kops is holding on etcd2.

H: I would say that we should deprecate etcd2 and start the clock, but I think we should follow our deprecation policy on something as important as, you know, where our data lives. And I'd say two things, one of which is that the default is not the right thing to go by — because, actually, if we'd gone by the default, according to the deprecation policy we should not have switched the default at all.
H: We should have stuck with etcd2 as the default; so looking at the default is the wrong thing to do, otherwise we would not be on etcd3 at all. The other thing is that etcd2 and etcd3 are basically completely different data stores. They have nothing in common. It is not a version upgrade. So we added support for etcd3, and, like when we add support for Consul or CockroachDB, that wouldn't mean we remove support for etcd3, right?
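Since the manual upgrade path keeps coming up without being spelled out, here is an illustrative sketch of the kind of procedure involved. This is not a supported runbook from the meeting: the data directory, service name, and flags are assumptions, and a real migration would also need backups and handling of HA topologies, which is exactly the snafu the speakers describe.

```shell
# Sketch (illustrative only) of a manual etcd2 -> etcd3 data migration;
# paths, unit names, and flags are assumptions, not a verified runbook.
migration=$(cat <<'EOF'
# 1. Stop the API server so no writes land mid-migration.
systemctl stop kube-apiserver
# 2. Rewrite the v2 keyspace into v3 format in place.
ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
# 3. Restart the API server against the v3 backend.
kube-apiserver --storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379
EOF
)
printf '%s\n' "$migration"
```

Because step 2 rewrites the keyspace rather than upgrading it, there is no incremental path — which is the point being made about etcd2 and etcd3 being different data stores.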
D: So I want to get into something here, and that is: I think the crux of this issue is that the support for the deprecation policy you're sort of proposing, Justin, is unfunded. It's an unfunded mandate. There is nobody on track to pay for that work, you know, in terms of hours or dedication from a SIG that's willing to just step up and own it. So this is something that we as a community generally need to solve: the unfunded-mandate problem.
D: This is one of those things where the lazy consensus is that the release is just going to roll out without etcd2 support, because that's the way the inertia is rolling. So if somebody has a question about that and about taking care of it, we need to actually have somebody stand up, like right now, and say "I got it," and actually carry the ball through the end of the release.
B: That's what I'm saying has happened, in isolation, but let me just be clear: yeah, testing is what has dropped. The code to actually talk to etcd2 is absolutely still there, but nobody's checking to see if it works anymore. Well — kops is, right? All of the kops tests are still etcd2. Got it, yeah.
J: Announcing the deprecation makes sense to me. I put this as an item for that formal policy on the SIG Architecture backlog, but there is a pretty significant backlog, so I didn't get to it yet. And we have a general problem: unless someone strongly cares about something, even if the tests run, nobody necessarily pays attention to whether the tests are passing. That's a general problem we need to figure out how to solve. On the other-data-stores issue: I expect we eventually may support other data stores; right now we can't afford the increasing complexity in the test matrix, and we haven't had time to really pin down the storage API. There are features of etcd v3 we would like to take advantage of which are going to make etcd2 not work, so we need to settle the official policy on it before then — and this was also why we didn't merge it.
K: From the point of view of something like GKE, the end user who's actually writing code on top of Kubernetes never sees etcd; it hits the operations person, the person actually operating the cluster, who is in some ways a different user, and it feels like the deprecation policy hasn't taken that into account. So I think it's clear for us, at least; I'm not sure where we take care of this — maybe this is SIG Architecture, maybe this is the steering committee — but I think we should look at the deprecation policy in terms of which features are user-facing and which features are operations features. Do we actually want the same deprecation policy across all of these? I think there is this implied assumption that operators can take more pain, and I don't know if that's something we actually want to live with.
J: That's actually true and deliberate. For things that affect operators, like changing the flags on the API server or something like that, we're willing to induce more pain there, partly so we can move faster and clean up debt. And just as a practical matter, there are going to be more applications built that embed the API than cluster turn-ups.
F: We should have ownership. If we own the deprecation policy, we own the backing implementation, and we'd have to change every single thing, so we need a policy and an owner. I think the ownership is more important; of course the deprecation policy is important too, but the deprecation policy for a contributor will be adjusted by the owner. Like, a lot of people ask us which versions we support.
B: I could not possibly agree more with what was just said about who actually owns this, and that's particularly why I am raising it in this forum. There are a number of SIGs in play here. The genesis of bringing in etcd3 was largely pushed by SIG Scalability. I heard during the SIG Architecture meeting that SIG API Machinery is in charge of all of the interactions between the API server and the quote-unquote registry. I heard that SIG Cluster Lifecycle is in charge of the lifecycle of a cluster through all of its upgrades. And I've heard that SIG Testing is in charge of making sure that all the tests pass — which is false: we just run your tests, we do not actually write your tests for you, okay? Oh, and let's not forget that SIG Release maybe should own what the supported-releases policy is, but we're wildly understaffed right now; SIG Release's mandate has kind of carried up to the cutting of the release.
It hasn't really moved beyond that to the support of a release, the end-of-life-ing and sunsetting of a release, and so on and so forth. This is totally something I would be escalating to the steering committee, but in lieu of that, I'm here to ask everybody what we think we should do going forward. And personally, I'm okay with just calling the status quo what it is and saying there's really just not that much test coverage. George, I saw you had your hand up.
C: I just wanted to add that — so, you posted your message to kubernetes-dev, and we know how developers use it. We know that people who are paid to work on public clouds are paying attention to this kind of stuff, but I feel like we have a hole as far as everybody else who's using Kubernetes, and it would be nice to get another data point. You know, perhaps we can socialize it via the Twitter account and have all the famous people retweet it.
D: Technically it should be SIG Cluster Ops, but frankly, we've had trouble trying to sustain membership there, so if we say it's the mandate for that SIG, that's great, but honestly it's not going to get that wide of an audience. I have pulled the fire alarm on this problem multiple times: we are becoming an echo chamber for developers, and real-world cluster operators are getting left out of a lot of key decisions and discussions. We need to solve this problem.
B: Yeah, I agree. For what it's worth, we went to SIG Architecture, a small group of people, looking for that deprecation policy first; then kubernetes-dev; and now we're at the community meeting, where theoretically there are people who are developers and also people who are consumers of Kubernetes. So we are widening this circle and talking about it, but I agree there's more that could be done. This is a large issue.
J: I guess that's another solution, but yeah. And the chatter seems to be that neither OpenShift nor GKE are using etcd2, and we're funding a lot of the project as it stands. I can't imagine where we'd get head count to do more work on etcd2, given that it's not part of our product — we're stretched really thin.
K: I think, with respect to the deprecation policy: that's an SLO, it's an objective, and there are going to be times when we don't actually hit that objective. A concrete thing we can do is just document the fact that etcd2, over time, even if not officially, is in effect on its way to being deprecated, and may not last out the full deprecation policy, just because it's underfunded. It's not ideal, but I think it's better to be honest with users about where we're at.

K: Well, you know, I think whoever owns the relationship between Kubernetes and etcd should be documenting the requirements around that. Just like testing is not a centralized responsibility on SIG Testing, I think the practicalities of running and maintaining a cluster over time are a shared responsibility across the entire project. We can't have something like API Machinery, who ostensibly owns this relationship, say, "well, figuring out how to deploy this thing is somebody else's problem."
I: Right, along those lines: the upgrade testing framework is owned by Cluster Lifecycle, but if you create an object, you are responsible for making sure it can be upgraded across releases, right? It's not Cluster Lifecycle's problem — if your upgrade tests are failing, it's your problem.
K: I think this just points in the direction that having clear lines of ownership, over time and toward SIGs, is the way we want to go. I think the first thing is: let's agree on what we think the ownership is, and then let's figure out what we can do to incentivize that SIG to actually step up and take responsibility.
F: I think API Machinery is the right owner for the integration, validation, and all those kinds of things with a new storage backend, whatever it is, because they talk to it and they know the requirements. But the deployment side, and rolling out a new version of storage — Cluster Ops should own the operational part; that is a separate story. So I think we should start with API Machinery: they should take ownership and decide what it is they want to support — what's the range of versions, what's the minimum version — and make that policy, and then we talk about, okay, what are the requirements and all those kinds of things, and then come up with the tests, and Cluster Ops can take it over.
B: Agreed. I thought it was really helpful to have a lot of quick, fast feedback with a lot of key players here, so I want to thank everybody for participating. My final question is: is this the sort of thing that should be tracked as a feature issue and tied to the release that way, or is this a separate effort?
K: I would say it also makes sense to actually have — and this is where the name "feature" really doesn't work — an issue there that actually tracks the deprecation of etcd2, right? And we can put together the checklist of what it takes to actually be able to do this, and at least be honest about where we are along that tracking mechanism.
D: I'm not sure on that; I think that's polluting the waters of the features repo a little bit. Let's take this into the developer group and hash it over. But just to recap what we've decided: we definitely need to have a good deprecation policy that's enforced by some mechanism and is not unfunded, and, under the auspices of the 1.8 release —
D: We will attempt to rely on secondary or tertiary tests that hit etcd2, and we're going to try to see if we can find some work to get that upgrade test in place, so that we're not just dropping the hot potato before 1.8 goes out. The asterisk behind that is that it's not working now — it didn't work for 1.7 — and so I'm going to have a hard time making that a release-blocking effort.
A: If you'd like to host this meeting, you should definitely do so. It looks like we're missing a host for August 10th, and then, besides August 17th, the rest of the calendar going forward is free, so definitely sign up — it's fun to host the community meeting, and it helps take the burden off of our community managers. So definitely do that. And with that, I think we can give everybody back 15 minutes, unless anybody has any last questions or comments.