Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 5pm UTC.
See this page for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
All right, hello everyone. Today is October 4th. My name is Jeff Sica, I work at the University of Michigan, I also help out in SIG UI and SIG Contributor Experience, and I am this week's host of the community meeting. This is a special community meeting for two reasons. The first is that Brendan Burns from the steering committee is here to announce the results of the steering committee election, and second, hopefully (not 100% sure), Jaice will be here to lead the 1.12 retro for us. Some housekeeping things: first, as a courtesy, please keep yourself muted unless you are speaking. Remember, this is streamed and will be posted publicly on YouTube, so please be mindful that what you say is being recorded and will be in the permanent record forever. If there is anyone that would like to take notes in the Google Doc, that would be awesome. I will post the link unless someone beats me to it.
D
The 8th. And the ask for the SIG leads and enhancement owners here is: if you want anything to go in the 1.13 release, please make sure you open an issue in the features repo and/or update your existing issues with the milestone, priority, kind, and SIG labels. Yeah, and we'll communicate this more broadly as we go along.
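For reference, those labels are typically applied on a tracking issue with Prow commands along these lines (illustrative values only; the milestone is usually set by someone with milestone permissions, and the release team's guidance has the exact set expected):

    /sig node
    /kind feature
    /priority important-soon
    /milestone v1.13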
E
And just a quick heads up there: two biggest changes that hopefully don't have much impact. One is removing support for etcd2; we're trying to land the PR that does that right now, and I've stripped out all the jobs related to etcd2. Number two is moving to Go 1.11. That accidentally snuck in and uncovered a whole bunch of things that break, so it's going to take a little bit longer to land, but I just wanted to be really loud about the visibility of those, since they have wide-ranging impact.
G
Everybody can hear me? Yes? Good, all right, excellent. Before we give the results, I wanted to just give a thank you to everybody in the community who got out and voted, and to Paris and Jorge who helped organize. I will say that turnout was about 50%, so we can do better, and I expect us to do better the next time around. So when the next election comes up, please remember to vote. And so then, without further ado, the steering committee people who will be serving for the next two years.
G
There are three of them that we are electing, and they are Aaron Crickenberger, Timothy St. Clair, and Davanum Srinivas, otherwise known as Dims. I hope I pronounced his name right. And that is that. Thank you to everybody who voted, and thank you to everybody who put themselves up for candidacy.
A
All right, congratulations everybody! Next up, I'm trying to go through this quick so we have plenty of time for the retro: we have shoutouts. First, a shoutout to mrhohn, neolit123, justinsb, and liggitt for each helping, in turn, to track down a certain Kubernetes issue; this one was tricky to pin down (sweat smile). Paris gave a shoutout to Justaugustus, krzyzacy (I cannot pronounce that), mrbobbytables, and jeefy (filling in for Jorge) for being mentors in our first episode of the October Meet Our Contributors, and Paris also gave shoutouts to Brendan Burns, Timothy St. Clair, and spiffxp for spending time with us and answering questions on Meet Our Contributors this week. Cats and heart eyes. And without further ado... oh, Sen Lu, thank you. Thank you, Aaron. Without further ado, though: Jaice, are you here?
H
This is, let's see, I've been doing this since 1.3, and Paris has filled in a couple of times in between when I was release lead, so there's a long, long history of these being a key part of our community and the things that we do. So, just a quick thing about what we're trying to accomplish here. One is just to give visibility to the community as a whole that we're really serious about continuous improvement in all the big processes that we undertake.
H
The release process is one of the most defined, most comprehensive, and most important processes that we undertake. There's a lot of moving parts, there's a lot of people, it's a lot to do. I do want to give a personal shoutout to Tim Pepper, who was an outstanding release lead this go-around, and everybody on the release team has been thrilled to be on the team, which is always a good sign. It's really exciting to see this tradition continue. So, about the release retrospective process itself.
H
They can make the next release cycle better, and if you haven't heard, Aishwarya is our next release lead, which is super exciting, and so a lot of this will be for her to take into consideration, in ways that she can adjust the process moving forward. So it's really important there. I would ask that everybody, if you have commentary or things that you want to share, please make it respectful, please make it constructive and as positive as you can, and wherever possible attack the process or the policy and not the person.
H
So if you feel like somebody let you down, it's probably not necessarily something they did as much as maybe ways that we could have reinforced the process to be more supportive of their efforts. Without further ado, let's go ahead and get into the first part of the retro, which is really to talk about some of the things that went well and that we should probably do again.
F
One of the things that we tried to put a focus on, back in June, was trying to see if we could kind of cast a broader net on who we got involved in the release team, trying to get out of the Pacific North America time zone a little more. So we managed to actually get a few people from Asia and Europe, and I think at this point we can say that we actually have done that again.
H
That's amazing, and I want to call out that this is an incredibly inclusive practice, and this is something I would like to see a lot of our SIGs also consider, especially if they have attendance from other parts of the world, so that we are cognizant of how we schedule things for APAC and whatnot. So that's really great feedback. Yeah.
F
The next thing that was a notable change in this release was, I think going back to the late spring, Caleb kind of did a shout out trying to see if we could push the branch manager role out of Google. It's one that historically has required a Google employee, and we got a volunteer from VMware, Doug MacEachern, who stepped up to do that, knowing that there was going to be some stumbling in public while figuring it out, and we worked through a whole bunch of kinks there.
F
There were things that weren't necessarily documented, or that had changed, but like I said, we did it, and we now have a whole bunch of good new documentation of the process. There's a next iteration of things I think we can do there. But then the other really cool thing, in addition to that: we tried to make it a little bit more of a team, to be more sustainable.
F
And also, those folks are carrying on to the next release to make sure that we have solid documentation updates. One of the things that I am really passionate about is doing what we say and saying what we do, and that comes down to having documents that are accurate. We have a new set of role handbooks that have been thoroughly scrubbed for the different release team roles; I really appreciate the work that each of the leads and their shadows put into updating that documentation. And I think the next one of the set...
F
That was me, and this is actually less me than, I would say, Aaron; kind of a shout-out to him. There were a lot of process reductions that we made in this release: small things, but incremental improvements that were done pragmatically to say, hey, we could simplify things here, remove some redundant things, whether it's labels specifically or the processes around them, and then simplify what we do operationally to make it more robust. So thank you, Aaron, for the work you've been doing there.
E
Yes, +1000. One thing I did specifically want to call out on that, because I don't really have a great way of measuring pain, but one of the things we specifically shut off was a bot that nags people to add a bunch of milestones and a bunch of labels to their issues, to hopefully reduce alert numbness and at least stem some of the noise, or thin people's inboxes. I believe that caused some additional burden on the people on the release team, who had to then go do that reminding themselves.
I
It was fine, actually, because whenever I started poking people, they responded pretty quickly. I almost... I don't remember a time that I had to actually get Tim to bring out the big weight.
E
Yeah, and the "nobody paying attention to the bot's notices" thing is kind of troubling, because in order for this project to scale and grow, we do need to have bots or some automation help us in some form. But it's clear that was noise that nobody was really paying attention to. So thanks, everybody, for stepping up, and I hope that was a more pleasant, efficient experience for everybody. Back to you.
H
Excellent. And, I don't know if Nick's around, but yeah, the HackMD thing. I guess this is maybe something that we look at for the future as a way to do this, but with a paid account.
F
As they expand out their membership and contributor base. But it takes some critical mass of people attending both meetings, even if it's just a lead, to give some continuity there. So we trialed that; I think it was successful, but it's also a significant amount of work. Something to aspire to continuing, though, if possible. And then the last one, that was me.
D
So, big shout out to Cole, Aaron, and Tim for actually getting the submit queue moved over to Tide this time. It was super well timed. Well, there's not always a great time to make such a broad change, but it was very well managed, communicated broadly, we had multiple follow-ups, and then it was executed really well. Cole made sure that we didn't have any loose ends there, and he's continuing as the test lead, just as a lead-in to 1.13, to make sure everything really is migrated and solid there.
J
What kind of, you know, signatures do they have, things like that? We did a lot more of that this time, which kind of made me happy. We moved from like SHA-1 to SHA-512 and things like that. And then we also identified a few things; we logged issues that we could work on over the longer term. So I think that worked out well this time too.
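As a rough illustration of the kind of checksum verification involved, here is a minimal sketch in Python; the file names are hypothetical and the release tooling's actual scripts differ:

    import hashlib

    def sha512_of(path: str) -> str:
        """Compute the SHA-512 digest of a file, reading it in chunks."""
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifact and its published .sha512 checksum file.
    artifact = "kubernetes-server-linux-amd64.tar.gz"
    with open(artifact + ".sha512") as f:
        expected = f.read().split()[0]

    print("OK" if sha512_of(artifact) == expected else "MISMATCH")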
I
I have a tiny little thing to add. This is because of turning off the bots, I think, and because issues and pull requests no longer need to exist in pairs: it's become a little trickier to find bug issues that haven't been labeled correctly, or pull requests that haven't been labeled correctly, just because it's unclear if they relate to something in the release or if it's just, you know... So I'm working on it from the bug triage side; I changed the documentation in the handbook for bug triage. We need to just make sure to look out for that a little bit better for the next release.
F
So, yep, you put this note there and I also captured it up above: considering daylight savings time changes around how we manage alternate meetings, or split meetings across time zones, as things that we should change. Features in the milestone, I think, continues to be a rough spot, and I know...
F
There's a lot of work going on here in SIG PM and SIG Release around how to improve this situation. I don't know how much more to say about that. I mean, there are discussions ongoing; if you're interested in getting involved with them, I would give a shout out to people to go track that down. This is a big cross-project discussion.
H
Do you feel like there's a best way to facilitate that discussion, or how would you like to see that handled?
F
I think I have confidence in what you and Stephen, I guess, seem to be doing. I think that's the right set of people, having the right discussions, getting the right stakeholders brought in. So I mostly call it out here to say that getting an improvement there, some resolution, whether an incremental change or, I mean, there's also been talk of sort of a seismic change, something for an improvement there I think would be beneficial to the release process, to help us better understand.
F
So I guess a specific aspect of the features, then: depending on how the process for feature definition gets done, as it hits the release team we need a better sense of where things are coming in, as the splitting of the monolith happens and things are spread across more repos. Typically the release team had only been tracking what was in k/k, but there's going to be more and more that's not in k/k, and especially so this cycle.
F
We had a bit of confusion, because we're trying to watch and track artifacts and see: did that code actually land? Where is it? And if it's not stated in the feature where it's going to be, which has been the case thus far for some of those things that are split out, it's made it harder for us. The next one, then: release branch tooling version control.
F
We still have oddities around how we track dependencies, how we produce artifacts, and the code that's used to produce those. For example, the RPMs and debs are a specific one: if those spec files aren't kept in a branched fashion, or don't have some sort of conditional logic within them, and you need to build new ones for some prior patch but you've changed them to require some newer thing, you get some odd skews there. So I think we need to consider standardizing on one or the other.
F
Do we branch and fork those and build old code updates with old scripting, potentially leading to some maintenance overhead if a common fix needs to go in both? Or do we standardize on some conditional logic within them, but then we need to do that everywhere, and there's a lot of tendrils to all of that.
J
So that, you know, caused a little bit of back and forth on: do we make the changes in kubeadm to make sure that everything uses the manifests, or not? So we went back and forth there towards the end, but then, since Doug M. was able to try mock releases, that was a huge help in being able to get past that problem.
J
At the last minute we were able to pull the trigger and get that working with kubeadm. As part of this, I also want to mention that the two RCs, RC1 and RC2, turned out to be really good, because we actually found problems: people were able to try RC1 and RC2 and file bugs, and we were able to patch. You know, we were able to catch things before the actual release went out.
J
If we had a little bit more time, we would have caught the Sonobuoy bug, but, you know, let's leave that for another day; we are trying to figure out a way to fix that problem in another way now. But yeah, the manifests: so far so good. Even after the release we haven't found any new bugs being filed against the manifests, so far, so... crossing my fingers.
H
Super awesome. Thank you for that great feedback. It's always nice when you can start with "it turned out well"; sometimes we're not so lucky. So thank you for everybody's efforts on that, it's a big deal. So let's go ahead and hit the "didn't go well" section, which it looks like a lot of people have notes on, so I'm not sure...
F
I think it would be useful to see if there are things that we could do around enhancing that, making it a little bit more nuanced, and at least giving a few people the ability to have read access and tail logs and things like that, and collaborate on debugging a little easier for some of these build issues that we were starting to work through and some of the trial things that I was doing that Tim had just mentioned.
E
So there's a little bit of work that needs to happen before other people can follow along with the logs more generally. We also need to actually audit what the log output is at some point, to make sure that's safe. But right now all the logs stream to a bucket in GCS that only people who are able to write to that bucket can access, and that should be fixed. Right, Caleb? Is it easier to fix that in place, or is this something that is more effectively addressed...
E
...as we look to migrate this infrastructure over to the CNCF? Oh, we should probably fix it in place first anyway. That will also allow us to reduce the perms required to be a release manager, because right now, yeah, yes, we need to fix it in place first anyway. Okay, all right, yeah.
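For context on what "fix it in place" might look like, here is a minimal sketch with the google-cloud-storage Python client that grants a group read access to a bucket's objects; the bucket and group names are hypothetical, and the real principals would come from the infra owners:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-release-logs")  # hypothetical bucket name

    # Add an objectViewer binding so the group can read (but not write) the logs.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": {"group:release-team@example.com"},  # hypothetical group
    })
    bucket.set_iam_policy(policy)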
J
The only other thing here was, I think, some of the buckets where we staged the container images: we were not able to look at the images that were pushed there, only a few people could do that. So if we can make read capabilities available on the staging repository, that would be good too.
F
Yes, we did. CI signal: this is always a balance, in the face of potential instability in CI signal, between how much debugging you do before you formally open an issue to track, versus immediately opening an issue to track. And I feel like in this cycle we went a little too far in the direction of maybe being a little passive, or waiting to see "can we get tomorrow's results, or the next result, to confirm?"
E
For what it's worth, I have lots of additional opinions on this and how to fix it. But one of the key underlying problems is that this is a great job to automate away. It's not like we necessarily need a human to read all the magic incantations on Testgrid and then walk through and decipher from the logs whether or not something is really, truly a failure.
E
So there does need to be some sort of happy medium where, I think, the tools better assist a human in reaching out and making sure people are aware that something is happening. I have a number of efforts ongoing that I would like to do to address that as part of this release cycle, that I will have to dig up.
E
So I will document those AIs later. Okay, so next, I'll move on if there are no other comments on that. The next issue I have, and I hope to have something more fully scoped here, but basically I have real concerns that there's this massive team of people who get together to bring the .0 release out the door, and then the entire team dissolves, and it is left to one single person to carry the weight of all of those responsibilities for every patch release going forward.
E
This means taking a look at whether or not something is actually a bug fix or a feature or a regression sneaking in. This includes filling the role of CI signal, which I just described as difficult. This includes doing all the responsibilities of a release lead, which means getting in touch with people, communicating. This includes making sure that the release notes are sane.
E
This is an awful lot for one single person to do, and there have been occurrences in the past where some things have slipped through the cracks, and I really think it is important for our reliability and stability to make sure that we have a team of people, not just one single person. At a minimum it has to be two people; it should probably be more than that.
E
Here is also a place where it definitely appears that the entire team dissolves, but, I guess in my experience, there's been kind of an unofficial patch release team, which consists of, at least at Google, all the previous release managers and branch managers and people who have been involved in the release. So I have helped to route requests and do some cutting work when people in other time zones are unavailable.
E
So there is a team, but it would be nice to have an official patch release team, rather than just an unofficially staffed group of people. All right. We cannot have heroes staffing our patch releases like that; it just guarantees tribal knowledge and things slipping through the cracks, and that is unacceptable for patch releases going out the door.
F
We also have some kops AWS testing in the mix, but this is only going to get more complicated as cloud providers are being split out of tree and we have the monolith splitting. So somehow, I believe, we need to incentivize vendors and make it easier for them to add to the test matrix, but make sure that the additions are done in a way that gives some coherent sense of readiness to the release team in terms of cluster creation and upgrade across the set of platforms there.
B
My audio is cutting out, people, so I'll try to talk and hopefully people can hear me. So, kubernetes-anywhere is kind of woefully maintained, and we've used it as the default provisioner for kubeadm. I've been talking with two separate groups to get a separate set of provisioning tools in place for kubeadm. One is to get the Cluster API AWS implementation in place, because it gives us two signals: one is AWS, the other is a highly available kubeadm deployment in place. But that would be in an ideal world for 1.13.
B
The second is trying to loop the Kubespray folks back into SIG Cluster Lifecycle, to have them set up a provisioning tool using kubeadm as the default backend. So we're working on all fronts to get these other signals in place. I don't know if they will all land for 1.13, but we're making an effort to try and make that happen.
F
Scale testing, our quarterly friend: this one comes up in nearly every release. Scale testing is hard; scale debugging is very hard. You run into fascinating bugs. But I do wonder, now that testing is shifting from Google to the CNCF, if it might be possible for others to contribute, because some aspect of this is funding; if you're running a whole bunch of huge tests, that could give us better data, but it comes at more expense right now.
F
The tests are running less frequently because they're huge and long-running, but also expensive. I think it's an area where we could do better. In this release, and I would say in every other release on average, we end up with some sort of critical late issue in scalability that we have to sort out, and again that causes a need for heroics. If we could find a way to get better CI signal earlier on that, it would be hugely beneficial. Yeah.
J
Sorry, I have something to say on this too; I'm talking a lot. But the one thing is, the number of people who are paying attention to the scale tests right now, it's just folks from Google. So we definitely want people from other companies that are using the fruits of the scale testing to actually staff the work that is being done, in addition to trying to set up, you know, tests on other platforms, because it will need a full-scale crew to go through the whole cycle of getting the tests up.
J
The other one, on the flip side, is that we don't plan some things really well, for example the CoreDNS one. We tried to sneak it in towards the end, and we should try to do it really early in the cycle, so we don't have to make the scale team work so hard. So if there are things like that which we know are going to cause issues, we should try to schedule them earlier in the cycle.
E
This is one of those areas that is actually explicitly spelled out in SIG Scalability's charter, where, if they find that somebody has checked in something that produces a noticeable performance regression, they have the ability to block all subsequent merges from landing. I think we added the ability to Tide to support that, but I don't know how...
E
...often it's been used. And one of my big concerns with that superpower is that the tests that they run are often red. I mean, they're passing if you interpret them by reading the appropriate tea leaves and the logs, but they're red, and a lot of our alerting infrastructure and signal infrastructure can't really differentiate between more than green and red. I don't really have them in front of me, but I am pretty sure that leading up to the release, the 5,000-node tests were almost perpetually red.
E
It never looked like we were actually in the green and meeting our performance thresholds, according to the same dashboards that we look at for all of the other scalability jobs. There are also questions about whether or not we could catch these kinds of regressions earlier with more frequent, lower-scale tests. So, you may have noticed, we run Kubemark, which simulates a large cluster, on every PR. We run hundred-node clusters against every PR.
E
We also run 2,000-node tests, but those are not on the CI signal board, and so it's unclear to me how much SIG Scalability is actively watching this stuff. So, yes, I agree, it's been a problem child with all the things, and I just continue to insist upon the brute force solution of: why don't you have your tests... sorry, why don't you have your tests go back to passing all the time, so we know when they fail, and then we can do something about it accordingly.
I
This sounds like an issue of human resources and scalability as well. I'm hearing that there's only a few people with eyes on the correct things, and I'm hearing it's mostly Google or only Google. I mean, we can't force anyone to participate, obviously, but is there possibly some way, and I'm speaking from a contributor standpoint here, how can we talk to people and make them aware, so...
E
It's not like all of our other jobs, where we can pull any old thing off the shelf and schedule it any old place we would like. So I would put that question back on anybody who wants to spin up like 5,000 nodes: we are happy to work with you, but again, you're going to have to, you know... Yeah, AWS is involved; I've noticed AWS has been involved in SIG Scalability for a while.
E
So it's just that those of us who work at Google can't troubleshoot the intricacies of scale that you hit on AWS at five thousand nodes, and similarly I don't think AWS engineers could troubleshoot whatever scalability concerns come up if a five-thousand-node cluster was spun up on Azure. But we absolutely have the support for people who are willing to periodically run jobs that spin things up in their clouds, using their infrastructure, using their cluster providers, etc., etc. Yeah.
I
That's good to hear. I wasn't saying that there was no support. I just, again, think it's possibly a visibility thing: perhaps there's room to shake down more people just by making them aware, rather than, you know... because it sounds like if the infrastructure is there, that's great, but perhaps people just aren't thinking about it that way.
F
From a contributor experience perspective, one of the things that we've been trying to do is standardize a little bit of the information that comes in the SIG updates during this meeting, and it would be really awesome, in the next SIG Scalability update here, to see some plans specifically for how the SIG is working to scale their involvement. So...
H
So if we can figure out how to solve for that a little better, that might help these tests be more effective. So there's probably things that we can chip away at that will help the test suite be more effective, and that's something that doesn't necessarily require, you know, a specific cloud provider effort. It's really about making the tests better at handling things like that. So I...
K
I just want to represent AWS here a little bit in the discussion. We've just started our active efforts on cleaning up the tests, which, I will acknowledge, are a mess from an AWS standpoint, and we are getting involved with scale testing, but it will take some time for us to get to the level that Google is at, so I don't want to set expectations that are unrealistic. We have to clean up the tests first; we are doing that in the SIG AWS sub-projects as we integrate into testing, and kops has taken some load from us.
J
We always say: oh, just spin up a new cluster and don't worry about the old cluster. That kind of goes against this model where we are saying: okay, have a large cluster, divide it into namespaces, and then do multi-tenancy. So that's another thing that we need to watch out for, and, you know, try to tell people: no, you should be able to do this too.
E
I kind of want to punt this discussion off to SIG Scalability for follow-up, because, and I'm using 5,000 nodes as a rough swag here, there are specific limits for all the different kinds of resources. It's actually this weird, like, 8-dimensional hypercube of a volume that describes the bounds within which a Kubernetes cluster can perform well.
H
That's good. So, we are getting close to our time window. I want to continue, so Aaron, you've got the next one and subsequent comments coming in. We might have to, we're going to have to probably schedule part two during the SIG Release meeting, but let's go ahead and keep going as long as we can, if that works for everybody.
E
You can look at this, and there's a known issue where, like, Testgrid and some of our other infrastructure just don't parse both versions of the clusters out and then post them in a place that is machine readable. So I would like to work on that this quarter, so that you can look at Testgrid and very easily see: what is the commit that you're upgrading from, and what is the commit you are upgrading to. Right now you kind of have to dig into logs and stuff, and I described how to do that.
F
So, documentation, again around features, as we split things out from k/k. We had a number of features where, in addition to trying to understand did the code land, we're trying to understand should documentation land, and did it land. And for a number of them we had this feature we're tracking, yet we couldn't find the code. Oh, it's elsewhere, in another repo. Okay, fine, it's done! Yes, it's done! What about the docs? "We don't need docs for that." And I got a sense across a couple of these that this is...
F
This leads to a potential oddity from a user perspective: if you're looking at the Kubernetes website, a whole bunch of the new features for the new release aren't documented there; they're elsewhere. This federation of docs becomes a little weird as a user experience thing. So it's something that we should be thinking about: what the core documentation needs to cover.
D
At least for 1.13, when I talk to the other out-of-tree repos that are planning for some changes, I'm actively asking if there are any jobs or docs that need to go with it. But yeah, again, putting this somewhere in the set of questions, tying back to the feature questionnaire that you were referring to, this would be useful to track more rigorously.
F
The next one might be the one that runs us out of time, but: dependencies and versions in the release notes. This is always a late-breaking pain point in the release, trying to collect these. If you look at the changelog, and I should have actually linked it there, and maybe a couple of the prior ones, we started with a list of dependencies and it gets bigger the next release, and the next release.
F
And it was incorrect. So as long as it's manual and it splits across a bunch of different sources of truth, it's going to be error prone and we're going to make mistakes, and yeah, we need to do better there. There are a couple of us who are kind of looking at what we might improve there for the next release.
J
The only thing I wanted to add here is that this time we were able to figure out a way to download the debs and actually try them out. We threw out a couple of scripts that people could use. So, yes, it's possible to do that; we just have to make sure that we handle the debs in a way that they don't get commingled with the shipped artifacts under stable, which was the problem we had with one of the betas or RCs, I think.
H
This seems like a good place; let's go ahead and make a demarcation in the doc where we're stopping, pick up the rest in the next SIG Release meeting, and go into more detail there. So, community members who have stuck around for this retrospective: thank you so much for listening and watching. It's super important to have you involved, and hopefully you see that the release team is the best way to learn everything about the project and for new contributors to get involved. So please feel free to join SIG Release meetings.
H
All the release team meetings are public and welcome attendance. So, on behalf of the Kubernetes community, and on behalf of the user community in the world: thank you, release team, for everything that you did, all the time and the diligence and care, and, most importantly, preserving the trust and value that we deliver to our community. That's a huge and important duty, and a very solemn one, and you did it with incredible care and expertise.