Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
(Sound missing for first 30 seconds, but kicks in right after, sorry about that!)
Notes: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
A: I can't list all of them here right now, and we will also get some requirements from the working groups, such as the scheduling group and the Big Data group, but I haven't gotten that message yet and I don't know the details of those requirements. For the project we're working on, we plan to release a version in Q1. The resources go to the queue job with the highest priority first, and the kube-arbitrator will record the status of the queue jobs. This will create the kube-arbitrator jobs, and after a queue job is finished, the user client will remove the QueueJob.
F: Thanks, Chris. Yeah, I just wanted to let everybody know that the 1.10 release has officially kicked off, and in the meeting minutes for this meeting there are links to the schedule as well. I put in some of the key dates that we'll be tracking. The ones you really want to know about: our release day being Wednesday, March 21st; code freeze is going to be Monday, February 26th; feature freeze will be January 22nd; and we're looking to have all docs completed and review-ready for merging on Friday, March 9th.
F: So if you didn't get those dates, they're available in the agenda, and you can also look at a bit.ly link for the 1.10 schedule, which has some more information about the release as well. The team is forming. We currently have two roles left to be filled: the CI signal role and the branch manager. We do, just as of today, have a bug triage lead, who is going to be Josh Berkus, filling in the role again.
F: He did that in 1.9, but because of time constraints we want to make sure we have a shadow there. So if you're interested in learning from Josh — which would be a great opportunity for somebody who wants to get involved in the release process — please let me know on Slack, or however you can do that, because this is a great place for you to give back to the community, if possible. Did somebody have a question?
B: I can point you to the page that tells you exactly what you need to do, and I'm happy to walk you through it. Okay, sharing screen — all of the stuff I'm going to show is linked in the meeting notes. So, graph of the week: bot commands. I'm a little biased — SIG Testing kind of runs the bot that implements these commands — but here's a quick walkthrough of some of the common bot commands you see, and a number of devstats dashboards where I can show the bot commands aggregated by week, by quarter, etc.
B: I'm going to do a seven-day moving average. I can focus in on specific commands with this drop-down. These red dashed lines here demarcate when we've had releases go out the door; I can hide them if they're distracting, but I kind of like them. And then this repository-group thing lets me constrain to certain groups of repositories — so, for example, all the projects, all the project repositories, all the project infra repositories. I personally would find this more useful if it was grouped by the SIG that owns each repo, and I plan on implementing that.
B: That's something I'd take on as a member of the steering committee going forward. But for now, if you have a question about what those repo groupings are, there's a SQL file here that defines, you know, that the API machinery repo group consists of these repositories — notice there's one living in kubernetes-incubator. What are all these bot commands? They're documented via a manually updated table here — the go.k8s.io/bot-commands page — that tells you, roughly speaking, in one page what they all are.
B: We also have this new plugin help page. Most of the bot commands are implemented via a program called Prow, and here we can see, for example, the lgtm plugin. I click this drop-down arrow and you see a description of what the plugin is, who can use it, and some examples of how to use it. I can also do this for the label plugin, which shows how to remove and add the various labels.
B: Okay, some interesting things I have found from this graph personally — oops — if I click on an entry in the table on the right here, that shows just that command on the graph. Another way of doing the same thing is with the commands drop-down. /cc is the most used bot command — it's used even more than /assign. Do people here actually know what /cc does? If you don't: it turns out it automatically requests a review from somebody.
B: Let's see here. One of the commands we put in place recently is this idea of /hold. It applies the do-not-merge label to your pull request, and you can remove it with /hold cancel. It's something we have been using in testing for quite a bit. You can see, if I go to the project infra group, the shape of the graph is roughly the same.
B: We've been using it most often to implement a workflow where the pull request author submits their thing, but then wants to hold it back — because they want to be the person to actually merge the pull request, or they want to be the person to actually deploy it. We found it's a really great way to implement sort of a social construct of a workflow.
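As a sketch of that workflow, the comment sequence on a pull request might look like this (the Prow /hold command applies the do-not-merge/hold label; the situation and phrasing here are illustrative):

```text
# PR author, right after opening the pull request:
/hold
# I'd like to be the one to merge this once it's reviewed.

# Reviewer approves; the hold keeps the merge automation from acting:
/lgtm

# Author, when ready to let the automation merge it:
/hold cancel
```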
B: It saves having to put all this really crazy automation over it. I've also done things in the past where I hold a PR and then tell somebody else why I'm holding it — I say, when you want to merge this yourself, please remove the hold. One personal nit I have: we can see the growth of the use of the /lgtm command — lgtm'd PRs.
B: By command-clicking, instead of just clicking here, I can also see the growth of the approve command. When we go back to viewing all of the repos, you can see that it looks like we're really not approving that much compared to what we're lgtm'ing. This is weird. It turns out that a lot of people who have approval power send /lgtm, and the bots know who they are based on their presence in OWNERS files and automatically apply the approved label on their behalf.
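For context, an OWNERS file is a small YAML file checked into a repo directory that lists who can review and approve changes under it; the usernames below are hypothetical, but this is the shape the bots read:

```yaml
# OWNERS — placed in a repo directory; read by the review automation.
reviewers:   # people the bot may suggest or assign for review
  - alice
  - bob
approvers:   # people whose approval applies the approved label
  - carol
```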
B: But devstats isn't smart enough to collect this knowledge, so it really throws off our approvers and reviewers graphs. The only way we know of counting unique approvers is whether or not somebody used a /approve command to get a pull request pushed through. Some other spikes of interest you might have experienced yourself: use of the priority command to apply a priority label — guess which of our most recent releases required people to apply the priority/critical-urgent label in order to get code merged — and the lifecycle label.
B: So this to me would be evidence that, although some of you might find the bot really kind of annoying in the number of issues it's pinging, it does seem to be incentivizing some people to actually close issues that they haven't looked at in a very long time. I know I can speak on behalf of SIG Testing: I've been going through and triaging all the issues that have our label, or any that notify me. I think that's all that I have time for. Thank you for letting me do the graph of the week.
E: Yeah, okay, so just a quick update from SIG Instrumentation — I think it's actually going to be relatively quick. I'm Frederic, I work at CoreOS, and basically I work on Prometheus and Kubernetes and everything in between, and that's also what one of our updates is about. So, as part of SIG Instrumentation, we maintain this agent for Kubernetes which basically converts all Kubernetes objects to metrics. So, for example, you have your deployments, and your deployments have replicas in the spec and available replicas in the status.
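To illustrate — a minimal sketch; exact label sets vary by version, and `my-app` is a hypothetical deployment name — kube-state-metrics exposes that deployment state roughly like this in the Prometheus exposition format:

```text
kube_deployment_spec_replicas{namespace="default",deployment="my-app"} 3
kube_deployment_status_replicas_available{namespace="default",deployment="my-app"} 2
```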
E: kube-state-metrics then converts all of those to metrics, and then you can alert on those, graph those, visualize them, etc. And for kube-state-metrics, in the past cycle we actually did a couple of major releases, and actually released version 1.0. So that was huge, and there's a lot of traction on it, so that's great. That was one part. I'm not sure when the last community update was, but we have also been developing the core and custom metrics APIs.
E: These exist in order to formally define APIs for metrics in Kubernetes. The core metrics are things that all workloads have — so CPU, memory, file system — and the custom metrics can really be anything arbitrary, like queue length, or really anything that your monitoring system can capture. Both of those APIs — actually already starting in 1.8 — are in beta, so do go ahead and check them out.
E: Bear in mind that those APIs, although they're in beta, are only the specification — they are aggregated API servers — so their respective implementations may not be in the same state. For example, for the core metrics there's actually a pretty stable implementation that we refer to as the metrics server, which just collects these metrics from the kubelet's stats API. For custom metrics, there needs to be an adapter for whichever monitoring system you use, and as part of SIG Instrumentation, that's something else one of our members has been working on.
E
Sol
us
from
Red
Hat
he's
been
working
on
a
custom,
metrics
API
implementation
for
Prometheus,
so
Prometheus
is
an
open-source
monitoring
system
if
you're
not
familiar
with
it.
There's
super
popular
for
monitoring
kubernetes
with
it,
and
that's
why
it
was
the
natural
first
choice
to
implement
custom
metrics
that
have
done
I,
believe
there's
also
one
for
stackdriver
I.
Don't
think
there
is
any
other
implementation
so
far,
but
do
check
that
out
as
well.
E: So basically, what's cool about this is that now we can autoscale on arbitrary metrics, and this is actually already implemented in the HPA by SIG Autoscaling — I believe Solly was working on that as well. So now you can autoscale on any metric that you can imagine, as long as you are collecting it with Prometheus. And something else that we are sort of in the middle of doing right now — we've already started, but there's still some work left to do — is that we're in the process of phasing out Heapster.
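As a sketch of what that enables — assuming a custom metrics adapter such as the Prometheus one is installed, and using a hypothetical queue_length metric and workload name — a HorizontalPodAutoscaler can target an arbitrary per-pod metric:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker          # hypothetical workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                  # per-pod custom metric, served by the adapter
    pods:
      metricName: queue_length
      targetAverageValue: "30"  # scale until pods average 30 queued items
```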
E: There were a couple of architectural problems with Heapster, so the people who had previously maintained Heapster came into SIG Instrumentation, and that's essentially why we were developing the core and custom metrics APIs — so that we eventually just have this definition of APIs, and then there can be providers for them. In that sense, we won't have the limitations that we have today with Heapster anymore. I won't go into detail about that; if you're interested, SIG Instrumentation meets every Thursday at 7:00 — no, 6:30 p.m.
A: A PSA: there is a thread on kubernetes-dev that I've been socializing with a number of other SIGs, and it's been kind of inspired by Tim Hockin's KubeCon talk with regards to how we do releases. It's just a very simple, modest proposal which should be discussed in SIG Release. If you are interested, please read that thread and please come talk to the folks at SIG Release, and hopefully we can start to sort out what folks want to do, eventually, in the long term.
C: If you haven't heard, there's an architectural vulnerability in pretty much every CPU made since, like, '95 — the Meltdown and Spectre attacks, affecting Intel, ARM, AMD. There's tons of information about it on the web; it abuses speculative execution, which I won't go into here, but check with your cloud providers — most of them have already issued announcements and have mitigated it. There is meltdownattack.com if you want to go deep-diving into more info; start there. Office hours is back January 17th; there's a link in the agenda notes here. Group mentoring cohort number one kicked off today.
C: This one — take it away.
F: Great, thank you. So this is going to be a different meeting than the community meeting. I highly encourage anybody who's interested in this process to stay, because the release process definitely affects everybody who's here, but if you want to drop off, I understand. So, just really quickly, I'm going to give my spiel: the retrospective is intended to find areas where we can improve over time, and so the focus is really on identifying solutions, and not necessarily on the problems per se.
F
So
without
further
ado,
I'm
gonna
go
ahead
and
dig
into
the
last
section
of
the
retrospective,
which
is
what
could
have
gone
better
in
1.9,
because
this
is
how
we
help
identify
things
for
improvement
specifically,
and
there
is
a
part
of
the
document
that,
if
you
scroll
down
and
I,
will
put
a
link
in
the
chat
here.
So
everybody
has
it
where
it
says,
end
part
1
and
begin
part
2
below
and
what
we'll
do
is
start
off
with
Aaron,
Crick
and
Berger.
Who
has
the
first
comment
there
so
I.
B: We did burndown meetings, which started every other day — sorry, they started three times a week, and then we were graduating to every day. During the 1.9 release we had a doc — sorry, during the 1.8 release we made a doc that looked like this, with green/red/yellow statuses and metrics with little indicators that said whether or not they went up or down. This was maintained by humans. I adopted the same pattern for 1.10.
B: Assuming this cycle goes the same way, we're all going to do this again. But as I'm scrolling down here, you can see that many of the statuses were just sitting there — red, yellow, green. So for somebody going back to try and understand, historically, what happened during our burndown, there's very little information in this doc that's actually been kept up to date.
B: Were people aware that this doc existed, and did people find this doc useful? That would kind of be my question — you don't have to answer it now. It's just a concern that I have, which I'm raising because, unless you were really living and breathing and participating in every burndown meeting, it was difficult to get a zeitgeist of where we were.
B: Which tests were passing? I'm going to hold myself to it — I'm going to try doing a live demo right now and see if the upgrade tests are... oh my goodness, they are actually all kind of passing. Look at that. I mean, they're flaky, right? But — well, alright, I'm not sure — hang on — okay, well, there's some green in there. This is our release signal. It really drives me kind of bonkers that, though people do seemingly pay attention to this, we have historically continued to not block releases on these things.
B: We continue to ship with tests failing, and I really wish that we would block; otherwise, what's the point of having a signal in the first place? The next thing is on me as well — I'm just blowing through these — a thing I was a little noisy about was the fact that the GKE-related tests weren't really getting a lot of attention.
B
So,
if
I
look
at
GK
email,
they
are
in
fact
pretty
green
looking
compared
to
what
they
used
to
be
so
I.
Don't
know
if
this
was
me
poking
enough
people,
the
sticks
inside
of
the
core,
to
get
some
transparency
or
accountability
or
visibility.
Making
sure
that
GK
stuff
was
passing
because,
essentially,
if
you
want,
if
you
as
a
cloud
provider,
want
your
your
version
of
kubernetes
to
block
the
release
of
kubernetes,
it
is
on
you
to
make
sure
that
you
have
the
staff
on
hand
to
actually
fix
the
problems
that
are
blocking
kubernetes.
B: It does look like GKE has heard this signal, which is great, and I'm happy to continue the discussion on how we can do this in a more transparent manner. During the release, a lot of the bugs I filed just kind of magically solved themselves — although, to many people's credit, people did actually dig in on some of these issues, and some of them were difficult and hairy, and I really appreciate the effort that went into solving them. It's kind of a question of how we can keep this going incrementally, rather than all at release time.
H: Right — no, we're on 1.9, sorry. So, some things went really well: we added some new automation around handling PRs in 1.9, and that worked really well to eliminate some manual steps. On those terms, I'd like to actually complete that automation in 1.10, although there may be technical obstacles to that, because some things sound easy to say but are not actually easy to implement. For example, we're now requiring PRs to be tagged with the issues they resolve, and ideally the bot should take care of that.
H: 1.9 had a lot of new automation around how issues and the related PRs are handled, and I don't feel that the majority of our contributors are actually up to date on how all of this works now. I certainly had quite a few cases — I would say I spent as much time reminding people which bot commands they needed to send as I did reminding them of things that actually needed to be fixed.
H: And the one other sort of missing piece in the automation is being able to assign things to milestones without direct, you know, repo control. The reason why that's a major issue is that we did have a couple of SIGs where the only people with repo rights in the SIG were not available during the two weeks before the release, and so this became a major obstacle to getting things wound up, even though the code to fix the issue already existed.
H: The remaining thing — and this is sort of a major issue for the future — is that, as we look at farming things out to other repos, whether we go full repo federation or not, we already have a couple of things that are in other repos, and the things in other repos do not follow k/k standards around issue timing or tagging or reporting at all.
H: As an example, I never had any clear idea of what was going on with kubeadm issues, because they have their own sort of internal standards for how they track those issues for the release. The issues all got resolved, but I didn't know that that was going to happen, because I didn't have any way to figure out their status. So I do really feel that, as we push required components of Kubernetes out to other repos, those other repos need to follow the same standards and be part of the same automation for issues.
F: Sorry, I was starting to type and talk at the same time. Thank you, Josh, so much for those. As we mentioned earlier in the community meeting, you're going to be filling this role again, so hopefully we'll identify these things and target them for fixes this time around. And if you're still in this meeting and you're interested in shadowing Josh and learning more about this, please contact myself or Josh to get lined up for that. Jennifer Rondeau, are you in this meeting now?
F: SIG Docs is moving toward representation in more SIGs to help out with this issue, and basically this is the whole strategy we have around trying to embed facilitation roles in SIGs, as opposed to asking SIGs to create those roles internally, and we'll follow that in 1.10. I still have my concerns that the release notes process is broken and incredibly difficult to manage, so we're going to put all hands on deck to try and improve that process. The next items there are mine: getting acceptance on process changes is extremely difficult.
F
This
was
evidenced
by
my
confusion,
inspiring
for
a
end
of
the
cap
and
feature
process
that
the
contributor
summit
I
walked
into
the
room
feeling
like
most
people,
had
at
least
some
idea
that
kept
for
her
sort
of
in
progress
and
that
the
the
feature
process
around
those
tips
was
intended
to
make
them
actionable,
and
what
I
did
was
lack
a
giant
hornet's
nest
of
confusion
and
people
were
like
what
is
a
cap?
What
is
across
what
is
the
feature
process?
Why
is
this
changing?
F: What are you doing? And so I learned that this is pretty hard to do at the scale we're at right now. This is something that we're seeing with Tim Hockin's proposal around changing to three release cycles per year instead of four. For these big things, we need to figure out how to talk about them as a community and get some sort of consensus, in a way that people understand and know how to follow.
F
The
right
process-
my
last
thing
here-
was
contributors
having
to
know
way
too
much
of
the
release
process
in
order
to
interact
with
it
I.
If
I'm
a
contributor,
I
really
don't
want
to
be
focused
on
administrivia.
That's
not
a
good
use
of
my
time
as
a
good
contributor.
F
Technical
aids
and
SIG's,
radiate
this
information
out
and
handle
the
adding
of
items
to
milestones,
we're
adding
the
right
labels
and
whatnot
instead
of
having
contributors
who
are
not
in
the
communities
or
try
and
forget
us
out.
So
the
bada
is
a
great
place
to
start,
but
we
really
do
need
humans
telling
other
humans
the
right
way
to
do
this,
because
it's
the
project
right
now
depends
on
these
BOTS
and
forced
labels
to
even
function.
F: So if they're not in place, we're going to be in real trouble, and if the release cadence does get longer, this is just going to be so much worse. So I'm imploring SIG leaders: if you are contributing features and code and fixes to the codebase, please make sure that the people doing work under the auspices of the SIG know what they need to do in order to label these properly and get reviews and all that stuff. So that is my request.
B: Maybe just a tie-in to Tim's three-releases thing: we could have maybe not done a release that spans the Thanksgiving holiday, which took out a large chunk of the Pacific-time-zone people necessary for this, and KubeCon and the contributor summit — those definitely drained our productivity quite a bit.
F
Okay,
that's
not
the
first
feedback
that
we've
gotten
about
that.
There
is
a
code
slush
in
the
timeline
for
110,
okay,
with
documentation
associated
with
it
I
it's
one
of
those
things
that
there's
not
a
specific
high
level
enforcement
around
it
per
se.
It's
more
that
the
the
release
team
means
that
time
to
start
stabilizing
things
out,
I
I
think
we
should
take
a
look
at
that
and
see
if
we
still
need
to
do
that.
B: We've done a code slush for every release prior that I've participated in, as far as I remember. It was just a compressed timeline, and I think this may have been the first time we used an additional label, priority/critical-urgent, instead of just requiring that pull requests and issues were part of the milestone corresponding to the release we were slushing or freezing for. But all of that automation is documented in the release schedule — like, all those requirements are documented in the release schedule.
F: Yes, and that is true of 1.10 as well. Yeah, I want to second that — thank you, Anthony, for codifying a lot of these things, because there's been a lot of tribal knowledge around this, and the more we document, the better. So thank you very much for that. To move on: is there anything else that we want to cover here?
G: So this item was added a bit before we had figured out what we're going to do, 1.10-wise, in terms of the features. My concern is about the early interest in the release in terms of the features, because we have two overlapping proposals. So, how should we handle features in this release? We have the existing — I don't want to call it legacy, but it's a well-known — process, familiar to almost everybody who has ever had a chance to develop a new feature for Kubernetes.
G
So
it's
deeply
documented
in
the
features
ripple
in
community
/
features
report
went
but
all
their
features
tracking
happens,
girl.
The
second
one
is
the
cat
prostitute
that
has
been
presented
a
few
months
ago
and
actively
discussed
during
the
last
few
months
during
communities,
contributor
summit
that
keep
going
itself
and
so
on.
So
we
have
some
six
four
examples
across
the
lifecycle
who
is
awaiting
the
cap
process
as
their
brand
new
process
for
handling
their
features
for
kubernetes.
G
At
the
same
time,
we
have
other
six
and
most
of
the
six
are
still
following
their
existing
features
process.
So
my
my
request
here
is
to
codify
what
we
exactly
do
with
the
features
in
this
release
and
I
would
I
would
follow
the
classic
of
using
feature
three
or
for
this
release
as
well
as
we
did
it
before
for
for
the
previous
releases.
G: Those are the releases that happened in the last year. But if we have any people, any groups or any SIGs who would like to try the new process, that should also be codified, and I feel it's natural that over the next releases we will figure out the best way of developing features for Kubernetes that satisfies the most people in the community.
F: Great — thanks, Ihor. And yeah, my second point after that in the retrospective document is basically that we just need to find a way to pilot these things, and I think that's probably — you know, when we talk about getting consensus around decisions, this may be the way we do it: actually just try things, and instead of trying to tear down the old, we create a new thing that is so compelling that people gravitate toward it. I think that's probably going to be the right way to do this.
H: Well, just — yeah, I mean, for example, one of the things that you do for issue wrangling, etc., before code freeze is look at things that are listed in the features repo and make sure that there are related issues and PRs for those. And so, if some of the expected features — or all of the expected features — for the release are coming from the KEP process, then we just want to know how to find that. Yes.
F: Dims, are you on this call? I don't think you are, but his comment about needing stronger code ownership and liaisons — that is something that is really front and center right now. As well: we need to increase our density of reviewers, because we don't have enough by a long shot. I'm not sure what we can do about that per se, but we need to continue moving down that path.
F
Jennifer
Rondo
had
embedded
writing
X
expertise
in
SIG's
for
Docs
and
release
notes.
This
again
is
about
that
facilitation
concept,
about
empowering
SIG's
to
do
their
best
work
by
taking
over
some
of
the
administrivia
for
them
and
I.
Think
that
having
doc
writers
who
visit
sig
meetings
and
help
construct
those
release,
notes
and
whatnot
is
going
to
be
incredibly
helpful.
So
hopefully
we
can
do
that
DIMMs,
we
should
branch
early.
We
pay
a
lot
of
cost
in
blocking
master
for
a
long
time
that
has
not,
unfortunately,
changed
in
the
schedule
as
it
is.
F
Lastly,
thorough
and
there's
a
lot
here,
holy
cow,
so
my
thing
was
start
everything
earlier,
which,
as
110
rallied
I
have
done.
I
have
gotten
the
schedule
and
team
in
place
as
soon
as
humanly
possible
and
hopefully
we'll
continue
that
track
over
communicating
about
status
until
people
are
fed
up
with
it.
F: For documentation — this is interesting — this is really about clarifying what kinds of docs are needed. There's release notes, obviously, as one form of documentation, but there's also usability and user-facing documentation. So that's something we definitely need to talk about — how we differentiate those — because right now the only place where you can tell that something needs user-facing docs is in a feature tracking spreadsheet, which is not a good place to do that. It's not visible.
G: We have to automate it. The biggest problem with it is that it's not automated — it's maintained manually — and sometimes it's not in sync with the actual state. That said, so many people are excited about having this form of visualization, so I don't want to say that it's terribly bad; but again, the biggest problem is that it can be a bit outdated.
B: I would greatly welcome any and all feedback on how we could better automate the collection of release notes and associating them with pull requests and issues. A lot of the feedback I have gotten right now — and I have suggested that we remove direct write access to the repo from people — is that many of them find themselves editing pull request descriptions and titles and release-note blocks, and fixing things, all in the name of making better release notes. It seems a little too manual.
B: How do we have a coherent view of what this release is, while simultaneously letting everything evolve from the bottom up? It's a little difficult, so I don't think we'll ever be able to get away from humans and good writers and copy editing and stuff like that, but I'm super open to suggestions on how to make it easier to triage stuff at the start.
F: Josh, you've got a couple here. You mentioned automation completion already, and actually the bot requirements — those are sort of covered.
B: What I'm signing up to do is that, after feature freeze, we have somebody actually write up a blog post titled "a preview of what we're working on for 1.10." Based on all of the feature issues, we should know what all the large chunks of work are that each SIG has signed up for. It's not a promise or a guarantee that it will land, but it will let people know — if they wanted to beta-test Kubernetes 1.10, or even alpha-test it — why they would want to do so, and what would be coming down the pipeline.
B: What can they plan on in the next couple of months? To me, such a deliverable is the entire point of having a feature freeze, because, unfortunately, the reality of today is that code is going to get in however it's going to get in — we can't seem to block that — but we can at least start communicating very early on what we think we're trying to do. So, you know, I really hope that — I think we've renamed the role from marketing to communications —
B: I hope that the communications person can step up and fill this gap, because I think it would be greatly appreciated. Personally, every time I bring it up: we cut alphas, and we cut betas, with failing tests. I think that doesn't make them really appealing — I know I certainly wouldn't want to try out something that's failing tests from the project that made it.
B
Want
to
bother
doing
that,
like
I,
don't
know
what's
coming
down
the
pipeline,
so
hopefully
that
solves
at
least
one
part
of
that
as
part
of
is
the
cig
testing
lead
I'll
say
that
we
are
working
on
rolling
out.
This
thing
called
tied
to
replace
Munch
github
tied
can
be
pointed
at
an
entire
github
organization,
instead
of
pointed
at
individual
repos,
which
means
that
gradually
speaking,
every
repo
in
the
kubernetes
organization
will
have
the
exact
same
set
of
automation
and
tools
available
to
it.
B: It can additionally be configured for individual repositories, but our goal is to have as much consistency as possible across all of the kubernetes repos. So this means the use of OWNERS files; this means approve and lgtm; this means merges via Tide. The Tide UI — I may have linked you to it, and I've discussed this on the kubernetes-dev mailing list — we've now turned it into something where you can click on the label query, the GitHub query, to see exactly what pull requests Tide is looking at to attempt to test and merge.
B: It should give us significantly faster velocity and more consistency. We're planning on doing this, at a bare minimum, for all of kubernetes and all of kubernetes-incubator. We're also going to be improving Testgrid to hopefully automatically link to — if not create — issues for failing tests. So, anytime you see a red row: on the left-hand side right now there's sometimes a link called "changes" that you can click on to take you to a GitHub diff, where you can maybe figure out what caused the test to fail.
B: We're also going to do the same thing for issues, because I, as a human being, was having to go create a GitHub issue for a failing test, then assign a bunch of labels and stuff, and then go nag the appropriate SIG to make sure that somebody knew their stuff was broken and that they should go fix it. That's totally something that we can automate, so hopefully we'll get some of the way there with Testgrid.
B: This quarter, as part of the steering committee, to help take care of "it's unclear who owns what," I'm going to take a best guess at which SIG owns which repo and record that in a machine-consumable way via the sigs.yaml file in the community repo, and update some of our tooling to act based on that. Look for proposals on how I'm going to do that coming down the line.
B
As
part
of
my
effort
to
revoke
direct
access
to
kubernetes
I
know
that
applying
them
stone
is
something
that
some
people
in
the
release
team
need
to
do
as
part
of
their
job
for
triage,
so
we'll
be
adding
a
slash
milestone.
Commune's
to
you,
proud
to
join
up
with
some
of
those
other
bot
commands.
I
was
showing
earlier.
It's
all.
I
got
great.
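Once that lands, the release team would set a milestone from an issue or PR comment the same way as the other Prow commands shown earlier (the milestone name here is illustrative):

```text
/milestone v1.10    # sets the v1.10 milestone on this issue or PR
/milestone clear    # removes it again
```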
G: I'd like to sync with Natasha, who is the communications lead for this release, on the features blog post — like the pre-release features blog post we've done before — and I totally support these ideas. Please add me as an action item on this. So yeah, that's a great suggestion.
F
Thank
you,
so
much
no
worries,
so
we
are
done
with
this
retro
I
appreciate
everybody's
time.
So
much
thank
you
and
have
a
great
new
year.
If
you're
interested
in
the
joining
the
110
release
team
hit
me
up
in
slack
G
to
Mars
and
I
will
see
you
all
in
a
week.