A: Hello, everyone: this is the September 2nd edition of the Kubernetes GitHub administration subproject meeting. We have a bunch of things on the agenda today. I'll start by mentioning that we follow the Kubernetes and CNCF codes of conduct; please be excellent to each other. Nikhita, you have the first thing on the agenda.
B: Yeah, so I just wanted to float the idea of a rotation program among the GitHub admins, and probably even the new membership coordinators, for handling issues that come up in the k/org repo.
B: I think it's mostly been me and Bob handling a bunch of it, and I have been slacking off on it recently, so it's mostly been Bob looking at it, and that's not fair to him, and it's not sustainable either. While talking about this, I realized that I should probably come up with some ideas on how this rotation program would work, but yeah. I'm curious: first of all, do you all think this is a good idea, and what are your thoughts on how we can roll this out?
C: I mean, I'm in favor of it for sure, and I don't know that I particularly need a super technical solution to figure out how the rotation would be implemented. I would be fine if we created a shared calendar that GitHub admins had write access to, where we could just schedule ourselves to be on a, you know, weekly-rotation sort of deal, and swap out as needed when life events come up.
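The weekly rotation described above can be sketched with a few lines of Python. This is just an illustrative sketch: the roster names and start date are hypothetical placeholders, and the real schedule would live in the shared calendar rather than in code.

```python
from datetime import date, timedelta
from itertools import cycle, islice

def weekly_rotation(admins, start, weeks):
    """Yield (week_start_date, admin) pairs, cycling through the admin roster."""
    for i, admin in enumerate(islice(cycle(admins), weeks)):
        yield start + timedelta(weeks=i), admin

# Hypothetical roster and start date, purely for illustration.
schedule = list(weekly_rotation(["admin-a", "admin-b", "admin-c"],
                                date(2021, 9, 6), 5))
```

Swapping out for life events, as mentioned above, would just mean editing the calendar entry; the cycle only provides the default assignment.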
C: I will make a mea culpa here: I agree with Nikhita's assessment that it's probably been Nikhita and Bob doing the majority of the administrative work. I have, selfishly, swept through open PRs and open issues when I look at that repo, but I generally only look at it when somebody or some project I know needs help right now, so it's been best-effort cleanup for me.
B: So it looks like we're all plus-one. I'll work on rolling this out and documenting it and getting this out the door.
A: Yep. The other thought is...
D: Yeah, for what it's worth, I think they'd also be in favor of a schedule, because right now it's just kind of up to whoever checks it to then take care of the stuff.
A: Yeah, and I know all three of them are still active in the project. It's just: are you active, like, actually looking at the memberships coming in?
A: Thanks for that, Nikhita. Madhav, looks like you have the next thing on the agenda.
E: Yeah, hi. What I wanted to bring attention to once again: I was working on the migration of kind/design to kind/feature on at least kubernetes/kubernetes, and there were a few really interesting points that Aaron brought up with respect to label_sync's behavior if it were done in the current way. So, a little bit of context on what's happening right now.
E: The current scenario is that we remove kind/design from the default section in label_sync, meaning that any new repositories that are added will not, by default, get a kind/design label; it will have to be an opt-in for any and all repositories added after this change, if it's done this way. The second interesting point that Aaron brought up was that, if we do it this way, then you would have a persistent diff between the repositories that already have kind/design...
E: So let's say Cluster API or kubeadm, for example: they already have kind/design, but the source of intent, that is, the labels.yaml file, wouldn't have kind/design in it unless it's opted in explicitly. So one question we need to ask is: is this behavior acceptable? The second question we need to ask is: if not, then would the way forward be to modify label_sync in some way, so that we can add whitelisting capabilities of some sort, for example?
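For context, the opt-in shape being discussed would look roughly like this in label_sync's labels.yaml, which has a default section applied to every repo plus per-repo overrides. This is a simplified sketch, not the file's exact contents; the colors and descriptions here are placeholders.

```yaml
default:
  labels:
    # kind/design removed from here, so newly added repos no longer get it
    - name: kind/feature
      color: c7def8
      description: Categorizes issue or PR as related to a new feature.
repos:
  kubernetes/kubeadm:
    labels:
      # repos that still want kind/design opt in explicitly
      - name: kind/design
        color: c7def8
        description: Categorizes issue or PR as related to design.
```

The "persistent diff" concern above is exactly the gap between this file and repos that carry kind/design on GitHub without an explicit per-repo entry here.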
C: I mean, I just posted in chat what I feel is the TL;DR: as proposed, we'll end up in a situation where some repos will have the kind/design label specified both in GitHub and in the labels.yaml file, and then every single issue that has kind/design will keep the kind/design label. I suspect the ideal intent is that those repos that want to continue having kind/design have it specified in labels.yaml, and everywhere else...
C: ...the kind/design label is renamed to kind/feature, and it is not an option in the majority of the repos. But how we go about reconciling to that ideal state is not something the tool currently supports; it would require some sort of survey or mass communication where we get everybody's intent and then implement it. So one more human, manual, smudgy way we could do it is to actuate the changes as described.
C: So every existing one of our 240-something repos will still have kind/design as a label, and then a human will be responsible for sweeping through all of them, deleting it from the repos that don't currently use it, and then contacting the subproject owners for all of the repos that do have issues that use it and figuring out what they want to do with the label.
E: Yeah, that makes sense. On the original issue that was opened in k/community, there was a phase of feedback asking subproject owners who actually uses kind/design in their subprojects. Three of them came forward; directly, I recall controller-runtime, Cluster API, and kubeadm. But I suspect that we would have to either post on the k-dev mailing list asking for feedback about this, or, once the change is merged...
E: ...then just put out a notification, either on kubernetes.dev or, if not there, to Kubernetes contributors on Slack, and then, if folks don't come forward, that would be an assumption: you'd have to assume they are okay with that delta existing, and decide whether a GitHub admin should actually go ahead and delete it manually on that subproject.
D: So the original intent was at least to remove it from k/k, because for design issues people aren't using that label anymore, and the ones that do get tagged with it essentially fall through the cracks of triage, since all of it has shifted over to, you know, KEPs for designs.
C: There's some other, way more... the correct way of doing this would be to allow per-repo exclusion of the default set of labels, but that would require some technical changes which may or may not be gnarly, I don't know. And I think it also gets into the question of whether we are okay with having some repos exclude themselves from the general set of org-wide labels.
A: Well, and I think that's where, with the original intent, we wanted to steer away from that. We wanted to have, like, "hey, here's the default set of labels that you can expect to be everywhere", and then we can add repo-specific stuff, but we wanted to avoid that in general, and I think I still feel that way. But the kind of process that you originally mentioned there, Aaron, I think that's the one that makes the most sense to me: we can remove kind/design from the default set.
A: We can add it where we know we explicitly need it, so where subproject owners have called it out. With the API, it's easy to kind of sweep through and delete it based on: hey, are there any open issues or PRs that are using the label? If no, delete the label from the repo. But then there will be that languishing little bit of tracking down, like, okay: if you are using the label, do you still need it?
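The sweep described above could be sketched roughly as follows against the GitHub REST API. The endpoint paths are real GitHub routes, but the overall flow, token handling, and helper names are illustrative assumptions, not the admins' actual tooling.

```python
import json
import urllib.parse
import urllib.request

API = "https://api.github.com"

def open_items_with_label(owner, repo, label, token):
    """Count open issues and PRs carrying the label (the search API counts both)."""
    q = urllib.parse.quote(f'repo:{owner}/{repo} label:"{label}" is:open')
    req = urllib.request.Request(
        f"{API}/search/issues?q={q}",
        headers={"Authorization": f"token {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total_count"]

def should_delete_label(open_count):
    """Delete the label from a repo only when nothing open still uses it."""
    return open_count == 0
```

A real sweep would loop over the org's repos, call `DELETE /repos/{owner}/{repo}/labels/{name}` whenever `should_delete_label` returns True, and collect the repos that still need a subproject-owner decision, which is the "tracking down" part mentioned above.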
C: Yeah, I mean, I'll throw out one other technically correct thing, which is: labels.yaml is not fully reconciled with the state of the world. That's what's partially allowing us to get into this mess, where not all labels that each individual repo has live inside of labels.yaml, nor does labels.yaml, and the tooling that consumes it, go delete anything that's not in labels.yaml. We could also try and solve that problem, so we do have one source of truth, and then it's pretty easy to reconcile the world to that.
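Reconciling the world to a single source of truth, as suggested above, comes down to a set diff per repo. A minimal sketch, assuming labels are compared by name only (label_sync itself also tracks colors, descriptions, and renames, so this is deliberately simplified):

```python
def reconcile(desired, actual):
    """Return (to_create, to_delete) so that `actual` converges on `desired`.

    `desired` is the set of label names a repo should have per labels.yaml;
    `actual` is the set currently present on GitHub for that repo.
    """
    return desired - actual, actual - desired

# Hypothetical example inputs for one repo.
to_create, to_delete = reconcile(
    {"kind/feature", "kind/bug"},       # from labels.yaml
    {"kind/feature", "kind/design"})    # from the live repo
```

The missing piece described in the discussion is the delete half: today the tooling creates and updates labels from labels.yaml but never removes ones that aren't listed.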
C: But I know there's been work going on toward this for a while, so I would rather do the pragmatic thing to move this forward, but maybe open up some issues to talk about those other, more technically correct solutions. And I think a notification out to k-dev that "hey, this is happening" is a good first step, with a deadline.
E: Okay, so what I'll do is: I will take a few days, I'll draft an email, I'll reach out to ContribEx, like, I'll post it up on ContribEx or I'll reach out to the GitHub management channel, and we'll send out a notification with maybe, like, a week's notice. Is a week appropriate, or is a longer period needed in this case?
A: I think a week for the notification period is fine, because the intent of what we're going to do is: we're going to remove it from the default set.
A: We've added it, I'm looking at the PR, we've added it specifically to the repos that we know need it. We can also do a delete: we can add it to kubernetes/kubernetes with a deleteAfter, so that it is explicitly deleted from kubernetes/kubernetes when this is merged. And then, yeah, it's just reconciling against the rest of the repos, doing the scripting to figure out: where is it still being used?
C: Thanks for raising this, by the way. I know this has been super gnarly, and I didn't mean to throw a wrench in the works or anything; I just wanted to make sure we're all on the same page about what exactly the implications of this are. So, super appreciate it.
A: Thanks a lot. Bob, you're on the agenda for today: EasyCLA.
D: For me, like, I am honestly less concerned about the wording being in the picture, because, honestly, the CLA stuff is not Kubernetes; it's out of our hands. Anything that gets them reaching out to the CNCF to work through these issues, instead of coming to us about various little CLA problems and then us literally just telling them to talk to the CNCF, I think is a good thing. And then the fact that it also explicitly lists which commit is the problem is also, like, super, super, super handy; we have routinely run into this problem in docs when trying to update the next release branch.
C: But so, refresh my memory: I think the ideal thing I was asking for, and there was maybe a PR about rolling out a change to the CLA plugin to listen to one of the two contexts, like, I was really hoping we could keep our existing CLA check, and we could turn on the new CLA check but non-blocking, and then I would expect it would be the CNCF's responsibility to generate a report that scrapes all of the PRs to see...
D: The one problem is, like, right now it's just batch updates to the EasyCLA system, from the old CLA to the new CLA. So if you've signed the old CLA recently and EasyCLA is on, you might get a warning saying, like, "hey, you haven't signed it", because they haven't done their batch update yet.
C: But the check is not listed as a required check, is that correct? Like, it shows up as a red X, but it doesn't actually prevent merge.
C: Well, that's still my ideal. The alternative sounds like we just flip it and go along for the bumpy ride, and if that is our only recourse, then we should do so while being mindful of deadlines that might cause a flurry of people to open a flurry of PRs, like enhancements freeze, yeah.
D: Even though, like, the branch protection or whatever rules are set up so that it's not a required check, there's an easy way we can test this: let's just create a new account and open a dummy PR with the CLA signed in the old CLA system but not in the new one.
D: They will coordinate with us on any time frame to do the sync, and we did go through the list of about 50 contributors that were active that did not exist in the new system; that should theoretically be reconciled already. Like, they gave me a list of, I think, around 200 people that didn't exist in the new system, with their GitHub IDs, and I basically found who was active and just had to track down and sync their LF IDs.
C: I was just going to say, I feel like the next steps here are: let's get certainty on whether enabling it non-blocking is possible; that can help us make a more informed decision about how to roll out. And then, if we have to do a disruptive rollout, I think we choose a higher-traffic repo, but not kubernetes/kubernetes, gain more confidence that it's working, and then we can flip over for everything.
A: I'm wondering, like, I know we're over time, but I'm just wondering if we need to be that prescriptive.
C: We end up with a bunch of PRs that have a blocking check that somebody will have to sweep through and manually fix if we revert back to the previous system, so we'll have to make sure we've got a script or something that is able to mass-comment on PRs that are found by a specific context failing.
C: Thank you for that much more positive outlook on why we might do that.
A: Okay, so we are over time. Is there anything else that we want to talk about right now, about this or anything else?
C: I don't... there's no way we can drive it to consensus now, but I just want to throw out there another...
C: ...disruptive thing, which would be pushing forward on renaming branches from master to main. I haven't had the time to sit down and think through the process. I feel like the main blocker is for us to figure out how to most effectively delegate and measure completion of this; I think all of the technical blockers are basically gone.
C: It's mostly the social blockers, and some of the tricky, tangled, hard-coded things for our scarier repos like kubernetes/kubernetes, where I'm almost at the point where I would say no to kubernetes/kubernetes renaming its branch, but maybe everybody else would disagree with me on that. I would just love to see more progress on it than we have; it's somewhere...
C: ...it's like 20-something repos, maybe slightly less than that, that have migrated, and we're totally at the point where we could blast out a call to action. The only reason I've held off on doing that is because I feel like tying together a tracking issue to the repo in question... I've not been able to do that. We could, like, mass-create tracking issues against all the repos and say "okay, go", but I think we can talk about this asynchronously.
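For reference, GitHub does expose a branch-rename endpoint that a call-to-action like the one above could point repo owners at. A sketch of building the request; the repo used here and the token handling are hypothetical, and a real run would need a token with admin rights on the repo:

```python
import json
import urllib.request

def rename_branch_request(owner, repo, old="master", new="main",
                          token="YOUR_TOKEN"):
    """Build the POST request for GitHub's branch-rename endpoint.

    Corresponds to POST /repos/{owner}/{repo}/branches/{branch}/rename.
    The request is constructed but not sent here.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/{old}/rename"
    return urllib.request.Request(
        url,
        data=json.dumps({"new_name": new}).encode(),
        headers={"Authorization": f"token {token}",
                 "Accept": "application/vnd.github.v3+json"},
        method="POST")

req = rename_branch_request("kubernetes", "community")
```

GitHub's rename also retargets open PRs and branch protection automatically, which is part of why the technical blockers are "basically gone"; the hard-coded references in CI and docs are the part that still needs human sweeps.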
A: Well, I'll call it there for these, and everything else we'll follow up on in the GitHub management public channel on the Kubernetes Slack. Thank you, everybody, for attending. We will see you next month. Cheers.