Description
APIServer Network Proxy Adoption in Azure
Cloud-Provider-GCP fix
Plan to accelerate graduation of leader migration
Chair/TL succession planning
A: So the first item on the agenda has to do with the apiserver network proxy. Rodrigo, would you like to go ahead and lead out?
B: Yes, thank you. So we were trying to adopt the apiserver network proxy in production, but we're facing several issues. I wanted to check if I can ask other folks how they are deploying it, so we can learn from you, and we will share what we learned along the way too. Also, I'm quite sure we found some leaks; we'll open an issue, and hopefully a PR, soon. An issue for sure; the PR is getting a little bit more tricky.
B: So can I ask others: how are you deploying the apiserver network proxy? Like how much memory, which mode (HTTP-Connect or gRPC), and whether you're using unix sockets? One of the leaks that we are fighting with might only happen with unix sockets in gRPC mode, and I wonder if folks are maybe using other combinations.
A: I'm probably the best person to answer some of this, and I've got two or three different answers. GKE specifically uses unix domain sockets and gRPC between the kube-apiserver and the network proxy server, and then plain gRPC over the network, with tokens, between the network proxy server and the network proxy agent. And we've done a fair amount of rollout.
A: We've got quite a bit of the fleet on that and we do various testing. There are folks who are not here who are the people that actually run it day to day, but I work with them pretty closely and they haven't noticed anything. But feel free to join... well, let me start with this: I would strongly suggest adding yourself to the network proxy Slack channel, and we can go into a lot of the details there.
A: The other thing I will say is that the open source reference implementation on GCP (so, the non-managed solution) is actually run using gRPC and unix domain sockets on all of the open source Kubernetes testing; that's the default configuration. But there is a bit you can flip in the kube-up script, and we do flip it for at least one of the prow jobs so that, instead of using UDS, gRPC, and tokens, it uses, I think, HTTP-Connect and mTLS.
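For reference, the transport choice being described here is configured on the kube-apiserver through its egress selector configuration file, passed via --egress-selector-config-file. A minimal sketch of the UDS-plus-gRPC default, following the shape of the public Kubernetes docs; the socket path is illustrative:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    # gRPC to the konnectivity (network proxy) server over a unix domain socket.
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```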
A: I will also say that I believe one of the IBM teams in Eastern Europe is also working with the network proxy, and I know they have a bug open that I have been investigating for the last few weeks.
A: There are a couple of other Google teams using it, though they use neither the open source GCP setup nor the GKE one. Chao, when he's here, is the expert on those teams; I know less about them. And then there are a few smaller companies that I know have been using it. There's a guy named Patrick who goes by Shade One.
A: I know his company uses it. I don't think it's one of the larger companies, so I don't know to what degree that helps, but there's definitely quite a bit of usage. Please feel free to reach out to me; I'm happy to help. Also, like I said, I'd strongly recommend joining the relevant Slack channel; a lot of the discussion goes on there.
B: Yeah, I'm there, and I've asked some of these questions in the past, but there wasn't much activity.
A: I'm sorry, you had a very small question and I missed it... okay, yeah. I will definitely take a look at that, and I can point you to some things.
A: We're doing the rollout. Basically, unlike everyone else, we have to make it work: it replaces SSH tunnels. It does other things as well, but the SSH tunnel code has been deleted from Kubernetes, and so the only way for Google to actually work is on the apiserver network proxy. So we are very committed to making this work.
B: Yeah, and we are using it also because SSH tunnels were super unreliable for us. So it's great.

A: I'm not going to disagree.
B: So, one thing, if I have two more minutes: there's an FD leak issue that I'm investigating, and I have a way to reproduce it with a custom program that is open source. So far I've noticed several things. The FDs that are leaking are connections; each is an FD that belongs to a unix socket.
B: That is, the connection from the API server. Those are the only leaks that we're easily reproducing, at least, and there are tons of them. So I noticed two things. One is that, when using gRPC and unix sockets, the keepalive timeout is not set; it is set when you're running mTLS with gRPC, but not when you create the gRPC server.
B: So we tried adding the time parameter with the keepalive that is specified on the command line. We don't have much testing yet, but it seems to help. I also tried the max idle time, or something like that, which gRPC takes in its keepalive parameters configuration, and that also helps. But yeah.
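A minimal sketch of the kind of server-side change being described, using grpc-go's keepalive options. The durations and the socket path are illustrative, not what the proxy ships with:

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// Listen on a unix domain socket, as the proxy server does in UDS mode.
	lis, err := net.Listen("unix", "/tmp/konnectivity-server.socket")
	if err != nil {
		log.Fatal(err)
	}

	// Time is the keepalive interval: the server pings an idle client and,
	// after Timeout, tears down the connection if the ping goes unanswered,
	// which is what lets leaked connections get cleaned up.
	// MaxConnectionIdle is the "max idle time" option mentioned above: it
	// closes connections that have had no RPCs for the given duration.
	srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		Time:              1 * time.Minute,  // illustrative value
		Timeout:           20 * time.Second, // illustrative value
		MaxConnectionIdle: 5 * time.Minute,  // illustrative value
	}))

	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```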
A: I mean, we've certainly had issues; I can try and pull them up. There are quite a few of the quote-unquote optional flags (for backward compatibility, when we add new flags they tend to be optional, because we don't want to break anyone) where it's like: yeah, we definitely set that, because it makes things a lot better. But I am not aware of the issues you're mentioning off the top of my head.
A: gRPC gets a little cranky if too much of your communication is in one direction, and we definitely have things like the keepalive; part of the reason it's there is that we were having problems when there was too much one-way communication. In fact, I still suspect the latest thing I'm looking at is going to be that. The other thing I will mention is something you may want to consider taking a look at.
A: It has never helped me, which is why I've never actually made the change, but Kubernetes uses a very old version of the gRPC library, so updating that is worth a look.
A: If it's helping you, I would suggest... I mean, you know, you're going to end up pinging me anyway. This is an open source project; it's on all of us to fix it. And let me ask one question: are you an open source contributor?
B: Yes, of course; I'm a reviewer in core, too.

A: All right. Well, are you a reviewer on the apiserver network proxy?

B: No.

A: All right, so you seem interested enough; I suggest we start making that happen. Feel free to submit PRs, I'm happy to review them, and then, once you've got a couple of PRs in, feel free to add yourself to the reviewers list, and let's start making this thing move faster.
B: Yes, I'd be happy to. I was planning to open a PR, but I wanted to test not only gRPC over unix sockets but also TCP mode, because if it's a gRPC issue, it would be weird for it to also happen with TCP, since TCP is super widely used. So I wanted to check whether that helps, and then open an issue with really concrete information.
B: One thing I wasn't sure about (maybe someone here knows; if not, I'll educate myself and read about it) is that the gRPC keepalive seems to be implemented using the HTTP/2 PING frame, but I'm not sure the keepalive makes sense when you're using unix sockets instead of TCP.
A: Awesome. Also, if you have something like a really good reproducible test case, please feel free to add it to the issue. In fact, if you can find a way to add it to our test code, even better; I'm a huge fan of improving the overall testability of the apiserver network proxy in isolation from Kubernetes.
C: Yeah, I'll try to make this quick. The cloud-provider-gcp is using the alpha SDK upstream, and we've been noticing a situation where some of our users have security permissions set that don't give them access to that alpha SDK, or rather the alpha API endpoints. So with this PR that I linked here, we're trying to see if it's amenable that we could maybe get a change into the upstream...
C: ...where, in a dual-stack situation, it could choose to use the alpha or the non-alpha API. This would just help us out with some of the problems that we're seeing. So I just wanted to bring it up here and see if maybe we can get some reviews, or just get some opinions on it.
A: I am generally in support, yeah. In fact, if we can work it out so that we have non-alpha versions of the APIs that will just generally work, I am a fan of simply not using the alpha APIs.
D: All right, sorry. So yeah, I'm going to talk about speeding up the graduation of leader migration. For those who don't really know what leader migration is about: leader migration is a mechanism that allows an HA cluster to transfer leader election between the KCM and CCM processes. The KCM and CCM hold different leader election locks and run different sets of controllers, so we cannot just upgrade: each is holding its own lock while running its own set of controllers.
D
That's
going
to
be
a
case
that
sound
controllers
that
was
moved
from
ksm
to
ccm
were
run
by
both
kc
and
ccm.
If
we
don't
have
any
immigration
or
similar
mechanisms,
so
linear
immigration
is
right
now
a
beta
feature
for
a
site
for
a
release
cycle
is,
is
now
so
I'm
here
to
propose
to
have
a
proposal
to
amend
the
graduation
criteria
to
speed
it
up
a
little
bit.
So
the
reason
for
speeding
up
is
that
we
need
to
prevent
the
forever
beta
deprecation
policy.
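For context, leader migration is driven by a small configuration file handed to both controller managers, via the --enable-leader-migration and --leader-migration-config flags. A minimal sketch in the shape of the beta config API; which controllers are assigned to the cloud-controller-manager varies by provider, so the assignments below are illustrative:

```yaml
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1beta1
# Both managers coordinate through this shared migration lease.
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
  - name: route
    component: cloud-controller-manager
  - name: service
    component: cloud-controller-manager
  - name: cloud-node-lifecycle
    component: cloud-controller-manager
```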
D: The issue is, I don't really see a consistent pattern in how SIGs deprecate beta features, so the only safe way is to not stay in beta for too long. It's been beta for two releases, so I think it's time we speed it up a little bit, just to avoid that. The second reason is that we want to speed up adoption, because some providers may want the feature to be GA before actually using it, and we don't really want that.
D
We
want
people
to
use
that
a
little
bit
sooner
so
that
we
can
speed
up
the
ccm
extraction
effort
so
reason
because,
like
every
provider
may
have
a
different
timeline
for
the
migration,
we
don't
really
want
to
say:
okay,
leader
migration,
confusion
works
in
four
entry
corporate,
because
some
providers
may
not
want
to
use
integration
for
a
long
time
and
plus
plus
the
second
point.
So
we
want
to
go
ga
faster,
not
waiting,
for
you
know
some
providers
that
are
a
little
bit
slower
than
the
others
to
adopt
it.
D: So what I'm trying to do is amend the graduation criteria, via a PR to kubernetes/enhancements, so that we only need a new test suite for it to graduate. First, we test it with open source k/k; second, we test the feature end to end with at least one out-of-tree provider. The original criteria's wording is pretty unclear, so I'm changing it to the above.
D: Please note that there's an ongoing effort on the LKG (last known good) testing within cloud-provider-gcp, so these may overlap with each other. But anyway, the point is: we need tests from k/k and from at least one other out-of-tree provider to graduate, and all the tests are about to be...
E: Go ahead; I was just going to comment. I do agree, and I would love to hear if there are any other cloud providers that are planning to test it out soon, or that we could convince to test it out soon, but I suspect it's going to be intractable for us to get all cloud providers, or even a very large group of cloud providers, to do this before it gets to GA.
E: So I think I'm in agreement with Jared on that: that probably isn't the main criterion we should focus on. We should probably focus more on testing, and if we can find any cloud providers that are going to try it out, I'm happy to work with them, but we should probably just start moving forward.
F: Yeah, I mean, we're planning on using this, and it will be fairly soon; by fairly soon, I mean before the end of this year.
A: Right, okay. I'm not even talking about... we can get into e2e tests in a minute. I mean, there are plenty of things where we have proven that something works on two providers and we only have a test on one. Webhooks, as an example: as far as I know, the last time I checked, webhooks are only tested on GCP, but we have reasonable confidence that they have worked somewhere other than GCP.
A: So this is why I turned to Nick, right. I'm not saying that we need to have an Amazon test in the open source. In fact, eventually, presumably, we're not going to have even a GCP test in open source, and we're going to have to work out how we want to deal with that.
D: Okay, so tests for a specific cloud provider may not be open source; just a side note. They may do that testing internally and report the results to us.
A: Sure. And if Nick doesn't want to show us his tests, but is willing to come and sign off on the KEP, saying "I have tested this on Amazon and it works", then we have reason to believe this is not a Google-specific fix.
E: That doesn't sound like a problem, right? By the end of the year would be in plenty of time for the next release cycle. So maybe we use that as our working model; if we can get the data, we get the data. I really appreciate it, Nick, and, you know, if there's a snag, we'll talk about it and figure it out.

F: Yep.
D: All right, so that's the point... sorry, that's a problem. Each provider, I mean each hosted Kubernetes provider, may have their own mechanism to test an HA cluster, but in open source we may not be doing realistic testing. It may be, like, a single node upgrading from the KCM, something that demonstrates the whole upgrade process does not cause downtime.
E: I agree with what you're saying. There might be some design proposals for doing the testing that would make you sufficiently confident and that sit somewhere in between, like adding full HA to kube-up. Would you be willing to review those as part of the KEP update?
C: Yeah, I'm kind of curious whether there's some way that Red Hat could contribute back our prow testing, because we are currently testing AWS and OpenStack HA migrations. We have three control plane nodes that we migrate from in-tree to out-of-tree, and we're running that in our e2e tests right now. We're planning to add GCP and Azure, I think, by the end of the year, so we're going to be testing probably five or six of these through an in-tree to out-of-tree migration.
A: Great. Honestly, irrespective of anything else, that sounds great, and it sounds like a backup plan too, if Nick has problems getting his stuff in; I'd rather have two channels trying to get the second cloud provider done than one. So thank you, that would be awesome. Yeah, unfortunately Joe has left, but I think Jared had the same question Joe did.
A: Could we build a test framework that does sort of a pseudo-upgrade across six processes, going from three to six, and making sure the right thing happened every time? I think we probably could, for at least some of it, and I think that's the sort of thing I am open to reviewing in the KEP; let me put it that way. My one and only concern with a solution like that is that part of this work is in the cloud... is in the leader...
D: So I think this can be resolved by, you know, running multiple processes, just as you mentioned, right? The existing kube-up can work; we just... we definitely need a hack.
A: Sure. The only thing I will say is that the existing kube-up only brings up a single instance of the control plane, and between the weirdnesses around etcd (the multiple etcd ports and etcd load balancers), the firewall rule changes, and various other things, I am just going to say: I think it is quite a big effort to get HA working on kube-up.
A: Now, there are alternatives. You may want to look at something like, you know, kops. I believe...

G: kops has support for bringing up GCP.
F: It does; it's got GCP and AWS. I was going to suggest kops; yeah, that's what we're doing a lot of our testing on. Definitely a good option there.
A: You know, we can prevent the forever-beta deprecation, but I don't want to do this repeatedly. So, I mean, I am in general a fan of what you're trying to do here; I just want to make sure that we don't make a compromise that we're going to be unhappy about in doing it.
D: Okay, so thank you for the point.

A: Yep, awesome. Did you have anything else you wanted to bring up on this, Jared?

C: Oh, I think I'm doing it right now.

A: Thank you. Awesome, yeah. So we can certainly continue this in the KEP updates you're talking about, and I think we can come to a good spot. The one thing I will mention is that this is obviously going to be a discussion, so we probably... like, we need...
F: I do think we should have something that has a real HA cluster, and I think we should be looking at... I think there's a metric that the KCM and CCM have that's like "has leader", or something like that. We should be making sure that there's only one leader, so that this whole thing actually works. Which, I... I don't know; I'll go look at what your test looks like.
A: If I may suggest: I like your general idea, Nick, but Jared actually did some really nice work recently. Rather than a per-controller-manager health check, Jared's added per-controller health checks. So I think it would make a lot of sense to add something that would allow us to increment a value based on the controller running, off its health check. We could actually get a count of controllers that way and validate that we only have one controller running.
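A rough sketch of the wiring being proposed: per-controller health checks plus a derived count that a test could assert on. All names, paths, and the port here are hypothetical illustrations, not the actual controller-manager code:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// registry tracks which controllers this manager is actually running.
type registry struct {
	mu      sync.Mutex
	running map[string]bool
}

func (r *registry) mark(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.running[name] = true
}

func (r *registry) count() int {
	r.mu.Lock()
	defer r.mu.Unlock()
	return len(r.running)
}

func main() {
	reg := &registry{running: map[string]bool{}}

	// Pretend these controllers were started by this manager.
	for _, name := range []string{"route", "service", "cloud-node-lifecycle"} {
		reg.mark(name)
		// One health check per controller, e.g. /healthz/route. A real check
		// would verify the controller loop is alive rather than always passing.
		http.HandleFunc("/healthz/"+name, func(w http.ResponseWriter, _ *http.Request) {
			fmt.Fprintln(w, "ok")
		})
	}

	// A count endpoint: an e2e test could scrape this from both managers and
	// assert each migrated controller is running in exactly one place.
	http.HandleFunc("/controllercount", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, reg.count())
	})

	_ = http.ListenAndServe("127.0.0.1:10999", nil)
}
```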
C: And then we exercise the master nodes, or, you know, the control plane nodes, to see that the CCMs are providing the information they're supposed to provide, and I think we have one more check to make sure that the KCM is completely turned off at that point. So we do a couple of sanity checks to make sure we're running in CCM mode, then we do kind of a soak test to make sure that there are no disruptions in the cluster, and then we do...
C: I'd have to double-check how we're checking those quantities right now. I know it does some check to see if the deployment has happened, and we have, you know, an operator that's managing these things, so it's kind of checking to make sure that the operator is doing the right thing. But I'll have to check for specifics on those, because that's a good suggestion, Walter.
A: Awesome, thank you, Jared. And also thank you for the per-controller health checks, because this is one more place where I think we may be able to get value out of them. Thank you. Awesome; did anyone have anything else on leader migration?
A: All right, so this actually touches nicely on stuff we've already talked about: SIG Cloud Provider succession for the leadership positions. You probably noticed Andrew is not here. Andrew right now is taking a little bit of time off to, you know, look after himself and his family. I'm hoping he will come back.
A: Having said that, if he does step away, or even if he doesn't and either he or I later step away, we do not have a succession plan: who is the backup?
A: I think it would be healthiest for the SIG if we actually had that set up, and in fact, once we decide who the backup is, I would strongly suggest that we start training those people up, so that they don't go sort of suddenly from nothing to the full chair role.
A: And similarly for the TL spots. Now, governance says we are not allowed to have more than one chair from the same cloud provider, and we're not allowed to have more than one TL from the same cloud provider; currently Andrew and I are both working as chair and TL.
A: So I am going to propose... I would like to see that start happening. If we are being democratic, I would suggest that if someone is interested in becoming the backup, they nominate themselves, and then we affirm that everyone in the SIG is happy having them as the backup chair. Does that seem reasonable to everyone?
F: Yeah, I would advocate for opening up backup chair and backup TL separately, but not having the constraint that they have to be different people. So if we end up with, you know, more people volunteering, there are more spots; but if we don't have a lot of volunteers, they can be filled by the same person if necessary.
A: Okay, sounds good to me. The other thing I will mention, and it's sort of funny: this was pushed by someone at Microsoft, but not Rodrigo.
A: Based on that: we have good reasons why we're limiting that access, and there are some interesting rules to it. The rules are that we're trying to make sure feature work does not go into that repo (bug fixes are fine) and that this is a fair process. So one of the things we're doing is saying that Googlers cannot, except under special circumstances... the Googler, me, cannot approve Google changes to the legacy GCP cloud provider, and the VMware chair cannot approve VMware's.
A: Awesome. So, toward that end: is there anyone who would like to become backup chair?
A: Sounds good. So I don't think we'll do the TL as well; and to Mikko's point, I don't think we do the election today. What I can do is send out the email; I'll include Nick's name as a prospective candidate and suggest that we're going to vote on this at the next cloud provider meeting, which is next Wednesday.
F: Awesome. I also just want to throw out... I mean, I know that, you know, Jared, you work for Google, but you've done a lot of work, so maybe as TL, or chair if something happens to Walter, I guess. But okay, I'd love to see Jared also involved.
A: Yeah, I'm all for that. All right, cool, sounds good. Does anyone have anything else they wanted to chat about?
A: I think I had third-row seats when they came through in, like, 1992. Amazing, yeah. Also, I just dated myself.
A: All right, cool. If no one has anything else: thank you all, and have a great day.