Description
Meeting agenda: https://docs.google.com/document/d/1aPgGRl4WewM3txrCYvkepsxLUvGdMG1EzlVfCNeV74M/edit#bookmark=id.8n1dijyikgpx
A
All right, welcome everyone. Today is Wednesday, July 26, 2023, and this is the API Server Network Proxy meeting. This meeting is held as a sub-project of Kubernetes SIGs, and as such we follow their meeting guidelines, which essentially means: treat everyone kindly, and please raise your hand if you'd like to speak. There are only three of us here today, so I guess those rules can probably be a little abbreviated.

B
Yeah, just starting yesterday, I looked at the last few PRs. I marked a few okay-to-test, and I noticed two or three are all failing with the same lint error, which seems unrelated to the changes. On one of them somebody even helpfully commented about what might be wrong with the test infra, so we just need a maintainer to follow up, and I'm happy to volunteer to be that person.

C
Yes, and I did create a PR half an hour ago regarding the update to the golangci-lint version. I linked an issue that other users have been facing with the same golangci-lint version; it's because of high memory usage. I tested on my system and it ran up to about 40 GB of memory and then got terminated, and I'm seeing the same issue in the job on the CI pipeline as well, so I've updated the version. Please take a look.

C
Sure. Also, one more thing: whenever I open a PR, I still have to wait for the test job to run. Is it possible for me to be added to the reviewers or the maintainers list?

B
That sounds right. Michael, I'm sure you know better than me; I never remember the exact details and I don't want to be inaccurate, but I think if you're marked as a reviewer, maybe you get the okay-to-test for free.

A
I think so. And Imran, please open up a PR if you'd like to add yourself to the reviewers there; I think that'd definitely be worthwhile.

C
Yeah, I'll do that. Okay, while I'm speaking: I have updated my PR for the smart readiness check. I think Walter had a comment about not disturbing the existing way the agent acted in terms of readiness, because the readiness flow makes the kubelet decide whether to restart the pod or not. So I have added this readiness check, the smart readiness check, behind a flag.

C
So if the flag is disabled, only the ping health check would be enabled, and that is pretty much exactly what we had before; and if the flag is enabled, then we have the smart readiness check. Yes, yes.

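A minimal sketch of the flag-gated behavior being described; the flag name, function, and wiring here are illustrative assumptions, not the actual PR's code:

```go
package main

import (
	"flag"
	"fmt"
)

// smartReadiness is a hypothetical flag gating the new check; by default
// the agent keeps the old behavior of a plain ping health check.
var smartReadiness = flag.Bool("smart-readiness", false,
	"gate readiness on server connectivity instead of a plain ping")

// ready reports readiness. serverConnected stands in for the agent's
// real connectivity state.
func ready(serverConnected bool) bool {
	if *smartReadiness {
		// New behavior: ready only once the agent holds a server connection.
		return serverConnected
	}
	// Old behavior: the ping check reports ready whenever the process is
	// up, regardless of connectivity.
	return true
}

func main() {
	flag.Parse()
	fmt.Println(ready(false))
}
```

With the flag off, the check degenerates to the old always-ready ping behavior, so existing deployments are unaffected.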
B
I just noticed: you had begun speaking about the agent readiness check and Walter's feedback, and then Walter joined, so you might want to start over from the beginning, just to let him hear the whole thing.

C
Okay, so I saw your comment on the readiness PR, the smart readiness check, and you mentioned that the readiness flow is how the kubelet decides whether it wants to restart the pod or not, so we might want to keep the logic the same in terms of what we did before. So I've updated the PR to have that smart readiness check behind a CLI flag.

C
So only if that is enabled would we test the readiness based on whether the agent is able to connect to the server or not. Otherwise, it is just a ping check, similarly to before. Yes, Walter.

D
I agree with most of what you're saying; I just want to make a subtle but very important comment on the naming, for anyone who's not aware. It's a naming thing: we have readiness, and we have health/liveness. Whatever happens with readiness, it's the liveness/health, which is a separate check, and that's the one that the kubelet uses to restart pods.

D
Readiness you can do what you want with; we have some ideas. Generally, most people acknowledge it's for load balancers, but it doesn't actually affect core Kubernetes functionality. Healthiness, or liveness, does, and you have to be very careful with that check; that was what my feedback was on. So I just want to make sure we're careful with healthiness/liveness.

C
Okay,
okay,
got
it
so
yeah
I
have
updated
the
Fiat.
It's
a
separate
comic.
So
please,
let
me
know
your
feedback.
A
Okay,
cool:
do
we
want
to
go
back?
Do
we
want
to
go
back
to
the
last
few
PR's
being
blocked
or
I?
Think
that
golang
CI
lint
issue
was
kind
of
like
a
known
issue
that
was
going
around
for
a
little
bit
with
the
1.20
go
update,
so
it
sounds
like
we
kind
of
we
have
a
fix
in
place
for
that
that
should
unblock
the
PRS
that
are
there
did
we
did.
We
have
anything
else
to
cover
on
that
topic.
A
Okay, so we have some next topics that were kind of held over from before. Do we want to go into any of these, like the GA requirements draft, if there is any update on that, or any update on the connection management extensibility PR? Should we cover these?

B
I
would
I
was
on
vacation
for
the
first
half
of
July
I'm,
okay,
to
skip
the
ga
requirements.
Draft.
A
Okay,
cool
I,
don't
know
if
there's
any
update
to
cover
here
on
the
connection
management,
extensible
PR.
It
looks
like
this
is
still
work
in
progress.
So
I
don't
know
Walter
did
you
have
an
update
or
anything
here.
D
I don't. I meant to get to it, and then an unfortunate set of stuff showed up in the last couple of weeks, and I've had to deal with personal issues rather than this sort of thing. But I'm hoping to get to it this Friday. I will say, though, it's worth going over the current state of things conceptually.

D
So when we talk about the connection management, what we have today is: during the handshake, the agent reaches out to the connectivity server and basically asks the connectivity server:

D
How many things should I connect to? Conceptually, it is of the belief that it's going to go through a load balancer, and so it cannot, as an agent, definitively determine what the correct things to connect to are, because the load balancer is going to throw all that out. All it can do in that scenario is connect to one. Once it's connected to one, it gets back the number of connectivity servers that it should connect to, and the ID.

D
It's
I,
don't
forget
I,
don't
remember
whose
ID
matters,
but
essentially
we'll
get.
The
IDS
will
be
transferred
between
the
agent
and
the
server,
and
then
it
will
once
it
knows
that
it
needs
more
than
one.
It
will
then
try
and
make
another
connection
if
it
gets
goes
to
a
new
connectivity
server.
A
new
connector
is
established
all
good
if
it
gets
to
a
connectivity
server,
it's
already
connected
to
then
one
end
or
the
other
and
I
off
the
top
of
my
head.
I.
D
Don't
remember
which
realizes
that
this
is
a
duplicate
connection,
we'll
break
it
and
cause
the
agent
to
keep
retrying
and
then,
depending
on
the
mode
you're
in
once
you
reach
capacity.
So
let's
say
You're
supposed
to
have
three
you
get
to
three.
You
can
configure
it
to
keep
trying
once
a
minute
whatever
period
so
that,
if,
like,
let's
say
you're
doing
an
aha
upgrade
and
you're
adding
extras
connectivity
servers,
it
can
actually
start
rolling
onto,
and
so
it
can
do
that.
D
That's how connection management works today, and it's pretty hardwired into everything.

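The flow just described (dial through the load balancer, learn a server count and a server ID from the handshake, keep only distinct servers, break duplicates and retry until saturated) might be condensed into a sketch like this; every name here is illustrative, not the actual agent code:

```go
package main

import "fmt"

// dialResult stands in for what a handshake returns: the server's unique
// ID and how many connectivity servers the agent should hold in total.
type dialResult struct {
	serverID    string
	serverCount int
}

// agent tracks which distinct servers it is connected to.
type agent struct {
	conns map[string]bool
}

// handle processes one handshake: a duplicate server ID is dropped (the
// agent simply retries through the load balancer), a new one is kept.
// It reports whether the agent still needs more connections.
func (a *agent) handle(r dialResult) (needMore bool) {
	if !a.conns[r.serverID] {
		a.conns[r.serverID] = true
	}
	return len(a.conns) < r.serverCount
}

func main() {
	a := &agent{conns: map[string]bool{}}
	// The load balancer hands out servers in whatever order it likes;
	// duplicates are expected and simply retried.
	for _, d := range []dialResult{
		{"srv-a", 3}, {"srv-a", 3}, {"srv-b", 3}, {"srv-c", 3},
	} {
		fmt.Println(a.handle(d))
	}
}
```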
D
And so the idea is, you can just tell the connectivity agent: these are my connectivity servers, please just connect to all of them, and there's no need for any of the rest of this. So 493 is basically designed to implement those two things, the idea being not just that we get the new functionality that I'm adding, but that other people who have other ways they want to manage the connection between agent and server can more easily implement the interface and do that work.

D
So most of the heavy lifting is done. I've got a couple of failing tests that I need to investigate, specifically for when you have multiple agents or multiple servers you're connecting to: when you lose a server, how you handle that loss and reconnect. There are some weird edge cases in there that I'm still trying to get through, and I haven't had time to work on it in the last month, but that is, in essence, what I've done.

A
Okay, do you know which file? On my screen I'm looking at the...

D
Okay,
I
I
I
apologize
I'm
right
now.
It
switched
me
back
to
looking
at
your
your
agenda,
which
is
making
it
hard
for
me
to.
Let
me
see
if
I
can
get
my
PR
up
and
I'll
apologize.
I
will.
B
While
you're
pulling
it
up,
Walter
I
can
chime
in
I.
Remember
these
details:
All
Too
Well
since
I'm,
the
one
that
had
the
that
extended
the
agent
to
sync
forever
optionally
with
the
flag.
I.
B
Remember
that
a
server
responds
to
that
agent
connection
with
a
server
ID,
which
is
a
uid,
a
uuid
and
a
server
count,
and
it's
the
agent
that
makes
the
decision
looking
at
the
uniqueness
of
a
given
server
ID
to
drop
the
connection,
not
the
server
so
then
sort
of
as
a
result
of
this
there's
been
complaints
about
server
log
noise.
D
And it's possible I should just break this out into its own file, but you should find what is currently pretty simple: the proxy connection manager interface.

D
I've got it up. So right now there are basically two methods: EnsureConnectivity and SetClientSet.

D
EnsureConnectivity is basically a get-initialized or get-started method; it basically just says: hey, I need you to go ahead and begin doing connectivity. And then SetClientSet just hands it the client set (we could break it out into add/remove if we felt that was better), but it's basically responsible for exposing the clients that are available. So right now those are the only two methods you really need.

D
Those are the only methods I needed to expose to make the connection manager able to do anything. And then the two implementations of that, which are below it, are the proxy connection manager, which is basically the existing code repackaged, and then you'll see one called the list proxy connection manager, and that's the other one I described, where you just give the agent a list of connectivity servers. Possibly I should actually put nice big comment blocks explaining what each of them does.

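From this description, the interface might look roughly like the following sketch; the method names mirror what is described here, but the signatures, types, and the list-based implementation body are assumptions:

```go
package main

import "fmt"

// clientSet stands in for the agent's set of server connections.
type clientSet struct{ servers []string }

// connectionManager sketches the interface described above: one method to
// kick off connectivity, one to hand the manager the client set it manages.
type connectionManager interface {
	EnsureConnectivity() error
	SetClientSet(cs *clientSet)
}

// listProxyConnectionManager is the "just connect to this list" variant:
// no server counts, no load-balancer guessing.
type listProxyConnectionManager struct {
	servers []string
	cs      *clientSet
}

func (m *listProxyConnectionManager) SetClientSet(cs *clientSet) { m.cs = cs }

func (m *listProxyConnectionManager) EnsureConnectivity() error {
	// Connect to every configured server (the actual dialing is elided
	// in this sketch).
	m.cs.servers = append([]string(nil), m.servers...)
	return nil
}

func main() {
	var cm connectionManager = &listProxyConnectionManager{
		servers: []string{"10.0.0.1:8091", "10.0.0.2:8091"},
	}
	cs := &clientSet{}
	cm.SetClientSet(cs)
	if err := cm.EnsureConnectivity(); err != nil {
		panic(err)
	}
	fmt.Println(len(cs.servers))
}
```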
D
That would make a lot of sense, but it's a work in progress, so I would recommend: take a look at that. Any feedback at this point would be appreciated before I go too far down the rabbit hole.

B
So can I ask more about the original motivation for this? I think you already covered that it was sort of for the cases where you already know exactly what set of servers you want each of the agents to connect to, but is there also sort of a mapping to, say, some clusters having a control plane network and a cluster or data plane network that are not IP-routable to each other?

B
So the connectivity proxy is allowing you to tunnel across those two separated networks, which is the use case I'm familiar with. And then, is this supporting a use case where you do have a flat network, but you're still choosing to run the connectivity proxy for other reasons, like security posture reasons?

D
I think that would definitely... it's funny, routability is an interesting thing, and I think that is definitely involved in this. The interesting thing is, this only requires traffic to sort of be connection traffic. So one of the interesting concepts is: what direction can you connect in? This requires things to be routable from the data plane to the control plane, and that you be able to make a connection in that direction. It doesn't require that connecting in the other direction be possible.

D
So the use case I'm aware of is two separate networks that are routable, but where you cannot make a connection into the data plane. So this is basically tunneling out, so that when the kube API server wants to make a connection into the data plane, it doesn't have to initiate an actual connection into that network.

B
That makes sense; it does. But, to maybe spin the question a little: is the feature agnostic to which of the two, and do we want to support either one of the cases I laid out, whether it is either truly a flat network, but you want to, you know, still go around possible firewall rules, or it's a disparate network, but we still have a set of n specific servers that are somehow routable in the reverse direction, but not the forward direction?

D
Yeah, I think that's true, and it's also, I mean, one of the interesting things is it might even later be used for more complicated setups. This is putting in the option for others. So, as an example, you could extend what I've already done and put an actual VPN on top of the list, so that, instead of giving a list of IP addresses, you could potentially give a [unclear].

A
Yeah, Imran, you've got your hand up.

C
I have a question about what was said about the upgrade scenario, in the existing setup where the connection agent goes through the load balancer. The voice got garbled at my end; I wanted to know what happens when you do an upgrade, and is there a flag that helps you connect to the newer instances at that time, something like that?

D
Joseph could probably address this better than I can, but part of the idea here is: in the old model, when we originally wrote the code before Joseph made his changes, if you removed a connectivity server, the agent would go and attempt to find a new one. Great; works great. But if what you wanted to do was add a new one before you removed the old, that didn't work.

D
You
had
this
problem
because
they've
had
been
told
there
are
only
three
connectivity
servers
to
connect
to,
and
it
would
never
go
looking
for
that.
Fourth,
and
so
the
fourth
one
would
come
up,
but
the
agents
wouldn't
none
of
the
agents
would
try
to
connect
to
it
because
they
were
all
saturated.
So
the
idea
was
hey.
D
If,
if
you
want
to
be
able
to
find
extra
connectivity
servers
that
have
been
added
to
the
load
balancer,
we
can
have
this
an
agent
once
a
minute
try
to
see
if
there's
a
new
connectivity
server
that
has
been
added
to
the
load
balancer
list,
and
so
that
is
based.
That's
the
basic
idea
behind
how
that
particular
feature
that
Joseph
added
works.
D
Joseph,
you
know
the
I,
don't
actually
remember
the
flags.
Do
you
remember.
B
I
remember
the
related
Flags
there.
The
work
I
did
so
previous
to
the
work
I
did
it's
been
like
a
year
and
a
half
the
agent
already
sort
of
assumed
that
it
would
need
to
handle
a
rolling
restart
of
servers,
so
it
it
remembered
server
count.
Let's
say
that
you're
in
the
the
server
account
equals
three
case,
which
is
more
interesting
than
one.
B
You
have
a
rolling
restart
of
the
servers.
A
given
agent
would
suddenly
notice
that
it
lost
one
of
its
servers
and
it
went
from
server
count
equals
three
to
active.
Number
of
servers
is
two,
so
then
it
would
it's.
Its
sync
Loop
would
basically
try
again
until
it
eventually
gets
that
third
distinct
server
ID
and
then
it's
sort
of
steady
state
happy
the
work
I
did
a
year
and
a
half
back
was
to
support.
You've
got
a
cluster.
Let's
say:
you've
got
a
three
control
plane,
node
cluster.
B
It's
it's
been
that
way
for
steady
state,
but
then
you
want
to
dynamically
grow
it
to
a
four
control
plane.
Node
cluster,
the
agents
prior
to
this
change.
I
made
couldn't
really
support
that
and
handle
it,
because
it
would
essentially
cache
the
server
count
three
forever
and
what
I
did
was
essentially
to
have
I
added
a
flag
called
think
forever
to
the
agent
and
it
just
basically,
instead
of
blocking
its
sync
Loop.
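The sync-loop behavior Joseph describes might be condensed as follows; syncOnce and its signature are illustrative, not the agent's actual code:

```go
package main

import "fmt"

// syncOnce performs one pass of the agent's sync loop: given the set of
// distinct server IDs it holds and the latest server count learned from
// a handshake, it reports whether another dial attempt is needed.
func syncOnce(have map[string]bool, serverCount int, syncForever bool) bool {
	if len(have) < serverCount {
		return true // still missing distinct servers, keep dialing
	}
	// Saturated. Without sync-forever the loop stops here, so a server
	// count that later grows (three to four control-plane nodes) is never
	// noticed. With sync-forever the agent keeps probing on a fixed
	// period and picks up new servers behind the load balancer.
	return syncForever
}

func main() {
	have := map[string]bool{"a": true, "b": true, "c": true}
	fmt.Println(syncOnce(have, 3, false)) // saturated: loop goes idle
	fmt.Println(syncOnce(have, 3, true))  // sync-forever: keeps probing
	fmt.Println(syncOnce(have, 4, false)) // count grew: dial again
}
```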
B
So I think, maybe two meetings ago, so a month ago, we briefly discussed holding off on the next release tag for the agent readiness improvement that Imran is working on. I would still like to wait for it, because I don't think there's a burning need for a new release tag, and I think it's really close.

B
There was one of the PRs that changed go.mod to upgrade to be sort of Kubernetes 1.27-compatible, and it reminded me that we had consensus a while back to, probably in future directions, switch to one release branch in the API Server Network Proxy repo per Kubernetes minor version. So I just wanted to know if there are any strong feelings one way or another.

A
I mean, I don't have a strong opinion on this, but having some way to tie it to Kubernetes releases seems useful. I thought the last time we talked about this, though, we talked about the notion that it doesn't actually need to be tied to a specific Kubernetes release. Do we need to keep track of which release it was released from?

B
What I remember was: part of the path to GA is version-skew test coverage, which we don't have properly today, and our previous branching strategy was to branch sort of as needed, if you needed an older release branch in the network proxy repo to backport to a still-supported but older Kubernetes minor version, and that was a little bit hard for contributors to reason about. We said it might just simplify things for contributors, and for the test-skew implementation later, to just have one network proxy branch per Kubernetes minor version and establish the mapping, similar to client-go.

B
Yeah, you're right; yeah, I would do that first. Okay.

A
Cool, yeah. If you want to take that on, that'd be awesome, and then we can all just drop our plus-ones there, or any concerns we have. Maybe we can give it a week or two of lazy consensus and then figure it out at the next meeting: if people are cool with it, then we'll just go ahead with that.

A
Are there any last-minute topics people wanted to bring up? I will just make a public service announcement that we do have a talk submitted under the SIG Cloud Provider maintainer track about API Server Network Proxy, and thank you, Joseph, for volunteering to be a co-speaker on that. I just want to open it up and say: if anyone else in this working group would like to be a co-speaker on that talk, please reach out to me on Slack or email or whatever.