Description
Meeting Agenda: https://docs.google.com/document/d/1aPgGRl4WewM3txrCYvkepsxLUvGdMG1EzlVfCNeV74M/edit#bookmark=id.2g97y0i3it5o
A
Welcome everyone, today is Wednesday, June the 28th, 2023, and this is the API server network proxy meeting. This meeting is part of the many SIGs and it's being hosted by Kubernetes SIGs, so as such we follow their code of conduct, which basically means please be kind to each other, and if you'd like to talk, raise your hand, although there's only four of us right now, so it's probably not a big deal.
B
Cool, that first one, the ability to control at a finer level: I know Walter and I synced offline about it as well. I think that he and I are both in agreement that it should be moved over to the k/k repo as an open issue there for more API Machinery folks, because it's purely an API server feature or issue.
D
To follow up on it, I think, specifically to your point, David: this is an enhancement to what we already have, and we would like to treat it as an enhancement rather than a requirement.
C
That sounds reasonable. I agree that I don't see that it would have any impact on the Konnectivity server itself. This looks like it would be a request having no proxy setting in the configuration.
B
And the second one, the cosmetic pass, that's been merged, so we can move on to the delete-stale-UDS PR. I was really hoping to kind of finalize our decision on this one, because it's been kind of hanging open, unresolved, for a little while.
B
Oh, actually, so this is a PR from gwedley (I'm not sure of his name), and then I think I sent a counter-proposal PR, actually.
C
I take it this is trying to do the equivalent of what normally happens when something actually running a server fails: eventually the port gets cleaned up, and when you restart again, the port is open. In this case the file doesn't get cleaned up, and you're hoping that it would, or you would like the default behavior to be "I claim this file."
D
Right, and so the question is what do we allow. The PR is about making the default behavior different than the old one, which is a backward-incompatible change, and the question is: how close to backward compatible do we want to be?
D
The advantage of being the SIG lead: you have more power now, Mika.
A
Okay, so on to the next topics we have. Did we want to get into the GA requirements draft?
B
I honestly don't have a large change since the last time. I don't think I've changed the bulleted high-level swag in a while, and the latest feedback I received was that it would be good to massage this KEP into the latest template, so that all the new questions around reliability and observability can be more clear. I haven't done that yet, so I'm not sure we should jump into this here.
B
Okay, I will mention one possible topic for either now or the next meeting. As I was sort of thinking about it, I took a look at the template, and one of the questions is around cross-version testing and test coverage, and I thought that it might lead us to revisit the recent approach we've made to our branching strategy. And actually, branching strategy is a topic I remember Imran, who I see isn't here today, was curious about, so we may want to.
B
A good idea, if we had the time today.
B
Okay, so this is a little bit off the cuff, but from memory, and I've been involved. So as a reminder, my involvement in the Konnectivity proxy sort of goes back a couple of years, but it was after it was largely designed and implemented. I've been on the GKE team at Google, and basically I worked to integrate it and use it within GKE, and to sort of stabilize all the issues that we found, which in the beginning were a little more severe and had to do with reliability.

But then sort of the tail end was more just, you know, resource leaks and other issues that were not quite as serious or user-facing, and things feel pretty stable now. So then, as part of that recent stabilization push, we...
B
How can we move forward with some stabilization bug fixes and get them backported in a way? Because at that time there was only a 0.0.x and, I think, a single master branch in the repo, and we had historically a chronic problem where contributors would sort of send a PR and update go.mod dependencies in a way that would break things. In particular, there's a konnectivity-client sub-Go-module, a subdirectory within this repo, and that is compiled into the API server over in k/k. We would sort of frequently run into updates to that go.mod.
D
I don't think you made a mistake, but I think you touched kind of lightly on the client, and I think it is kind of key, so I would actually spend a little more time on it. So we have two go.mod files, and we've broken it up: we've got one library and a bunch of binaries. The purpose of the library is to be consumed by whoever is using Konnectivity, which is almost always going to be the kube API server, so that go.mod...
D
The library's go.mod file has to be compatible with the Kubernetes version that it is going to be compiled into, and we want to be careful about ever letting it get ahead of the version of Kubernetes that it's going to be compiled into, because if it is, then Go is going to upgrade the dependent libraries (sort of the diamond problem), and so for things like gRPC, klog, etc., we need to be very careful.
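The constraint Walter describes follows from Go's minimal version selection: the build takes the maximum of every module's requirement, so a newer pin in the library's go.mod forces an upgrade in the consuming kube-apiserver build. A sketch of what that looks like in the client's go.mod; the version numbers here are illustrative, not the repo's actual pins:

```
// konnectivity-client/go.mod (versions illustrative)
module sigs.k8s.io/apiserver-network-proxy/konnectivity-client

go 1.17

require (
	// Must not exceed the versions pinned by the oldest supported
	// kube-apiserver this library is compiled into: Go's MVS takes
	// the maximum requirement, so a newer pin here would silently
	// upgrade gRPC/klog inside k/k (the diamond problem).
	google.golang.org/grpc v1.40.0
	k8s.io/klog/v2 v2.9.0
)
```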
B
Right, and it's one repo that has at least three big, important moving parts: the konnectivity-client library that Walter just summarized, also the Konnectivity server binary, and then the Konnectivity agent binary. So ultimately, in a typical cluster, what you see is at least three different binaries: typically it'll be the API server, with the version of konnectivity-client that's compiled into it, and those other two binaries, all talking to each other. Which actually sort of dovetails with that other topic of version-skew testing strategy; I'm not sure how to approach that.
B
Actually, so let's see, back to branching strategy. So in, like, a year-ago time frame, we had a single branch, and we were tagging releases as 0.0.x and just keeping that client library's go.mod compatible with as far back as we would ever think we would need to reasonably backport bug fixes. Around half a year or so ago we introduced a second branch, so the current branching strategy, we have a...
B
We have documentation of this, but I think the current branching strategy is: k/k at 1.27 and newer will use the 0.1 tags and the mainline (master) branch, and older k/k versions will continue using the 0.0 release-tag series, and there's a corresponding branch to that, release-0.0. But we may in the future want to have a one-to-one branching strategy with k/k. That's also an idea on the table, but it would seem to have more overhead.
B
You know, there's kind of this tension between being more tightly coupled or more loosely coupled with k/k. When I was adding instrumentation, some Prometheus metrics, to konnectivity-client, it was actually very difficult to get those emitted from the API server, and if the whole Konnectivity project were more tightly coupled and, like, lived in a staging directory, that would have been avoided. But I can see that things are structured to be a little more loosely coupled.
C
I like having a separate project that owns "here is the library you import to speak to me, here is my agent, and here is my (is it called server?) third component." I like having those as their own project, as opposed to telling somebody, "okay, here's the project, but if you want the client half, our client for actually connecting to this and using it is in k/k."
B
Yeah, and just to point out a detail here: there are two modes. There's the gRPC mode, which GKE uses, and then there's HTTP Connect mode, and your point of view is even strongest with HTTP Connect mode.
C
It would potentially be more convenient to have it, like an etcd client, baked into kube: it would always match, we would never conflict on gRPC versions. But it's a separate client for a separate project, and staged separately is what we've been doing.
D
So, a couple of points worth talking about why we did 0.1. Joseph mentioned the go.mod, and the go.mod is certainly part of it, but there were two other triggering events that all happened around the same time.
D
So, one is a couple of the Kubernetes security folks had CVEs that they wanted fixed, and they just ran a script that uniformly upgraded our go.mods. That's fine on the agent and server side, but it actually introduced some problems on the client side, because it was accidentally upgrading the API server, which is where we learned we needed to be very careful about the go.mod on the client side. So that was one piece, with lessons learned.
D
The other piece of this was that, even for ourselves, there was (I think it was a channel problem) a performance problem we had discovered in the Konnectivity server, and we wanted to do some, I wouldn't say radical surgery, but at least some significant surgery in the system to make the change. And there was some concern about how stable those changes were, and there was resistance to merging those changes in on something that we kind of considered to be the stable line.
D
And so one of the accommodations we came up with was: we'll branch. That allows us to keep something that works with the older go.mods; we can put the CVE fixes in the new one, and for the old line, 1.27 and before, where we don't want to accidentally upgrade, we'll do that. But it also allows us to then merge some of the more interesting changes into the new line and get them in.
D
Yeah, so the interesting problem is, we have gone back and forth on exactly what our criteria is for creating a new branch. Do we do it with every release, or do we do it every time we have a need to make an upgrade, say for a CVE or some other major change, where that change would be moving libraries ahead of the oldest supported API server?
C
I will say there that I think you could also very justly say: look, company X, if you want to be able to do maintenance on a thing, fork it and run your maintenance. And it's not an evil fork; it's not a fork that's evolving independently and trying to pitch itself as "I'm the source of truth." It is "I have a different maintenance guarantee, and my maintenance guarantee requires me to be able to build my service separately." That's...
C
Red Hat does that, to be able to provide CVE fixes for things, for instance. We also have extended support, so we have a repo where we can do that, and we are trying to do it for our community, but we are maintaining very old versions of a product. So I'm also okay if you do want to make branches; that's also fine.
C
I like the way kube has a branch for every release, because I think it does make it easier. Just, if there was some reason why you didn't want to, coming back and saying "look, company X" is a little awkward, but: it's your company, and this is the cost of doing business. The community cannot maintain this for you.
B
Yeah, I think the downside to where we've arrived is that contributors are confused about which branch they should be interacting with. I think it's a little easy to miss the documentation that we do have, on the README file, or it might be the RELEASES file, I can't remember. And, like, cluster operators are not sure which of these minor-series tag versions...
D
So, I mean, it would tie us more closely to Kubernetes, but we could go with the client-go strategy of naming our minor version based on the Kubernetes version that it was designed to go with.
B
I'm not sure how other repo maintainers might feel, like, if I'm hit by a bus, whether things might kind of dangle.
C
They will be able to renegotiate if they have to. And anyone who feels strongly, I mean, it's an open source project: if someone feels strongly, they'll be here.
D
I agree. One quick topic I'll bring up, not because I want an answer now, but I think it is worth thinking about, with the Jiang and other things. One issue that we have, and I think it's really more of a performance issue, I don't think it's a correctness issue: if you look at the usage mechanism, it has two tunnel segments. We think of it as one tunnel, but it's actually two tunnel segments.
D
If you look at the tunnel segment between the Konnectivity server and the Konnectivity agent, it is always gRPC, and there is one HTTP connection for every server-agent pair, and we multiplex all traffic over that single tunnel segment. I mean, for each of those segments we just multiplex all the traffic that needs to go over that segment; it gets multiplexed. For the API server to Konnectivity server segment, we actually set up a separate tunnel segment for every connection.
D
Who is no longer here, felt it was easiest to just keep that same behavior, whether it was HTTP Connect or gRPC. HTTP Connect does not support multiplexing, so there really wasn't another choice for the HTTP Connect side, and it was felt easiest just to maintain that same behavior with gRPC. It's expensive to create a new connection, and at some point I think we should consider using the same mechanism we use between the agent and the Konnectivity server, specifically for gRPC, to be able to multiplex traffic across that tunnel segment and only actually create one such segment.
C
That's actually... I'm surprised you guys have tolerated that so long. I could see accepting that as a GA criterion, where this is such a significant behavioral change and performance fix that we want it to be there when we go to GA, so that people have a very clear... or, I don't know, I'm also willing, if you say "you know what, it's bad, but we tolerated it; this is actually GA," to just go ahead and say what we have is GA and add it as an additional feature.
B
Well, I was going to chime in with my own remark, Walter: there's a proposal to simplify the connection-identifier scheme, and I think that it might make sense to do that first, and then it would allow us to write that support for multiplexed client usage after. I think it would just help us avoid bugs. When I got in there with some recent memory-leak fixes, I feel, sitting here, that there are interrelated issues there.
D
I mean, it's a bit of surgery, and I... that doesn't mean I necessarily have an answer one way or the other. It's one of those where I'd love to do the work, but realistically I have so many things on my plate that if I tried to do it, it probably wouldn't happen until winter break. I mean, so my guess, and this is maybe a bit of a silly reason to say...
D
Maybe we shouldn't do it in the GA, but right now I just don't think we have... Unless someone knows of a resource to do this work, I don't see this getting fixed in the next six months, because I don't think there's anyone with availability to get it done in the next six months.
C
...actually doing it. So that's where it stands.
B
I have sort of a counterpoint; maybe I might disagree. Before declaring GA, you might want to prove to yourself that your proxy server implementation can support multiplexing, so that it allows the clients to take advantage of it in the future, which might involve a backwards-incompatible change to the proxy server.
D
I mean, we know that multiplexing can work, and I think maybe there's a subtle difference here. The question is: how can we add what would amount to some sort of cache into the API server, so that when a new connection is desired, it automatically grabs the cached connection to the Konnectivity server if it already exists. And I think that should be doable, but, I mean, to Joseph's point...
D
We'd have to actually confirm that. And to be backward compatible, I think one of the interesting things, especially if we've gone GA, is we would probably need to extend the egress configuration resource to have a flag that controls the behavior, so that we could maintain the old behavior and allow someone to go forward with the new behavior if they wanted it. Now, if we decided we wanted to be backward compatible.
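Extending the egress configuration resource with an opt-in flag might look something like the following. This is purely a hypothetical sketch: `reuseConnection` does not exist in the real EgressSelectorConfiguration API; it only illustrates where a backward-compatible behavior toggle could live.

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
    # Hypothetical flag: opt in to multiplexing all egress traffic over a
    # single cached tunnel to the Konnectivity server. Defaulting it to
    # false would preserve today's one-tunnel-per-connection behavior.
    # reuseConnection: true
```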
B
So, for a given, let's see, a given Proxy RPC: there's a Proxy RPC the Konnectivity server exposes, and I think the protocol assumes that it will never receive more than one dial packet, but it could be extended to support multiple dial packets. So over the same Proxy RPC bidirectional stream, the client could be multiplexing multiple proxy connections, right?
D
Yeah, no, no, but I mean, just at the network level there wouldn't be a new dial request between the API server and the Konnectivity server, but we would still need a new dial request at the protocol level. Which means that you do need to be able to support multiple dial requests, which we do between the Konnectivity server and the Konnectivity agent: at that segment we allow multiple dial requests to go through.
D
It is coming up right now, but the thing is that, I mean, a lot of it depends on how you have your API server configured. Only API-server-initiated traffic to the data plane goes through this tunnel in the standard configuration, which usually means you're talking to your webhooks, right?
C
Go ahead, David. ...and webhooks, because, you know, pods matter more. If it isn't severe enough to encourage you, or the people mostly running this, to say that it needs to be fixed, then, you know, that's a decent case for "we can fix it later," and just figure out how to handle a rollout to ensure that consumers are able to get what they expect and not fail.
D
It's fair. I mean, just cards on the table, maybe a little too much honesty: every time we have a webhook problem (and I mean, we all run Kubernetes clusters, we all know we have webhook problems; this is why we have things like the CEL effort, thank you, David, awesome), every time there is a webhook issue, at least in our neck of the woods, there are three standard...
D
In the beginning it was legit. When I say this I mean the proxy server, not this particular issue within the proxy server, but the proxy server was one of the three standard candidates for the problems with webhooks. If webhooks were causing issues, it was, like, this, Gatekeeper, and...
C
So, Walter, we can't hear you, and that probably means you can't hear us. Yeah, at least he's on video. Yeah, we should dub him, like, assign Ben to dub Walter and tell us what he's saying.
C
We're multi-channel here, all right, I love it, because what this means is that Google was watching this meeting and was like, "Walter has said too much, but we will squash this."
C
Okay, we can leave it off the GA criteria and put it as a post-GA recommendation. We've done that in KEPs before, where it's a post-GA recommendation, but it doesn't block GA of the enhancement.
A
Yeah, it's, it's unusable. Thanks, everybody.