From YouTube: Kubernetes SIG Network meeting 20210805
A
And we're recording to the cloud. This is the Kubernetes SIG Network meeting for August 5th, 2021. Tim, I think you said you had issue triage lined up for us. I do have it lined up.
B
Everybody's got the screen? Yes, looking good. All right, we have a whopping six to look at today. So the first one is a report about conformance. I did respond to it. It looks like they have something in their cluster.
B
That is doing some additional validation that this backend service exists and then failing the Ingress creation. So I don't think this is actually a k8s bug. I just responded here a little while ago: they're doing something that is intentionally non-conformant, so I don't think there's anything for us to do here, unless I'm totally misinterpreting this bug.
A
I haven't read it yet. That sounds right to me.
B
All right, flaky test: mutability test, to have no load balancer. I didn't have time to go and crack this one open yet. Yeah.
B
Beautiful. External traffic policy does not work for NodePort services. All right, so I was looking at some code recently as I was working on some of the service REST stuff, and I realized we don't create a health check node port when we create a NodePort service, and that seemed suspicious to me. And then I saw this: did we intend for external traffic policy to apply to NodePorts? I can't remember.
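For reference (not from the meeting itself), a minimal Go sketch of the Service fields under discussion, using standard core/v1 types; the service name, selector, and port are illustrative assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleService builds a NodePort Service with externalTrafficPolicy=Local,
// the combination being questioned above. Name, selector and port are made up.
func exampleService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeNodePort,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "demo"},
			Ports:                 []corev1.ServicePort{{Port: 80}},
			// spec.healthCheckNodePort is only allocated by the API server for
			// type=LoadBalancer with externalTrafficPolicy=Local; for a plain
			// NodePort Service like this one, no health check node port is
			// created, which is the asymmetry raised in the discussion.
		},
	}
}

func main() {
	svc := exampleService()
	fmt.Println(svc.Spec.Type, svc.Spec.ExternalTrafficPolicy)
}
```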
E
Okay, this is a confusing thing, because also, why wouldn't you be using the cluster IP of the service if you're internal to the cluster?
B
That would be wonderful, thank you. I'll leave this open, then we can do a little bit more conversation, but it sounds to me like at most the bug fix here is documentation, or we decide that IPVS is right, or we decide that IPVS is wrong. Boy. I guess there's an open issue. All right, I'll leave it open for some discussion then.
B
Next: conntrack and TCP. I see Antonio on it. I haven't had a chance to look at the other email threads yet. Do you want to give us a summary of this one?
D
Yeah, this, this is something that is, that's...
D
I think that this is because people start to use deployments as VMs. So they have a deployment with two backends, and they decide to use a probe or something to remove the backend from the service, but the backend is still alive. So they keep sending TCP traffic to that, and they want to cut that connection so it fails over to the endpoint that is available. But because we don't clean the conntrack entries, they keep hitting the backend that is not in the service but is still alive.
D
This is for me something like what we commented on in another one (I think I linked it), and maybe the endpoint termination thing can control this. Well, I...
B
I don't, I don't know, because the terminating endpoints work was simply so that we keep the representation of those terminating endpoints in the Endpoints structure, but it doesn't change the way... Like, we're not going to go through and manually clear those conntrack records, because that's how people do traffic draining, right? If we did that, it would break everybody who currently works.
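To make the behavior being described concrete, here is a minimal sketch (an illustration, not code from the issue): a client that keeps reusing one established TCP connection to a Service. Once conntrack has DNAT'ed that flow to a specific pod IP, removing the pod from the Endpoints does not affect the existing connection, which is exactly the draining behavior mentioned above. The ClusterIP address is made up.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical ClusterIP of the Service; purely for illustration.
	const svcURL = "http://10.96.0.42/"

	// The default transport keeps the TCP connection alive, so every request
	// below reuses the same connection and therefore the same conntrack entry.
	client := &http.Client{Timeout: 5 * time.Second}

	for {
		resp, err := client.Get(svcURL)
		if err != nil {
			// Only a broken connection (e.g. the old pod finally dying, or an
			// RST/timeout) forces the client onto a new backend.
			fmt.Println("connection failed, next dial will pick a new endpoint:", err)
			return
		}
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println("still talking to the same (possibly removed) backend")
		time.Sleep(time.Second)
	}
}
```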
D
Right, that to me is why, I think, but I didn't have time. I think the problem is that this is not the model that they should use to deploy this. The premise, you know: if they want to have two backends and switch to another, they shouldn't be using a deployment. So maybe we can document that, but I didn't have time to think about how they could deploy that thing.
B
All right, well, I'll take a look at this one later too. It's interesting. Thanks for jumping on that. The last...
B
Yeah, that might be a really good, like, a blog post or something.
B
Okay, the last issue to look at today, then, is an old one. I mean, a few weeks old. It's this request for changing between cgroup namespaces. My initial thought was, you know, maybe it seems reasonable. Ben's making some decent arguments about why the existing workarounds are acceptable.
D
Okay, so let's go with the endpoints controller and the problem that we have. So the situation is: the endpoints controller doesn't have a notion of ownership of Endpoints, okay? And we have this thing now that is GA, that is Server Side Apply. I have a PR with a prototype, and well, I don't know if Rob Scott is here. What do we want to do with the endpoints controller? Do we want to enhance it to have more control of Endpoints, or do we want to solve only this bug, that when it starts it deletes custom Endpoints?
I
Yeah, I think there are a couple of problems that you've surfaced in your PR, and I think one of them is too difficult to solve in this scope and probably will never be solved for Endpoints, and that is that we're not watching for Endpoints updates and reacting to them. So a user can change an Endpoints resource and nothing will happen until a pod or something else changes related to that service that will trigger another sync.
I
That's unfortunate, but that's the reality we have today. And at least, if it's anything like EndpointSlices, we had to implement a kind of tracker to ensure that we weren't just re-queueing the same service over and over again. In that case, I think the simpler fix that you're already working on is sensible: just adding that note to the managedFields.
I
Entry, to say the controller has updated this Endpoints resource and therefore is managing it. And then, when it's time to clear out that Endpoints resource, if we see that field and we see that the endpoints controller is the only manager of the Endpoints resource, it feels like it's safe to clean that up. If something else has interacted with that Endpoints resource, I think we should just leave it alone.
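A minimal sketch of the ownership check being proposed (an assumption of how it might look, not the actual PR): inspect the Endpoints object's managedFields and only treat it as safe to delete when the endpoints controller is the sole manager. The field-manager name below is a guess and would need to match what the controller really records.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Assumed field-manager name recorded by the endpoints controller.
const endpointsControllerManager = "endpoint-controller"

// safeToDelete reports whether the endpoints controller is the only manager
// listed in managedFields. If anything else has written to the object, the
// suggestion above is to leave it alone rather than clean it up.
func safeToDelete(ep *corev1.Endpoints) bool {
	if len(ep.ManagedFields) == 0 {
		// No ownership information at all; be conservative and keep it.
		return false
	}
	for _, entry := range ep.ManagedFields {
		if entry.Manager != endpointsControllerManager {
			return false
		}
	}
	return true
}

func main() {
	ep := &corev1.Endpoints{} // empty object, just to exercise the check
	fmt.Println(safeToDelete(ep)) // false: nothing recorded in managedFields
}
```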
I
I think we still need the check for leftover Endpoints, because we haven't really solved the rest there. It's still very possible for the controller to be down for minutes, whatever, and for Endpoints to not get deleted during that time frame because REST fails or something else fails. It's admittedly an edge case, but I don't think we've removed every potential case where that would be valuable.
G
1.22 was out yesterday, so thanks to everybody who worked on it. Time to open the can of worms for 1.23. Yes, exactly, the next tweet is like: okay, let's go for a v2. Yeah, 1.23 is looking at a November time frame for code freeze, so keep that date in mind.
A
Cool. In that case, everybody gets 40 minutes back.
C
To let it... so yeah, we wanted to let it soak a bit, and it's good that we did that, since there was at least that one reconciliation bug fix that we found. But I think we just have to look and see if we've got any more feedback coming from the community. Otherwise it looks like it's probably pretty good.
B
The biggest issue, I think, if I recall correctly, was that it went beta in 1.20, right? So we know that 1.20 wasn't available in any of the, like, managed providers' pipelines, and we wanted to wait until it hit some number of those providers' pipelines, so that actual users are using the API. Even if they're not using the dual-stack part of it, they're using the rest of the API, and we wanted to get some feedback on that. So yeah, I think, and we've got... time is coming up. Yeah, yeah.
C
Yeah, we've definitely had it out there in the managed providers for the last couple of months now, so I think we can consider...
G
...it for 1.23. Yeah, yeah, AKS had it March 31st, I just looked it up, and I think GKE v1.20... yeah, yes, GKE v1.20 has it as well.
C
Yes, so no rest for the wicked, I guess. I'm gonna go start editing. Now that we've got 1.22 out, I'm going to start editing stuff to get dual-stack out there for 1.23, if that sounds good to folks.