From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220929

B
First on the agenda: something something UDP from the host. I see Antonio is on it and linking to PRs. Antonio, are you here? Okay, since he's on it, I'm inclined to just leave it alone for the moment.

C
So this came up in the minimize-iptables-restore thing. Basically, the change trackers: when you ask them what changed, they immediately forget the answer after they've told you. So if we go through a round of syncProxyRules and the iptables-restore fails at the end, then the next time we go through...

C
It won't remember the changes that happened last time. They'll already have taken effect in the iptables rules, but for the other things that we use the trackers for, which at this point is just updating conntrack, it will forget about them. So we'll leave conntrack stuff dangling in the past. We would have left NodePort stuff dangling as well.
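
A minimal sketch of the behavior being described, with made-up names (this is not the actual kube-proxy code): the tracker keeps its pending changes until the caller confirms the iptables-restore succeeded, instead of forgetting them as soon as they are read.

```go
// Hypothetical illustration of the "change tracker forgets on read" problem.
// None of these names come from kube-proxy; it only sketches keeping pending
// changes until the sync (e.g. iptables-restore) actually succeeds.
package main

import "fmt"

type ChangeTracker struct {
	pending map[string]string // pending changes, keyed by service/endpoint name
}

func NewChangeTracker() *ChangeTracker {
	return &ChangeTracker{pending: map[string]string{}}
}

func (t *ChangeTracker) Update(name, change string) {
	t.pending[name] = change
}

// PendingChanges returns the changes WITHOUT clearing them, so a failed
// restore does not lose the conntrack/NodePort cleanup work.
func (t *ChangeTracker) PendingChanges() map[string]string {
	out := map[string]string{}
	for k, v := range t.pending {
		out[k] = v
	}
	return out
}

// Commit is called only after the whole sync round succeeded.
func (t *ChangeTracker) Commit() {
	t.pending = map[string]string{}
}

func main() {
	t := NewChangeTracker()
	t.Update("svc-a", "endpoint removed")

	changes := t.PendingChanges()
	restoreSucceeded := false // pretend iptables-restore failed this round

	if restoreSucceeded {
		// ...update conntrack entries for `changes`, then forget them.
		t.Commit()
	}
	// The next round still sees the same changes and can retry the cleanup.
	fmt.Println(len(changes), len(t.PendingChanges()))
}
```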

B
Cool. Is there an obvious way to attack it there?

E
There are uses where the endpoint slice controller is being used; think of multi-cluster services, for example. Implementations of those are taking the endpoint slice controller code, forking it, and doing their own thing. I would love to make that a slightly easier process. Maybe we take some of the logic in the endpoint slice controller and move it to a library that can be used more broadly. I'm not sure; we need to be very careful here, but I think there's something there, and I need to form my thoughts. Okay.

B
Anybody who imports kubernetes/kubernetes, as in k8s.io/kubernetes, as a Go module is doing it wrong. There are no guarantees about safety for that module. I wish that we could move all the code into an internal directory and just stop people from doing it.
B
That
said,
people
do
it
anyway
and
I'm
not
above
breaking
them,
but
if
we
want
to
move
it
out,
we
should
be
careful
not
to
make
our
own
life
difficult
right.
This
is
why
the
staging
tree
exists.
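
As an illustration of the point (a sketch, not guidance from the meeting itself): out-of-tree code is expected to depend on the published staging modules such as k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go, rather than on k8s.io/kubernetes, which carries no compatibility promises.

```go
// Sketch: consuming Kubernetes APIs through the published staging modules
// instead of importing k8s.io/kubernetes directly. The kubeconfig path is a
// placeholder assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// A kubeconfig path is assumed here; in-cluster config works too.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List EndpointSlices in a namespace using the discovery.k8s.io/v1 API.
	slices, err := client.DiscoveryV1().EndpointSlices("default").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints))
	}
}
```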

B
Okay, that's on Rob. Next: "kubelet / kube-proxy does not work with iptables 1.8.8". Dan, I started to read this and I got so lost in the weeds. Can you catch us all up?

C
Okay, so I haven't seen this latest... oh no, I guess I am caught up. Somebody claimed that if you have a system with iptables 1.8.8 on the host and iptables 1.8.7 in the kube-proxy container, it will not work, and they explained why, and we freaked out and filed an upstream iptables bug. And then we realized that the explanation they gave makes no sense; it claims kube-proxy does things which kube-proxy does not actually do. So we have at least two people reporting this, and they're both using IPVS.

C
That is not an unfair assessment. Their opinion is that there is no guarantee of compatibility in iptables-save / iptables-restore output between even minor versions of iptables, and that, while it will always work going up, if you're going to downgrade you have to delete all your rules and recreate them. But hey, we're using less iptables these days, so the iptables cleanup thing actually really helps here, because it means that we're definitely not using the same rules within the kube-proxy container as we're using from kubelet anymore.
C
They're
they're
they're
trying
to
change
the
the
IP
tables
Legacy
interface,
to
use
better
NF
tables
rules
on
the
back
end,
so
that
it
will
be
more
efficient
which,
which
is
a
good
thing,
but
there's
no
easy
way
to
do
that
and
completely
preserve
compatibility
there.
There
have
been
some
discussions,
Offline
that
didn't
make
it
into
that
their
bug,
report
and
they're
actually
going
to
be
talking
about
it
at
some
upcoming.
C
It's
not
LPC,
but
some
other
net
Dev
or
something
some
oh
kernel,
Network
hackers,
meeting
and
and
figure
out
if
there
is
anything
better
that
they
can
be
doing
in
the
future.
But
okay.
B
Well,
I'll
leave
this
one
as
it
is
for
now
we'll
revisit
it.
In
a
couple
weeks,
Network
policy
blocks
access
to
nginx,
Ingress
controller
I
haven't
looked
at
this
one
Ricardo's
on
it.
Ricardo
are
you
here.

F
Yeah, I didn't look at that yet. I will... oh sorry, I didn't have much, yeah.

B
Just reading the title, I wondered if it was the case that, because the ingress controller is in the cluster, they need to actually add an allow from that namespace.
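
If that is the issue, the usual shape of the fix looks like the sketch below: a NetworkPolicy that explicitly allows ingress from the namespace where the ingress controller runs. The policy name, target namespace, and the "ingress-nginx" namespace are assumptions, not taken from the actual issue.

```go
// Sketch of a NetworkPolicy allowing traffic from the ingress controller's
// namespace. "kubernetes.io/metadata.name" is the standard namespace name
// label; "ingress-nginx" and "my-app" are assumed names.
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func allowFromIngressNamespace() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "allow-from-ingress-controller", // hypothetical name
			Namespace: "my-app",                        // hypothetical namespace
		},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // all pods in "my-app"
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					NamespaceSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{
							"kubernetes.io/metadata.name": "ingress-nginx",
						},
					},
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(allowFromIngressNamespace().Name)
}
```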

B
All right, all right, well, Antonio's looking at it. Wow, okay; leave it open and revisit. Next: "kubelet does not recognize multiple options in resolv.conf". So this is the... oh sorry, there are two resolv.conf open issues, right: there's the "search ." one, which I think is now resolved, and there's this one.
B
So
the
problem
at
hand
is
that
somebody
found
in
the
wild
and
that's
your
resolve.
Conflict
looks
like
this,
and
specifically
that
is
two
lines
that
are
both
options
and
our
code
just
picks,
one
of
them
I
think
it
just
takes
the
latest
the
later
one
and
the
proposal
is
to
try
to
merge
them
together.
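
A minimal sketch of the merge behavior being proposed (this is not the kubelet parser; the parsing here is deliberately simplified): collect every "options" line and concatenate their values, rather than keeping only the last line seen.

```go
// Sketch: merging multiple "options" lines from a resolv.conf instead of
// letting the last one win. Simplified illustration only.
package main

import (
	"fmt"
	"strings"
)

func mergeResolvConfOptions(resolvConf string) []string {
	seen := map[string]bool{}
	var options []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 || fields[0] != "options" {
			continue
		}
		// Keep every option from every "options" line, de-duplicated.
		for _, opt := range fields[1:] {
			if !seen[opt] {
				seen[opt] = true
				options = append(options, opt)
			}
		}
	}
	return options
}

func main() {
	conf := "nameserver 10.0.0.10\noptions ndots:5\noptions timeout:2 attempts:3\n"
	fmt.Println(mergeResolvConfOptions(conf))
	// Output: [ndots:5 timeout:2 attempts:3]
}
```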

B
On this, yeah: my inclination was that there's no specification for resolv.conf that says what we're supposed to do in this case, and so I was trying to say let's just do nothing. But it sounds like it actually is impacting somebody. I just saw this this morning, so I guess this is on me; this is a sign to me. Yeah, all right, I'll look at this today and follow up on it. Next: "Pods getting killed after startup probe failure".

B
Okay, so we gave them a ping two weeks ago; give them one more cycle and then, if no response, we'll just close it. Okay: "kube-proxy cannot run in non-privileged mode". I remember looking at this, but I totally forget what we said.

D
Yes, that's the current thinking, I think, and it's spot on; the point is that right now we run it in privileged mode, or at least we tell people to run it in privileged mode, or we don't tell people the maximum privilege that we actually need, so people just run it in privileged mode, which is not nice. We can do better here.
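
A sketch of the general direction being discussed: run the container unprivileged and grant only specific capabilities instead of privileged: true. The exact capability set kube-proxy needs is not settled in this discussion, so the list below is an assumption for illustration.

```go
// Sketch: a container SecurityContext that avoids privileged mode and grants
// only specific capabilities. The capability list (NET_ADMIN, NET_RAW) is an
// assumption; the meeting did not settle what kube-proxy minimally needs.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func unprivilegedProxySecurityContext() *corev1.SecurityContext {
	privileged := false
	return &corev1.SecurityContext{
		Privileged: &privileged,
		Capabilities: &corev1.Capabilities{
			Add: []corev1.Capability{"NET_ADMIN", "NET_RAW"}, // assumed set
		},
	}
}

func main() {
	sc := unprivilegedProxySecurityContext()
	fmt.Println(*sc.Privileged, sc.Capabilities.Add)
}
```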

B
Okay, are we committing to work on this, or do we just agree that this is an issue and we should triage it and throw it into the hopper, which we know will not result in action?

A
Right, good enough, cool. Dan, you're next, with PodHasNetwork again.

G
Yeah, I just wanted to bring that up again. I think we talked about it a month or six weeks ago, and I remember that we maybe had some concerns about it, because, if I remember correctly, it was adding a new pod phase or state, a sandbox state, that was "pod has network"; but that "pod has network" was actually an end state encapsulating a bunch of other stuff, not just networking.
G
It
was
relying
on
the
fact
that
networking
was
one
of
the
last
things
for
a
pod
or
something
like
that.
G
I
remembered
that
we
had
said:
oh,
we
might
have
some
concerns
with
this,
and
I
wanted
to
see
what
the
current
status
was,
but
it
looks
like
some
pieces
of
that
had
merged
already.
It.

B
It's merged and it's in 1.25. Okay, yeah, ouch. And so I opened... I think you linked to my PR. Basically I just touched every line in the file so that I could add comments, which, I don't know, maybe there's a better way, but I haven't gotten any response yet. I did, however, put us in the approvers list for the KEP, so it can't go to beta without having talked to us.
B
We
did
some
spelunking
and
it
looks
like
signode
had
some
real
conversations
about.
Maybe
we
should
call
this
sandbox
is
ready.
No,
no!
No!
Let's
use
Network
and
like
of
course
that
we
would
have
all
argued.
That's
completely
the
wrong
decision,
so
communication
breakdown
for
sure,
but
at
least
there's
a
stopper
in
place.
So
it
shouldn't
it
shouldn't
proceed
without
conversation.
Okay,
all
right
I've
been
paying
attention
to
it.
B
For
this
cap
cycle,
I
haven't
seen
any
responses
and
honestly
I
haven't
had
a
whole
lot
of
time
to
just
push
real
hard
on
it.
So.

H
Hey, this is Deep, by the way. I was driving that feature, so...

H
I just saw that item on the agenda a second ago and wanted to drop by. Yeah, as you said, Tim, it was exactly the case: we started off with SandboxReady being the more appropriate name for the condition, but there were some concerns around that in SIG Node, and basically the suggestion I got from the approvers there was, let's go with PodHasNetwork, with the idea that maybe later we can also have something like PodHasStorage when CSI volumes are up, and we can evolve on that.
H
That
was
kind
of
like
the
thought
which
drove
us
towards
bought
us
networks
and
networking
the
IP
address
allocation
was
the
last
step
that
we
found.
B
Yeah
I
think,
unfortunately,
that's
a
an
implementation
detail
and
not
a
guarantee
and
if
you
were
to
add
something
like
has
storage
I'm,
not
sure
how
somebody
would
use
it.
Because
are
we
now
making
a
guarantee
that
storage
always
happens
before
Network
or
that
they
happen
concurrently
and
if
we
do
eventually
add
multi-network
does
has
Network
mean,
has
all
networks
or
has
some
networks.

H
That was just a decision from SIG Node, basically.
B
So
you
don't
need
to
Hash
it
all
out
here.
You
know
we
I
filed
a
bunch
of
comments
on
that
PR
that
awful
PR,
which
I'm
happy
to
close
in
favor
of
a
better
mechanism
but
I
I,
didn't
know
what
it
was,
but
I'd
love
to
discuss
at
some
point.
I
know
we
have
a
very
packed
agenda
today,
so
I
don't
want
to
spend
too
too
much
time
on
it.
But
if
you've
got
a
few
free
minutes,
I
I'm
not
expecting
any
major
changes
for
this
kept
cycle.
B
Since
it's
not
moving
forward
to
Beta
right
so,
but
before
it
does
move
to
Beta,
we
should
spend
some
quality
time
talking
about.
It
sounds
good
thanks,
thanks
for
coming
by.

H
Cool, yeah, this one should be pretty brief too. This is basically more cross-SIG battles; swords and spears not actually available.
H
We
have
a
packed
agenda,
so
we
don't
have
to
talk
about
it
in
a
ton
of
detail,
but
the
tldr
is
a
request,
came
through
saying.
Well,
can
we
please
make
the
docs
a
little
bit
better?
The
docs
have
now
been
improved,
or
at
least
a
PR
as
in
to
improve
the
tax
and
I
think
it
would
be
pretty
great
to
get
this
corner
case
fixed
because
it's
basically
not
every
cloud
provider
handles
things
this
way.
H
But
if
they
do,
if
the
IP
changes,
then
we
could
have
undefined
Behavior
with
respect
to
routes,
and
it
would
be
better
to
not
have
undefined
behavior
and
have
handle.

H
Andrew Sy Kim, I see, is on the call, and he has a lot more understanding of this part of the code base too. Andrew?

F
So my understanding was that we allow the status to change, of course, but then the expectations of different consumers could be different. The main case that comes up a lot is the pod IP for host-network pods: if the node status changes, we don't recreate pods to have that updated. So I think, when it came up in the SIG Cloud Provider call, we were basically okay with the route controller...

F
...applying the change, but it would be good to actually document, for the status field or for the addresses field, what consumers should expect from that field and how it should change, because there's no mention of what we actually do if it changes.

H
And I think the request that I'm bringing is: can people from SIG Network take a look at this code change and see if they agree that the code change is actually acceptable, and that the docs changes, which are also linked in there, cover what we think they should cover? Because the requested docs changes have, in theory, been made, but maybe they are not exactly what we need. Just looking to see whether SIG Network thinks that we've covered everything; if not, we would love to keep iterating on it.

B
Who wants to take this? Andrew, are you okay? Well, let's see: assigned to it right now are Cal and Antonio and Dan, and not Andrew. On this PR, 108095, who wants to own it?
F
I'm
happy
to
I'm
happy
to
like
help
review
it
and
refine
it
a
bit
more
and
I
know
like
damage.
It
also
has
some
opinions
about
like
how
we
should
handle
and
though
that
just
changes,
so
maybe
we
can
review
it
and
then
assign
it
to
you
when
it's
good
all.
B
Right,
I'm
gonna,
let
you
guys
fight
it
all
out
and
then
I'm
not
gonna
have
a
chance
to
look
at
non-cap
PRS
in
the
next
week
anyway.
So
you
guys
go
fight
it
out
and
then,
when
I
Circle
back
to
it,
we'll
see
what
it
is.

C
Yeah, so session affinity: it is not very consistently implemented, or implementable, and yet we have conformance-level tests for it. This came up in particular because two different non-conformant implementations happened to pass by me in the same week, and I was like, okay, we should do something about this. In particular, we're adding a feature to OVN just so that we can make ovn-kubernetes implement timeouts in the way that the conformance tests require them to work, and I'm...
C
So
I'm
wondering
if
we
should
like
loosen
the
sense
of
conformance
around
Affinity
and
also,
if
anybody
knows
of
other
examples
of
plugins
that
have
trouble
implementing
that
movie.
It's
currently
specified.
G
And
or
better
document
our
expectations
around
it
so
that
you
don't
have
to
say
or
go
investigate
the
iptables
or
the
ipvs
code,
necessarily
to
know
what
the
behavior
should
be
and.
C
More
about
timeouts
than
anything
else,
although
apparently
some
Implement,
like
contrail,
apparently
does
not
or
implements
per
flow
rather
than
per
client
IP
address
and
I.
Don't
actually
that's
what
it
means.
I
just
saw
this.
B
So
that
was
the
one
that
I'm
aware
of
that
I
think
actually,
between
ipvs
and
iptables,
it's
different,
which
is
one
uses
all
this
is
client
IP,
the
other
uses,
client,
IP
and
Source
port,
three
Tuple
versus
five
Tuple
or
I
guess
it
would
be
four
because
destination
Port
matters,
but
so
and
like
we,
our
Affinity
is
called
client
IP,
but
I
think
Cube
proxy
implemented
by
ipn
port,
or
at
least
the
IP
tables
mode
does
or
maybe
I've
got
that
backwards.
B
I
forget
the
details,
so,
yes,
it
is
at
best
ambiguous
and
sort
of
at
worst
on
un
resolvable.
We
can't
change
either
one,
so
the
best
we
could
do
would
be
to
find
new
consts
and
with
clear
semantics
and
and
deprecate
the
old
one.
B
So
two
two
bugs
for
your
Hopper
Define
new
wallets
right,
you're,
going
to
find
a
new
new,
constant,
like
concept
of
setting
client
Affinity
to
client
IP.
It
would
be
client
IP
only
and
client
IP
port,
for
example.
Right
and
then
those
are
clear
semantics
and
the
old
one
is
whatever
the
implementation
defined
Affinity
to
mean.
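
For reference, the current API only has the single ClientIP affinity value plus a timeout; the more specific constants mentioned here (ClientIPOnly, ClientIPAndPort) do not exist and appear below only as hypothetical names. A sketch:

```go
// Sketch of session affinity as it exists today, plus the hypothetical split
// discussed above. Only ServiceAffinityClientIP and the timeout field are
// real API; "ClientIPOnly" / "ClientIPAndPort" are made-up names used purely
// to illustrate clearer semantics.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	timeout := int32(10800) // default affinity timeout, in seconds

	svc := corev1.Service{
		Spec: corev1.ServiceSpec{
			// Existing API: semantics left to the implementation
			// (client IP only vs. client IP + source port).
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}

	// Hypothetical future values with explicit semantics (NOT real API):
	//   SessionAffinity: "ClientIPOnly"    // key on client IP
	//   SessionAffinity: "ClientIPAndPort" // key on client IP + source port
	fmt.Println(svc.Spec.SessionAffinity,
		*svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}
```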

B
I mean, the more we lock it down, the more we're going to find cases that don't implement it at all, that can't implement what we define. Right, so I guess I'm in favor of strategically loosening it, as long as it doesn't lose its value.

B
Yeah, I honestly have not looked at those tests in forever. If that's the... yeah, I don't have a problem with relaxing that. I'd have to think about what the implications really are, like how many of these APIs we're going to end up with that are partially supported, but...

B
Well, and I'm not even arguing that what we have is the right thing. What we have is what fell out of the iptables and userspace implementations, right.

F
Hey, I've been joining recently, just following this service internal traffic policy PR, and it seems like recently there is some debate in the KEP PR, and I was just wondering about the feature; I was just looking for status on it. I'm trying to work a bit with Andrew to add tests, but I know Dan raised questions about the routing, or the availability of services, when this feature is enabled. So I wasn't sure whether, for my use case, I should investigate other solutions.
B
So
I
haven't
looked
at
all
the
conversations
yet
I
did
see
them.
I
was
spending
my
time
first
on
on
merged
caps,
but
my
my
experience
talking
with
customers
is
that
there
are
some
cases
where
this
semantically
correct
thing
to
do
is
fail.
If
I
can't
reach
this
service,
it's
like
my
node's
agent
and
I.
Don't
want
to
use
anybody
else's
agent
and
I'd
rather
fail
and
reconnect
later.
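
For context, a minimal sketch of a Service using the field in question (the name, selector, and port are made-up): with internalTrafficPolicy: Local, in-cluster traffic only goes to endpoints on the client's own node and gets nothing if there are none, which is the "node-local agent" semantic described above.

```go
// Sketch: a Service with internalTrafficPolicy: Local, so in-cluster clients
// are only routed to endpoints on their own node. Name, selector, and port
// are assumptions for the example.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	local := corev1.ServiceInternalTrafficPolicyLocal

	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "node-agent"},
		Spec: corev1.ServiceSpec{
			Selector:              map[string]string{"app": "node-agent"},
			Ports:                 []corev1.ServicePort{{Port: 9000}},
			InternalTrafficPolicy: &local,
		},
	}
	fmt.Println(svc.Name, *svc.Spec.InternalTrafficPolicy)
}
```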

C
What I'd say is that, given that semantic and use case, it means that internal traffic policy Local and external traffic policy Local have almost nothing to do with each other; they mean completely different things, right. Yeah, and so maybe, if we do want that semantic, internal traffic policy isn't the right name for it. And especially since people seem to want that same thing at the cluster level, in the multi-cluster networking thing, where you have a service that means different things in different clusters; and theoretically maybe you would want it at the zone level.
B
So
you
know
so:
yes,
I've
in
the
past
argued
that
we
need
something
like
a
demon
set
for
zones,
but
I
have
not
tried
to
write
that
cap
yet
because
it
makes
my
head
spin,
but
cluster
we
already
get
in
the
form
of
services.
Right
like
a
cluster
service,
is
always
going
to
be.
Cluster
local
and
a
multi-cluster
service
is
going
to
be
potentially
multi-cluster,
so
that's
resolved
but
zonal
or
Regional
well,
which
hopefully
nobody's
doing
multi-regional,
but
we
don't
actually
forbid
it.
B
So
having
some
concept
like
that
is
probably
true,
I
know
your
topology
change.
Your
your
kept
rather
changed
from
a
prefer
local
ITP
into
something
topology
oriented.
Does
that
get
closer?
It
feels
like
a
topology
problem.
Doesn't
it
the
prefer
local
version?
I
mean
or
prefer
same
zone
or
prefer
region
like
we've.

B
But it's a little different: usually when we hear people say "prefer same X", it's "use the same X unless it's not available, and then overflow", whereas topology tries to balance things a little bit more heuristically; there might be an endpoint in your zone, but it's not assigned to you, so you're going to cross zones, poor you.

E
It is. We have a lack of a feedback mechanism, so there's no way to know if something's full in order to trigger a waterfall mechanism. So you basically just have to try your best to give each zone a reasonable, proportional number of endpoints and hope for the best, and that's topology awareness.
C
So,
anyway,
on
the
the
service
and
the
internal
traffic
policy,
I
I
feel
like
we
really
ought
to
remove
the
DNS
example
from
the
cap,
because
nobody
wants
their
DNS
to
fail
completely.
When
you
know
a
pod
is
being
updated,
but
other
than
that
you
know,
I
had
suggestions
but
not
strong
objections.
So,
okay,
you
know
you'll
see
that
when
you
get
to
it,
I
guess:
okay,.
B
I
I
do
hope
to
get
to
that
today
or
tomorrow,
and
I
promised
Andrew
that,
if
I
don't
get
to
it
today
or
tomorrow,
we'll
sit
on
a
call
and
go
through
it
together
next
week,
so
that
we
can
pay
attention.
I
would
like
to
see
it
move
forward.
I
do
think
it's
a
reasonable
thing,
but
I
do
want
to
I
want
to
consider
Dan.
What
you're
saying
with
this
idea
of
prefer
I
was
initially
against
prefer
in
any
form,
but
I've
softened
on
that,
after
talking
to
real
users.

A
All right then, the last topic was from Surya.

I
Yeah, I'm actually implementing topology-aware hints downstream in the ovn-kubernetes plugin, and I almost have it implemented, but I had this corner case that I wanted to ask about, and maybe this has already been revisited; I'm sorry if I'm wasting time and it's already been discussed. But my question was about the definition of "originates from a zone", with a zone being defined by the labels on the node, so a specific set of nodes could be in the same zone. How does that play with NodePorts?
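
For reference, a sketch of the two pieces being referred to: the zone comes from the well-known topology.kubernetes.io/zone node label, and topology-aware hints are opted into per Service via an annotation. The node and Service names are made up, and the annotation key/value reflect my understanding at the time, so treat them as assumptions.

```go
// Sketch: the node label that defines a zone for topology-aware hints, and
// the per-Service opt-in annotation. Names are assumptions for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	node := corev1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "worker-1", // hypothetical node
			Labels: map[string]string{"topology.kubernetes.io/zone": "zone-a"},
		},
	}

	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "my-service", // hypothetical service
			Annotations: map[string]string{
				// Opt this Service into topology-aware hints.
				"service.kubernetes.io/topology-aware-hints": "Auto",
			},
		},
	}

	fmt.Println(node.Labels["topology.kubernetes.io/zone"], svc.Annotations)
}
```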

I
Say you have two different zones and you're trying to reach a NodePort service on the other node. In the example that I mentioned in the doc, you have two different zones and two different nodes, but you're trying to reach the NodePort in the other zone. Does that count as coming from outside? How do I... I guess?

E
At least I've thought of it as: the logic starts when it hits kube-proxy. I'm open to other interpretations of that, but at least right now the idea is that, based on the configuration of topology-aware hints, when that request or connection reaches that specific zone, that specific node, and the kube-proxy on it, kube-proxy will not (well, ideally not) route traffic outside of the zone that that kube-proxy is in. Ideally. I don't know if that is... or, go ahead.

I
No, so if I understand what you're saying correctly: if traffic is coming from a node in zone A and it's going towards a node in zone B, because the traffic will leave the node, right, and then it will enter zone B, we still consider the traffic as originating from zone B and not zone A, because it's left its zone and gone outside, and it's now external traffic, right? So...

I
Yes, yeah; for external traffic policy, that's how we treat it. But when you think about topology-aware hints, node A is also in a zone, right? So it's in zone A.

B
If you're intentionally reaching out to node B and hitting its NodePort, then your zone doesn't matter, because it only begins once the traffic comes in; the NodePort is an external entry point (or whatever word we come up with for that).

B
Yeah, I mean, exactly. And then, if you were to try to be extra smart, you would go from zone A to zone B and back to zone A, and then the response would of course go from A back to B and back to A again; versus, once you land in B, you say, well, I'm gonna stay in B, so you go A to B and back to A. That's the optimal path, given that you've already made a bad choice of node.
I
Yeah
yeah,
thank
you.
That
was
my
question,
and
so
can
we
document
this
or
is
this
still
open,
like
I
brought
this
offer
external
internal
traffic
policy
as
well
and
I
keep
asking
that
Downstream
a
lot
I
just
keep
bugging
him
on,
because
I
get
confused
all
the
time.
So
is
it
possible
to
like
draw
this
out
in
the
cap
or
maybe
somewhere
else
where
this
is
how
we
treat
a
Zone
s,
so
if
you're
coming
from
outside,
then
you
know
wherever
you
land,
that's
your
Zone!
You
cannot
yeah.

B
I'm happy to update the KEP or update the API docs. Unfortunately, the way our API docs work, there isn't a good place to write about the theory of operation and how the whole thing works; it's really kind of per struct and per field.
B
So,
if
you,
if
you
have
a
moment,
I
would
love
for
you
to
propose
a
like
API,
docs
change
in
the
comments
on
the
API,
so
that
that'll
go
into
the
generated
code.
The
generated
API
docs
I
mean.

I
Yeah, I agree. Thank you, and thanks, Dan Winship; I guess this solves the question I always ask you and end up getting confused about, so I thought I'd just bring it up here to the larger team. Thanks.

B
Right. Two weeks from now, before KubeCon: are we doing a SIG Network deep dive? Who's driving that, if we are?

E
Yeah, a lot of people on this call, actually: we've got Andrew, Surya, myself, and Bowei who are gonna be making it. Sounds cool.
B
So
call
out
now
again
for
anybody
who
wants
to
get
their
their
work
mentioned
in
that
reach
out
to
one
of
those
people
and
make
sure
that
they're
aware
of
they're
going
to
do
a
good
job
spelunking
on
their
own.
But
if
you
want
your
stuff
mentioned,
specifically
call
it
out.

B
While you're on my screen: when we were doing triage there was a question about... we didn't get to it, actually. Do we have time? Should we just run back to triage? Yeah, why not. Wait, did we talk about the disabled flag? Oh no, I closed it. So there's an issue open where somebody was asking for a "disabled" field on NetworkPolicy, and, I mean, it sounds reasonable.
B
It's
pretty
common
in
firewalling,
apis,
I
guess,
but
I
wanted
to
flag
with
you
on
a
p,
because
if
we're
going
to
consider
adding
it
to
netpaul,
we
might
consider
adding
it
to
admin.
Netball.
F
Yeah
100
percent
I'll
keep
that
in
mind
for
the
second
iteration
everyone's
kind
of
chugging
through
implementations
right
now,
I
believe
so
after
that
I
also
have
some
some
more
info,
and
there
is
a
website
now
by
the
way,
I
posted
it
in
the
slack
but
go
check
it
out.
I
spent.
We
spent
a
lot
of
time
putting
that
together,
so
cool.

F
Yeah, I also just submitted a talk for the contributor summit to give an overview, with Surya on it; Surya's been helping us, starting to help out a lot there. So I'm...

B
Excellent, excellent. I will be there, though not doing a SIG Network talk; I will be at KubeCon (I see the question in the chat), doing a talk on something else, but I'll be there for sure. So we should have a SIG Network get-together; maybe next meeting we'll plan a meeting time.
B
Oh
I
would
like
to
bring
one
up
sorry
that
I
didn't
get
to
throw
on
the
agenda.
The
cap
about
load
balancer
back
end
set
given
so
we're
have.
Apparently,
we've
got
some
bug
in
the
previous
changes
that
we
made
changing.
Load
balancer
back
in
sets.
We're
I
see
people
on
slack
working
on
diagnosing
right
now,
I'm
going
to
propose
that
for
this
kept
cycle
we
pause.
B
We
don't
proceed
with
the
cap,
one
one
one
and
we
instead
focus
on
settling
down
what
we've
already
got,
making
sure
that
the
tests
are
actually
good
and
then
we
push
forward
with
one
one
one.
One
in
the
next
release.
Alexander
is
that
okay.

B
Okay, yeah, I don't know what the issue is; I see there's some chatter going on even as we're talking here. But given that we missed it before, and there's already an open PR, and we still don't quite understand how it relates to the autoscaler because they haven't responded yet, let's just not overload too much.

A
It's just that it's locally on my computer; I'm gonna push it, maybe later today.
B
The
project
board,
thank
you
for
calling
that
out
is
now
on
the
project
board.
We
want
to
run
through
the
project
board
if
we've
got
a
few
minutes.
A
Yeah
we've
got
10
minutes.
That
would
be.
That
would
help
me
a
lot.
Thank
you
sure.
All
right,
let
me
share
a
window.
Where
are
you
I
have
entirely
too
many
windows
open?
Can
I
search
search?
No!

B
You see it? Yes, yeah, okay, cool. So, a few things. Let's go from the end: dual-stack. Khaled, you're going to send a PR to remove the gate?
B
Sure
we
can
move
that
to
the
removed
category,
which
means
we
have
nothing
in
the
ga
category.
So,
let's
see
what
we
can
move
into
ga
until
this
just
moved
into
beta.

H
I think this one... this one I think we can move. I might need to write more docs; I need to look at that, but...

B
These are the ones that I need to flag as opting into the main project board. So if you send me a PR, I will flag the KEP; if we don't, then it'll wait till next cycle. Not you, Bridget, but you, everybody. gRPC probes: Bowei or Rob, do you know anything? Do we want to push that forward?

B
Okay, I think all of these, this whole category, are candidates to move to GA. Topology is probably the least ready. Rob?

E
The amount of feedback has definitely picked up, and in many cases it is working as intended; the only thing is that it's more confusing to figure out than one would like, because not everyone has access to controller logs, for instance. So I'm thinking of publishing some more events in place, you know, when transitions happen and why; but I would love to try to get this to GA, if not in this cycle then the next. I will have a KEP update in the next week.
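
A minimal sketch of what "publishing more events" could look like (the reason and message strings are invented, not taken from the KEP): the controller emits a Kubernetes Event on the Service when hints are enabled or disabled, so users without controller-log access can still see the transition and why it happened.

```go
// Sketch: emitting an Event on a Service when topology-aware hints
// transition, for users who cannot read controller logs. Reason and message
// strings are invented for illustration.
package hints

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// reportHintsTransition records why hints were turned on or off for svc.
func reportHintsTransition(recorder record.EventRecorder, svc *corev1.Service, enabled bool, why string) {
	if enabled {
		recorder.Eventf(svc, corev1.EventTypeNormal, "TopologyAwareHintsEnabled",
			"Topology aware hints enabled: %s", why)
		return
	}
	recorder.Eventf(svc, corev1.EventTypeWarning, "TopologyAwareHintsDisabled",
		"Topology aware hints disabled: %s", why)
}
```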

B
So, everybody who's got a KEP here (one, two, three, four, five, these five): if you send a PR in the next few days and you move the KEP to GA, know that it will have to get at least a PRR glance. Rob, if, for example, you're changing the number of events and that sort of stuff that we send, then it will need a little bit more than just a glance. Please don't send them on, like, Wednesday and hope that they'll get done by Thursday.
B
So
the
sooner
we
can
get
these
in
the
better.
If
they
don't
make
it,
they
don't
make
it
like
I'm,
not
telling
anybody
to
work
on
the
weekend
or
anything
right,
but
these
all
seem
like
candidates
to
move
GA
that
would
be
exciting
moving
backwards
in
the
queue
Network
policy
status.
I
think
this
is
paused
for
the
moment
right,
Ricardo.

B
Andrew, did we decide if proxy terminating endpoints is moving forward? Is that the PR that you sent me? Yes, okay. Expanded DNS config... oh, I lost track of this one; I think that's on me to go poke the contributor, and tracking terminating endpoints is in the same boat. NetworkPolicy port range...

F
Yeah, there was a lot of scalability testing that I had to do with some folks on the HK team, and that all came out great. So, okay.
B
I
think
yeah
all
right,
I'll
put
it
in
the
same
category.
Then.
If
you
send
a
PR
I'll,
be
happy
to
look
at
it
and
move
it
to
GA.
Port
range
I
can
follow
up
with
Ricardo
and
see
what
we're
going
to
do
there.
This
is
all
async.
We
should
figure
out
how
to
represent
that
better
and
iptables
chain
ownership.
Dan
is.

B
I wonder if I can delete that milestone... somebody's gonna get mad at me. Oh well. Okay, and then the pre-alpha stuff that's not making any progress: service CIDRs.
B
That's
I
know
Antonio's
trying
to
get
it
in
the
cycle,
so
that
would
be
pre-alpha
Cube
proxies
ping
iptables
restore
we've
approved.
Is
that
approved
now
Dan
I
forget?
Did
you
got
a
prr
from
voycheck
right.

C
Yeah, maybe he's waiting for you to lgtm it; I don't know. Okay.

B
Okay, so we'll push that in; that'll hopefully move to the alpha column this cycle.

F
And a quick note on the KPNG one: I'm pretty sure the community is assembling to where we're going to be asking for more review on the KEP there. Not necessarily looking to move it anywhere, but we're getting to the point where it's mature enough that we need the rest of the SIG's opinion on how...

B
Excellent. I was at a conference last week and I happened to see Thomas from Cilium, and I mentioned KPNG. I asked, hey, are you guys tracking this? Because if KPNG doesn't make your life easier, then KPNG isn't doing its job. And he said that they would go make sure that they were paying attention.
B
He's
aware
of
it,
okay
I'm
sure
he's
got
a
million
threads
that
he
needs
to
link
together,
but
that's
great
yes,
I
would
love
to
see
us
push
forward
on
that
one:
dual
stack:
API
server
support.
That's
another
danwindship,
nothing
happening
there
right.

B
Nothing happening there, okay. Don't feel bad; there are so many things going on. All-ports, I know, is stalled. Node IPAM multi-CIDR, as...

B
Everybody, when you get a minute, if you have KEPs open, go look at them. Look at the milestone; make sure that the milestone represents the right next milestone for us to touch, and let me know if you think it's in the wrong place on the board here.

A
Great, well, that's definitely time, so we'll see each other again in two weeks. Thanks.