From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220120
A: And we're recording to the cloud. This is the SIG Network meeting for January 20th, 2022. We're going to start off with some issue triage, and Tim, Bridget, I forget which of you decided was going to go for it, but you should feel free to start.
C: I got it. Can you all see that? Yeah? Right. So I made a pass through. There have been a lot of people updating their issues this week, so thank you. I pinged a couple. These are the ones that are left, or that I didn't get to before the meeting. So: goroutine leaks in several tests. I've already seen at least one PR for this; there may have been more. What would be cool is if we got volunteers to just go look at them.
C: It seems like there's some cookie cutter that you can follow to at least understand where the goroutine leaks are. This is not super urgent, but you know, if everybody who's interested signs up for one test... I think there are six here... that are ours: range allocator, repair controller... oh, just three that are ours, cool, and endpoints controller. So, anybody want to volunteer to go take a look at these goroutine leaks? In tests, not prod, anyway.
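The cookie-cutter check mentioned here is usually a before/after goroutine count around the code under test (or a library like go.uber.org/goleak in the real Kubernetes tests). A minimal stdlib-only sketch of the idea, where `leakyWorker` is a hypothetical stand-in for the leaking code, not anything from the Kubernetes repo:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// leakyWorker stands in for code under test: it starts a goroutine that
// blocks forever on a channel nobody ever sends to, i.e. it leaks.
func leakyWorker() {
	ch := make(chan struct{})
	go func() { <-ch }() // leaked: no sender, this goroutine never exits
}

// leakedGoroutines runs fn and reports how many goroutines it left behind,
// after giving legitimately-finishing goroutines a moment to exit.
func leakedGoroutines(fn func()) int {
	before := runtime.NumGoroutine()
	fn()
	time.Sleep(50 * time.Millisecond)
	return runtime.NumGoroutine() - before
}

func main() {
	fmt.Println("leaked:", leakedGoroutines(leakyWorker)) // prints "leaked: 1"
}
```

In a real test you would fail the test when the count is nonzero instead of printing it.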
E: You'd have wanted a lot of people to fix them, you know, fast.
C: Yeah, it's hard for me to tell whether all of these are closed or not. I know that I looked at at least one pull request around this, but I don't even remember which one it was. So, hey, maybe all of them are done and we can take ourselves off this list, but I just need somebody to take a look at them and confirm.
C: I'll point you guys at the issue: it's #107372. Load it up, pick one of the three that look like ours, and just go poke at it. If you can, look at the log, look at the history; maybe it was changed recently, and then we can just scratch it off here, or if not, maybe we can apply the cookie cutter to it. So I'll leave this open. In fact, I will accept it for triage.
E: Okay, the thing is, this is using UDP, so all the UDP tests should cover that, but I wanted to go further and try to install an HTTP/3 server. The problem is that adding that to agnhost brings in a lot of dependencies. But in parallel I want to add HTTP/3 support to the API server, and I have a KEP.
C: This one was assigned to Bowei last session. Bowei, are you here?
C: This one, I think, was assigned to Dan. There we go. I'm not sure that we're gonna do anything here, but I think two weeks ago we decided to assign it to Dan. Maybe Dan wasn't here, and that's why.
G: Nope, I was, and I've looked at it a few times and been like, yeah, I don't know. Like, I kind of feel like if kubelet is going to have a cleanup option, then it's a kubelet bug and we should let them decide whether they want to do it. But then also, well, you know, it is a networking-related thing, so maybe it is our bug. I can't make up my mind.
C: Okay, I'll leave it for you, but if we can't come up with some resolution, then I think it dies by pocket veto.
C: UDP something something. Antonio was all over it, wasn't he? No?
C: Yeah, so I didn't read this whole issue, but we haven't touched it since December. Should we assign it to someone to try to reproduce, or...?
C: Yeah, okay, that was as far as I got with my prep work. So we've got two more that are sort of long-standing that we haven't looked at in a while: pod loses network connection. This is on outgoing traffic during graceful shutdown.
A: I vaguely remember... and I'm assigned to this one and it slipped, aha, slipped my radar. I think there was a recent, yeah, there's a recent post that I need to digest. It doesn't include some of the diags that we asked for, but I'm not sure if it has... oh, and you are assigned to something, that's useful, yeah.
C: Okay, all right, well, I'll let you have a look. And then this one I didn't look at at all today: node lifecycle controller.
C: Okay, Cal, why don't you take a look at it? If it's not you, you can unassign, or stick it back on Sergey or Alana. Yeah, it's on the sig-node bugs. Cool. That's...
A: Cool. Tim, you're actually up next as well, with the KEP review.

J: Oh, all right, shoot. Let me share this one.
J: One more issue that I have commented on, just... oh.
C: Okay, so the last time we talked, we actually made it through most of the KEPs, right? Do we still need to do a full KEP review? I forget what we had decided we were going to do.
A: I thought we had gotten through most, but perhaps not all. I think we were cut a little bit short. I'm also struggling.
C: Is there a bot command for that? I forget all the bot commands. I still rely on the ClickOps, but maybe... I don't know if the bots do projects; maybe they should. Okay, well, I just did it, and that will put it in the unsorted category. So let me go ahead and project again, then. Let me move you to a different workspace and...
C: You see this one? Yes? Okay, so where did I just put you down? No, I have to reload. Yeah, there. I have not looked at this KEP yet, but I think...
C: Come on... oh, you know what, I'm in trouble: Chrome has broken drag-and-drop in this release. Yay for being on the front edge. All right, I'll move it as soon as I get the Chrome update.
C: Okay, so noted. So let's look real quickly and make sure that we're all good. These are all GA; we don't need to pay attention to those. Actually, things that were GA'd in 1.22 can have their gate removed in 1.24, right? So, I know, Rob, we talked about this last week or something; you signed up for two.
C: No, don't. All right, so, running back through beta stuff: that's beta that wants to go GA. Which ones do we want to do? This one was marked... let's start at the top: terminating endpoints. This is probably Rob; we talked about this one too, right?
N: So, and the other issue is that the actual features that... as I remember, this specific KEP is API-only, and other KEPs are actually using this field. So we may want those other KEPs to have a little bit more time to, okay, get it used.
C: Yeah, we've tried a million things with labels for metadata, and they all suck, because they all get forgotten.
C: Yeah, I need to go back through that after the KEPs lock in.
E: Yeah, I don't intend to say that. What I'm trying to say is: do we know how we want the traffic to behave once the four features are in? Because every feature is going to come with its own assumptions, and that's the problem that we have now. You know, and that's what I am afraid of: service internal traffic policy is going to assume something, but then tracking terminating endpoints is going to realize we can't do this, because this is doing the other thing. And that's what I'm afraid of.
C: Okay, so I guess I'm saying we should not GA these things individually. Like, let's consider them to be effectively arm-in-arm KEPs that we're going to move forward together. So the focus then is moving them from alpha to beta, so they all catch up. Is that fair?
C: So, the email I got from Andrew, yesterday or this morning I think, was: maybe we can do that in the external one. He was proposing originally, like, let's add a new KEP for that, and then the summary of the email was: maybe we should do them at the same time? It's maybe not that hard. Yeah, it wouldn't be that hard to do them together, and there was actually another issue that was triaged this afternoon where somebody wanted that too.
C: Okay, so I'm gonna move on for the sake of time; we'll leave these in beta. NetworkPolicy port range: we're looking to make a call on whether we move this one or not, right?
C: So, as a community, should we be forcing this onto implementations? Should we say, well, it's optional, and there isn't a really good way to know whether it works or not, because we haven't solved that problem? Or should we go the other direction and say, well, the implementations hold the API hostage... hostage is the wrong word. They get to delay us because they haven't implemented it. Sorry, paragraphs.
H: When you have... there is one test, I mean, Antonio knows. In the end, right, when you have two ports in the service, and then two pods, one implementing each port, right? And you can see... in the classical case we have port statements, right, in the multiport, and it's okay for a CNI to implement a subset of them. And with the port range, that's my question: is it okay to just do a subset of the port range, or does it have to do the full port range?
C: So, Bridget said in the chat exactly what I was thinking. If we leave it in beta for too long, then it becomes permanent. Now, it's not such a huge deal, because it's just a field, right? But, like, we know... I'm guessing everybody has felt this pain, of customers who are using Ingress beta and don't want to move to Ingress GA, because it's been beta for two and a half years and it already works: why are you changing it on me?
C: Damn it. And it doesn't exactly map to this problem, but I also feel the desire to not leave things in beta forever. So there's no way out.
C: We can kill it; we can decide we're not doing this, right? We have a notification period, but we'd have to decide we're not doing this. So the question is: is the motivation really dubious, or is it just not the strongest motivation we've ever had, but still useful? For what, exactly? Sorry: port ranges on policy.
P: One thought I have is that I kind of do believe it's useful, and in fact the more generic version of this should be that you can specify a list of ranges. That is frequently how ACLs let you have port 1, port 7, and then port 10 through 20: an aggregation of ranges, as opposed to start and stop only. So I do think it's useful; in fact, a more generic version of it would be properly useful.
P: This is sort of in between, where you just have a start of the range and an end of the range. So that's one thing to consider. The second thing is that, with regard to Cilium not being able to do it, well, the workaround is always that they just create lots of policies, with one individual port per policy. So they do have a workaround for Cilium.
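The workaround P describes amounts to expanding the range into individual ports. A minimal sketch of why it scales poorly (the function is mine for illustration, not an API from Kubernetes or Cilium):

```go
package main

import "fmt"

// expandPortRange turns a NetworkPolicy-style [port, endPort] range into the
// list of individual ports that an implementation without range support would
// need one rule (or one whole policy) apiece for.
func expandPortRange(port, endPort int) []int {
	ports := make([]int, 0, endPort-port+1)
	for p := port; p <= endPort; p++ {
		ports = append(ports, p)
	}
	return ports
}

func main() {
	// A small range is manageable...
	fmt.Println(expandPortRange(8080, 8083)) // prints [8080 8081 8082 8083]
	// ...but a range like 1024-65535 explodes into tens of thousands of
	// rules, which is why this workaround is poor for large ranges.
	fmt.Println(len(expandPortRange(1024, 65535)))
}
```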
C: I mean, that's a pretty... yes, I guess you're technically correct, but it's a pretty poor workaround if somebody enables thousands of ports, yeah.
L: I just think that it's a choice for Cilium to say, no, we can't implement this. It's a tech stack choice, that's probably eBPF maps or whatever... I keep saying eBPF, but you can build to get... Cilium can choose to do this via iptables, and it will work until eBPF catches up with the ability to do multiport and so on. The point I'm trying to make here is: there are, like, two Venn-diagram circles of users out there.
L: There is a circle of people who are asking for this, so this works for a lot of people and is wanted, and then there is a Venn diagram where it has Cilium, right? They currently don't meet, yep. I don't believe we should make a decision based on the ability or inability of one or more people who build on top of the ecosystem to keep up with the API, right? I agree, don't get me wrong, I love it as a technical stack.
C: I agree. The corollary question then is: as the first example of a feature that is not implemented by all known implementations, should we move this forward, or should we solve the "how do we tell users about it" problem? Which, if we decide to do it, would be, you know, plus nine months or something, to get a new resource that shows status or something like that, right? Like...
P: That's a whole lot of speech. I don't think this is the only feature that's not implemented by all CNIs. Maybe not by the two or three most popular CNIs, but there are other policy features that not all CNIs, you know, support, one particularly notable one being the IP address CIDR exception.
P: Right, and in terms of... I don't see any reason why it should be, technically.
C: Okay, I feel like it's not really reasonable to block this any longer, so I did tag it for 1.24. We should move this ahead. We should probably not launch any new network policy changes, specifically NetworkPolicy changes; cluster network policy notwithstanding.
C: Yeah, I think the larger question for us as a group is: (a) should we have a conformance program for network policy, and (b) how should we be exposing these partially supported features? And I did start... I know one of my AIs from last time was to have a conversation with the storage folks. I have some notes from them about what they do. The short answer is they're in the same boat as we are: they've tried a bunch of different things, but they don't have any one answer.
C: Let's not argue with them in their absence, absolutely. But I have asked Thomas and others from Cilium to think really hard about why this isn't implemented, or isn't implementable. They have reasons, both technical and non-technical; that's not for us to adjudicate. The question is: are we willing to wait for them? And I think the answer is no. I think, like, the feeling that I'm getting from the room is no, we ought not block our process on one implementation, in the future.
L: I agree. So we need an effort, and I think nobody from the implementation side should own this, not because of anything other than there is no value for them in owning it. But there is a lot of value for us in having somewhere people can quickly look things up. It can be conformance, it can be... I don't know what we want to call it, but we need it, for people to just be able to see that.
C: We owe this to the people, so I completely agree. We have some thoughts about it, but you know, ultimately it needs an owner: somebody who has the time and the bandwidth to think about it and drive it. So, I know we're eating up the whole agenda, or the whole meeting time, on this one agenda item, so I'm gonna stop sharing here and turn it over to see what other topics we have. I'll put it out right now.
M: But even before we tackle the bigger problem, if we push this forward, don't we think conformance for network policy needs to be the next thing on the chopping block? Like, I know that the big problem still needs to be solved, but I think we need to have a way to let users know, like, okay, Cilium doesn't support this little bit, right?
C: Two sides of the same coin. One is a, like, human-oriented mechanism, that's conformance, and the other is a technical mechanism, like an API: how can I programmatically go look at this? So I have a question.
C: So, it's funny that we talk about this in terms of CNI, because this isn't really at all what CNI is about; it's sort of the larger, abstract network plugin. And the problem that I see, if I understand what you're describing, is that the mechanisms are not always interchangeable, right? I can't take a policy implementation and say, well, this is going to work fine on any CNI implementation. No? No! What...
H: I mean, one thing that implements... if you talk about the traditional world, you have the policy enforcement points and you have the policy decision functions, right? And the rules about what can really talk to each other, that should be common. I mean, you can have a...
H: ...that works over many different policy types. If you see the policies as an arc between the different nodes, then you can evaluate, for your policy tree, what can talk to what. I mean, based on the egress and ingress rules, you could then come up with answers that look more like firewall rules: port X can talk to pod Y over these protocols.
H: No, the tree is not in the data path; that's the control part. And then today, in there, right, what you have is rules, right, that say addresses, right? And trees can be broken down to that. So I mean, you have a rule that says this address, or this set of addresses, can talk to this other set of addresses over these ports, if you use ipsets. But each of the CNIs has to start by interpreting the different policies, understanding them, and then start mapping policies to endpoints.
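The firewall-rule "answers" H describes reduce to a set-membership check over ipset-style address sets. A toy, stdlib-only model of that evaluation (the types and names are mine for illustration, not any CNI's API):

```go
package main

import "fmt"

// rule is a firewall-style answer of the kind described above: a set of
// source addresses may talk to a set of destination addresses on one port.
type rule struct {
	src, dst map[string]bool // ipset-style address sets
	port     int
}

// allowed reports whether any rule permits src -> dst on the given port;
// with no matching rule, traffic is denied by default.
func allowed(rules []rule, src, dst string, port int) bool {
	for _, r := range rules {
		if r.src[src] && r.dst[dst] && r.port == port {
			return true
		}
	}
	return false
}

func main() {
	rules := []rule{{
		src:  map[string]bool{"10.0.0.1": true, "10.0.0.2": true},
		dst:  map[string]bool{"10.0.1.5": true},
		port: 8080,
	}}
	fmt.Println(allowed(rules, "10.0.0.1", "10.0.1.5", 8080)) // prints true
	fmt.Println(allowed(rules, "10.0.0.9", "10.0.1.5", 8080)) // prints false
}
```

A real implementation maps label selectors to these address sets first, which is the interpretation step H mentions.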
A: Yeah, we've got three more topics and a bit under 15 minutes left, so we'll have to keep them a little snappy. Sanjeev, I think you were next, with the multi-cluster network policy KEP introduction.
P: I've had very brief initial discussions with some folks, and this was just to introduce the topic and start inviting review comments. We obviously don't have time to, you know, go into it in too much detail today. How much time should I spend on this, and what would you like me to cover?
P: So, I'll just sort of give the high-level overview here. The thought here is that this thing called the multi-cluster Services API, which I suppose many of you know, is being defined in SIG Multicluster, and so the topic of multi-cluster network policy kind of straddles SIG Network and SIG Multicluster. And so I was hoping to, you know, get feedback from both this group and the SIG Multicluster group.
P: So this proposal is to see whether we can have some extensions to network policies, either through the existing API, extensions, or new API objects, applied to implementations of the MCS API. Okay. So the MCS API, as you might know, is a generic multi-cluster Services API which can have particular instantiations, almost like the CNI interface with CNI plugins implementing that interface. So here there's the MCS API, and then projects like Submariner implement that MCS API interface.
P: In fact, network policy doesn't even have a reference to Services to begin with. So the thoughts here were to put some thoughts together, have a network model, have a lot of open questions. So I've kind of kept it pretty open, in the sense that, okay, here are reasons to do it and here are reasons to not do it, so that we look at it objectively, and here is a network model.
P: You know, that has to do with, following the MCS API, whether pod IPs are globally unique across the cluster set or not, what kind of model we are expecting for multi-cluster services, and, based on that, a certain set of initial assumptions. And then also: how do you actually attach such policies? So, pictorially...
P: ...not just what policies are configured. And then for a couple of the user stories I have some sample potential manifests. So here's one example where, let's say, I want to implement logic something like this picture here, right? So I've got a cluster here, I've got two pods, and both of them are accessing a remote multi-cluster service, which is across an external network on another cluster.
P: That kind of thing you can't do today, because you either just globally import a multi-cluster service into a namespace or you don't; you can't...
C: Yeah, for the sake of time, let's let other folks have a chance. The doc has been shared. The big questions that I have here are whether Services, with a capital S, actually has any role in the multi-cluster network policy API, or whether it continues to speak in terms of workloads.
C: Maybe we should do a joint session. Let's... we've got like seven minutes left, and I hate short-changing people, which I've done anyway.
A: Thanks. Jay, you're next.
C: Though, was it? Sure it was; it was the only mode in 1.0: Windows userspace. Okay, so, I mean, there's the Windows side of it, but there's the Unix side of it too. So I'm only talking about the Windows one; we'll do the Linux one later, but the Linux one is almost done. The Windows one is...
C: ...to put it in the changelog so that it's public, and then we just sit on it for whatever the rules say: three releases. Okay, I'll do that.
L: You know something, Jay, I've actually been watching a demo internally on our side: DPDK on Windows. And the first thing that jumped out at me was, oh, why aren't we using that for kube-proxy and CNIs? So there is public material available, and demos, and how it integrates and all of that stuff, especially on the new Windows OS.
L: Yes, listen, this is literally the limit of what I know, right? Like, I just wanted you to know that this is happening, all right? And I asked the people who are doing Windows work, kube-proxy on Windows and so on, to track this, and I'm telling you to track it. And the reason behind that is that the performance is very different. So...
C: Yes, so I have someone on the Google side who's been looking at the same question; I'm gonna connect the dots here. He's been thinking about how to reduce that set to the sort of bare-minimum set, and then figure out if that's actually an acceptable approach, or if we need something more radically different.
H: I'm looking at using SRv6 and some metadata; what's identified is not an address, but basically the pod that sent the traffic. So you don't have to know any other addresses, and the addresses that are on each node can actually be used. What you instead have is a meta label for the pods.
P: You know, so, okay, these are exactly some of the topics that we are actually partially covering in the draft. So the thought is to have support for network policy when you cannot assume that pod IPs are globally unique, right? Particularly, I mean, maybe you can do something special in the special case where they are indeed globally unique, a small number of clusters or whatever, but it's got to work without requiring pod IPs to be globally unique, and without just doing the one implementation that can work.
P: It's got to have some kind of service awareness, and it probably should not assume any kind of metadata being carried from cluster to cluster to identify the source at a more granular level than the source.
H: And so on, and that you can even use as a path. So you can do something that becomes unique, and that then becomes, call it a type, whatever you want to call it, for this sort of metadata. And then, if you have many replicas, of course, you fold it up to the Deployment or similar, and then all the local addresses you map to that name when someone wants to talk to the other side. So...
C: I'll make my last remark. Like, I've seen over and over again people trying to figure out the source Service when they build the service graph out, and it's sort of a nonsensical statement. I wish that we had leaned more on service accounts here, because they have much more the right semantics that we want: it's one-to-one with pods, and it's required (except for static pods, which is unfortunate), and you can't change it after birth. So it makes all these sort of source-destination tuples that much easier.