From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220609
B: Everybody see that? All right, so there are 10 open. There were 12 before; two we were able to close, and I got to all of them except the last one. So we have a failing-test report which I didn't get to dig into. It's not clear — they got tagged sig-network, not really clear exactly why, but it's failing on "waiting for pods to be running and ready," so it's unclear whether it's just a timeout or whether something is actually systemically wrong.
B: Antonio — all right, this bug is a report around Endpoints (not EndpointSlice) not cleaning up when the last endpoint leaves a service. I don't know if it's real, if it's new, or what, but it's a good bug report. What do they say?
F: We could get away with saying we don't care.
D: But if it's a real bug — if we're really not cleaning up Endpoints — there's somebody out there who's still using Endpoints. Well, it's not clear whether it's actually in the Endpoints or whether it's just kube-proxy leaving a mess that it didn't pick up.
B: Sure, but we know that there are people out there who are using them. Okay, so what I'd like is this: if someone can confirm that it's just kube-proxy leaving detritus around, and not actually the Endpoints themselves, then we don't care, because the gate will be locked in 1.22, right? Yeah.
G: I think, just looking at the bits I'm seeing now, the endpoints output also shows that there are no endpoints in that command for the nginx service either. All right, so I think that's safe. I don't mind taking this one, just to close it out.
B: All right, this one I just tagged somebody from Google on. The issue is a request for a feature: being able to change IP families per pod. There's some good discussion here — it's actually a really interesting thread to re-read.
B: I thought I would open it here to see if anybody had net-new thoughts they wanted to discuss. If I can summarize the thread, it comes down to: how would you tell the CNI what you really want? We could add APIs — we could add something like the Service ipFamilyPolicy to pods.
B: Is that what we really want to do? We'd then have to tell CNIs that they're supposed to respect that.
F: I mean, I commented in the bug. I think the hard parts of that are already done, but I made a suggestion, I think further down, that we could just recommend that plugins make pod IPs match the node IPs. So if they find themselves on a dual-stack node, they should give dual-stack pod IPs, and if they find themselves on a single-stack node, they should do single-stack pod IPs.
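The recommendation above — pod IP families simply follow the node's IP families — can be sketched as a small decision function. This is only an illustration of the rule as described in the discussion; the function name and the list-of-strings return convention are invented here, not any real CNI plugin's API.

```python
import ipaddress

def pod_ip_families(node_ips):
    """Given a node's IPs, return the IP families a plugin would be
    recommended to assign to pods on that node: a dual-stack node
    yields dual-stack pods, a single-stack node single-stack pods."""
    families = []
    for ip in node_ips:
        version = ipaddress.ip_address(ip).version  # 4 or 6
        family = "IPv4" if version == 4 else "IPv6"
        if family not in families:
            families.append(family)
    return families

# Dual-stack node -> dual-stack pod IPs:
print(pod_ip_families(["192.168.1.10", "fd00::10"]))  # ['IPv4', 'IPv6']
# Single-stack node -> single-stack pod IPs:
print(pod_ip_families(["192.168.1.10"]))              # ['IPv4']
```

The appeal of this rule is that it needs no new API at all: the plugin only inspects the node it is running on.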
B: That seems like a reasonable thing to start with. I'm not sure that's the semantic being requested, though.
I: Yeah, I believe this has come from Ericsson in some way, and I asked another person, who said it's out of fear rather than anything specific. They believe that just having an IPv6 address means bad things could happen — but the more experienced network users say that cannot happen: if you open an IPv4 port, you will not receive IPv6 traffic. That's why it is what it is. But I believe the real use case is IPv4 addresses being a scarce commodity, while with IPv6 you can use as many as you want.
J: Just a comment: there are unintended consequences of this. Think about a Service that says, "hey, I want dual-stack IPs," and then the pods behind it — some of them using dual-stack, some of them using v4, some of them using v6. We're going to have to figure out a way out of that as well, if we choose to go and implement this.
J: So I want to say something: there is nothing today stopping a user from running a cluster with some nodes configured to the CNI as single-stack and other nodes configured to the CNI as dual-stack. There is nothing stopping them from doing that today; they just need to configure the CNI accordingly. That pushes the problem from implementation to ops, and it gives them exactly what you're looking for, which is: "oh, I have this IPv4 that I need — it's scarce."
J: "...and it's what I needed, because I interface with other systems." And yeah, you can do that today — you just need to do pinning, node selectors, and all that good stuff. But I don't think this needs to be at an API level. Because, okay, let's put this interesting concept into play and say: I want to do this on a ReplicaSet, and then the ReplicaSet, like I said, grows. Now, when it grows and adds a new pod, which family should it follow?
J: Does it get dual-family or single-family? We can sit down and iron all of that out, or we can just say, okay, the default is single-stack, whatever the primary family is. But you still can do interesting stuff using nodes or CNI configuration — like nodes being single-stack, or just the CNI being configured single-stack, which I think somebody was talking about.
B: Yeah — ultimately, Kubernetes will work with whatever the CNI provides for the pods, right? The question here is only: how do users tell the CNI what they want? And we have our choice of ignoring it — let the CNIs do it themselves and figure it out — or we can, like I wrote here, sort-of standardize it, or we can actually standardize it. The node granularity works, but it's pretty coarse.
B: The only argument that really rings important to me is "I want v6 always and v4 sometimes." I've heard that from my own customers too, and we don't have a good way to express it. Honestly, that's the only thing that makes me think this might be worthwhile.
B: We could take that to the CNI side — do you have an implementers' meeting where you get all the various implementations together to chat every now and then? ... No? Okay.
B: Okay, well then, let me speak to the CNI implementers on here: it would be really awesome if you all could pop into this particular issue and weigh in with your thoughts. If we added a pod API here, would you respect it? Would you be willing to implement it that way? Do you hear this demand from your own customers?
B: It would be useful if people could jump in here. You can ignore my multi-network comment; it's sort of a side track. But I found this very interesting, Lars — I really appreciate that you did the proof. I thought it would work that way, and I'm glad to see that it actually does.
I: It was a cool test and it was very simple: I just rewrote the IPAM section between creating new deployments, so it was super simple to do.
I: Good, yeah — it's similar, because the CNI plugin has to read an annotation on the pods while it is creating them, and this would be basically the same thing. So I believe that at least those who implement the bandwidth code — who really have the code in place — would then just have to extend it to some family setting.
B: I wonder — if we went this route, suppose we had a real API: would we make the CNIs themselves interpret the API, or would we up-level it into the runtime and make the runtime call the CNIs with parameters?
A: I would slightly prefer the runtime requesting certain features; that makes it easier for some of the plugins. Not the ones that I work on — they don't care, they talk to the API anyway — but it might be easier for other plugins to just look at options passed to them by the runtime, as opposed to going all the way to the kube API and having to watch pods and things like that.
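The trade-off being discussed — the runtime interpreting the pod API and handing the plugin a ready-made option, versus each plugin watching pods itself — can be sketched as follows. Every field name here (`ipFamilies`, `runtime_config`, `ip_families`) is a hypothetical stand-in invented for illustration, not a real pod field or CNI key.

```python
# Model 1 from the discussion: the runtime reads the (hypothetical)
# pod-level family request and passes it down as a plugin parameter,
# so the plugin never has to reach the kube API or watch pods.
def runtime_builds_plugin_args(pod):
    families = pod.get("ipFamilies", ["IPv4"])  # default: single-stack v4
    return {"runtime_config": {"ip_families": families}}

pod = {"name": "web-0", "ipFamilies": ["IPv6", "IPv4"]}
args = runtime_builds_plugin_args(pod)
print(args["runtime_config"]["ip_families"])  # ['IPv6', 'IPv4']
```

In the alternative model, the plugin itself would watch the pod object and interpret the same field, which pushes the API-watching burden onto every implementation.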
J: ...and because I think we need to report events for this, which is something we're going to have to do in Services anyway, because the filtering that exists today just assumes this happens during some transition — during an upgrade or downgrade of the cluster to dual-stack. If we make it a runtime thing, we need to report it to the user, so that debugging doesn't become a nightmare.
B: Yeah, that's a good point. Okay, let's wrap this issue. I'll do some follow-up, try to capture some of the thoughts, and maybe propose a form or something that we can send out — well, we'll figure out the right destination audience.
B: The last issue for triage was Cal's, and I didn't get to look at it today. So do you want to talk about it — the people who've been responding?
B: Okay, well, it looks like the conversation is still going, and we're 23 minutes in, so let's move on to the agenda. This issue stays open; I'll circle back to it and read it, and make sure I comprehend it.
C: Well, I pinged a lot of people, just as a heads-up, because I really wasn't tracking. I have one small feature that they want to promote to beta — that's the one that reserves the first block of the service IP range with some logic. And I have another feature I booked — I wanted to book five minutes to present and do a small demo — which is to implement service IP ranges. So, instead of what we have right now, where the service CIDR is hard-coded in the cluster...
B: So I approved that one to move to beta — the static range moving to beta, right?
C: Yeah, that's it — I mean, looking at it, yes.
F: Okay, I'll look at it. The iptables cleanup one also LGTM'd and did not get approved, and I was thinking that was because it needed a PR approver, so I poked John von Martin — but maybe you just need to /approve it.
B: Right — and next week will be a crunch for them, so let's ping them sooner rather than later. I've also put time aside — the entirety of tomorrow — just to go through KEPs and try to get as many things merged as I can.
B: Okay, let me find it — I've got my list here on my issue. I'll run through it after we're done talking and approve. Okay.
A: Multi-CIDR KEP: let's see — we shouldn't need to do anything more there. All right, so: PR for the API, and follow up.
A: Cool, okay. In that case, next up is Sanjeev. Let's try to time-box this, maybe to 10 minutes if we can — I know it might take us a couple of SIG Network meetings to get through; last time we spent quite a while on it. So, Sanjeev, if you're around.
H: So we discussed this before, and discussed a couple of use cases. What we could do today is discuss one or two additional use cases and try to get a sense of where the SIG wants to go with this. I'll just put the core questions out so that we can think about them; if we can get a feel from the group about some answers, that will help guide what we should or should not do going forward.
H: So please think about whether you feel this is high priority, low priority, or something we should never do. Also: what kinds of multi-cluster deployment models do we want to target? Do we want to target all kinds of multi-cluster deployment models, or a specific subset, like flat network models and things like that?
H: Should we define this policy assuming there's an implementation of MCS in operation, or should it be broader than that? Then there are other things we want to get some kind of community feedback on, and we want to continue gathering use cases — we've had some very limited feedback on use cases, but we'd definitely welcome more. So those are the core questions; if you have answers, that would be great.
H: We looked at some use cases. I very briefly mentioned the first one: you have a service that is being imported from another cluster, and on cluster A, which is importing that service...
H: ...you want to decide which pods get to access that imported service and which pods don't. Since we discussed it already, I'll just move on. The second use case we discussed was: if the same service is provided by multiple remote clusters, we should be able to control whether a particular class of pods gets to access that service from certain clusters providing it, but not from other clusters providing the same service. You can see that here.
H: Pod P2 gets to access MCS service foo from cluster B, but not from cluster C. There was a bit of debate about the relevance of this use case in the past, but since we're short of time, I'll skip past it and wrap up — please continue to provide feedback if you can. The third use case, which I don't think we got to, was an ingress filter: on the receiving cluster, we want to be able to decide which sources are authorized to access a multi-cluster service.
B: On that topic — I think this is the most salient, interesting, and challenging part of this whole discussion. I will acknowledge the MCS multi-network proposal; we need to move that forward, we need to figure out what we're doing with it, and I think we do need to support it. For what it's worth, MCS kind of assumed a flat model from the beginning. But if we're going to assume a gatewayed model, then for policy we're left with...
B: Well, let me see — there are three options, I think. There's flat, where, like you said, we can have information about the original client. There's gatewayed without any metadata or encapsulation, where we lose all sense of who the client was, so the only place you could really enforce policy would be at the sending gateway. Or we have a gatewayed multi-network model where we can keep metadata with the traffic — some encapsulation or something, multiple egress IPs, I don't know.
B: Yes, it seems reasonable to have those three categories. And would it be sufficient if we said, hypothetically, that we expect it to work in a gatewayed model, but it's up to the implementation to figure out how to implement that — and some implementations...
H: Okay, well, they can do more: they can enforce policy on the sender side and a coarser version of the policy on the receiver side. So you can have fine-grained enforcement at the sender, but if the receiver — in addition, or for whatever reason — also wants to enforce, the receiver would be coarser-grained than the sender.
B: I agree with you. I think the cluster-set model is a strength, not a deficiency.
B: Okay, so — go ahead. Sanjeev, I actually started writing a PR — or rather, Bernard started writing a PR against the MCS KEP — which explicitly made clear (or tried to, anyway) the idea that even within a cluster set, the control plane can decide to exclude certain clusters based on policy. That is a legitimate cluster-set implementation option: you don't have to assert equivalence across all clusters.
B: If the control plane's policy says, "you know what, never in cluster A — there's no reason cluster A would ever host X, so I'm going to exclude it" — then what that means, again, is totally implementation-specific. So if you had a multi-cluster network policy here, you could say: there is no way I'm going to receive X traffic from A, because the policy says that can't happen.
H: Yeah — and then, finally, just for today, we want to wrap up with use case D, which is: do everything. Basically, use case D says a multi-cluster is identical to a single cluster as far as network policy is concerned. Everything that you can do in a single cluster, you should also be able to do in a multi-cluster — which means the multi-cluster is essentially a single cluster that just happens to be broken out into several physical instantiations, but it's really just one.
H: That has implications. It means this is a flat network model — all the pod IPs are unique, just like they would be in a single cluster. It also assumes that each cluster has access to the control plane of every other cluster, so that it can actually look at the pod labels and everything else that the current single-cluster network policy needs, in order to operate as a multi-cluster policy.
H: Yes — that's kind of what I wanted to say as well.
H: ...the control plane, however it is implemented — whether it is actually the clusters themselves or a separate management cluster. Because if the goal is that everything you can do in a single cluster you can also do in a multi-cluster, then somebody needs to know all the labels in all the clusters so that you can...
B: Yeah. Okay, so I love D — D is how I wish things could work. But actually, if we're going to define it, I think we should define it more carefully: you're assuming an implementation here, right? I think you can achieve the idea that everything that works in a single cluster works across clusters without necessarily having a flat network; you just need a smarter control plane. But let me throw out one thing — I've been thinking a lot about this.
B: Since your last talk, here's what I would love to see — though I'm not 100% convinced it's possible, I think it is. I would love to see us say that multi-cluster network policy is network policy. Well, first, I'd like to see us consider extending network policy to include service accounts.
B: ...and help push it all, yeah. You know what, Sanjeev, I think that's a great idea — that's probably the right answer. I haven't been regularly attending those meetings, but this is a very important topic to me personally, so let me know when you're going to go to that meeting and I will make sure that I attend that one. Just don't make it this coming week, because of KEP freeze.
L: Yeah — so, that's my first thing at my first meeting here. I...
E: I just wanted to say welcome.
L: Thanks for the welcome. It's to gauge the level of support, or maintainability, of the conformance tests for ingress controllers — so I've submitted one.
L: One note is that I already noticed the project didn't build for Darwin, so there's been a very small PR that just fixes that. The second thing I posted, that I wrote in the doc, is that I'd like to know whether anyone actually keeps tabs on the state of the project and whether regular checkups are being done. I have seen that there was no active maintenance, so to speak, recently.
G: A lot of the work on conformance, I think, has moved towards Gateway API now. I'd say Ingress as a whole has generally been marked as a complete API.
L: All right. So do you — or anyone here — know whether there's an active effort in actually running and maintaining the runs of those conformance tests, within the ingress controllers or within the community at all? I'm asking without any broader knowledge or context with respect to the ingress controllers, because I'm just starting up with this, and I wanted to bootstrap the whole infrastructure — the test suites for running this and checking it against a given ingress controller — and to know whether that actually makes sense given the current state of affairs.
G: I think the tests as they exist right now are accurate. I was just looking at your item in the agenda, and I know you had some questions about whether the tests as they exist...
G: ...are a reliable baseline — I think that's really what you're looking for — and my understanding is that they are. I looked at the test failures, and the tests make sense to me, at least as written. But if there's something that doesn't seem to add up, or maybe doesn't line up with the spec or something like that, definitely file an issue or maybe ping on Slack, because, like you're saying, I don't know that there are that many people actively watching that repo anymore.
G: I wish there were, but yeah. I think Bowei's not on this call, but I can check in with him afterwards — I know he can approve PRs in there; I don't have that access. It may be good to, like you said, get the build working again.
M: I don't know if SIG Network is driving the effort on the PR adding data to list iptables — I was just checking the status. I was monitoring the PR and then I saw some activity, but I'm curious whether it is something we are looking to get in pretty soon, and whether it is tied to any Kubernetes release to be available. That's the sort of question I want to ask — but yeah, I don't know if anyone here knows about this.
G: Yeah — I know we're really interested in getting that in. There's been a sig-network mailing-list thread about it as well.
M: Yes — if you look at the PR and the issue, you can see a lot of other issues referencing that one, saying "hey, I want this." Initially they were working on taking another approach of their own, and then they said, "okay, Kubernetes is making one, so let's wait for that one" — so now everyone is depending on it. So definitely, if we can bring attention to this effort, that would be awesome.
C: I cannot share a tab — that's a Linux thing, okay. Who can get the link to the presentation and share it?
C: So, this is about the way services are currently implemented. They use a bitmap to provide consistency between different API servers: every time we request a cluster IP for a service, we get the bitmap from storage, we check that there is one IP free, and we save the bitmap back.
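A minimal sketch of the bitmap scheme just described, purely for illustration: one bit per allocatable IP in the service CIDR, and the whole bitmap is rewritten on every allocation. The helper names are invented; the point is that the object's size is proportional to the range, which is why it becomes impractical for large IPv6 ranges.

```python
import ipaddress

def bitmap_bytes(cidr):
    """Size of a one-bit-per-IP allocation bitmap for a service CIDR."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses // 8

def allocate(bitmap: bytearray) -> int:
    """Find the first free bit, mark it used, return its offset.
    The whole (uncompressible) bitmap must then be written back to
    storage on every allocation - the scalability problem described."""
    for i in range(len(bitmap) * 8):
        if not bitmap[i // 8] >> (i % 8) & 1:
            bitmap[i // 8] |= 1 << (i % 8)
            return i
    raise RuntimeError("range exhausted")

print(bitmap_bytes("10.96.0.0/12"))  # 131072 bytes (128 KiB)
print(bitmap_bytes("fd00::/64"))     # 2**61 bytes - utterly impractical
```

This is what the speaker means by "it grows too much and blows up" for IPv6: the bitmap cannot be compressed, so the per-write cost grows with the range itself.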
C: The implication is that you need to keep writing the whole bitmap for every service that you want to create, and that imposes limits on the scalability of services — you cannot really compress the bitmap, and if you want to use IPv6 networks it grows too much and blows up, and you pay a big penalty. The other problem is that the service CIDR — the same as with the cluster CIDR — is not exposed.
C: Those are flags on the API server. That means it's very difficult to migrate the service CIDR, to resize it, or for any component in the cluster to be able to know what the service CIDR is. So this started two years ago, with a lot of discussion in the initial KEP, and after a few months there was some consensus. Can you go to the next slide?
C: This one. So the idea is to have an object — IPRange, ServiceIPRange, the name doesn't matter for now — that is going to be the generator for the IP addresses.
C: So you have two new objects: the IPRange, which generates IP addresses, and the IPAddress, which references the IPRange; the Service is then able to refer to one of these IPRanges. These are, more or less, the relations: the Service refers to the IPRange, the IPAddress refers to the IPRange too, and the Service owns the IPAddress.
C: As you can imagine, there are a lot of problems with how to implement this — can you go to the next slide? The main problems are that the API machinery doesn't have transactions (Services effectively do, because they're special), so we cannot really do multiple operations atomically. So the idea for implementing this is to use the IPRange object as the source of truth, and we need to maintain backwards compatibility.
C: Let me see how I explain this so it makes sense. Okay, let's start with IPRanges. There is going to be one IPRange object that is the default — that's for backwards compatibility — configured from the API server bootstrap flags, and the IPRanges have these characteristics.
C: When you create an IPRange that is not the default, we set a finalizer, because we cannot delete the object while there are IPAddresses referencing it. To handle the lifecycle of the IPRange — because we may want to resize it, or change the default or primary network — we can use the status: when you create an IPRange, the status starts as not-ready, and then a controller — the IPRange controller — handles the operations and updates the status.
C: That is what is going to allow the other components — Services and IPAddresses — to use the IPRanges while avoiding problems, conflicts, or races with operations on the IPRange. I foresee two operations, which are the ones requested in some issues: being able to resize...
C: ...the network, and being able to switch from IPv4-primary to IPv6-primary. As I explained before, I think we should do this in a controlled way, with the controller. For deletion, as I said at the beginning, it has to have a finalizer, because on delete you need to check that nothing is referencing the IPRange. For IPAddresses, the idea is that they are created and have to match an IPRange.
C: So there is a cross-validation here: because the IPAddress has a reference to the IPRange, we need to get that object and cross-validate it — I don't know if there's a better way to do it — and the reference cannot be updated. To avoid the leak problems we have now, we can have a controller that checks that IPAddresses are not orphaned, that kind of thing. The trickiest part is Services — how the Service is going to use this.
C: When the user specifies an IP, inside the Service path we try to create that IPAddress object, and if it already exists the create will fail with a conflict, so we know we cannot use it. If we can use it, we just set the owner reference, so when you delete the Service it automatically deletes the IPAddress. The trickier case is when we need to automatically allocate an IP address.
C: That's when we need to do multiple round trips inside the Service API: you get the IPRange you're referencing, then you list all the IPAddresses that are already allocated — we can use a label selector for that — and then you implement the same logic we have now: you just try to get one free IP address from the pool.
C
We
don't
have
problems
because
caster
ips
are
unmutable
and
for
delete,
as
I
say
we
can
set
the
owner
on
the
ip
address
object,
so
the
garbage
collector
is
is
going
to
delete
and
in
case
that
something
happens
during
the
process
and
the
service
is
not
allocated
or
the
ip
server
dies.
We
have
an
ip
address
controller
that
is
going
to
to
check
this,
and
if
you
go
to
the
last
one
so
to
do
this
is
this.
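The allocation flow just described — get the referenced range, list the already-allocated IPAddress objects, pick a free one, and rely on create-conflict to resolve races between API servers — can be sketched as follows. In-memory dicts stand in for storage here; function names are invented and this is not the real API machinery.

```python
import ipaddress

ip_addresses = {}  # address -> owning service (stand-in for stored objects)

def create_ip_address(addr, owner):
    """Creating an IPAddress object is the atomic step: a name conflict
    means another writer won the race for this address."""
    if addr in ip_addresses:
        raise KeyError(f"conflict: {addr}")
    ip_addresses[addr] = owner  # owner reference -> GC cleans up on delete

def allocate_cluster_ip(service, ip_range_cidr):
    """Walk the referenced range and claim the first free address."""
    net = ipaddress.ip_network(ip_range_cidr)
    for host in net.hosts():
        addr = str(host)
        try:
            create_ip_address(addr, service)
        except KeyError:
            continue  # already taken; try the next candidate
        return addr
    raise RuntimeError("service IP range exhausted")

print(allocate_cluster_ip("svc-a", "10.96.0.0/30"))  # 10.96.0.1
print(allocate_cluster_ip("svc-b", "10.96.0.0/30"))  # 10.96.0.2
```

Note how the per-address objects replace the single shared bitmap: each allocation touches only one small object, and a crash mid-flow leaves at worst an orphaned IPAddress for the repair controller to reap.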
B: So, Antonio, I think I'm excited about this API, right? Obviously I've invested in it already, but I think it's not as scary as you make it sound. The lifecycle stuff is interesting, but it's already pretty well thought out — you've thought it out. I think garbage collection, owner references, and finalizers handle almost everything we need to handle. So I'm not as pessimistic as you are on this — but maybe that's my rosy glasses.
C: No — okay. As I say, I have a prototype working, and the thing is, it's really nice and it works really well. I still have to measure the performance impact, and the other open question is all these edge cases — you create something, something dies, what happens? But I think that with the controllers and the finalizers we have...
C: Yeah — for context, for the people that were not in the session: the main pushback from Wojtek and Daniel Smith was, what happens if in the middle of an operation you are not able to finish — how do you clean up that mess? And what I'm saying is: we can solve it with the finalizers and the controllers, so we are sure that we always converge.
A: One random question: do we intend to allow a service — the person or thing creating the service — to select the range it wants, or is that out of scope?
J: So, I have heard that for external IPs, because there it matters — yes. But VIPs for a Service on a cluster? I did not hear it there. And to your point, you can do that with some admission control or whatever — again, it's the same argument we had around the pod IP early on in this discussion.
A: Well, we have one last item, really quick, and it looks like — let's see — a progress report on the ingressClassName documentation improvement.
E: Hey, good morning, everybody. I'm new to this meeting — it's because you moved it to an EU-friendly time, so thank you for that. ("Thanks for coming!") Thanks.
E: Actually, while I was writing this item, I realized that the PR I was trying to get the status on has been approved by Rob — so that's it, it's a success, thanks. I participated in this ingressClassName conversation because I'm on the Submariner team and I interact with ingress controllers a lot, and when we implemented the Gateway API — sorry, not the Gateway API — we struggled and made mistakes because we didn't...
A: Welcome — and one note: it does need an API approver to /approve it; unfortunately, Rob can't do that.
E: Ah, that's great, thanks. So the person who actually did the PR is not me, it's Noah — he's not here... or maybe he is? No, he's not here. Anyway, thanks — thanks for...