From YouTube: Kubernetes SIG Network meeting 20210624
B: Looks good. All right, I was able to — I just pinged a few automatically, so let's go through the interesting ones. This one: session affinity for the same service on different ports is forwarded to different endpoints. The issue basically comes down to session affinity.
B: ClientIP affinity is actually per port, not per service, but the docs don't make that clear. I went off quickly and confirmed that, in fact, when we set the iptables "recent" rule, we do set it for each endpoint, so it won't cross-check across endpoints in the same service.
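To make the scenario concrete, here is a minimal sketch (illustrative names and ports, not taken from the issue) of the Service shape under discussion, using the k8s.io/api/core/v1 types: one ClusterIP Service exposing two ports with ClientIP session affinity. With the iptables behavior described above, affinity is tracked per port, so the same client can land on different endpoints for port 80 and port 443.

```go
// Sketch only: a two-port Service with ClientIP session affinity, using the
// k8s.io/api/core/v1 types. Names and port numbers are illustrative.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func exampleService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"app": "example"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{
				// With today's iptables proxier, affinity state is kept
				// separately for each of these ports.
				{Name: "http", Port: 80, TargetPort: intstr.FromInt(8080)},
				{Name: "https", Port: 443, TargetPort: intstr.FromInt(8443)},
			},
		},
	}
}
```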
B: There's a question in here from Lars about whether IPVS can support the correct mode here at all, so I think that's a pretty important question to answer. We could either fix it — I think the iptables case would actually be pretty straightforward — or we could just fix the documentation.
B: If we fix the documentation now, then I think we're codifying the behavior; whereas if we were to fix the behavior now, that could be a breaking change, although it seems really, really unlikely to me. I can manufacture a case where it would fail, but I don't know that anybody would do that in real life. We could add an option to do it per port per client, which is what it is currently doing.
D: Yeah, Andrew is on the meeting and can perhaps confirm it, but I believe if you list IPVS you see a number of targets, and each target is one port. There is no grouping of ports possible, as far as I know, in IPVS.
B: If I'm an FTP server, you talk to me and I say: go talk to me on port X — and I have a range of ports, which is a separate problem. But I have a range of ports, and now I'm listening on that port for you, and you come back to the load balancer and get steered to some other backend. It's not going to have that port for you — or it might...
G: ...be used by somebody with a different protocol. So what I'm expecting is: I have a service, I publish two ports — I mean, I don't know how you guarantee what the application is going to get. I don't see the affinity between multiple ports; I don't know how you develop an application that way.
B: I don't know, so I asked on the issue if they can tell us more about the use case. The simple answer is just to fix the docs — maybe even go as far as adding a new constant that means the same thing but is clearer, then document that constant and say the old ClientIP is a deprecated symbol and per-port-something-something is a better symbol. But I thought it was an interesting discussion, and it was in fact interesting.
J: I would actually think it's better that the client should go to the same endpoint for all ports, because a ClusterIP is representing an IP, not an IP plus port. So different ports on the same service should go to the same endpoint, just like they would if you were a single client talking to a single server.
B: Yeah, so that's the crux, right: there's a reasonable argument in both directions. Feel free to jump on and help make the argument either way, though I do think it matters very much whether we could actually, practically implement the correct behavior in the various implementations.
B: And "IP" — I have to assume that means the entire IP, including all protocols; I would feel bad fixing it for ports and not for protocols. All right, let's move on. Yep, this one is a cross-subnet communication issue on RHEL, without a whole lot of information about the networking setup — they say Flannel.
B: So Lars was on here asking for more information. I left this one in triage just in case there's anybody else who wants to pay attention to it. Right now it's sort of back with the original poster — sorry, in their court. I can't read and talk... any comments?
B: Sure. Dual stack: a report of issues with validation. Cal is already all over it. The interesting one — let me go back to the top, sorry.
B: Oh, okay. Well, this issue in particular is actually two issues. Part two, I think, is clearly correct behavior — maybe not obvious, but correct. The first part, I think, is open for interpretation.
B: Specifically, they have a dual-stack — a RequireDualStack — service with both families, and they're changing it to SingleStack policy without changing the families; they're leaving the families as IPv4 and IPv6. They expect it, I think, to fail, and it is in fact succeeding. My inclination is that that's correct, because the user clearly meant to downgrade from RequireDualStack to SingleStack, and I think that's Cal's interpretation too — Cal, right? Yeah.
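A minimal sketch of the update being described, with illustrative values and using the k8s.io/api/core/v1 types: the ipFamilyPolicy is flipped from RequireDualStack to SingleStack while ipFamilies is left listing both families, and, per the discussion, the apiserver accepts the update instead of rejecting it.

```go
// Sketch of the update under discussion: the policy is switched from
// RequireDualStack to SingleStack while ipFamilies deliberately keeps both
// families. Per the discussion above, this update succeeds today.
package example

import corev1 "k8s.io/api/core/v1"

func downgradeToSingleStack(svc *corev1.Service) {
	// Families stay as they were on the dual-stack Service.
	svc.Spec.IPFamilies = []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol}

	policy := corev1.IPFamilyPolicySingleStack // was RequireDualStack before
	svc.Spec.IPFamilyPolicy = &policy
}
```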
H: We validate that on dual-stack create, and we validate on update as well.
E: This behavior is surprising. I would have said that if you're modifying a new field, you have to modify all the other new fields to match it. We had special logic that automatically does the right thing for people who were modifying pre-dual-stack fields, because they didn't know they had to modify the new fields, but it seems weird to say that in this case, if you modify one field, we'll just change other things to match it, assuming that's what you meant. I mean, do we...?
B: Okay, so it's not completely unprecedented, but again, it was interesting enough to bring up here. So it's open — Dan, if you want to jump on and discuss. I'll also assign myself, because I won't see it otherwise. Dan, you want me to assign you? Sure.
B: Next we're going to do two more. NGINX: this is a request for metrics that I don't really understand. Ricardo, you...?
B: All right, and maybe we'll make this the last one. Kubelet restart: so apparently kubelet sets the node NotReady at startup, and then very quickly sets it back to Ready, which is causing some load balancers to react badly. So I don't know what the real answer is. Maybe we want to teach the cloud controllers some hysteresis, or maybe we want to figure out why kubelet is doing that and have it not do that, or...
B: The second question being: why? Yes? Okay, well, that's what I asked here. I didn't try grepping for it, because this was just a few minutes ago, but we can go find this code and figure out if it's just sort of doing it out of habit or if it is actually doing something.
A: On purpose. Is it — I mean, I guess, since there are cloud load balancer targets, there's a cloud provider involved. I remember we used to set NotReady with cloud providers, especially GCP, until we could get the cloud routes, and then once we had cloud routes it would get set Ready — at least in some particular configurations.
L: A behavioral change in service controller — I forget which release, but up until that release service controller updated the node pool of a load balancer on a 60-second periodic loop, and there were folks complaining that when a node goes down, or a new node is added, or you have a burst of nodes being spun up quickly, those nodes don't get added to the load balancer backends for up to 60 seconds.
L: So there was a PR that updated service controller to have a node informer, so that it can watch Node objects and then take nodes in and out of the load balancer pool based on the node condition and other factors. But I guess the side effect of that change is that now, if you have a node flapping between Ready and NotReady, then service controller is adding it to and removing it from the node pool back and forth.
L: I think I'd rather figure out why kubelet's doing what it does. I mean, I don't think we can — I'm sure there are like a million reasons why kubelet could end up going NotReady and Ready, right? It could just be a bad network or whatever. So I do think — I don't think it'd be too costly for service controller to have some sort of backoff logic or something, where it can't do more than, like, two updates in some known interval. That could work.
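As an illustration only — an assumption about how Lars's suggestion could look, not what service controller does today — here is a sketch of bounding load balancer re-syncs with client-go's rate-limited workqueue.

```go
// Sketch (an assumption, not current service controller code): push node
// syncs through a rate-limited queue so that a node flapping between Ready
// and NotReady cannot trigger unbounded load balancer updates.
package example

import (
	"time"

	"golang.org/x/time/rate"
	"k8s.io/client-go/util/workqueue"
)

func newNodeSyncQueue() workqueue.RateLimitingInterface {
	return workqueue.NewRateLimitingQueue(workqueue.NewMaxOfRateLimiter(
		// Per-item exponential backoff: 5s, 10s, 20s, ... capped at 5m.
		workqueue.NewItemExponentialFailureRateLimiter(5*time.Second, 5*time.Minute),
		// Overall cap of roughly two updates per 30s once the burst is spent.
		&workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Every(15*time.Second), 2)},
	))
}
```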
B: I think we're out of time. All right, I'm done. Andrew, if you've got context, please go ahead and throw it in there.
B: Oh right, we punted that from last time. All right: long, long ago, when we were all first thinking about network policy, we designed network policy to be this really cool API that's really focused on internal, intra-cluster traffic, and it was all based on selectors and all the Kubernetes goodness. Then we added ipBlocks because we wanted to allow and disallow traffic from outside the cluster directly to pods, and that seemed reasonable at the time. And we talked about — I have very clear memories of talking about it, but I'd be damned if I can find the notes — what happens with network policy and traffic that comes in through a load balancer or through a NodePort.
B: What's supposed to happen? This came up recently for someone I was talking to, who was very surprised that it didn't work, and I thought: no, no, it's not supposed to — that policy is not supposed to apply to NodePorts. My memory was that we had decided explicitly that it doesn't apply to external traffic, but it does sometimes, depending on the implementation of network policy. So I thought it was worth talking about: what do we...?
E: It is completely undefined what IP addresses the ipBlock matches, and network plugins and cloud providers and other combinations of things can basically do anything, and it's allowed. Part of the reason for that is that there was actually no discussion of this when we first added ipBlock, because people were really only concerned about the egress case, but we added ipBlock to ingress as well, for parity or whatever — just so that they all have the same things.
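For reference, a sketch of the construct in question using the k8s.io/api/networking/v1 types (the CIDR and labels are illustrative): an ingress rule whose peer is an ipBlock. As the discussion notes, which source address such a rule is matched against — pre-NAT or post-NAT, node IPs or pod IPs — is effectively implementation-defined today.

```go
// Sketch only: a NetworkPolicy whose ingress rule uses ipBlock. Which source
// IP this is matched against is what the discussion above calls undefined.
package example

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ipBlockIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-from-cidr", Namespace: "default"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					IPBlock: &networkingv1.IPBlock{CIDR: "203.0.113.0/24"},
				}},
			}},
		},
	}
}
```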
B: Yeah. And I know there was another issue — I think it was Cilium — that interpreted node IPs and ipBlocks in a mutually exclusive way. So you couldn't specify an ipBlock that was your nodes, because it was only accepting identities or something for nodes.
B: You can't specify pod IPs in an ipBlock in Cilium — maybe you can't do nodes, or maybe it was a different plugin — but yes, and it wasn't clear to me whether that was correct or not either. So: undefined.
E: ...corner of the space. Yeah, I think that has also been explicitly defined as undefined now, because Jay was running into that with the new network policy tests. And actually, I think Cilium is — er, no, sorry, this is a different thing that's undefined. But yeah, there's a lot of undefined stuff around ipBlock, unfortunately. We should not do that again in cluster network policy or network policy v2.
B: Great question. It seems obvious to me that we should make sure that all traffic is subject to network policy, but the logical implication of that is that we have to build an entire network policy implementation into kube-proxy, because it has to process it all before it does the SNAT for a NodePort — which we really didn't want to do. We really did not want to build a full policy implementation.
J: So I think it would be fair to state that ipBlock, when used on an ingress policy, may not yield the desired behavior; and if you want to do something which explicitly separates pre-NAT IPs from post-NAT IPs, there should be a new feature which makes it clear whether we are matching on a pre-NATed or post-NATed IP.
E: So the documentation that says it applies at the pod was written well after the fact. This was, you know, when we had multiple incompatible implementations and we were like: what do we do? And we said, okay, we'll just document what we have — that's at least better than not documenting what we have. But...
B: It also makes it interesting, because I can use a NodePort inside the cluster to bypass network policy.
M: No policy is going to stop that — how do you implement that? And of course the — sorry, go ahead. There are just a lot of scenarios like that that pop up.
B: Yeah, yes. And for those people who are legitimately using a load balancer and want to use network policy with it: it'll work on VIP-like load balancers and not on proxy-like load balancers, right?
B: ...point, but sure. Well, and Ingress has no standard API for specifying source rules; at least Service type LoadBalancer has loadBalancerSourceRanges, but Ingress has nothing for that — although Gateway maybe could.
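For reference, the Service-level knob mentioned here, sketched with an illustrative CIDR: loadBalancerSourceRanges on a type LoadBalancer Service is the closest thing to source rules that the core API offers, and Ingress has no equivalent.

```go
// Sketch: restricting a LoadBalancer Service by client source CIDR. The CIDR
// is illustrative; Ingress has no equivalent field, as noted above.
package example

import corev1 "k8s.io/api/core/v1"

func restrictLoadBalancer(svc *corev1.Service) {
	svc.Spec.Type = corev1.ServiceTypeLoadBalancer
	svc.Spec.LoadBalancerSourceRanges = []string{"203.0.113.0/24"}
}
```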
J: You would only — use this with caution: it will only apply to un-NATed traffic, or whatever the pod sees, you know.
J: So it seems like it's still useful, because let's say a cluster admin wants to block intra-cluster traffic from a source — they've identified that a particular pod is emanating malicious traffic.
E: Well, so, okay: virtually all ipBlock-based things that people want to do are, I think, better served by cluster network policy than by regular network policy. But there are use cases — well, I haven't read the cluster network policy KEP in a while, but there are use cases like: I want to block this enemy IP range, Russia or whatever, from getting to any of my pods. So, right, I want a cluster network policy that says all pods reject traffic from this IP range — and that only works...
J: Now, here's one more thing on that one, Dan, which is that in that particular case you could choose to apply the policy on the pod that is actually — if you happen to have an in-cluster load balancer, you could apply it to the pod that is actually running the load balancer itself, because that pod is seeing the pre-NAT traffic.
J: Sure, but then it's not a cluster network policy feature. No, it could still be a cluster network policy, because the first thing we want from cluster policy is something that cannot be overridden by a namespace user. So the cluster network policy could have a designated cluster load balancer, and we put in a policy which blocks this source on the pods labeled by the cluster admin as being the ingress load balancer.
M: You're not ever going to be able to fully guarantee that you will see the original source of that packet — the source IP of that packet — right? I mean, there is almost never a guarantee, which is why there's ambiguity here, and it's almost impossible to define. So I think it's something we need to decide upon: do we not want to address ingress, like...?
B: It is — it's interesting. One of the things that Gateway doesn't yet handle is NodePorts; it's on my long list of things to try to hack in. If a NodePort was itself a Gateway definition, maybe that would change how we think about this. But I think we should probably push ahead — there's a lot more on the agenda.
A: ...can't say — we should time-box this. Is this something that we can punt off to the cluster network policy working group, or network policy v2? Sure, okay, we can.
A: Right, next up. Bridgette, you had crossed half of your item out — okay, you said no need to discuss, it's in progress. Okay, thank you for that update. Now, next up: ingress-nginx status update.
K: Yeah, I forgot that I'd put that on — I probably put it on at the last meeting — but it was just to update you on ingress-nginx status. So we've released an alpha version, the one deprecating the v1beta1 API. The NGINX and F5 folks jumped into our last meeting and they are proposing to help us with a lot of stuff, including Gateway APIs and an implementation in the community ingress.
K: And I don't remember anything else. Yeah, and we are going to support versions smaller than 1.19 just for the next six months, and then we are going to drop them, because the effort for us is too much to keep maintaining two versions and two branches of ingress-nginx.
G: You know we used to do this thing of waiting until the last week to rush through reviews of the KEPs and things I created. There are two pull requests graduating to beta that are easy to review — I mean, they are minor changes and I think they already went through a lot of reviews — and there's this graceful-termination endpoints PR that I've shared multiple times, and it's a stack. So I just want to... I think that these are the ones that are...
G: That's it. I don't know if there are people that have another PR, for a KEP or something, that they want to bring up, because I didn't find anything else.
J: Yeah, I'd like to take this moment to request feedback on the cluster network policy KEP. We did try to DM a few folks as well, but it would be nice — I think we have resolved some of the feedback, and we do have some outstanding issues, and I have laid out a summary of that in the issue and on the KEP. But it would be great if we could have some more feedback.
J: I think there was a good amount of feedback on the KEP, and we did reply to some comments and resolve some of them, and it would be good if we could get more discussion on those items. There are definitely some outstanding issues that we've kind of outlined — especially the actions, whether they work for everyone, and some of the ipBlock discussion, which is also relevant for cluster network policy.
J: So those are the outstanding issues, and there are some minor ones which I'm pretty confident we can hash out on the KEP itself. But the key items are essentially making sure that everyone's on board with the actions proposal that we have, or whether the alternatives make more sense — I think that's something to discuss.
O: Yeah, and also Andrew, I think, posted in the chat that we have a bunch of user stories that are also ready for review, and some of them are new use cases since we last posted the CNP KEP for review. I think some use cases speak to whether we actually need a policy default — sort of like policy rules that are prioritized after the Kubernetes network policies...
O: ...instead of being rules that are enforced. So those are in the new network policy API repo that we created, and we would definitely appreciate reviews and comments on those user stories as well. That's sort of coupled with the CNP KEP, I guess.
M: In future, our goal is always going to be: document user stories, get them approved — for network-policy-related objects, get them approved and merged in that repo — and then we'll pull them into the KEP. This one kind of happened backwards because it was our first stab at it. There were problems with the existing user stories, right, so we took a step back, reopened them, and put them up for review, and we haven't gotten any review yet, so we've kind of continued back down to the KEP. But — I think I'm saying that right, Abhishek?
J: Yeah, but to answer that: those user stories are also in the KEP. We just opened up PRs in the network policy sub-repo to discuss them individually, but we have reflected those changes in the KEP as well.
O: Yeah, essentially because there are a lot of comments on the KEP, and people can get lost on whether the opinion they have on a user story has already been discussed before, and blah blah blah. So we wanted to sort of decouple those two efforts, and then if anybody reviewing the API design itself has questions on why there are such user stories...
O: ...we can point them back to a user story that has already been merged, and the discussion on that user story, which is already there, so that it will be clear to other people — rather than pointing them back to the KEP and having to find it in a long list of KEP comments. If that makes sense.
P: ...io API group — apparently x-k8s.io is intended for experimental APIs, which may have been a good starting point, but as we want to become an official Kubernetes API, it should be k8s.io, so we're trying to figure out what that group should be. Right now we're networking.x-k8s.io, but we can't do networking.k8s.io, because that's already used, and you can't serve the API as a CRD and also as a built-in resource.
P: So all of that to say: we're trying to find a name that works, but maybe also provides some form of consistency with, I don't know, network policy, with the new CRDs that are being brought in there, or any other APIs we may develop in the future. As one example, we could have gateway.networking.k8s.io, so everything is kind of grouped under networking, or we could just have a top-level gateway.k8s.io and keep on having top-level things.
P: I wanted to bring it up here because it involves the broader SIG in terms of where this belongs, and I'm interested in feedback. I linked an issue there. I know some of the network policy people have already given some feedback on that issue, but if there are any other people that are interested, comment here or on the issue. I think that right now the leading thing might be gateway.k8s.io or gateway.networking.k8s.io — I'm not sure, but one of those.
P: Yeah, I can't think of any examples — I haven't found any that do that — but gateway.k8s.io does feel maybe a little too generic, so I don't mind dropping it down one level. Yeah, I'm not sure.
P: Yeah, I can follow up. I just wanted to make sure people were aware of it, and I can make requests. You know this will have to go through — because we're moving to the k8s.io API group, it will have to go through the formal API review process, and surely the API reviewers will have final say on...
P: ...whether an API group makes sense. I just wanted to get a feel for what we'd prefer to request, but yeah, I can ask around as far as what's possible. My understanding is that both are possible: in the issue I linked from this agenda I also linked to a Slack thread that included Tim and Jordan and a few others, and it seemed like both were possible, but it's more a question of what makes the most sense from a SIG Network perspective, and I don't know.
S: Yep, yeah. As mentioned, this is following up from our conversation two weeks back. I think where we left off last time was that we would go ahead with the option to allow an empty port list in service validation, in the Service spec. And yeah, the plan was — or at least what we roughly discussed —...
S: ...feature-gate it and let it soak for a few releases until the Service API clients, like kube-proxy, have caught up. So yeah, I just wanted to see if there was any objection to that. Of course, I'll update the KEP to reflect this and have it reviewed, but since we were running out of time last time — last SIG meeting — just as we discussed it, I wanted to put it here, if there was any...
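A rough sketch of the gating being proposed — only an assumption, since the KEP update is still pending; the flag and message below are placeholders, not real Kubernetes identifiers: service validation would allow an empty ports list only when the new feature gate is enabled.

```go
// Hypothetical sketch of the proposed gating; "allowEmptyPorts" would be fed
// by a feature gate whose real name lives in the KEP, not here.
package example

import corev1 "k8s.io/api/core/v1"

func validateServicePorts(spec *corev1.ServiceSpec, allowEmptyPorts bool) []string {
	var errs []string
	if len(spec.Ports) == 0 && !allowEmptyPorts {
		// Current behavior per the discussion: an empty ports list is rejected.
		errs = append(errs, "spec.ports: at least one port is required")
	}
	return errs
}
```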
J: If we're okay — just going back to that ipBlock for a second — would it be fair enough, would everybody feel it's reasonable, that for the first iteration of cluster network policy we leave out ipBlock altogether, with the understanding that it can be brought into a future version seamlessly, without an API backward-compatibility issue?
E: ...sense — I mean, it all depends on what the user stories you're trying to solve are, right? If cluster network policy would not be sufficiently useful to administrators without ingress ipBlocks, then it doesn't make sense to define it without ingress ipBlocks. If it is still a good, useful feature even without that, then it might make sense to define it without that.
J: So if that is the logic, then I think it is still useful, with caveats — meaning, for example, the cases we mentioned earlier: either you're coming in without any NATing, or you are coming in with NATing but you get to apply the policy before your NATing actually happens.
J: I think Casey from Calico is here. Can you comment on whether the Calico global policy, or even the local policy, handles the ingress case and separates out pre-NATed and post-NATed IPs?
F: We do have a concept of policy that applies pre-DNAT, but as far as Kubernetes policies go, our ipBlocks are implemented based off of the post-NATed traffic.
F: I don't think that it's necessarily a perfect solution. I mean, people don't know where there is and isn't NAT in the cluster — for Kubernetes users it's not intuitive and it's not consistent, and I'm not super comfortable with a model that makes the user have to understand that.
H: You are using this thing to drive that thing, so the user is fully aware of what they're configuring. That's not the case with network policy. Network policy has to be split into two parts: a part that works the same way irrespective of how you run — like it's a Kubernetes thing, all right: you configure it in a certain way, you get a certain behavior, irrespective of where the cluster runs — and then something for anybody who wants to add value-add services, right, or specific behavior, or a differentiator, because we have to allow this.
J: One other sort of related point is that we are re-documenting the networking model, and there still seems to be a little bit of ambiguity in the networking model when it applies to host networks and the kubelet accessing others. Maybe we should have the networking model augmented with policy...
J: ...things, yeah — whatever we've been calling the quote-unquote network, which, Antonio, you are addressing with the kubelet. Yeah, so we can add a policy section there as well.
H: Yes. So I think I have raised a lot of complaints about the way the volume controller works in Kubernetes, but I have to admit the idea of PVC/PV — the abstraction of the provider, and the deliberate "you are using this disk type" — all right, we're not fooling you, or not to say hiding this fact from you: you are doing it as a user. So that part, I think, is really well done, and we should take inspiration from it.
B: I'm trying to figure out what that actually means, Cal. I just see that we're at time, but I would love it if you — I feel like you have something in mind when you say that, and I would love to see you expand on that.
Q: Not at the moment — well, yes, actually, I take that back. We do have a generic policy attachment framework, which might be where you would do it.
B: Probably. I'm struggling to come up with a coherent suggestion right now.