From YouTube: Kubernetes SIG Network meeting 20200204
A
B
B
All right, first issue. We have a relatively low number, which is great. First issue: node-local DNS with NOTRACK breaks Calico, because Calico uses default drop, I guess, and therefore, without running through the NAT table to get approved, these connections are not going to be allowed.
B
Maybe something like that. I asked a question here just to make sure I understand what's going on there; the rationale for NOTRACK is listed here. I'm happy to take this one, I guess, for now, unless Pavitra is here and wants to spend some time on this.
B
Okay, manager defaults, the prune whitelist. So Jordan says in 1.22 we will stop serving extensions/v1beta1. We apparently still have some stuff in-tree that uses extensions/v1beta1. Ingress seems like an easy cleanup.
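For reference, a minimal sketch of the kind of cleanup being described: moving a manifest off the deprecated extensions/v1beta1 Ingress onto networking.k8s.io/v1. The names and values here are illustrative, not taken from the actual in-tree manifests.

```yaml
# Before: deprecated API group, no longer served after v1.22.
# apiVersion: extensions/v1beta1
# kind: Ingress
#
# After: the supported networking.k8s.io/v1 form (illustrative example).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix       # required in v1
        backend:
          service:             # v1 nests the backend under "service"
            name: example-svc  # hypothetical Service
            port:
              number: 80
```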
D
B
We have a failing test, "HostPort, no conflicting ports." This is the one Antonio did. Did I convince you on that other thread or not?
C
B
So, for context, for anybody who wasn't in the conversation with Antonio and me: we do this thing with host ports, where the kubelet and kube-proxy both try to do this. They look at a port that they're going to install some iptables rules for and try to open the port, to make sure that (a) we can, and that (b)...
B
...nobody else ever does, so that nobody gets surprised by listening on port 22 and then having a service steal it away from them, because the kernel won't warn or anything if you install a port forward that hijacks a real port. And we thought, I thought, we were being very clever back then: we'd open the port and we'd just sit on it, not do anything with it, so that if we couldn't open it, we could at least scream about it, right?
B
But it turns out that (a) just screaming about it doesn't really do any good, because nobody looks at the logs anyway, and (b) we're not really willing to not claim the port, because that would be sort of a violation of the Kubernetes API. So what we're left with is all this logic around opening and maintaining known ports, which needs maintenance, apparently doesn't work in some cases, has known bugs, and has very little return on investment, in my opinion. So I was prepared...
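For readers unfamiliar with the feature under discussion: hostPort is set per container port in a Pod spec, and those mappings are what the kubelet/kube-proxy port-claiming logic above is trying to protect. A minimal illustrative Pod (names and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example        # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                # illustrative image
    ports:
    - containerPort: 80
      hostPort: 8080            # traffic to <nodeIP>:8080 is forwarded to the pod;
                                # this is the port the node components try to claim
```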
C
C
B
B
Okay, I just opened it in a window. I will take a look at it. I will... oh, it's already assigned to me, how about that? Okay, so I think we can accept this one. Sorry for the side conversation; we'll get back to the node port conversation another time.
B
"Removal of service proxy label at endpoints is not reconciled." I see J all over this one. Oh yeah, I'm just going to try to reproduce it on a node port, because it's doing an external load balancer, but I work with them, so I can help, okay, yeah. My guess is: if this is really a bug, and it sounds like it plausibly could be, you shouldn't even need a node port; it should be the same on any endpoints, yeah.
B
C
G
So if this is not accepted, it should be. I think that PR for the... and I can provide a little context on this: there are a few customers we have where they always use a default chain that's drop or reject, and we're discovering a lot of corner cases where we assume the default chain is accept, and that's why it's worked before. But as soon as you change the default chain to drop, there's a lot that breaks, like the health check.
B
C
I think these are tests that the conformance people are adding, I don't know the name of the group. They are adding some tests to validate the API endpoints, not the Endpoints and Services... got it. But they had a PR for this and it just scrolled down.
B
Okay, let's load that up. It says it was merged.
F
No, I mean we had talked about this in the KEP, or sorry, in the SIG meeting; it's something I was actually starting to write up.
H
E
C
B
We're now into fairly old ones, November. This is the one I replied to, the VXLAN one, but good.
A
B
Fun, okay. And "improve service node..." This is a good one from way back that is still marked as not triaged.
B
I left it open because I thought it was funny that it was more than three years old and not triaged, but I think it's more of an artifact of the way the triage label came to be. I'm actually going to revisit it, so I'll assign this to myself.
A
Got it. All right, thanks Tim. Next up, Antonio wants to talk about dockershim deprecation and kubenet.
C
Yeah, I sent an email to the mailing list. I don't know exactly how they are going to do it, but if they just remove the code from the repo, we are going to lose kubenet. So I don't know; I don't see any job running it in the sig-network testgrid, but I don't know about everybody else.
C
B
I don't know of... I mean, the dependencies would be users using it, right? As far as I know, the major providers all do their own thing with one of the well-known CNIs, or do something custom.
B
A
H
Clusters use kubenet, that's the concern. The other concern I have, other than that, is: I've seen some code setting up the network using calls to the dockershim/network package. That was about a year and a half back, so I'll check if we have that and I'll report on the Slack, and if it's there, then we need to queue some work.
I
F
Okay, so I think we're okay there.
A
A
Do we need to do anything with, like, kubeadm, to specify other kinds of network setups by default? I mean, are those things good? That's what I'd be worried about.
B
Good question. I haven't been paying attention. I assumed, perhaps wrongly, that the folks doing the dockershim deprecation were paying attention to that. I will have to look at it.
B
B
B
I believe there are some folks looking at that already from the GCE side, but I'm not sure who it is. I can look around there too. Okay.
A
All right, so it looks like there is some work to do. Are the people deprecating dockershim going to do that work, do we think, or is that something that we need to look into?
B
I don't know the answer to that. I don't even know who's the tip of the spear for the dockershim at this point.
B
I'll do a quick reach-out to the folks that I know are working on it and see if I can get some positive confirmation. Okay, thanks! They would probably be aware of all the players and what they're doing on this; it seems like James is always going to start.
A
Okay, all right. Next up, cluster network policy.
A
All right, that is the follow-up from last time. If there's any way we can hold it to 20 minutes, because we still have a lot of other items on the list to go through, that would be great. Do you think you can do that, Abhishek? "Yeah, sure, we'll try to be quick." Okay, thank you. All right.
D
J
In the meantime I'll just do a quick recap. Grobin presented in the last meeting the motivation behind the cluster network policy resource, specifically for the administrator use cases, to complement the developer network policies.
J
Specifically, what we want is that cluster network policies should allow admins to write strict rules which cannot be overridden by developer-written network policies, and also to write some default rules for the cluster, which can be overridden by Kubernetes network policies.
J
We also looked at one proposal for the structure of the cluster network policy with rule actions. Essentially, we introduced three types of actions: the Deny action, to drop traffic that matches the rule; the Allow action, to strictly allow the traffic; and the more complicated BaselineAllow action. But today we wanted to focus a bit more on the two core problems that we are trying to solve. The first problem is precedence.
J
That is, exactly how does the cluster network policy interoperate with Kubernetes network policies, and how can we create some sort of a bookend, in terms of the Kubernetes network policies, to achieve the two sets of use cases that we were looking for.
J
So previously we looked at our rule actions to solve this problem, but there are additional ways in which we can achieve this, and we have documented those alternate proposals towards the end of the slide deck, every proposal in a bit more detail. I doubt we'll get time to review all of those proposals, but we can do that offline.
J
The other core problem that we want to look at is the selector semantics. Since this resource is at a cluster scope, we should understand what the selector semantics are like. What does a pod selector mean at a cluster scope?
J
So Yang will do a walkthrough of use cases, with examples, to clarify these two problems and how we solved them with our original rule-action proposal. Of course, we can do it in other ways as well, but at least we can get some more clarity here as to what those problems are. And also, before we end...
J
We hope to get some feedback on additional features that we plan to propose along with the cluster network policy as part of the spec, something which is not core to the proposal but definitely adds value to the cluster network policy resource. So, having said that, Yang, take it further.
K
Yeah, thanks Abhishek. So I'll go as quick as I can here. We have some examples on precedence that we prepared, and for the first use case that Abhishek talked about, we want to impose some default sort of rules for every namespace.
K
K
This is what we call namespace isolation, so to say. If we consider the simple cluster shown on the left, there are four pods in two namespaces, and because there are no policies, all pods can communicate with each other. Now, for this example, we're focusing on the pod color=blue, and suppose the cluster admin wants to set a baseline rule for that pod to only accept traffic coming from its own namespace.
K
Now, in real cases, people might not want to apply a policy to a specific pod; they usually apply policies to namespaces. But to be clear with this example, we only look at this pod. This particular thing can be achieved with a cluster network policy with the rule action BaselineAllow. How we do that is that we select this pod in appliedTo and then add an ingress rule.
K
We say, okay, we are only allowing traffic from this namespace, which has metadata name equal to db, with the BaselineAllow action. We want the BaselineAllow action to have the implicit isolation meaning, just as the current Kubernetes network policy does. So when this policy is applied, what's allowed is the traffic from its own namespace, and whatever traffic comes from a different namespace is denied because of the implicit isolation.
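A minimal sketch of what such a policy might look like. The ClusterNetworkPolicy kind, the appliedTo field, and the BaselineAllow action are from the proposal being discussed, not an existing Kubernetes API; the API group and labels below are placeholders mirroring the slide example.

```yaml
# Hypothetical shape of the proposed resource (not a released API).
apiVersion: networking.example.k8s.io/v1alpha1   # placeholder group/version
kind: ClusterNetworkPolicy
metadata:
  name: baseline-db-isolation
spec:
  appliedTo:
    podSelector:
      matchLabels:
        color: blue            # the pod from the walkthrough
  ingress:
  - action: BaselineAllow      # overridable default, with implicit isolation
    from:
    - namespaceSelector:
        matchLabels:
          name: db             # assumes namespaces are labeled with their name
```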
K
So this pod, color=blue, gets isolated because of this policy. Then it is entirely possible that the developers of the db namespace come in and decide that the baseline rule doesn't actually suit their needs. The developers of the namespace actually wanted this pod to be able to talk to the web namespace, but they didn't care about whether the pod can talk to other pods in its own namespace.
K
So what they can do in this case is apply a good old Kubernetes network policy, as it exists today, to open traffic for this namespace. Now, with this Kubernetes network policy and the cluster network policy we just applied, those two policies are stacked.
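A sketch of the developer-side policy being described, using today's NetworkPolicy API; the names and labels are the illustrative ones from the example, and it assumes the web namespace carries a matching label.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-web          # hypothetical name
  namespace: db
spec:
  podSelector:
    matchLabels:
      color: blue               # same pod the cluster policy targeted
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: web             # assumes the web namespace is labeled this way
```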
K
We can see that, because this policy selects the pod app=blue and it has an ingress rule to allow traffic coming from the web namespace, those two flows will be allowed, and because of the implicit isolation of the Kubernetes network policy, the pod color=orange can no longer talk to the pod color=blue. So what does this essentially say?
K
What this essentially says is that if those two policies are selecting the same workload, then whatever the Kubernetes network policy says will completely override the intention of the cluster network policy that has the BaselineAllow action. So policy one becomes obsolete, or rather it's stacked upon: everything specified here in its ingress rules won't take effect, because the Kubernetes network policy has a higher precedence.
K
So this is one of the use cases that we wanted to present. For the default allow, the BaselineAllow action, the intention is: this is something that the cluster admin thinks will be enough for the developers of the namespace to work with. The intention is something like, "I want to provide you a baseline set of defaults, and it should be good, but you have the ability to open up or restrict further the traffic as you see fit," so that...
K
...these defaults can be overridden. But there are some other cases where the cluster admin wants to enforce some policies that cannot be overridden. For example, the cluster admin wants something that, for each of the namespaces, no matter what the developers say in the namespace, should always be allowed: for example, ingress or egress traffic to CoreDNS, this is just an example, or they should always allow traffic coming from a monitoring namespace.
K
So the developers of that namespace cannot apply a Kubernetes network policy which has implicit isolation and accidentally deny that kind of traffic. In that scenario, the admin can apply higher-precedence Allow rules to enforce that this traffic will always be allowed.
K
On the other hand, if the cluster admin finds out about some security risk posed by some other namespace or other pods, they can have a Deny rule which dictates that this traffic will be denied, and it will never be overridden by developer-focused network policies. So this is one such example: consider the new cluster shown on the left. We have three namespaces and we're still looking at the pod color=blue.
K
Now the cluster admin looks at this traffic pattern and determines that, first of all, the traffic from the monitoring namespace must be allowed, and second, the cluster admin also found that the consumer workload poses a security risk. So what the cluster admin can do in that case is apply a cluster network policy with Allow and Deny rules which specifies: I want to explicitly allow traffic coming from the monitoring namespace, and I want to explicitly deny traffic...
K
...that is, deny traffic from the consumer namespace. And as you can see here, if those two policies are stacked, then the cluster network policy's Deny and Allow rules will take precedence, because those two actions have higher precedence than the namespaced Kubernetes network policy. So whatever intention is specified here in the number-two cluster network policy cannot be overridden by a developer-focused namespaced network policy.
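A sketch of the admin-side policy from this second example, again using the hypothetical ClusterNetworkPolicy shape from the proposal (not an existing API); the namespace labels are illustrative.

```yaml
apiVersion: networking.example.k8s.io/v1alpha1   # placeholder group/version
kind: ClusterNetworkPolicy
metadata:
  name: guard-blue-pods
spec:
  appliedTo:
    podSelector:
      matchLabels:
        color: blue
  ingress:
  - action: Allow              # strict allow: cannot be overridden by namespaced policies
    from:
    - namespaceSelector:
        matchLabels:
          name: monitoring     # assumes namespaces carry a name label
  - action: Deny               # strict deny: wins over any namespaced allow
    from:
    - namespaceSelector:
        matchLabels:
          name: consumer
```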
K
So those are the two examples we prepared here to explain this precedence order: the CNP (cluster network policy) Deny will always win against CNP Allow; after that comes the Kubernetes network policy, which is always allow rules; then the implicit isolation of the Kubernetes network policy; and then the BaselineAllow and the cluster network policy's baseline implicit isolation have the lowest precedence. I'll pause a second for any questions at this point.
B
K
Yes. Abhishek, do you want to talk about the supplemental slides now?
J
Yeah, quickly, I'll talk about a few alternatives that we thought about. One that we considered was splitting the BaselineAllow out into its own CRD, let's call it DefaultNetworkPolicy, which mirrors the Kubernetes network policy behavior except that it's at the cluster scope. It does not have any action field. And then we keep the ClusterNetworkPolicy, which will be the higher precedence...
J
...the left-hand side of the bookend, which will have the Allow and Deny actions. So that kind of clears it up: the DefaultNetworkPolicy can be overridden by Kubernetes network policies, and the ClusterNetworkPolicy cannot be overridden. There were other alternatives that we thought about. I think, Tim, you suggested maybe a boolean field for "is overridable"; that's also...
J
...documented in the slides. There's another one: I remember Ricardo mentioning on the Slack channel that when you present two or three CRDs, he hates it as a user, and people might think that there are now three ways of defining security in my cluster. So the third approach that we thought about was to do some sort of hybrid: introduce a new section within the spec, on the next slide, called default rules.
J
The next slide, please. Yeah, the default rules field is going to be similar to the ingress and egress rules, except that the default ingress or egress rules don't have an action field, while the ingress and egress rules do have an action field. So this is like a hybrid approach of the two CRDs, where both are...
K
B
J
...being done in the same CRD. So we have a few options; we're still brainstorming on what sticks, what feels cleaner, and I think eventually we'll get together and have a preferred approach, which most likely will be one of these alternative proposals, and then start with the KEP and also document the alternative proposals in the KEP.
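A sketch of how the hybrid alternative might read, with non-overridable actioned rules and overridable default rules in one resource; the field names (defaultIngress in particular) are purely illustrative of the shape being debated, not a settled design.

```yaml
apiVersion: networking.example.k8s.io/v1alpha1   # placeholder group/version
kind: ClusterNetworkPolicy
metadata:
  name: hybrid-example
spec:
  appliedTo:
    podSelector: {}            # every pod in every namespace
  ingress:                     # actioned rules: cannot be overridden by namespaced policies
  - action: Allow
    from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
  defaultIngress:              # hypothetical "default rules" section: no action field,
  - from:                      # overridable just like a NetworkPolicy
    - podSelector: {}
```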
K
Okay, yeah, sure. So the second thing we wanted to discuss is the pod selector semantics.
K
In the Kubernetes network policy, as of now, it's namespace-scoped, right? So the "applied to" has a pod selector and it only selects pods from its own namespace, and then in the ingress rules, if you have a pod selector without any namespace selector, it selects whatever is in the same namespace; but with a namespace selector, it will first select namespaces based on the label and then apply the pod selector.
K
That is clear in the current Kubernetes network policy sense. But with the cluster network policy, the same pod selector can be a little bit confusing, because it's cluster-scoped. So, for example, if we have an empty pod selector, does it mean that we select basically everything in the cluster, because there are no namespaces bound to the cluster network policy? So what we decided is that we want to propose one set of semantics for the pod...
K
...selectors here, which is that they have different semantics in appliedTo versus in the ingress and egress rules. In appliedTo, if we have an empty pod selector, it means that this cluster network policy will apply to all pods in all namespaces.
K
If it has a pod selector saying the label matches app=a, it will match all the pods which have the label app=a in all namespaces. So essentially, the pod selector here in the appliedTo field will be cluster-scoped, because it's a cluster-scoped resource. What's a little bit tricky is in the ingress and egress rules. Let's say there's a standalone pod selector and it doesn't have any namespace selector to go along with it.
K
What this means is: we didn't want this to mean that we select all the pods from all namespaces, because an empty pod selector in that fashion would essentially mean the same thing as an empty namespace selector, essentially selecting every workload in the cluster. What we wanted this empty pod selector to mean is that it will select all pods from the namespace of the pod for which this rule is currently being evaluated. So let's say there are three namespaces in the cluster.
K
Now, an empty pod selector will select all the pods in these three namespaces. When we're evaluating this rule on the first namespace, this pod selector will take effect on the first namespace, and next, when we are evaluating the rule on the second namespace, this pod selector will take effect on the second namespace instead. It's a little bit tricky, so I will show some examples of how this will work. First is the default namespace isolation problem that I presented before.
K
If we have a policy like this, say appliedTo has an empty pod selector, it means this cluster network policy will apply to every workload in the cluster. Right now there are only two namespaces and four pods, and if I say in ingress that I want the ingress rule to allow an empty pod selector, this means that whenever this rule is being evaluated...
K
...this rule is bound to that namespace. For example, let's look at namespace A first. The pods app=web and app=db are selected by this selector, and when this ingress rule is evaluated on this pod, it's bound to namespace A, so what this means is that the pod app=web should only allow traffic coming from its own namespace, but not from other namespaces.
K
So that's why there is an intra-namespace allow but an inter-namespace deny, and the same thing goes for the pods in namespace B. Whereas in this other case, the pod selector is empty, so every workload in the cluster is applied to by this policy, and the ingress has an empty namespace selector: it means allow traffic from all namespaces.
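A sketch of the default-namespace-isolation policy from this walkthrough, under the proposed semantics (empty podSelector in appliedTo = all pods everywhere; a bare podSelector in a rule = pods in the same namespace as the pod being evaluated). Again, the resource kind and fields are from the proposal, not an existing API.

```yaml
apiVersion: networking.example.k8s.io/v1alpha1   # placeholder group/version
kind: ClusterNetworkPolicy
metadata:
  name: default-namespace-isolation
spec:
  appliedTo:
    podSelector: {}      # applies to every pod in every namespace
  ingress:
  - action: BaselineAllow
    from:
    - podSelector: {}    # proposed meaning: pods in the same namespace as the
                         # pod this rule is currently being evaluated for
```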
K
A
K
Yes. So I think, as Tim suggested on the slide, there are some possible alternatives to this. We could do an explicit field for "same namespace" for the pod selector. This is an alternative where we introduce a new field, say "same namespace," to go along with the pod selector; and I think in the alternative proposal here...
K
...the alternative is that if we want to select things from the same namespace, we will have an ingress field called "from same namespace" and provide pod selectors there, and for egress we'll have a "to same namespace" and provide the selectors under there. So if we have this, then outside the same-namespace field the pod selector will still be cluster-scoped, and if the pod selector is provided under the same-namespace field, then it will be namespace-scoped.
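A sketch of that explicit alternative, with hypothetical field names (fromSameNamespace here) standing in for whatever the proposal ends up calling them:

```yaml
apiVersion: networking.example.k8s.io/v1alpha1   # placeholder group/version
kind: ClusterNetworkPolicy
metadata:
  name: explicit-same-namespace
spec:
  appliedTo:
    podSelector: {}
  ingress:
  - action: BaselineAllow
    fromSameNamespace:         # hypothetical field: peers limited to the pod's own namespace
      podSelector: {}
  - action: Allow
    from:
    - podSelector:             # outside the same-namespace field, selectors stay cluster-scoped
        matchLabels:
          app: monitoring-agent   # illustrative label
```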
B
So the main value I see of doing it with just the selector is that it is syntactically the same as the network policy, but it has a different meaning, which I don't really like. It also means that the difference between an empty selector and an unspecified selector becomes important, right? That's bitten us in the past, and so I've pushed hard that we just don't ever do that, because not every language that we decode into can accurately represent a nil map versus an empty map.
J
Yeah, so the idea is that if we do a namespace selector, the semantics say allow traffic from all namespaces.
J
If I want to express namespace isolation with a single policy, that is, only intra-namespace traffic should be allowed and inter-namespace traffic should not be allowed, then allowing an empty namespace selector would mean that you're allowing traffic from all namespaces. So that's the...
J
Oh, got it, yeah, that's true. You can apply the policy with an empty namespace selector. I think the semantics that Yang is trying to talk about is for the pod selectors within the ingress and egress rules: what is the meaning of a pod selector at a cluster scope by itself, especially when the resource is also cluster-scoped?
J
It was easy for Kubernetes network policies, because we can just say that this policy is created in this namespace, so a pod selector unaccompanied by a namespace selector means that the pods are selected from the same namespace.
J
We wanted to do that similarly on a cluster-scope basis, so we thought a pod selector by itself would only select from the namespace of the pod for which the rule is currently being evaluated, per the appliedTo pod selector. But it is very confusing, and I guess the explicit semantics is a bit cleaner or easier to understand. It's just: what, then, is the meaning of a pod selector by itself at a cluster scope?
J
That would then mean the pods are selected from all namespaces. Is that what we are comfortable with?
B
And we don't have to do a full-on API review at this stage; in fact, it's almost impossible, right? I do think this is an area worth thinking about, making sure that we don't make the mistakes that we've made in the past. I think the general theme of those mistakes is that trying to be clever always bites you, especially around...
A
A
B
J
And just one slide that I also wanted to bring attention to was the follow-up, one of the things that we were looking at. This is orthogonal to the precedence and the selector problem, and once you solve the core problem, I think these are a little more like value-add features to a cluster-scoped policy, to make poking holes with deny rules and allow rules easier.
J
We were thinking of maybe having an "except" field, so you can allow or deny traffic from all namespaces except kube-system. We can do that with match labels or match expressions with NotIn today, but if we want to be more explicit, this is one way to do it. The other thing was logging: since we are introducing deny rules, I think maybe we can now also think about logging at a policy level or at a rule level.
J
The other is rule names, or identifiers for rules. I think it's important, when people want to do statistics on the rules, to have a way to uniquely identify a particular rule.
J
So maybe that's something that we can think about, and then also node selectors. I think a lot of people were requesting: how can I allow traffic from pods to certain nodes, or how can I secure my nodes? Now, this is a more complex problem to solve, and I think we should solve it in a separate KEP altogether, but these are some other follow-ups or things that we would want to consider as part of the cluster network policy.
B
And in the chat, maybe you want to add to this list: in the chat there is a discussion going on about what, if any, sort of pre-flight we could start to think up, or how we could make kubectl describe show us something comprehensive, that isn't just the in-namespace network policy and not just the cluster network policy, but something that says: show me the total policy that will apply to this pod, and even better, the policy that would apply to this pod if I make this change.
J
M
K
K
K
B
Yeah, exactly, yes. And do the stacking and overlapping and cancellation, and show me in a human-digestible format what would actually happen if I did this, right? We have the same request for Ingress, so we'll have to have that conversation at some point. Matt, could you do a five-minute demo at one of the SIG Network meetings? Because your tool does exactly that; I've used it before, I think.
N
Yeah, I've been working on something that does this. It does it for existing netpols, so I'd have to do a little bit more work to support Abhishek's stuff, but yeah, definitely looking forward to that, if I can help anybody.
A
Okay, thanks. We should move on, and let's keep any follow-up on Slack or the Google group, or come back next time. Next up, Rob with the Service APIs to Gateway API rename.
M
Is there one before me, the migrating-a-repo one?
M
Oh, no, sorry, I've got... I have two. So yes, the first one is renaming Service APIs to Gateway API. I sent something out to the mailing list; I think it's pretty straightforward.
M
When we started with the Service APIs project, we didn't really know what the resources were going to be. We built an API; Gateway is the most prominent resource in that API, and we've seen users naturally just start to describe it as the Gateway API.
M
So, while we're still early in alpha, it seems like it could make sense to make that rename. If there are no objections, we'll move forward with that, but I wanted to get some quick feedback here, if anyone had any hesitancy towards that change. Cool, all right.
O
Cool. So I guess Tim Hockin might already be aware of this, but just to give context to everyone else: we recently had a CVE where an unprivileged user can go update the Service spec externalIPs field to some random value, and then they can hijack the traffic. So when we announced the CVE...
O
...whatever comes up, it comes up in the service status field, so I wasn't sure if we have to go add it or not. And I also saw that Tim Hockin, you already have a KEP to block this particular field permanently. So I was wondering whether, or how, we should migrate this to SIG Network ownership, or just have it outside of that, given that we are going to anyhow remove that particular field. So I wanted to discuss that and get all your opinions.
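For context, the field being discussed is spec.externalIPs on a Service. A minimal sketch of the kind of object the validation webhook is meant to reject (all names and addresses are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hijack-example          # illustrative name
spec:
  selector:
    app: attacker-pod           # illustrative label
  ports:
  - port: 443
    targetPort: 8443
  externalIPs:
  - 203.0.113.10                # arbitrary IP an unprivileged user could claim;
                                # kube-proxy would then route traffic for it to the pods
```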
B
So I think the thing is useful for quite a while. I mean, even if my change is in 1.21, it's a config change, so if you're running in a hosted provider or something, who knows when it will be available. And still, we know there are a lot of people who aren't on 1.21 who get value out of the webhook. So I think it's super valuable to keep it around, at least for a year or two.
O
Sorry, go ahead. Oh, sorry, I was saying we didn't have much of an issue on that so far, and there was just one feature that somebody wanted to add as well.
B
B
Is there an open issue or a proposal to move this?
O
There wasn't one yet, no. But when I was discussing it with the team, he said to bring it up with SIG Network to see if SIG Network will be owning it. So basically, I was bringing it up to check if we wanted to add any functionality to this, or whether we'll deprecate it completely, and he said to probably discuss with the SIG Network community first before we take any decisions.
B
I mean, I'd rather not take it, but if we have to take it, I'm not allergic to taking it; it seems like it's our problem. I would put a pretty strong moratorium on features, like: we're not going to build a policy engine here, right? This is a pretty blanket thing, until the even more blanket built-in controller is in everybody's hands. So that seems reasonable.
B
O
What we're missing... yeah, sounds good. So from the maintenance point of view, I can also help; I can own it. So, regarding that feature request for that particular validation webhook, where they wanted to add this on the ingress IPs in the service status field: do you think that's valuable to do, or I can follow up offline as well on the PR.
E
B
O
Yeah, so I was speaking with him on that as well, and my thought process was: since it's a status field, does getting the input as a CIDR make sense, or does validating against the user who's updating it make sense?
B
B
M
I will be quick. So there's a doc that I added. I made a couple of quick PRs last week that were going to update the existing KEPs for topology, and I got some feedback that made me rethink everything and say, hey, can we do this a little bit simpler? Do we need to closely tie subsetting with topology?
M
Both of these are valid in their own way, but linking them together may limit both of them too much. So I've outlined what I think could be a simpler and potentially better path forward, and I would really appreciate feedback on it. I'm hoping to translate this into a KEP; it's in progress and I hope to have that out tomorrow.
M
This keeps a lot of the same concepts that we were already going with; it just says we could do subsetting later, and instead we can keep this information inside each endpoint. That especially helps the smaller-scale use cases, where you have services with, say, fewer than 12 endpoints, where a subsetting approach just really doesn't work. So, I'd really appreciate any feedback on this. I know we don't have enough time, so I'll end there.
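To make the "keep this information inside each endpoint" idea concrete, here is a rough sketch of an EndpointSlice carrying per-endpoint topology hints. The hints shape shown is only illustrative of the direction being described, not the text of the doc or KEP under discussion.

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12       # hypothetical slice for a Service
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
endpoints:
- addresses:
  - 10.1.2.3
  zone: us-east1-b              # where the endpoint actually runs
  hints:                        # illustrative per-endpoint hint
    forZones:
    - name: us-east1-a          # zone this endpoint is suggested to serve
ports:
- name: http
  port: 8080
  protocol: TCP
```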
M
L
Thank you to Antonio and to Cal and to Tim and everyone who's put a bunch of work into trying to get this over the line. We are almost there, and we need the PRR people to click okay; thank you, dcbw, for doing so. Just giving you a quick status update: we're getting there. Hopefully it will get to beta in 1.21, but we do need the last little sign-off before enhancements freeze, so here's hoping.
B
On the same topic: if anybody has KEPs that they need me to look at before Tuesday, please ping me and make sure that I'm looking at them. I'm trying to cycle through the ones that I have open, and ping people, and make sure that everything that was planning to get in is going to get in. I don't want to hear on Wednesday that I didn't review your KEP if you didn't ping me between now and Wednesday, or now and Tuesday.