From YouTube: Kubernetes SIG Network meeting 20200122
A
And we're now recording. This is Kubernetes SIG Network for January 21st, 2021. Tim, why don't you take it away with some issue triage.
B
Can everybody see that? Yep, looks good. All right, so I pulled it up, and needs-triage was at about 20. I went through and was able to close some or accept some automatically; let's run through them. First, we have a bunch of test flakes. I didn't want to assign them unilaterally; I thought we'd at least talk about them. So we have — go ahead, Antonio.
B
Yeah, it looks like that. Maybe we'll have to do something about this. All right, I'll assign that one to you. Network policy e2e test connectivity information — anyone who's on the network policy group?
B
Anyone have context? Oh — yarn is already taking it. Oh, I commented on it. Okay, cool. "Services should be able to change type and ports" — another flake, but.
B
Right
so
I'll
mark
this
accepted
is
it
is
somebody
following
up
on
it?
Is
it
you.
B
I will accept this, and I will actually assign myself, and hopefully I don't just let it die.
C
Yeah, this is — so there was a test in node conformance that was doing hostport, but the problem is that the test wasn't testing that the hostport was working; it only tested that the pods were scheduled. It turned out that the hostport part wasn't working and was failing. When I fixed the test, I had to use a host-network pod to do the checking, and this broke the Windows test.
B
Okay — network policy tests folder.
D
We are removing the old network policy folder in order to replace it with the netpol folder, so there is some movement in this task.
B
All right, we've still got some time — great. External IP with traffic policy Local: I saw that Pavithra responded here, and it's back to the original poster for questions, so we can just circle back to this one next time.
B
I think it was Istio — a something-gateway, an Istio Gateway — as the parameters block, so they don't have to define things twice, which seems reasonable, if maybe a bit clever. And now that I understand the use case, I wanted to see if somebody from Service APIs wants to take a look at this and weigh in on it.
E
It's funny that you bring this up, because if you look at today's agenda — Rob, and this is Daneyon — this has come across as an issue for the Service APIs implementation that I'm leading, and this is exactly what I would like to see: a namespace within the local object ref. I don't know if we could necessarily call it a LocalObjectReference if we're also referring to a namespace. But, you know, my use case is that the controller runs in a specific namespace.
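For illustration, a rough sketch of the shape being asked for — an IngressClass whose parameters point at a namespaced resource such as an Istio Gateway. The scope and namespace fields below are assumptions about a possible API, not settled fields at the time of this discussion:

```yaml
# Hypothetical sketch: IngressClass parameters referencing a
# namespaced custom resource instead of a cluster-scoped one.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
spec:
  controller: example.io/ingress-controller
  parameters:
    apiGroup: networking.istio.io
    kind: Gateway
    name: shared-gateway
    scope: Namespace          # assumed field: namespace-scoped parameters
    namespace: istio-system   # assumed field: where the controller runs
```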
B
So I don't — I don't personally have any major objection to this. If you think that Service APIs are going to allow it, then I guess it makes sense to allow it for IngressClass.
E
Well, and what's interesting is — so, why this is a topic of discussion is: I went back to our Service APIs notes from a year ago, and this exact discussion came up, and it was like, well, you know, ingress classes — these parameters are cluster-scoped, so let's just stick with that. We do want that consistency between GatewayClass and IngressClass, since we expect, you know, users to potentially migrate in the future, and so we want a very similar experience there. And so, you know, long story short is: the parameters
E
ref of a GatewayClass is the way it is because IngressClass set that precedent. That's the only thing, I guess, from the discussions we had in detail about this over the last week or two — the only reason we decided to go that route is because IngressClass set the precedent.
G
All right, I'll add just a little bit to that. When we discussed this last time — I think it was spring — what we were looking at then was ConfigMaps and Secrets as things that we might want to refer to as parameters, and those are both relatively unpopular use cases for namespace-scoped references. But this example, plus the one Daneyon brought, are namespace-scoped custom resources, so we've kind of lost that
G
baggage of, you know, referencing ConfigMaps as being the primary reason for doing this, which didn't have a lot of buy-in. I think there are increasingly compelling use cases for referencing custom resources at a namespace scope, so.
H
Daneyon, are you the one who wants to pick this one up? Rob? Seems like you guys are the most obvious candidates. Yes.
E
Rob, is this just something that we want to work on together, or are you kind of overloaded with bandwidth? Yeah, I could definitely use the help. As I mentioned to you in the past, I'm up to my ears in trying to move forward with the Service APIs implementation for Contour.
B
Okay, all right. And the last one, if we have one more minute — yeah, it looks like this was fixed and then reverted.
I
Yes, that's correct. I'm not sure what to do with this one — maybe the person... it looks like it's very freshly reverted, and some of the node team is still looking into it. Okay, do we just want to leave it for two weeks?
B
Yeah, okay. All right, that one's for me, and that's it then.
A
So last week we said we were going to reserve — or, two weeks ago, last meeting — reserve 30 minutes to talk about this cluster network policy work that's been going on, so I think that we should do that.
L
Okay, I hope everyone can see the screen.
L
All right, so today Gobind and Yang and I are gonna go through the proposal for cluster-scoped network policies. Essentially, Gobind will start off with the use cases and the motivation and goals behind this new resource, then I'll walk through the proposal at a high level, and then Yang will do a deep dive on the sample examples and help clarify some of those nuances of the proposal. So, Gobind, do you wanna start? Feel free to tell me to skip slides.
M
All right, awesome — thanks, Abhi. Hi everyone, my name is Gobind, I'm a product manager at Google. I'm super excited to present ClusterNetworkPolicy to SIG Network with Abhi and Yang and all the other folks on the slide. I guess it's a good time to sort of introduce everyone — I guess we have everybody on the call. So does everybody want to do the quick intros, the contributors?
L
Okay, I can go first. I'm from VMware; I've been working on Kubernetes for a while, you know, integrating some of those CNI-specific things with our stuff, and mainly today working on Antrea.
J
Yeah, sure. My name is Yang, and I'm also from VMware, and I work a lot of the time on network policy related features in the project.
L
Hi folks — I'm part of the GKE networking team at Google. All right, Chris.
B
Hey — Chris, from IBM, working on the Kubernetes project for a bit.
M
All right. And Zhang used to be with our project, but I think she's moved teams, so we want to give her credit for the work that she put into this — thank you, Zhang, for your efforts. All right, so let's start with the motivation of why we need cluster network policy in the first place. Some of this is probably, you know, well understood by all the Kubernetes bigwigs, so let's just summarize and make sure that everybody has the same understanding.
M
They are struggling to find the right controls and the right mechanisms to afford them the guarantees that they need, and much of it is, you know, the network behavior — what traffic comes in, what traffic gets denied and allowed. There's no way to make those strong guarantees today within Kubernetes. There might be ways to do it, say, externally, on the VMs that they're running on, for example, but there's not really a very good way of doing this today — or any way of doing this today — in Kubernetes OSS.
M
The other sort of complementary use case to this — motivation, I should say — is that cluster administrators want to deploy a policy at scale without worrying about when new workloads or namespaces pop up. What this means is, right now, since network policies are a namespace-scoped resource, when a new namespace comes up, somebody has to remember to go and add more network policies to it and make sure that the workloads are protected. With this use case, or motivation,
M
what we're trying to make more salient is the fact that an administrator may just give the cluster away to a team, or a group of teams, and say: you're free to do whatever you want with it, just make sure that some baseline stuff is always there. And any time a new workload or a new namespace comes up, these policies automatically extend to those workloads without the administrator having to remember each time to go and add them explicitly.
M
Finally, on the network policy resource today: it is a completely developer-focused API, as I mentioned, and it can be changed without the administrator's approval. RBAC doesn't really help in this case, because administrators really don't have full knowledge of each pod's connectivity needs; and, you know, RBAC is also at the namespace level, so it cannot be controlled by administrators — it's in the hands of the namespace owners as well. So that's the motivation that we have so far. I'll take a breather here — any questions on this?
M
No? Okay, so next slide, please. Thank you. So there's a chat message — let me just take a look at that.
M
I'm not sure if I understand the comment. Sorry.
N
Is there a question in the chat? No — just English as opposed to American English. I'm not an American English speaker, so thank you for putting a "u" in "behaviour".
M
Oh, I see. All right, moving on. So, as all of you know, network policy is, you know, predominantly a conversation to have in multi-tenant clusters. I just wanted to sort of level-set everyone and give a clear understanding of what we mean by multi-tenant clusters. In this case, you know, there are multiple namespaces that are being deployed in a particular Kubernetes cluster, and you can imagine a cluster administrator,
M
you know, giving access to that cluster, carved up by namespace, to each of the tenants — tenant one is namespace one, tenant two is namespace two, and so on and so forth — and they have control over their namespace; that's their perimeter and that's their territory. And then network policy happens to define the boundaries between them, because, you know, without those policies, everybody can talk to everything. That's the basic level of multi-tenancy that everybody should have in their minds when we talk about the rest of our slides. Next slide, please. So these are the,
M
you know, explicit use cases that we really wanted to go after with cluster network policy. So we talked about the motivations;
M
this is how they manifest in terms of everyday use cases. They're numbered for a reason — you will see in the next slide how they actually look in a graphical format — but let me just talk through them first. So, the first use case: explicitly denying traffic from certain sources. This could be sort of a DoS protection based on certain source IPs.
M
If you discover that there are some bad actors out there who are constantly pinging your services, you can write some network policy at the cluster level, and every time any workload is deployed, those IPs will not be allowed to send packets to you — to give you a distributed-firewall experience each time there's a new workload.
M
You know, when traffic's going in and out of the cluster — and this is usually to do more perimeter-based security, also to do more packet mirroring and other, you know, packet-inspection use cases; it could also be, you know, L7 policy, or what have you, so lots of different sorts of gateway use cases — but that's not the point of this.
M
The idea here is to use network policy as a way to sort of do guardrails, and really direct traffic in a very intentional manner through the cluster. SaaS deployments that leverage multi-tenancy — this directly relates to the slide that I showed you before. Typically, many administrators in a multi-tenant system want to allow intra-namespace traffic to work flawlessly but deny inter-namespace traffic by default, so they don't want any new namespaces to be able to just talk randomly to other namespaces.
M
They want it denied by default. Enforce baseline security in every cluster: so, it's a very common thing that, you know, kube-dns is required for the successful operation of your cluster, to resolve both internal and external FQDNs.
M
But if you, you know, deploy a default deny-all policy, you just killed access to kube-dns, so you obviously want to make sure that some of those baseline pathways are open and, you know, a developer doesn't accidentally shoot themselves in the foot by implementing the wrong policy. And finally, restricting egress in hybrid environments: many times there's a database sitting outside the cluster, in your on-prem environment or elsewhere, and it's usually running on a static IP, for legacy reasons.
M
You only want to allow a subset of workloads to connect to those predefined CIDR blocks outside of the cluster, and make sure that none of the other workloads are able to egress out to those particular IPs as well. So that's kind of the general way we've structured this conversation.
M
If you go to the next slide: here's where I've mapped all of the use cases from the previous slide onto the cluster, to show you what a multi-tenant enterprise cluster might look like. All of the boxes here are namespaces and the circles are pods. The first use case that I talked about was, you know, implementing explicit deny from certain sources, so you can already filter some traffic. The second use case, which shows up in two places — both after the ingress and before the egress — is to force traffic to go through a certain path. SaaS deployments within the namespace —
M
that's number three: you can make sure that everything talks within the namespace but doesn't talk outside of it to other namespaces. Four is to make sure that kube-dns works, so you can automatically allowlist kube-system access for those namespaces. And then five is the inverse of one, which is to restrict egress to certain destinations
M
only. So all of this structure is put in place by — yes, you can do this with network policy, but the whole point is that cluster network policy will make sure that it never gets preempted or undone accidentally by a developer.
M
I'll take a pause here and just ask for any questions, if there are any, before proceeding. Abhi, can you go back one slide, please? Thank you. Yeah — any questions up until here?
M
No? Okay, all right, let's keep going then. So, with those use cases in mind, we came up with some guiding principles and design goals to, you know, build the proposal that we're gonna present to you right after this. So, our guiding principles were: focus on the needs of the cluster administrator, not the app developer.
M
Let's do this right — and we made conscious decisions about picking the cluster administrator's needs over being consistent with Kubernetes network policy, necessarily. Right after that you'll see: follow the network policy API as much as possible, for a familiar experience; but, as I said, we tried to be familiar, but not at the cost of sacrificing the needs that we're trying to solve. Limit the amount of API complexity by focusing on the use cases rather than building a fully comprehensive hierarchical firewall.
M
Many of you have probably worked in network security, and you probably know what hierarchical firewalls are: they're complex, they're very, very hard to write and to read and process, and many enterprises struggle with layers and layers of firewalls at different priority levels, which they cannot make sense of after a certain point. Now, it's very commonly understood in this industry that nobody wants to remove a firewall, ever — they'll always only add firewalls, and they're so afraid to touch firewalls that they'll never delete one.
M
So what we did was, we wanted to limit the complexity of this by really just focusing on the important use cases, rather than, you know, for completion's sake, building out a fully comprehensive, you know, priority-based hierarchical firewall. And finally, the API structure should be expandable in the future — that's hopefully obvious to everyone. So, the design goals that we were really shooting for — these are the things that we want to make sure the proposal we have, you know, adheres to.
M
These are the design goals we came up with. Adding denials: Kubernetes network policy has implicit deny, not explicit, so we are explicitly adding deny rules, and these are our objectives with the cluster network policy — providing baseline security that developers can override in certain cases, and providing guardrail security that developers can't override in certain cases. We'll go over this in more detail, and you'll see both cases: how we can provide some sort of baseline security
M
that, you know, is always there but can be malleable, whereas the guardrail security is not something that a developer can actually override — so we provide both levels of comfort. Provide the ability to select any subset of pods in a cluster: based on how our pod selector semantics work, you can actually select anything inside the namespace, anything outside the namespace in terms of all the namespaces — any sort of slicing and dicing you'd like to do,
M
you can do, because it does apply cluster-wide. Ensure backwards compatibility with existing Kubernetes network policy — this was fundamental to our design. And ensure cluster network policy is only accessible by the admins, via proper RBAC; that was also a key decision that you will see in our manifest, in our proposal, right after this. Any questions so far? Have I put everybody to sleep?
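A minimal sketch of that RBAC intent, for illustration — the API group, resource name, and admin group below are all assumptions, since none of this was settled API:

```yaml
# Sketch only: grant full access to the proposed cluster-scoped
# resource to a cluster-admins group, and to nobody else by default.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-network-policy-admin
rules:
- apiGroups: ["networking.k8s.io"]        # assumed API group
  resources: ["clusternetworkpolicies"]   # assumed resource name
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-network-policy-admin
subjects:
- kind: Group
  name: cluster-admins                    # hypothetical admin group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-network-policy-admin
  apiGroup: rbac.authorization.k8s.io
```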
M
All right, sounds good. Next slide, please. Before we get into the proposal, I just wanted to quickly call out the non-goals. This is not an effort to improve the logging or error reporting of network policy — we are focusing entirely on the cluster network policy use cases, and we're not touching the logging or error reporting just yet; that's probably an effort for another KEP, or another, you know, independent effort from this. Node policies — we are currently, actually —
H
How much of this is possible with one of those OPA admission-control policy-validation things? I mean, it's pretty clear that this is much easier to use, I would assume.
M
So, OPA and Gatekeeper can only control admission, so you can only restrict access to certain APIs, but you can't actually influence the data path. With network policy, you can actually affect the data path, and that's the big value here — that cluster network policy will give administrators a way to control network activity.
M
Yeah, I think we did, but there are several concrete reasons for why cluster network policy does add value, even though something like OPA Gatekeeper can restrict creation of objects with that API. But I think we've actually answered that question, I suppose. Thanks — I'm not sure whose voice that was, but thank you. And so, node policies is the next point:
M
we are not tackling node policies with this; we're focusing on pods. But of course we think of node as an extension, and we can tackle that later. New policy targets: there's a lot of discussion around being able to use services and service accounts, or FQDNs, to create new policy types — oops, what just happened there?
M
Oh, here we go. So we are assuming that much of that work will end up inherited here once those KEPs start taking more form. And then, a cluster network policy for every permutation — that's something that we don't want to do; you know, again, this goes back to our hierarchical firewall point: we don't want to create a, you know, massively complex priority-based firewall system. We just want to focus on the use cases, to provide maximum value to our customers with minimal complexity.
M
All right — I think at that point I'll stop and pass it on to Abhi. Any questions thus far?
L
So, yeah — I think, you know, before I get into the proposal, I want to say that most of the names that we have chosen here for the fields and the action and the values are up for grabs. You know, we're not attached to those names — I know everyone has three opinions on naming — and we definitely welcome a lot of feedback, and we wanna name fields in ways that are more appropriate.
L
Essentially, you know, we hope this will be part of the networking v1 APIs, starting off as an alpha feature. As you can see, it has the spec, and we haven't mentioned any status for the cluster network policy, because, you know, I see that Dan Winship has a KEP open for network policy status, and maybe that's something that we can leverage or extend to the cluster network policy resource as well, in the future, if it makes sense.
L
But let's focus on the spec. You'll see a lot of similarities with Kubernetes network policy, so I'm gonna skip some of those similarities, because everyone's aware of what they are, in the interest of saving some time. As you can see, the ingress and egress rules are similar to Kubernetes network policy, wherein they tell the policy, you know, what kind of rules apply to the appliedTo pods.
L
The reason being is that we want to be able to apply policies not just to pods but also to a whole namespace — essentially, all the pods in the namespace — and that's why we've replaced the pod selector with something called appliedTo, which essentially is something similar to a network policy peer, you know, wherein you can specify a pod selector or a namespace selector to apply the policy to. And you cannot set an ipBlock in there, because it doesn't make sense to apply policy to IP addresses.
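For illustration, the shape being described might look roughly like this — the presenters stressed that every name is up for grabs, and the group/version here is an assumption:

```yaml
apiVersion: networking.k8s.io/v1alpha1   # assumed group/version
kind: ClusterNetworkPolicy
metadata:
  name: example
spec:
  appliedTo:                  # replaces podSelector; no ipBlock allowed
  - namespaceSelector:        # select pods by namespace...
      matchLabels:
        tenant: team-a
    podSelector:              # ...optionally narrowed by pod labels
      matchLabels:
        app: web
  ingress: []                 # rules carry ports/from plus an action
  egress: []                  # field, discussed next
```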
L
If we take a closer look at the ingress and egress rule structure: again, the ports and the from/to are similar in behavior, or meaning, to what Kubernetes network policy's ports and from/to have to offer, so I'll talk more about the new field that we plan to introduce, which is the action field. I think it's worth emphasizing the fact that the action field has two overloaded meanings.
L
One is that the value of the action will determine whether the traffic in the rule is allowed or dropped. The other meaning of the action field is that it also determines the order in which the rule will be enforced, or evaluated for enforcement — so, when I say the order, I mean the order between the relative action types, and between a cluster network policy versus a Kubernetes network policy.
L
So, let's take a look at the three actions that we propose. The first action is the Deny action. The first meaning of this Deny action is, you know, what it says: any traffic that matches this particular rule will not be allowed, so it will be dropped. And the second semantic of the rule action Deny is that the priority, or the order of evaluation, for the deny rules is topmost — so, in a given cluster,
L
first of all, all the deny rules will be aggregated, and, you know, the traffic that completely matches a deny rule will then be denied; and if there is no match for any of the deny rules that exist in the cluster network policies, then it will kind of waterfall through to the next set of rules, which is that of allow rules.
L
The allow rule, again, is an as-is rule — that is, you know, there is no implicit isolation associated with an allow rule. With the allow rule, essentially, any traffic pattern that matches this rule will be allowed. Again, it will be enforced after all the deny rules, or the drop rules, but it will be enforced before all your Kubernetes network policy rules.
L
So those are the two actions which will be evaluated before the Kubernetes network policies. Any rule which has the action BaselineAllow will be enforced after all your Kubernetes network policies are evaluated — so, essentially, BaselineAllow has semantics similar to the Kubernetes network policy rules. That is, it will —
L
It
will
allow
the
traffic
you
know
as
long
as
there
exists
a
rule
in
in
in
a
cluster
network
policy,
if
no
baseline
allow
rule
allows
that
traffic,
then,
if
the
if
the
applied,
if
the
pod
is
selected
by
a
cluster
network
policy,
then
then
that
particular
pod
will
be
isolated,
similar
to
the
semantics
that
a
kubernetes
network
policy
holds
so.
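Putting the three actions together, a hedged illustration of the evaluation order just described — Deny first, then Allow, then any namespaced NetworkPolicy, then BaselineAllow. All field names and values here are provisional:

```yaml
apiVersion: networking.k8s.io/v1alpha1   # assumed group/version
kind: ClusterNetworkPolicy
metadata:
  name: guardrails-and-baseline
spec:
  appliedTo:
  - namespaceSelector: {}          # every pod in every namespace
  ingress:
  - action: Deny                   # evaluated first; a guardrail that
    from:                          # developers cannot override
    - ipBlock:
        cidr: 192.0.2.0/24         # example "bad actor" source range
  - action: Allow                  # evaluated after Deny, before any
    from:                          # namespaced NetworkPolicy
    - namespaceSelector:
        matchLabels:
          name: kube-system        # assumes the namespace is labeled
  - action: BaselineAllow          # evaluated after all namespaced
    from:                          # NetworkPolicies: a default that a
    - namespaceSelector: {}        # developer's own policy can override
```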
P
I have a bit of a comment from the last slide. The realization of these proposals is going to be implemented by the likes of CNIs and so on, right? Which makes something like ordering, as a standard contract, very hard to do if it's not clear in the API. It's just a comment — I've seen that before, causing some missed expectations and so on.
L
That's correct. So I think, in order to ensure that the implementation conforms to the proposal, we can take the help of, you know, the end-to-end tests, and ensure that we have enough coverage in the end-to-end tests to make sure that the order of rules is enforced correctly — and not just between cluster network policies, but between cluster network policy and Kubernetes network policies.
L
So I think that's something that we definitely want to ensure, you know, as part of the initial proposal — and, you know, if at all this is implemented and approved, we ensure that the end-to-end tests take care of, you know, making sure that any CNI implementer is conforming to this proposal.
Policies
we
we've
also
added
a
slide
towards
the
end,
wherein
we
discuss
the
alternates
that
we
also
thought
about.
We
looked
at
prior
art,
you
know
calico
and
he
has
has
global
network
policies
which
are
which
have
a
priority
associated
with
it
and
we've
kind
of
outlined.
The
reason
why
we
did
not
go
with
that
is
because
I
think
it
makes
more
sense
if
you
have
like
a
ui
or
a
dashboard
where
these
priorities
are
auto
generated.
L
But
if
you
have,
if
you
have
to
write
yamls,
it
might
get
very
complex.
Maybe,
but
of
course
I
mean
if
users
have
different
opinions
on
that,
maybe
you
can
consider
that
and
go
back
to
the
drawing
table
and
you
know
incorporate
that
feedback,
but
we'll
discuss
the
alternates
also
that
we
have
that
we
considered.
B
So I will say to the group — I mean, I read over the slide deck like a week ago, two weeks ago, and I personally found this BaselineAllow to be very confusing. I asked a bunch of questions about it, and my questions were all misguided, because I didn't understand what it was really doing — and once it was explained to me,
B
I
I
think
I
understand
it
better
right
now
and
it's
it's
very
clever
and
that's
both
a
compliment
and
and
not
in
the
sense
that
it
took
me
a
lot
of
thinking
to
really
wrap
my
brain
around
it.
It's
a
clever
expression
of
the
the
stacking
of
policies,
I'm
not
sure
if
it's
a
good
thing
or
a
bad
thing
that
it's
clever.
L
Yeah, we spoke about it after reading your comments, and we thought about it, and, you know, if it took you this much time to figure that out, then maybe it's not really that clever, because it would be hard for users to decipher the actual meaning of it. So the general theme of our proposal is, you know, doing these implicit things, but maybe it seems like being explicit might be the more preferred way to go.
L
We
don't
know
whether
that's
the
that's
the
right
answer
to
this,
but
but
I
think
that's
something
that
we
can.
You
know
go
back
to
the
cap
and
then
we
can,
you
know,
discuss
the
alternate.
O
I guess I think clearer documentation would be step one, because just reading this here, it seems both to not understand how network policy works and to describe something that wouldn't be useful. So I trust Tim that it is very clever, but reading the words that are written here, I can't even tell what it's supposed to mean.
P
So I just want to bring up — there's a bit of a paradox happening here. The opening statement was centered around enterprises and guardrails, and it's clear that you understand who your end user is — not us, right? Not them, right? Not somebody you can expect to, like, say "it's clever." In the same statement: our people are used to the likes of F5 firewalls, Cisco firewalls, and those things are very, very specific — and I'm gonna quote your statement around how nobody touches firewalls, even with that expressiveness.
A
I know we've only scratched the surface, but we're already over the 30 minutes that we had set for this, and we do have, I think, four other topics on the agenda.
L
Yeah, that'll be helpful. We have outlined some examples in the slides, so hopefully that will help clarify some of the actions that we have used, and then we can follow up — hopefully we can complete the discussion next week, or at the next meeting.
A
Cool — thanks, thanks, everybody, for the...
K
And that is — I put links in the doc, so people can take a look. Thank you, by the way, so much to Tim's colleague — I forget his name exactly, and I probably can't pronounce it correctly anyway — who updated the KEP over the holidays; I appreciate that greatly. We think that we've met the graduation criteria for moving to beta and are gonna be adding more test stuff; I'll have a PR coming in the next day or so.
A
We spoke about this a little bit earlier, Rob and Daneyon — is there more you'd like to discuss on the ingress class?
O
Yeah, so originally we had, in the dual-stack KEP, that we're going to make the API server dual-stack, and then we were like, yeah, too much else to do, and dropped that. What were people thinking — do we still want to eventually make the API server dual-stack and we're just doing it later, or was there some decision at some point that we'd rather keep it single-stack forever?
P
We actually had a discussion on this, and the drop was purposeful, because the Kubernetes default service is single-stack, and we kept it single-stack because we did not see a reason why it should be dual-stack. And I recall this discussion, because — if you looked at the edit history of the KEP, in the last PR I had,
P
I
had
an
entry
point
for,
like
an
entry
item,
a
list
item
for
all
cooper,
next,
full
service,
dual
stack,
which
means
the
advertised
address
for
the
api
server
and
we
talked
a
bit
about
it.
And
then
we
realized
that
yeah.
well, even if you choose to use a single-stack service, your code can still — let's say, to close the loop, the API server is up on IPv4 and the service you're listening on is v6 — you still can go from your code to the API server using the kubernetes.default service.
P
I — Dan, like, this has been proposed to us and we pushed back on it, but this is not going to be the only case — like, you don't... anyway, that's the case.
B
Like, we're making an assumption with dual-stack that all of the pods will end up with both interfaces in them, right? We're not allowing pods at the moment to choose only one family. So even if you don't want a v4 interface, the assumption right now is that you'll still have a v4 interface — you can just choose not to use it. So the lazy developer, who just wants to talk to the API server, is just going to do a connect, and their underlying sockets library is going to do the right thing, right?
B
You'd have to actually go out of your way to not use v4, which seems unlikely now. It seems reasonable that there may one day be a case where a v6-only pod only has v6 — then a v4 interface would not be present, and then it would fail. I would be happy to look at those use cases as they come up. I don't have any real objection to making it dual-stack, but I'm with Khaled in that I want to see the
P
reasons, okay. And I do believe that there is an update happening on the KEP soon — okay, a refresh of the KEP, just to make it meet the reality of the changes we've done and so on — and I do apologize that we did not go ahead and remove that from the KEP. So I'll add that to the KEP updates that we're about to do, and we'll leave a note that this has been purposely removed.
A
Cool, cool. Daneyon, I believe you are next. Yeah.
E
Thank you. So, we've been using .10 as the DNS service IP for quite some time — I think it probably even predates, like, CoreDNS and so forth.
E
We have a controller that, you know, instantiates a CoreDNS environment and hard-codes the .10 of the service CIDR for the CoreDNS service. The issue that we are seeing is that multiple controllers get spun up at the same time, and there's no guarantee that CoreDNS can get this .10 address — and so I wanted to just kind of open it up for discussion, to see if anyone else has seen this issue, and:
E
Is
it
wrong
for
us
to
go
ahead
and
say:
hey
all
right:
we've
got
to
use
just
a
dynamic
address
and
not
reserve.10
for
dns.
Technically,
it
can
be
done.
I
believe,
but
I
am
concerned
with
the
user
experience
of
users
that
have
been
accustomed,
2.10
yeah,
and
so
I
do
wonder
like
does
it
make
sense
to
you
know,
maybe
have
an
option
where
similar
to
the
api
server
gets
dot
one.
E
Is
it
possible
to
do
something
like
that
for
for
dns
or
with
the
better
solution,
be
just
more
of
like
an
open
and
flexible
type
of
solution
where
any
any
user
can
reserve
a
particular
address
from
the
service
site
or
for
maybe
that
references?
You
know
a
service
name
and
name
space
or
something
like
that.
B
So, yes, we've seen this occasionally — that something else ends up with the .10 address and then everybody's unhappy. And it's been in the back of my mind for a long time that we should probably just change kubelet to take, instead of a DNS server IP, either the DNS server IP or the DNS service name, and use whatever IP that resolves to — and I mean not "resolves" like DNS resolves; Kubernetes resolves to, not DNS resolves to — and then it could just be dynamic, and the DNS service wouldn't be special in any way.
B
That
seems
like
the
most
general
answer.
What
I
don't
know
is
if
other
things
are
hard
coding,
the
ip
address
other
than
cubelet.
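For reference, a sketch of what the status quo relies on — the DNS Service pinning the well-known IP, and kubelet being configured with the same literal address. The IPs below are examples for a 10.96.0.0/12 service CIDR; Tim's suggestion would let kubelet name the Service instead:

```yaml
# The DNS Service pins its ClusterIP explicitly; if something else
# claims the address first, creation fails — the race described above.
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.96.0.10   # conventionally the 10th IP of the service CIDR
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
---
# kubelet is configured with the same literal IP today.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10              # would become a service reference under the idea above
```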
E
Around that — just, you know, with some of the operators that we use, you can go and plug whatever address is the DNS service IP into kubelet.
But yeah, I mean, to your point — we've had some internal discussions, and I think everyone's in the same position, where it's like: okay, this should work, but, man, you know, .10 has been so well known that we're just afraid of breaking stuff. So that's, you know, kind of our mindset right now.
B
Yeah — yes, it's Hyrum's law. My fear is any solution to this problem requires a change at all of those places: either we change them from assuming .10 to instead assuming a service name, which seems reasonable, or we do some sort of, like, pre-allocation scheme here, right? And so let me pause for a second and pull a different topic.
B
I think — but we'd have to actually plan for that as part of the API, and it starts to look a little bit more like persistent volume claims and persistent volumes, which, you know, maybe is a step too far if this is really the only use case for it. I don't know if there are other use cases.
O
I just linked it in — and it's there: someone had filed an issue a while back that they wanted to be able to reserve.
E
Can you link any of the work that you have in the notes section underneath this topic, just so I can go back and reference it? Yes — yeah, Bowei, thanks for the offer. No, this is — I'm glad we discussed this. So let me kind of digest some of the information and see how we can move this forward.
C
The topic is nice for the people that want a challenge: if someone digs into this — this bug is open, and this pull request — and if he's able to solve it, I owe him 10 beers or more, because it's a bug that I was chasing for more than one year.
A
Well, I moved it to the next meeting as well, so if it's still there, then we can talk about it then.
A
Cool — so I think that wraps it up for today. See y'all on February 4th.