From YouTube: [SIG Network] Network Policy API Meeting 20201116
A
Okay, great. So we have a small agenda here. Let's pass through the KEP reviews and see if anyone is missing something. Jay, do you want to start with yours?
B
Labels. What's the deal with the synthetic labels? I keep hearing about them, but nobody wants to tell me what they are. I was thinking about this, playing around with it, and I think it will make sense. In my mind the naive way of doing this is: if you just put an annotation on a network policy that said "allow all traffic from these namespaces," that would solve 80% of the problems I would care about. Obviously that's too hacky.
B
So clients that knew about that secret easter egg could go use it. Well, then you could have a way that clients could, you know... I don't know.
B
Synthetic labels don't have... so there's a special case that Tim brought up. Okay, let's start from the top. There's a bunch of stuff in the reviews, but there's only one really interesting thing to talk about; everything else is pretty much addressed, I think, or just needs some polishing. But Tim brought up this question of, well:
B
Let's think about the way policy peers work. There are three primitives inside of a policy peer: pod selector, namespace selector, and IP block. The idea is that the more of those primitives you switch on, the more restrictive the policy. So if I turn on the pod selector and the namespace selector, I have a more restrictive policy than if I just turned on the pod selector, all things being equal.
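For reference, the two selectors being discussed combine like this in a standard NetworkPolicy peer (a minimal sketch; the `team`/`role` labels are illustrative):

```yaml
# A single peer entry: traffic is allowed only from pods labeled
# role=client that live in namespaces labeled team=frontend.
# Putting both selectors in one peer intersects them, which is what
# makes it more restrictive than podSelector alone.
- namespaceSelector:
    matchLabels:
      team: frontend
  podSelector:
    matchLabels:
      role: client
```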
B
So now, if you add a fourth switch to that, you get this weird thing. If you have a pod selector and you add a namespace-by-name selector, you have now made a very, very specific policy. But if a client that doesn't understand the namespace-by-name selector reads that policy, the client is going to interpret a looser security model. So then, what Tim said was, yeah, okay:
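To make the hazard concrete, here is a sketch; the `namespaceNames` field is hypothetical (the KEP had not settled a name), everything else is the existing v1 API:

```yaml
# Hypothetical peer: pods labeled role=client, but only in the
# namespace literally named "prod". A client that understands the
# new field reads this as a very specific policy.
- podSelector:
    matchLabels:
      role: client
  namespaceNames: ["prod"]   # hypothetical field, dropped by old clients
# An old client that doesn't know the field effectively sees only:
# - podSelector:
#     matchLabels:
#       role: client
# i.e. a different, potentially looser policy than the author intended.
```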
B
He doesn't like that. It's better for a client that doesn't know about some specific API construct to interpret that construct by default in such a way that the resulting policy is more restrictive rather than less. Okay. So how does this lead to virtual labels? The way it leads to virtual labels is that a virtual label doesn't have the same drawback, because it's not inserting itself into one of the primitives of the existing peer policies.
B
Yeah, I had to think through this for like an hour and a half before it made sense to me. I couldn't understand what he meant by virtual labels, so I kind of had to reverse-engineer the definition from the consequence he was stating as the output of the whole thing, and then it all kind of made sense.
B
I don't know, I feel like that's what I'm tending towards. I don't want to say that we shouldn't do the namespace-by-name thing, but for me, I'm going to explore this virtual labels thing. I do welcome anybody who's interested in that conversation, or in the other conversation around how we can back out of that problem. And he did have a potential solution to that, which is: in the case that you have a namespace-by-name selector, you also have a pod selector inside of the namespace-by-name selector, so that the entire definition of the namespace selector is a self-contained thing, and you either get it or you don't.

And if you don't, you don't ever combine it with the other pod selector, so you don't have that weird thing. But I feel like that's just very ugly; having that other solution makes the network policy API as a whole worse.
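The self-contained variant being described would look roughly like this (both `namespaceNames` and its nested pod selector are hypothetical field names):

```yaml
# Variant where the by-name construct carries its own pod selector,
# so a client either understands the whole thing or ignores it
# entirely. It never mixes with the sibling podSelector.
- namespaceNames:            # hypothetical, self-contained construct
    names: ["prod"]
    podSelector:             # nested; only meaningful with names above
      matchLabels:
        role: client
```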
B
Then I wanted to ask, but I didn't really ask, because I'm assuming Tim would have thought through this. It just feels to me like maybe there's a solution around how the API translates and negotiates API versions with the client.
B
I mean, there's got to be some way that we can do some surgery at the API server level where we say: well, if you're a 1.16 client and you see a policy with the namespace-by-name in it, then we're always going to send you some default that's highly secure. So I feel like, on a case-by-case basis, you could do some kind of translation surgery.
A
I think this is also related to Dan Winship's KEP, because I think he has put in his KEP what happens if you have a field that the CNI doesn't understand. So when he says he wants to put in a minor version, he's saying something like: oh, I am announcing that my network policy is this version, and if the CNI thinks that it's not secure enough, I'm going to warn the user, or maybe I'm not going to publish that network policy at all.
B
I feel like it's very, very strongly related to Dan's thing. That's why I feel like the virtual labels wind up getting really interesting, because you might be able to solve an entire... what I don't know yet is: is there a whole class of problems that we can solve with those, or is it just this one? But I feel like there's a class of problems you might be able to solve.
B
If nothing else... but I don't know whether it's 100% clear that that's the default interpretation he had; I'm kind of reverse-engineering that from what he said. So, to me, the fact remains that this is not backwards compatible in the purest sense, and that old clients are going to have some weird interpretation of your policy.
B
So that's the other thing: I have to hear back from him on that, I hope. But the thing is, my answer was so long, and I answered it in three parts, that I don't know if he's going to have time to read it. So I may just have to bring it up in SIG Network next week. Anyways, that's all I got; I'm just going to pick up...
A
Taking the short path: is there some sort of validation that we could do? Like, if you define a namespace selector by name, you should also define a namespace selector by label, even if it's empty, so you are going to warn unaware clients that you have a namespace selector, but it's empty, so it's not going to select anything.
A
You need to also specify a namespace selector by label, so all the CNIs are going to know that you have a namespace selector that's empty and is not going to select anything, because you don't have any label selector in the namespace selector. Some sort of warning to users on the API side, or even a log, that this might be prone to...
E
You're forcing users in the API to explicitly say "empty label" or "empty selector," as opposed to having to put some dummy label in there that has no semantic meaning.
E
Yeah, so I think we need to clarify that, because I think what Ricardo was saying is that if you have an empty namespace selector, it defaults to no namespaces, so it by default would be more secure, because you force a user to put something in, or you even mutate it.
A
Yeah, I was just reading it; it says: if present but empty, it selects all namespaces. So yeah.
B
So if it's empty, it selects all, but if it's nil then it's also basically selecting all, so empty and nil are basically the same thing; it's the same logical intersection. But I mean, the garbage-data thing would solve the problem; it would simulate the exact behavior that Tim is suggesting.
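For reference, the selector semantics being quoted (a minimal sketch; only the "present but empty" case is documented behavior, which is part of what makes this subtle):

```yaml
# In a NetworkPolicy peer:
- namespaceSelector: {}    # present but empty: matches ALL namespaces
# Omitting namespaceSelector entirely (nil) leaves the field unset;
# how that combines with the rest of the peer is exactly the
# interpretation question under discussion.
```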
B
Is
there
a
canonical
way
that
you
can
think
of
to
like
put
a
label
that
will
never
be
selected
into
a
ball?
I
mean
that
would
be
weird
though
right,
but
but
if
you
did
that
you
would
actually
be
able
to
have
that
backwards,
compatible
thing
right,
so
somebody
still
somebody
puts
in
a
namespace's
name
so
now
the
api
server
then
puts
like
some
huge,
auto
generated
time
stamped.
A
A match expression. If you have a match expression with a key and "does not exist," with a random key, or I don't know, something like that.
E
I don't know, I don't even know if the match expression thing is reasonable yet; I'm thinking about it a bit more. But I think that would be better than a time-stamped label or something. You have one well-known name, one well-known label reserved for match expressions that have to fail, and users should not use it, but...
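The reserved-label idea could be sketched like this (the key name is purely illustrative, not a real reserved label):

```yaml
# A selector that can never match: it requires a label key that is
# reserved and never applied to any namespace. An Exists requirement
# on a key that no object carries matches nothing; this is the
# "well-known label reserved for match expressions that have to fail".
namespaceSelector:
  matchExpressions:
    - key: policy.x-k8s.io/never-matches   # illustrative reserved key
      operator: Exists
```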
B
Yeah, that's the thing. Okay, so what's your take on the virtual labels? Because I'm just starting to figure them out; I'm going to read about them more this week, try to do some history, and figure out who started talking about them, when, and what the purpose was, and all that. But what's your take on it? An opinion, if nothing else?
E
I don't have one yet, but I think Ricardo's suggestion is not crazy: if you set a namespace-by-name field, the API strategy will force a namespace selector that does not match anything. That way an old client will reject all namespaces, and in the validation you can also make namespace names and namespace selector mutually exclusive, so that a user can't set both. It's pretty sane.
B
So I really like the idea. I just feel like, if we do that, I don't immediately see that there's a clean implementation for it. Like, we're going to go and reserve this special label, and the label is only used for a backwards-compatibility case, but we're not even sure we really need to enforce it.
B
I think the ideal solution would be if we could just be not very backwards compatible and say: look, if you define a namespace's name, we don't really know what older clients are going to do with that, and maybe it's a less secure interface. That logically makes sense to me: if I have a namespace's name, and that's a field that nobody else knows about, then treating that as an empty field, whatever that means, or as a nil field, seems like it's...
B
...kind of reasonable. I don't know. That's kind of where I'm at, but Tim did imply that this was a deal breaker. Still, it's really interesting, so I'm kind of happy that it's gotten to the point where we're actually really talking about how this implementation will look.
A
So I don't see a sticking point. I just think that we should propose some short path and see what Tim and Dan think about it: okay, this is viable, or this is not viable. And assuming their answer is not going to be positive, we should wait, because virtual labels is something where the label selector is going...
B
Well, we spent 30 minutes on my problem, so let's talk about your problems, or someone else's. Okay. Hey, yeah, Gobind, it's good to see y'all! I don't think Matt's ever been on this call before, and I don't know who Vinay is, but it's good to see you all. Hey, what's up?
F
Hey, let me do a quick introduction. I'm Vinay; I work at Google. I manage the GKE networking teams, especially in observability and security. Zhang, who used to work on this, has moved to a different group, so I decided to start attending this meeting. So yeah, cool.
C
Hey guys and ladies. Yeah, I'm Matt, I work at Synopsys, and I've been doing a lot of netpol hacking recently and, you know, applying that stuff at Synopsys as well. So happy to learn more about this stuff and possibly do some projects here.
B
Yeah, Matt's been helping a lot with the validation stuff we've been doing. Okay, cool.
H
So where are we at? Cluster-scoped policy, for the agenda?
I
One quick question, while you're on the namespace-by-name. Sorry, I wanted to interject but did not get an opportunity. I feel like the reserved labels is going to be a long shot. It's a little hacky. I don't know, or at least I won't say hacky, but it's not as clean as I think an API should be. That's my opinion, but, you know, maybe if others agree to it, that'd be great. I didn't quite get your stand on it, but at least I feel like it's a little unkempt.
I
Have you completely shot down Tim's proposal of using the alpha feature gate and doing the validation based on that? I think that's what he suggested towards the end. Or, I think that was a different comment, though; I think it's the same thread as the virtual label comment. Tim had one before that; I think it's one of the very first few comments, if you scroll up. I was just wondering whether this is an acceptable approach. To be honest, I have not yet completely thought through this particular proposal.
A
Just waiting for some reviews. Andrew and Dan reached consensus about the location of the port range, so, as we had two alternatives under review in my API proposal, I've moved it to Dan and Andrew's proposal to have a port range separated from the port, and I think that's it. Now I am waiting for some more folks to review that. I don't think there is anything else; I've also added some user stories, because Dan asked for them, and that's it, yeah.
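For context, "a port range separated from the port" is the shape this KEP was converging on: a separate end field next to the existing port in a NetworkPolicy rule (a sketch; the field was still under review at the time of this meeting):

```yaml
egress:
  - ports:
      - protocol: TCP
        port: 32000      # start of the range (existing field)
        endPort: 32768   # separate field marking the end of the range
```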
A
I think Gobind brought a lot of use cases from his end, and we added a few, and basically we went through all those cluster-scoped administrator policy use cases, kind of agreed upon them, and documented them in our Google doc. Our next step is to introduce or write templates based on the spec that we are converging on, and the idea is to first see whether the proposal that we have satisfies all the use cases that were documented and, if not, then how do we solve that?
I
And if yes, is this the right way of doing it, and then poke holes around it? So that's our next step for this week. Gobind, am I missing anything?
H
I think that pretty much covers it. We made pretty good progress. And, I don't know, my camera is frozen on my local preview; am I frozen to everyone?
H
Yeah, I see a frozen preview of myself in Zoom, so I guess it doesn't matter, never mind. Nothing to add here; what was just said is pretty much spot on. This week we're just going to assess the use cases, apply the policy structure we've come up with against the use cases, see if there are any shortcomings and gaps, fill them in, and then hopefully we'll have a proposal in the next few weeks.
I
Yeah, I think that's about it from our end. Of course, Thanksgiving will be coming in, so I don't know how much we'll be progressing, but yeah, in December.
F
No, she's still at Google. It's just, there's been a change of responsibilities; she's going to be involved in other GKE-related projects, so there's been a reshuffling: people move on because they have other projects, so I'll be taking over. I'll have a couple of new resources joining my group, but in the interim I'm going to be participating in this.
B
Okay, great. Well, thanks; tell Zhang thanks for helping us out and everything. Everybody, send Zhang a message and thank her for all the work she did helping us get this going.
H
Yes, so I actually redid the proposal doc with the feedback that I'd gotten and the comments, which were awesome, by the way. I've condensed all of those comments into two proposals; it's the same document, it just has two proposals now. And thanks to Andrew and Rich Renner; they've both been very helpful in bouncing ideas with me and talking through some of the details, and so we put it all together.
H
I think at this point we almost have all the details; there's just the implementation that we need to clarify. Andrew, if you're still on the line, I was curious to know if you and I can sit down maybe one more time and just go over some of the implementation details, or if you want to do it offline.
H
Yeah, so that's pretty much it. I can link the doc in there if you want to have a look at it; I can link it again here. Okay, let me do that, and then let me know any questions that I can answer, if people are here.
H
We have time? Yeah, we can do that; we can go over this. So, effectively, the first proposal is still roughly the same; nothing changed there. But what I did is I took all the feedback that was in the comments, as I said, and came up with a second proposal, with the help of Andrew and Rich and folks from Google, of course.
H
So the idea here is to now propose a different way of doing this, which is doing it through a new CRD. I'm not married to the name; it could be called something else, but the idea was to have a new CRD.
H
The primary advantages would be that existing plug-ins won't necessarily have to have an implementation, or have any conflicts with their implementation. This could pretty much be a choice, where the advanced ones like Calico and Cilium can stay on their implementation if they wanted to, and Kubernetes could have its own thing here, without really interfering with each other. And there's also the other advantage:
H
If we wanted to provide more verbose semantics in this policy, like denies and stuff like that, we could, although that's not what the proposal is calling for at this moment. But it does give us that added flexibility of doing things slightly differently and not being stuck in the network policy scaffolding.
H
So those are some of the big advantages. The disadvantage, of course, is that now customers will have to be very conscious: oh, I have a network policy protecting this workload and I have an FQDN policy protecting this workload, so they'll have to search both to be very, very comprehensive about their...
E
I think one clarifying point I'd make is that if we're putting this new resource in the networking.kubernetes.io v1 group, we should be clear that it's a new resource, not a CRD, because a CRD implies that when you install Kubernetes, or kube... you know, whatever, you install the CRD schema into the API server, and then Kubernetes understands how to watch it. I think that's where we landed.
H
Right, yeah, I think I agree with that, but I want to hear the group's sentiments on this, and if you already have consensus on this methodology, then that's great; we can just start moving forward in this direction.
E
And just a little bit more context: I think there's precedence right now where, if you want to introduce a new core API, you start with CRDs. We did this for Gateway, like, service v2, and basically that's kind of the thing we do. And this mostly makes sense when you work with APIs like Ingress, where you have hundreds of implementations in the ecosystem.
E
I think that makes a lot of sense, because you want to see how people implement that API and go from there. With this, knowing that there are literally just two implementations, it makes sense to just start as a first-class citizen.
B
That's what I was missing; that's what I didn't understand. I was thinking, like, who cares? What if you did a CRD, why would it matter? But yeah, that's the reason. Okay, okay, that makes sense.
B
There's always this thing that's confusing, like Andrew said; you never know. A CRD, right: anybody can make a CRD, you can install it into the API server, and then it works. So there's always been this question in the back of my mind, with not only this but the other ones: well, what if we just made a new network policy API as a CRD, and then vendors could variably implement it or not implement it?
B
And then we didn't have to be in this position, and our job would be to really help vendors do these implementations until they became a de facto standard and then merged into the API. But I guess what Andrew's saying, which makes sense, is that yeah, you could always do that, but it's more likely to be fruitful to do a CRD first if you're going to really be innovating a lot in terms of how you implement it and who implements it and whatnot, and in this case we only have two solutions that we're proposing.
B
So there's not a lot of bandwidth in terms of how it's implemented. I mean, there are not literally hundreds, but I guess, Andrew, there are probably tens, right? Just to play devil's advocate, there are probably 15 or 20 different ways you might implement this. You might implement it somehow in a hypervisor. You might implement it as CNI. You might implement it in CoreDNS. You might implement it using that operator thing, right?
E
Well, I think we would want to design the API based on how we want consumers to use it, right? So we need to decide: if we wanted CNI providers to implement it, then maybe this does belong in network policy. If we want the DNS server to implement it, then to me a built-in new resource makes sense.
H
Yeah, I think, Jay, what Andrew and I are suggesting here is that if we go with proposal 2, it almost automatically implies the implementation, which is that it's going to be in the DNS service, whether that is kube-dns or CoreDNS or what have you. Proposal 2 is almost suggesting that that's the implementation: the DNS service is going to be part of the enforcement perimeter.
H
Exactly, and I was a little hesitant about adding a new enforcement perimeter, and that's sort of the disadvantage...
H
...I've called out with this approach: now a customer has to be careful, or at least be cognizant of the fact, that there is another enforcement point in the stack where policy could be enforced. But honestly, given the downside of potentially getting it wrong in network policy, because you can be out of sync and your caches are out of date or whatever, and then you have the polling... I don't know, for one DNS resolution...
H
Yeah, well, I just wanted everybody to be very, very clear about what we're suggesting, so that we can develop consensus here, and then, if everybody thinks that this is the right way to do it, we can go make the case to kube-dns and CoreDNS and whatnot.
I
One quick question: I think you already alluded to it, but I don't know whether you clarified it. What happens if you have a network policy conflicting with this FQDN policy? Let's say the network policy allows it and the FQDN policy doesn't allow it, or the other way around. What will be the expectation? What will be the outcome?
H
How would the network policy allow it, you mean by IP address, maybe by IP block? Yeah, I mean, the question is how the packet is going to be sent out. If you already have the IP address, I guess they're not going to make the FQDN query, and they can just go out.
H
Exactly. If you have the IP address, you're going out; this is only locking down the resolution at the DNS layer. So if the DNS says "no, you're not allowed access to this," all you're missing is some information about how google.com gets resolved to an IP. But if you have the IP of google.com, you're not going to rely on the DNS policy or the DNS service to resolve it, and therefore you can just go out.
A
I've also put a suggestion here about whether this policy could be not only about blocking, but also about controlling how the DNS server is going to answer for a bunch of namespaces. Like, if I want to define, not an ExternalName service, but something like: I have a remote server that has a defined internal certificate.
H
Sorry, are you talking about whether the policy will be able to lock down internal DNS queries? So, like, for...
A
I don't know, like inside a VPC or inside the NSX data center, and I have a load balancer for all of them, so I have the same certificate and the same IP, the same certificate but different IPs, and I want to answer with a different IP. I want to say that, instead of going to the IP that the DNS would return, you want to provide that IP.
H
I think that's scope creep at this point, but it's an interesting use case. Andrew, I don't know if you agree with that statement; I'd be curious to know if you think this is sort of the bread and butter of the policy that we're talking about right now.
H
Yeah, yeah. Ricardo, would you mind just adding a note in the doc, anywhere? I'll clean it up. Oh, you did, okay, perfect. Yeah, I'll make sure that this gets at least covered and called out, and if it's, say, for version two of this policy, then we'll happily consider it. I'm sure there will be more requests, but I would love to keep it focused at this moment, so we can get this committed and built.
E
Yeah, the way I see it is that sort of behavior is something that you're allowed to do with a CoreDNS plugin or, you know, CoreDNS features, and this policy is really just saying: can a pod request whatever this... can it resolve whatever this thing is going to end up resolving to, via other plugins, or, you know.
H
No, that's fair. I think I'll add "future possibilities" as a section to this document, and I'll say these are the extensions we could consider in the future. But very quickly, I just want to go over the "other aspects" section that I added here. This is to really lock down the scope of this policy, because there were some questions around, hey:
H
What about regexes? Is this an L7 policy? Are we even opening up the packet? Lots of related questions, and I really wanted to define the perimeter of what's included and what's not, so I'll just quickly go over these, so we can make sure we're all aligned on this.
H
So: only allow semantics. We don't need or want deny or accept semantics in this case. This is also to prevent any sort of weird oddities, say, opening up security holes where you deny something, but only a certain subset of the IPs are denied and some other IPs are left open.
H
That's kind of a security hole, so we don't need deny and accept at this point; we just need allows. Only egress policies, no ingress FQDN. The behavior of this policy is entirely dictated by the cluster's DNS service; that was the implication here, which is that we cannot make any more guarantees than what the DNS service does, and that's the backstop. So IPs reported by the DNS service are the only IPs we need to allow pods to make outbound requests to.
H
If a pod somehow gets the IP address of a website that is not known to the DNS service on that cluster, we won't be able to allow connections to that IP address. This failure mode is not a security hole; it's a minor annoyance.
The policy enforcement must be completely in sync with the DNS entries at all times; again, this comes down to the implementation, and if we put it in the DNS service, it becomes true automatically. The policy accepts either full domain names or a wildcard match...
...for the first label only, as shown here; nowhere else can you put a star, and we will not support any other format. General regex and pattern matching is not allowed. And FQDN policy enforcement for cluster-local services is a nice-to-have, but not required at this time.
So we're keeping this as a possibility: if we can get it, we can get it; if not, that's fine, because we already have a way to do this today with network policy, it's just a little inconvenient. So, yeah.
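Pulling those scope decisions together, the policy being described might look roughly like this (the kind, version, and field names are illustrative; the design doc had not fixed them):

```yaml
# Hypothetical FQDN egress policy reflecting the stated scope:
# allow-only, egress-only, enforced via the cluster's DNS service.
apiVersion: networking.k8s.io/v1alpha1   # illustrative
kind: FQDNNetworkPolicy                  # name not final per discussion
metadata:
  name: allow-example
spec:
  podSelector:
    matchLabels:
      app: web
  egress:
    - allowedFQDNs:
        - "api.example.com"   # full domain name
        - "*.example.com"     # wildcard permitted only in the first label
```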
H
Yeah, that's what I was trying to get at: maybe if you put the full name of the local service, like servicename.local.cluster.local or whatever, that could potentially work, and that would be nice to have here.
H
I'm just saying that it's a nice-to-have, unless this becomes a sort of blind spot for us going forward, because right now we think it's probably very easy to do, but let's say later we say, "oh, but this has this one big gotcha"; I'm happy to say, nah, we can do it later. Okay, I see.
E
Yeah, so just to be clear here: what you're saying is that by default we don't enforce policy on the cluster.local domains or DNS, because we don't know what the unforeseen results of that are.
H
Yeah, all right, awesome. Any questions on this?
H
This would be the cluster operator, the cluster administrator, whatever you want to call it. But, to be fair, this could be used just like Kubernetes network policies.