From YouTube: Network Policy API Bi-Weekly Meeting 20201026
A
Okay, I'll do the intro. Hi folks, this is the weekly Network Policy subproject meeting under SIG Network, and this is the October 26th meeting. Ricardo, did you want to start with the agenda, or...
B
Oh, I have a little update. I did the thing, and it looks like it's working for me. I changed the internal implementation of the data structure for the network policy struct to use the two different types of potential namespace fields, and it seems to be working. So that's my update there. That was just something that Andrew and I and some other people had talked about.
B
That's, I think, the only issue we have around it. Everything else is mostly just paperwork, which is whether or not we can avoid breaking the API while getting rid of the label selector as the implementation of the namespace selector, and instead have the namespace selector point to a struct that can have either one of those fields. And it looks like it's working.
B
That's the only update I have there. I should probably update the KEP to have that implementation. I don't know, should the KEP have that implementation detail at this point, or is that just going to create a bunch of...
A
I'm, I think, a little confused. So yeah, my understanding is that we're all in agreement on Zhang's suggestion last week, which is to put matchNames under namespaceSelector, right? And so you're saying that you did some validation and it works, right? But if we're saying that the change is potentially breaking, then I think that puts the solution off the table, right? Like, we can't break the v1 APIs.
F
As to... yeah, I don't... I think it's not breaking. Not breaking. Okay.
A
Right, I have no doubt that... yeah, if you created a new cluster using a new client and a new server, that wouldn't break, because everything's on the same version. I think the concern is if you have an old client against a new server, or if you're an old client updating to the new client. Is that breaking? I think that's the breaking change.
B
I mean, yeah, it's a different type, so it definitely... but it just depends on what you define as breaking, because Dan had mentioned that the only thing we actually have to keep is the YAML. But yeah, you're right, you're changing the type for sure. I mean, there's always potential gray area; maybe there's something that goes on with clients where you can create some kind of a migration thing, right? So maybe that's the thing I need to look into, whether the client is... but then again you're...
A
Yeah, so the way I see it is: if we can make the change so that the JSON/YAML serialization is non-breaking both ways, from old clients and new clients and whatnot, and the only thing breaking is the Go types (say, a CNI updates to the latest API types and that breaks), then if that's the only thing that breaks, maybe there's a justification here where we can say the actual API server traffic isn't breaking; it's just that implementers have to...
A
You know, fix that one type. I think that's a better case, but it's not clear to me yet whether even the JSON serialization breaks.
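One way to sanity-check this question is to round-trip old-style JSON through the proposed new type. The sketch below is a stripped-down illustration, not the real k8s.io/api types: `OldPeer`, `NewPeer`, and the field names are assumptions standing in for `NetworkPolicyPeer` and `metav1.LabelSelector`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Old shape: namespaceSelector is a plain label selector.
type LabelSelector struct {
	MatchLabels map[string]string `json:"matchLabels,omitempty"`
}

type OldPeer struct {
	NamespaceSelector *LabelSelector `json:"namespaceSelector,omitempty"`
}

// Hypothetical new shape: a wrapper struct that carries the label
// selector fields and adds matchNames alongside them.
type NamespaceSelector struct {
	MatchLabels map[string]string `json:"matchLabels,omitempty"`
	MatchNames  []string          `json:"matchNames,omitempty"`
}

type NewPeer struct {
	NamespaceSelector *NamespaceSelector `json:"namespaceSelector,omitempty"`
}

// roundTrip marshals an old-style peer, unmarshals it into the new
// type, and re-marshals it, so the two wire forms can be compared.
func roundTrip(old OldPeer) (string, error) {
	raw, err := json.Marshal(old)
	if err != nil {
		return "", err
	}
	var np NewPeer
	if err := json.Unmarshal(raw, &np); err != nil {
		return "", err
	}
	out, err := json.Marshal(np)
	return string(out), err
}

func main() {
	old := OldPeer{NamespaceSelector: &LabelSelector{MatchLabels: map[string]string{"team": "dev"}}}
	s, _ := roundTrip(old)
	fmt.Println(s)
}
```

If the re-marshaled JSON is byte-for-byte what an old client produced, the wire format survives even though the Go type changed.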
B
Oh okay, so yeah, what is the... yeah. So I guess that's pretty black and white, because I compiled a release just a little while ago, so I can actually run that release, put it through its paces, and see whether it works or not. That's a black-and-white thing. Then the question winds up being about client-go, but I guess we can do baby steps, right? I can...
B
I can see if this patched-up thing works, and if that works for you, that's just what I'll do: I'll just see how far I can go with it. I mean, I guess the simplest thing to do is to try to implement this and then see what breaks where, because I think that's going to be faster than doing a bespoke, specific experiment on one thing or another, because there are probably other things we want to look at, right?
A
Yeah, and then I think the other thing is we should look at the API conventions; there's an API guidelines doc or something. And if there's something in there explicitly about breaking Go types, then we know for sure that the Go type has to break. So if that in itself is enough to call this a breaking change, then yeah, we probably want to give up on this early, right?
B
Yeah, yeah, I can do that.
B
It's a little bit of a searching-for-a-negative-result thing, so it might be... unless they specifically say "breaking the client". But I can at least do a cursory look for it, a search for "client", because I don't think they're going to say "don't worry if you break somebody's old client, that's still okay", but they may have something. So: search for client-go backward compatibility, and then...
B
This is a token reminder. Okay, yeah, so that's all I've got. Yeah, I'll do that, and if anybody else wants to bang away at this, let me know; I'll just push a branch at some point.
D
I have reviewed Andrew's, Dan Winship's, and also Abhishek's proposals here in the comments, and I have removed the user story about the exceptions. As we've discussed in Slack, implementing exceptions might be something like specifying multiple ranges, so I removed the exceptions from here, but we can put them back. And also, yeah, there is another thing I've raised, which is about the concerns with Open vSwitch / OpenFlow not supporting multiple ports, and this probably being a problem for Antrea and for OpenShift.
D
So if folks from Kuryr (I know that there is a lot here) could take a look and see how hard this is going to be, or whether this is a known, not-hard concern, that would be great. And my KEP is almost finished; I just need to change it here. We have just one user story, and I think this is enough, because we can say why you'd use a port range. The only thing that I still need to change is the min and max.
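For reference, the port-range shape under discussion might look something like the following. The `PortRange` struct, its field names, and the min/max validation rules here are assumptions sketched from the conversation, not the final KEP API.

```go
package main

import "fmt"

// Hypothetical shape for the port-range proposal: a Port plus an
// optional EndPort, interpreted as the inclusive range [Port, EndPort].
type PortRange struct {
	Port    int32 `json:"port"`
	EndPort int32 `json:"endPort,omitempty"`
}

// Validate enforces the min/max constraints such a field would need:
// ports in [1, 65535], and EndPort >= Port when EndPort is set.
func (r PortRange) Validate() error {
	if r.Port < 1 || r.Port > 65535 {
		return fmt.Errorf("port %d out of range", r.Port)
	}
	if r.EndPort != 0 && (r.EndPort < r.Port || r.EndPort > 65535) {
		return fmt.Errorf("endPort %d invalid for port %d", r.EndPort, r.Port)
	}
	return nil
}

// Contains reports whether p falls inside the range; with no EndPort
// the rule degenerates to a single-port match, as today.
func (r PortRange) Contains(p int32) bool {
	if r.EndPort == 0 {
		return p == r.Port
	}
	return p >= r.Port && p <= r.EndPort
}

func main() {
	r := PortRange{Port: 8000, EndPort: 9000}
	fmt.Println(r.Validate() == nil, r.Contains(8443), r.Contains(443))
}
```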
A
So, on the topic of OpenFlow and ranges: for implementations that don't use iptables and can't specify a range in the rules...
A
The
expectation
would
be
that
like,
if
I'm
managing
an
open
flow
table,
I'd
have
to
put
a
rule
per
port
in
the
range
right
is.
That
is
that
a
is
that
bad
enough
that
we
want
to
block
on
this,
like?
G
I can maybe get more details next week, because this week we'll try to pin down some of the implementation, or the design, for the port range. I know some of the team members are working towards this using some masking feature, so maybe I'll get back to you guys. I don't think I'll be able to tell you how efficient or inefficient this new method would be; it's just that there is a way, and I do believe the team has already found one.
D
Yeah, so there is the same concern about the... I don't know if there is someone from Cilium, from Isovalent, here, or any other eBPF CNI, but I know there is the same concern here on the eBPF side, because you probably need to change how you are going to populate the eBPF maps, or use a different map. But I think this is probably going to be: okay, we have this in the API, but it's up to the CNI how it is going to be implemented.
G
The work that has been done... yeah, so I thought it's a good time to just provide an update in terms of what a few of us have been doing on the cluster-scoped part. We've been meeting weekly, between Zhang, Gobind, Yang, Chris Luciano, and myself, on the cluster-scoped policy. It's been almost a month, I think three to four weeks, that we've met so far. You know, in terms of timelines...
G
What we think is that maybe the 1.21 cycle is when we will open the KEP for review, but target merging in the 1.22 cycle, considering there's going to be a lot of feedback on this and it's going to be a large effort. On the other hand, some of the decisions that we have made are that we want to start...
G
You
know
we
want
to
start
with
pods.
You
know
target
them
as
the
workloads
but
keep
room
for
future
selectors.
You
know,
for
example,
nodes-
and
you
know,
external
workloads,
which
are
also
being
you
know,
considered
in
upstream
in
a
new
crd.
So
those
you
want
to
keep
a
room
so
that
the
selectors
can
select
those
nodes
or
external
workloads.
G
But
basically
you
want
to
model
it
closely
to
the
kubernetes
network
policies
and
you
know
start
with
the
part
as
the
workload
when
I
say
part
of
the
workload,
I
don't
mean
just
the
port
selector,
but
it
can
also
be
named
space
selector
or
maybe
in
future
service
selector,
but
essentially
it
will
be
applied
to
the
pods.
G
The
other
thing
that
you
know,
two
of
the
main
concerns
or
guarantees
that
we
want
to
give
with
the
cluster
network
policy,
is
that
the
administrators
can
provide.
You
know
these
strong,
guardrail
or
policies
which
a
developer
cannot
override
and
the
other
guarantee
is
that
administer
may
want
to
provide
some
sort
of
baseline
or
default
security
for
the
cluster
which
the
developers
can
override.
So
those
are
the
two
kinds
of
you
know:
use
cases
that
may
arrive
for
a
clusterscope
policy.
G
The
other
thing
is
that
we
have
decided
is,
or
we
have
been
recently
working
on-
is
that
how
to
do
precedence
and
priority
between
kubernetes
network
policy
and
cluster
policy?
We
have
already
looked
at.
You
know
existing
artwork,
like
calico,
cilia,
mantria
and
other
crds
that
are
available
today
in
this
space,
and
you
know
each
of
them
have
their
own
unique
way
of,
or
some
in
some
cases
overlapping
ways
of
solving
this
problem.
G
So
essentially,
as
a
team,
we
decided
that
we
will
come
up
with
two
proposals
and
we
will
back
one
proposal
as
a
main
proposal
and
have
the
other
proposal
as
an
alternate
proposal.
Now
the
two
proposals
would
be
broadly.
I
won't
go
into
details.
G
Maybe
we'll
have
a
you
know
once
we
have
a
proper
spec
for
the
resource,
we'll
present
it
to
you
guys,
but
essentially
broadly
the
two
proposals
are,
you
know
one
would
be
a
priority
based
proposal
where
you
have
some
priority
numbers,
which
would
you
know
indicate
which
policy
wins.
The
other
would
be
without
priorities.
That
is,
you
know.
It
is
similar
to
kubernetes
network
policy
that
if
you
know
what
the
semantics
mean,
this
is
how
what
you
get
you
don't
have
to
worry
about.
G
You
know,
writing
priority
numbers
from
policy
a
to
policy
b
and
then
wondering
how
one
proceeds
the
other.
So
these
are
the
two
broad
range
of
proposals
that
we'll
come
up
with
and
I
think
in
the
next
few
weeks
we
want
to
solidify
this.
The
two
proposals.
G
Our aim for this week is to come up with all the use cases, come up with a model, and then run those use cases against those models and see whether they satisfy all the use cases that we want to solve, and then mold the API into that shape. And maybe in the next couple of weeks we will be able to come up with a sort of spec, or API sketch, for this.
G
This
cluster
network
policy,
and
at
the
end
of
that,
we
will
come,
come
to
the
to
this
meeting
and
then
propose
the
the
respect
and
then
maybe
we
can,
you
know,
take
it
forward
from
there
and
once
we
have
like
a
broader
acceptance
from
everyone
on
the
call,
then
we
will
open
up
the
cap
for
review
and
then
get
a
broader
feedback.
G
In our Google Doc we have maintained the agenda and the meeting notes, so I'll paste the Google Doc link and we'll keep updating that section, and everyone can look at that. And yeah, I think that's a good idea; we can keep posting some notes, maybe.
A
Yeah, or even to add to that: some API sketches that we can bring up in this call, just so that if there's anything obvious that we might be able to catch, like compatibility issues, it'd be good to raise those. But I guess you don't have to worry about compatibility, because this is a new resource, but...
A
Okay, cool. Okay, so basically Ricardo had asked me a few weeks back, telling me that it'd be great if we could collect all the data that we had in our agenda doc. So basically what I did was I went through... actually, let me pull it up.
A
So here, this is the working doc that we had. At the bottom of the agenda there are all the user stories that Jay and other folks have collected over time. So basically what I did was go through all the user stories and take the ones that sounded coherent and sounded like, okay, we've talked about this before and we've said these are reasonable.
A
There are some items where, like, Ricardo, when we talked about the scope of the subproject a while back, we said we're not going to worry about container-level restrictions. So things like process restrictions I just took out.
A
So these first three we've already discussed a million times: port ranges, namespace selection by name, and cluster-scoped network policies. These all have status "KEP in progress". Then the other three...
A
The other three that I was able to pull out of this list were: node policies, so selecting nodes by labels or by names or whatever, and being able to restrict traffic between nodes and between pods in the cluster; we don't support that today.
A
The other one was FQDN policy, which Gobind has brought up a few times, and there was another proposal that he put together on this one. So I think we all agree that we want to solve FQDN policies; we just don't know if it belongs in network policy or in another resource or whatever. And then the last one that was kind of a valid use case from the list was selecting pods by service.
A
...taking the ports from the service specification, instead of assuming that the author of the network policy understands what the pods' ports are. So, based on reading through these a few times, these were the concrete user stories I found that I would say were accepted.
A
These are kind of the accepted ones, and I think it'd be great if, over time, we remove or add things to this list based on the user stories that we've been hearing from the community. Yeah, that was it. Questions about any of these? Does anyone disagree on whether any of these should be in here? Or does anyone know of a user story in this list that we felt should be added that I didn't include?
A
My understanding of the FQDN policy was that we need to figure out whether it makes sense for this to live in network policy, or whether it makes sense for it to live in another resource, like a DNS policy or whatever it's going to be. At least that was my understanding; I might have misheard.
B
No, this is good, yeah. I really like this summary, appreciate it, for sure. I think these are all the ones that are useful; all the other ones were definitely either out of scope or just too complicated to implement. I have one comment, which is on the service one: there was a mailing list thread in 2018 where Tim Hockin said the biggest mistake that the network policy API originally made was not targeting services.
B
I
don't
know
if
anybody
recalls
that
threat
or
or
not,
I
think
it
was
2018
or
something
like
that,
so
that
one
is
like
really
interesting
right,
because
the
minute
that
you
target
services
instead
of
pods,
like
I
wonder,
like
whether
that's
such
a
paradigm
shift
that
that
also
I
don't
want
that
to
be
out
of
scope
of
the
network
policy
api,
but
I
wonder
whether
it's
such
a
paradigm
shift
that
somebody
looking
at
the
network
policy
api
for
what
it
currently
is
today
would
say
yeah.
This
is
not
a
network
policy
api.
A
But I think the pod selection by service is really borderline: should we add it to v1, or should it be in a new v2 thing? We definitely need to have that discussion, because you're right, it's a total shift in the way the network policy model works, so introducing it now might be a little too...
A
I don't think we're... I'm not trying to be prescriptive here toward any sort of revolution. I just want to make sure that whatever we add to the final list of user stories, or the summarized list, are the use cases that we've vetted out, discussed, and acknowledged as valid use cases, and then, once we've accepted a use case, we can...
A
Actually, yeah, I think so. The node policy one is a good example, I think, of something we know we want to fix: the problem is glaringly obvious, but the solution to fix the problem is not so obvious, and it's complex and potentially breaking if we tried to add it to v1. So I think, yeah, the next steps are...
A
We should discuss the accepted user stories over and over again until we get consensus on a solution that we think is reasonable, and then we can start writing the KEPs for that. But the node policy one, I think it's going to be a while until we can think of a good solution for it that doesn't involve a v2 API; or maybe the consensus out of that is, okay...
A
We need a v2 API for node policies. But so, yeah, I think the next step is to talk about the problem enough until we can get consensus on a solution.
A
I think the FQDN one is more... the sense I'm getting is that more people are asking about the FQDN policy, and it sounds like Gobind and Zhang are willing to do the work. So it'd be great if we can get consensus on what we think that should look like, and then that way we can get that KEP up quickly, even today.
C
Sure. I actually didn't know that Gobind is not here today, but do you have the link to the doc? Maybe we can open that one. If you scroll down, we actually have a list of open questions there. It would be nice if we could go over them and see if you have any opinion or preference there as well.
C
So I think overall you already know this proposal: basically just add a selector for FQDNs, and we are still debating what exactly the format would be, but the overall idea is that. And then we got a lot of comments and questions there. So it looks to us that the first thing is: should this belong in the network policy API, or should it be in some different place?
C
Would
it
be,
for
example,
also
in
for
some
data
plan
or
in
dns,
because
that's
right
change
things
here,
I'm
not
sure.
What's
the
convention
there,
because
if
we
do
it
on
dns,
I
feel
like
we
probably
cannot
do
it
in
a
very
horror-based,
because
the
ice
is
shared
by
everyone
right
and
then
we
also
have
a
derived
question.
That
is
what,
if
user
just
gets
the
ipad
just
by
himself,
it
doesn't
do
a
thing
as
lookup.
C
So
I'm
wondering
if
anyone
has
idea
like
what
should
be
the
behavior
here,
because
I
know
that
when
psyllium
is
do
this,
what
they
do
is
they
intercept
stimulus
package
and
the
query
and
then
program
the
ip
based
policy
on
those
ip
address
results
for
that.
Unless
right,
that
means,
if
you
are
using
some
ip,
that's
not
resolved
through
dns,
it's
it's
actually.
What
be
I
mean,
especially,
it
won't
work
right
if
you
allow
it,
then
that
ip
will
be
bypassed.
C
Obviously
it's
not
known
to
the
system
right,
so
I'm
also
curious,
like
what
would
be
the
general
consonants
here
when
we
talk
about
allowing
traffic
to
to
our
dns.
C
If
we
just
block
the
dns
query,
I
mean,
if
that's
the
case,
then
it
looks
to
me
that
even
easier
ways
to
don't
give
you
a
reply
right,
but
what,
if
you
just
figure
out
the
ipad
just
by
them?
Oh
and
directly,
do
it
go
ahead.
A
Yeah, so I agree that the discussion around whether it should be its own resource or not is going to depend on where we agree the enforcement should happen. And so, yeah, I had the same question: are we assuming that the pod is not trusted, in that we don't want it asking another DNS server?
A
What
the
resolved
ips
are
or
are
we
kind
of
assuming
that,
like
every
tenant,
is
like
trusted
enough
that
whatever
default
dns
server,
we
give
it
like
it's
going
to
use
that,
and
all
we
really
want
to
do
is
just
ensure
that
whatever
resolution,
like
only
the
allowed
of
qdns,
are
allowed
from
the
from
the
dns
server.
So
like.
I
agree
that,
like
the
fundamental
question
we
need
to
answer,
first
is
like
at
what
point
do
you
need
to
do?
The
enforcement
based
on
the
type
of
workload.
C
Right, yeah. I find it's pretty hard to answer that question, especially if you look at how general DNS blocking does it: they probably do some periodic scanning to see which IPs resolve to those blocklisted names and then periodically update them. So if we want to create all those mappings on the fly...
C
I don't know how easy that would be, just because users can resolve via different DNS servers. So I'm also asking Gobind to collect more use cases for this, to say: okay, what's the most reasonable way to solve this? So I would say there are still many questions open there.
B
Is there an iterative way you could think about building something like this, or designing something like this? Do the user stories boil down to something that would benefit from the same core, even if the way you implemented it was different?
D
There is a comment from Dan that we ship that in OpenShift: they do probably what you are thinking, Jay. There is a controller that resolves the hostnames to IPs and keeps updating the network (I don't know if it's the network policy exactly), but keeps updating something with the TTL from the IPs.
D
So if you resolve something like www.ricardo.com to an IP, it's going to create a network policy and take note of the TTL, and once the TTL is expired, or is about to expire, it refreshes the network policy. That's the last comment here; I don't know if that's the prior art.
C
I mean, I feel like that's an implementation detail, right? You say: okay, do I intercept the DNS query packets from the client, or do I have a separate agent which periodically resolves the DNS, assuming that every pod is in the same location so that they will get the same resolution result?
C
I think... I mean, I feel like we could do it either way; the only thing is it may not be that accurate.
C
Right, and also... but I agree that it has a little impact on the API, though. For example, if you want to have a separate controller to resolve things, it could still work either way: you can have a separate object for the DNS domains, or you can put it in the policy and the other controller can just parse it.
C
Yeah, I feel like the only concern is we probably don't want to put a lot of burden on the DNS server, if that's what's getting introduced there.
B
Yeah. With the user stories: what are the stories behind the stories? What are people actually doing? That's what I would want to write, because when you look at all the solutions to this, the solutions have these different trade-offs, and then it seems like everybody has an opinion on the trade-offs, but it's really tricky to figure out whether the trade-offs are just because people have different networks.
C
Yeah, yeah, I agree. I would suggest, since Gobind's not here, and I don't speak for him either, that maybe we can defer this discussion and let Gobind collect more use cases to think about here. And also, if you guys have any preference on this, definitely comment on the doc and let us know.
A
Yeah, I'm starting to agree that just straight-up intercepting the DNS request might be the simplest solution. But I think it's worth entertaining the operator idea, because it's come up quite a bit, like Jay mentioned and I mentioned, where you resolve the DNS entry out of band and then you apply the network policy rule.
A
I think it's worth noting that network policy today is not very controller-friendly: it's written in a way where the user creates a network policy rule in the spec, but then there's no status field where you can say, "I resolved some other state, and here are the new rules you should dynamically apply."
A
If
you
wanted
to
do
that
today,
you'd
have
to
write
it
into
spec,
but
then,
if
you
did
that
you
could
be
potentially
overriding
what
a
user
puts
in
or
when
a
user
updates
network
policy,
you
would
overwrite
what
the
controller
put
in
so
like.
Maybe
the
more
fundamental
problem
with
the
fact
that,
like
network
policy,
doesn't
have
a
status
field,
so
leave
no
room
for
controllers
to
update
network
policy
dynamically,
like
maybe
that's
a
separate
conversation,
but
pattern.
C
Right. I mean, implementation-wise it's definitely pretty complicated. That's why we also got this question of how long you keep updating your entries, and some people say if you dynamically add, you must have a way to remove the resolved information as well; otherwise entries can never be removed. So I agree, implementation-wise it's going to be pretty complicated.
C
But when we think about cluster network policies where you want to deny something, then I don't know whether intercepting the DNS query is enough or not, because in that case the user can just bypass the DNS lookup and use the IP directly, and your deny purpose is defeated; you miss that case, right? So that's the thing I'm a little concerned about here, but I couldn't find a good solution there either, unless we always get updates on whether an IP was resolved through a DNS query or not.
D
Yeah. Is there also a case where I want to redirect all of some specific DNS queries to an IP, or a bunch of IPs? This probably determines whether we want to make this part of the network policy or whether this is a new object. Also, with your question about whether this should have an allow and a deny: I want to know if, say, I want to resolve www.ricardo to a specific IP, but I do not want to use host aliases.
C
I don't completely get the question. You're saying... the thing is, depending on which one you query, we should probably just go to different DNS servers.
C
I remember that for something like a service, they actually allow you a hack to say: if you query this, just return this IP. I think someone also mentioned in the comments whether we should actually do this on the DNS side; it's kind of along the same lines. That's definitely an interesting idea, and maybe, yeah, it could be easier.
A
Yeah, I think that's come up quite a bit, and I think the question to answer is about the user who's configuring the policy: can we assume that the user is not malicious and that they're not going to go out of their way to query another DNS server to make a request? Or...
A
Can
we
assume
that,
like
every
tenant
on
the
cluster,
is
like
barely
trusted
and
really
we're
just
applying
some
policy
to
ensure
that
no
application
is
accidentally
resolving
against
something
else
and
we're
just
putting
some
guardrails
right
so
like?
If
it's
just
like
we're
just
putting
guardrails
on
the
dns
server,
then
I
think,
like
dns
configuration
on
the
dns
server
side
makes
sense
if
we're
actually
trying
to
like
prevent
malicious
applications
from
querying
some
dns
server
or
querying
some
dns
out
of
band.
Then
I
think
the
policy
level.
C
Yeah, I agree, that's a very good point actually. Well, we have this network policy, right? We cannot assume that everyone is trusted, so if they do any modification on the DNS side, that definitely affects everyone. It's probably very hard for the DNS server to distinguish: oh, for this guy I should return this, and for that guy, that. That sounds like too much to ask. I think what you are suggesting is basically that we probably can do the current FQDN proposal at the network policy level here.
C
If, in the future, we want to extend it to other things, like cluster-wide, all those things, we might need to resort to the DNS server. So maybe we could try to do this in a different layer; that's another thought here.
A
I don't know... so, just to be clear: I'm not sure that I would say the DNS-server-side policy is off the table. I think we should still consider that, because you can make the argument that there should be a policy on the DNS server for which FQDNs it can resolve, and then there should be network policy rules to ensure that you can only send DNS requests to that DNS server.
A
So that kind of solves the case of an application going rogue and trying to request an FQDN from another DNS server. With those two things applied, an application has to query CoreDNS for the resolution, and then we can rely on CoreDNS to filter what the allowed FQDNs are. I think that's pretty reasonable as a solution.
C
But how do we distinguish the behavior for different deployments? For example, this deployment is allowed to access this FQDN, but another set of deployments isn't. Are you going to use a separate DNS service for it? That's not supported right now, right?
A
Right. So I think at a high level, when I was suggesting a separate resource, what it would look like is: there's a resource called DNSPolicy, it's namespaced, and it has a label selector that selects pods in a namespace; and CoreDNS would watch the DNSPolicy resource and allow the lookup based on whether a request came from the pods selected by that resource.
C
But if that's the case, then why not just implement this on the pod side and have it parsed there? Because you can do something there, right?
A
Yeah, I think the most compelling answer to that is: if you do it on the CoreDNS side, then every cluster gets it for free, and you don't have to rely on every CNI to implement the policy blocking rules for DNS requests.