From YouTube: Network Policy API Bi-Weekly Meeting for 20211220
A
Awesome, hello everyone. Today is December 20th, 2021 — almost at the end of 2021. This is a meeting of the SIG Network Policy API working group under SIG Network. This is a CNCF certified event, so please be kind to each other. Let's have a nice productive day. Not much on the agenda today, two major things: I would like to talk about our review of the FQDN policy document to start, and then maybe start talking about some of the KEP comments from CNP. So, yeah.
B
I'm not totally sure Banu is here. Banu works with me on GKE networking. Awesome, cool. I think it might just be the two of us today. Okay.
C
So I have been learning about Kubernetes for the last year, and I thought to begin my contribution through SIG Network. Networking is a quite interesting topic for me, so I thought to begin contributing to Kubernetes there.
A
That's awesome, great to hear. Well, feel free, if you want to get involved — we have some good places to start, and I think anyone in the community is more than happy to help. So you can reach out on Slack, if you're already there, in sig-network or sig-network-policy-api, and if you're already on Kubernetes Slack, feel free to reach out to me directly if you need anything. But yeah, great to have you.
A
The agenda — let me share it in the chat. I think we have talked about this before with you involved, so I think you can still provide some good insight here.
A
I don't need to necessarily go over every little detail today. I just would like to kind of have a conversation over some things I gathered from reading through this Google doc describing the KEP — well, what we would see in a KEP for FQDN policy. Yeah, so really good job overall.
A
I thought it was pretty concise; you laid out some good possible futures here. I think the biggest problem isn't necessarily going to be doing this — it's going to be how you end up implementing, you know, FQDN policy in a fail-closed manner, and also in a way that makes sense. So that's kind of what I'd like to focus on; like, I thought the spec made sense.
A
I don't feel too strongly about how you choose to organize the FQDN YAML structure and policy setup; I think it's more about how you're going to implement this, right? So the two proposals you've laid out are a new resource, or the possibility of adding a new egress FQDN set of rules to existing NetworkPolicy. Right.
A
So I guess we can start with proposal one and just kind of talk about it as a team, if that works for you.
A
Great. So the first proposal is creating a completely new resource, right — FQDNPolicy — whose goal, at least from my reading, is just to act as an egress policy that is the same as the whitelist model of NetworkPolicy: I deny everything and then punch allow holes to specific hostnames.
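For concreteness, a minimal sketch of what such a standalone resource could look like. This is purely illustrative: the apiVersion, kind, and field names below are assumptions invented for this sketch, not anything defined in the doc or upstream.

    # Hypothetical sketch of proposal one: a standalone FQDN egress policy.
    # The apiVersion, kind, and field names are invented for illustration.
    apiVersion: policy.example.io/v1alpha1
    kind: FQDNPolicy
    metadata:
      name: allow-github
      namespace: my-app
    spec:
      podSelector:
        matchLabels:
          app: my-app
      # Whitelist model: all other egress from the selected pods is denied;
      # each entry punches an allow hole to a specific hostname.
      allowedFQDNs:
        - github.com
        - "*.githubusercontent.com"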
A
One thing I was thinking of: with CNP there's a big hole, right? CNP is only addressing east-west traffic — really, in-cluster traffic — and our goal in the future was to look at creating another object that addresses, you know, the north-south side.
B
So in, like, the user story that I snuck in after I saw your comment — which, you know, thanks for pointing that out, it's a good gap — the basic one that I was targeting was just a user who writes an application and wants to allow egress to certain external endpoints, but they don't want to maintain a list of IPs, because that's cumbersome and kind of hard to read. And so they just want to say: okay, my pods can talk to github.com.
B
I don't want to have to resolve the IP manually and then put it in a NetworkPolicy. So that was the initial story that I'm targeting here. I think I purposefully ignored the cluster admin one, just because I wasn't sure how, or if at all, we were going to interact with CNP.
A
Right, now that makes sense. The only thing — I don't know what the standard would be, actually, in a cluster. Like, in my mind it would make more sense to have kind of a whitelisted set of hostnames at a cluster level that are allowed, rather than, you know, each application developer allowing certain hostnames — just because I don't know if that set is going to be that large, right?
A
I'm kind of thinking full stack, which I think is putting me in circles, but in my mind it would be easier to write FQDN policy for the cluster admin, right, and do it in the sense of that missing object I was talking about earlier — like an egress firewall for the cluster. So FQDN policy wouldn't be its own resource, but the features you're looking for — egress to a DNS name — would be built into something like an egress firewall that acts as a moat around the cluster, rather than an application developer poking holes directly.
A
In that moat, the application developer would have to ask the cluster admin and say: oh, I need to access github.com, can you poke a hole? And if the cluster admin wants to poke the hole for the whole cluster, they can, or they can just poke that hole in the developer's namespace. Right. That seemed like an easier route to getting this feature merged than creating its own object, which, the more I read into it...
A
Some
constructs
in
cp
excuse
me,
but
I
you
know,
I
sat
here
today
and
tried
to
think
of
good
ways
to
add
this
to
network
policy,
and
I
still
still
couldn't
really
come
up
with
any
that
wouldn't
be
argued
venomly
from
both
sides.
B
Yeah, that's good — that's good feedback. My only concern with the broader, I guess, cluster-wide egress thing is nothing philosophical; it's more just a practical-constraints concern: if we start saying, okay, we want to build an egress moat, essentially it feels like we've expanded the scope of this KEP rather substantially. Yeah — again, I'm just wary of getting pulled in a bunch of different directions.
B
Right
off
the
right
out
of
the
gate,
oh
yeah,
it
would
be.
It
would
be
nice
to
try
to
constrain
ourselves,
make
slow
incremental
progress
rather
than
trying
to
swallow
the
whole
elephant
in
one
shot.
A
Totally — no, I agree. Well then, let's look at proposal two; maybe we can keep iterating on that and see if we can get closer. Proposal one — I mean, I don't know if other people have felt strongly about this; I was the first one to say maybe we should make a new object. But I have a feeling that when we start writing out these user stories, there's just going to be so much overlap. Like, are we really going to have a new object for FQDN policies, and then what do we do if we want service selectors one day? Right — make a ServiceSelectorPolicy? No.
A
So that was my immediate gut reaction to proposal one. Proposal two was adding a new type of rule set to NetworkPolicy, and I liked how you used policyTypes to kind of fail this closed. policyTypes was actually something we just talked about this week in a Google group chat — that's how they failed closed the addition of egress rules to NetworkPolicy — so it was a good double use of that sort of explicit rule type. But something that I found as a contradiction is that you did say it's just one more type of endpoint — and it is — and if it is just one more type of endpoint, then why does it need its own set of rules?
B
Yeah, I think that's a valid concern. So in the section that I called, like, "why not add it to the egress struct", I'm trying to wrap my head around what that would entail. Basically, my thought was: all right, if we tried to enhance the existing egress rule, we'd probably do something like have another repeated field called toFQDNs instead.
B
It would be a list of these sort of, you know, FQDN matchers, or pattern matches, or stuff like that. And my main concern was just with the semantics of the "to" field itself, right? So, like, if you have a missing or an empty "to" field, it's implied that you're targeting all endpoints — yep — and so, you know, for old CNIs...
B
That
would
look
like
oh
just
open
up
this
workload,
to
talk
to
anything
that
it
wants
which,
like
at
surface
level,
I
think
that
doesn't
work,
but
I
that
that's
that's
a
perfect
thing
of.
Maybe
we
can
brainstorm
and
see
if
it
can't
be
tweaked
more
subtly.
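For concreteness, a rough sketch of the proposal-two shape under discussion — a separate top-level egressFQDNs rule list rather than a new member inside the existing "to" field. The egressFQDNs and toFQDNs field names are assumptions taken from this conversation, not upstream API:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-github
      namespace: my-app
    spec:
      podSelector:
        matchLabels:
          app: my-app
      policyTypes:
        - Egress            # fail-closed: egress not matched below is denied
      # Hypothetical rule list, parallel to the existing egress rules:
      egressFQDNs:
        - toFQDNs:
            - fqdn: github.com
          ports:
            - protocol: TCP
              port: 443

On a cluster that has never heard of the new field, the unknown egressFQDNs stanza would be dropped, leaving policyTypes: [Egress] with no rules — that is, default deny egress — which is the fail-closed behavior discussed below.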
A
And that's — yeah, right, that's the million dollar question with doing anything to existing NetworkPolicy: how do we handle that scenario? Because, yeah, I think the main desire is — say we're on an old cluster that has no idea about, you know, this toFQDNs...
A
You
said
in
that
scenario,
rather
than
us,
applying
it
and
having
to
figure
out
after
the
fact
if
it
works
or
not
I.e
the
the
problem
with
status
as
well
right
because
it's
post
it's
postmortem,
we
just
want
to
be
able
to
apply
it
and
fail
if
it
doesn't
support
it.
But
I
don't
think
that's
possible.
A
I
know,
and
you
know
you
could
use
it
a
admission
controller
to
do
some
fancy
stuff,
but
in
the
cmp
cap
dan
winship,
basically
just
said
you
can't
ever
require
an
admission
controller
like
like.
If
you
want
to
build
in
validation,
it
needs
to
be
built
into
the
api
interesting,
so
I'm
tr,
and
that
was
something
I
didn't.
I
didn't
really
know.
A
So
I'm
wondering
if
there
is
some
clever
way
to
get
around
that
chicken
and
the
egg
problem
right
like
like
if
it
was
all
right
for
us
to
apply
this
network
policy
and
then
look
at
it
again
and
have
the
status
say.
Oh,
the
server
has
no
idea
what
an
egress
two
fqdn
section
is
like
that
may
solve.
B
I
was
trying
to
short
circuit
that
entire
conversation
with
the
egress
fqdns
thing,
because
in
theory
what
would
happen
is
someone
would
apply
the
rule
and
it
wouldn't
do
what
they
wanted.
It
would
just
fail
closed
and
they'd.
You
know
try
to
curl
github.com
and
it
wouldn't
work,
and
then
they
would
in
be
able
to
then
debug,
but
it
wouldn't
have
punched
a
hole
that
we
didn't
intend
to.
So
it
would
sort
of
force
them
to
start
looking.
A
Right, and I think failing closed is perfectly reasonable, and thankfully the whitelist model works really well for this purpose, because we never really trust the DNS, right? It's never a complete line of trust. So I'm trying to think of, like, negatives to this. I mean, my main one was that, logically, it didn't make much sense in terms of treating it as just another endpoint — but it does kind of cleverly get around that problem.
A
Yeah, I guess the next step would be to try to get someone from SIG Network to also review this and say either "okay, that would never work" or "okay, this is a good solution", right? I don't know — what do you think, Yang, just off the top of your head?
D
To me, I feel like proposal two does make sense. Like, if you feel like egress and egressFQDNs sound like the same thing but duplicated in different places, then maybe we can, you know, rename them. For example, we just call egressFQDNs — I don't know, off the top of my head — toFQDNs, or something else, right? So that when people look at it, you know, they don't see these as two different egress rules.
D
We see these as two different things because, while they are essentially all egress endpoints, they are entirely different in terms of, you know, selector mechanisms, I guess. One is selecting pods and workloads in the cluster, and the other, as you put in the KEP, is essentially not looking at in-cluster services — for example, something like myservice.svc.cluster.local, right? That's not something that you're actually considering; you're actually concerned with external DNS names, for example github.com.
D
So essentially, in most cases, the egress rules and the egressFQDNs will be selecting entirely different things, and they cannot overlap. So I feel like it makes sense to just, you know, put those out as two different structs. And, as you just mentioned, it perfectly solves the fail-open versus fail-closed problem. So I feel like this makes sense to me. I was also reading the disadvantage of the proposal, which is requiring existing CNI providers...
D
To
me,
I
feel
like
this
is
also
fine,
because
you
know,
since
we're
doing
the
fail,
close
thing.
The
worst
thing
that
can
happen
is
that
you
know
when,
when
this
first
rolled
out,
some
cni's
had
doesn't
support
t-rex
fpdns
yet,
but
that
shouldn't
be
too
much
of
a
problem,
because
you
know,
if
that's
the
case,
when
some
people
define
egress
fqdns
rules,
the
cnl,
which
is
for
jess,
saying
that
I
don't
recognize
this
right.
So
so
I
cannot
apply
this
there
there's
nothing.
D
I
can
do
in
terms
of
that
regard,
but
but
as
as
soon
as
you
know,
because
if
if
this
is
a
cap
that
we
wanted
to
merge,
just
just
like
the
cmp,
we
definitely
wanted.
D
You
know
one
or
two
cnns
at
least
to
to
be
on
on
board
with
this
before
we
can
actually
bridge
this
as
a
cap
and
by
then,
if
you
know
those
cns
already
provided
an
implementation,
then
you
know,
I
don't
think
this
is
going
to
be
a
big
problem,
because
the
cns
will
support
this
can
basically
implement
this
already
and
the
cnp's
sorry,
the
cns
that
doesn't
support
this
will
just
do
would
will
just
you
know
not
not
support
it.
I
guess
which
is.
B
That's the current proposal, yeah — it would be, like, a top-level rule, just like ingress and egress, right.
B
Oh — no, I guess I hadn't planned on adding a new policy type. I don't know, off the top of my head. The benefit of just using Ingress and Egress is that it lets us fail closed, I think. I don't know if adding a new policy type changes that behavior.
A
The
whole
the
policy
type
field
was
added
to
allow
them
to
fail
closed
with
the
addition
of
egress
rules
right.
So
originally
they
only
wrote
network
policy
for
ingress,
and
then
they
specified
that
if
you,
if
you
write
any
rule
that
selects
a
pod,
even
if
or
any
policy
that
selects
a
pod,
even
if
it
doesn't
have
any
rules
in
it,
so
like
it's
totally
empty
that
there's
a
default
deny
ingress,
but
then
they
had
to
handle
the
upgrade.
A
So
they
had
to
go
from
clusters
that
understood
only
ingress
rules
to
now
clusters
that
were
also
understanding,
egress
rules.
In
that
case,
that
original
policy
would
read
like
deny
all
to
ingress
and
egress,
whereas
in
the
first
instantiation
of
it,
when
there
was
only
ingress.
That's
not
what
the
user
meant
right.
A
So they needed a mechanism for explicitly denying, or default denying, only egress — and that was the policyTypes thing. The logic's kind of funny: if you write just a standard policy with an ingress rule, it doesn't default deny egress traffic at all, but the only way to explicitly default deny only egress traffic is to provide an egress rule, or to provide no rules and specify the policy type as Egress.
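For reference, this is the standard default-deny pattern being described: an otherwise empty policy whose policyTypes pins down which direction it isolates.

    # Default deny all egress for every pod in the namespace: policyTypes
    # names Egress, and no egress rules follow, so nothing is allowed out.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-egress
      namespace: my-app
    spec:
      podSelector: {}        # selects all pods in the namespace
      policyTypes:
        - Egress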
A
So what I was saying earlier is: I think you could use that as a powerful mechanism to fail closed, in the same way that egress NetworkPolicy rules did it. And I have the PR — I was actually just reading through it the other day — that will make this clearer.
D
Right. I think what's going on in my mind is: if we don't actually have a third policy type — for example, we just have Ingress and Egress — then a new FQDN network policy will look like a policy type called Egress, and then you have some egressFQDN rules in there. But an old CNI, which doesn't really support this, will look at the NetworkPolicy spec and read it as policy type Egress and then no rules, basically, because it doesn't recognize the egressFQDNs. Now, would that mean something different from what the user intended? Because in that particular case, you know, every egress will be denied, if I'm not mistaken — because you just put Egress as a policy type and didn't provide any rule there, so it's default deny.
A
Yeah. So basically, if you want to explicitly deny all egress, you could either supply a policy type of Egress or a policy type of EgressFQDN, and, I mean, it overlaps there, because the direction of the traffic you're blocking is the same. I don't know what upstream would want to do about that, right? Because you're right — you don't necessarily need to add a new policy type.
D
You know, I don't think it's necessary. It's something along the lines of — you know, when we were proposing the CNPs, I really got a sense of the SIG Network guys being sort of like: when we introduced the implicit isolation, or the quote-unquote whitelist model, in NetworkPolicy, we kind of made a mistake there. And right now, even though I understand perfectly where this is going, I feel like the SIG Network guys are just going to say: we are...
A
I agree with you, but, you know, there's just such an argument, especially for FQDN, because you just never trust DNS. So you want to be inherently only whitelisting, right? And I think it's very different from CNP — with CNP, right now, we're only talking about, you know, east-west, in-cluster traffic; we're sitting in that overlay network.
A
I mean, I think there's an argument here for the whitelist — which, trust me, I know I don't love, the implicit deny — but the developer being the user, and the fact that it's DNS, are two rationales for advocating for it.
A
Yeah, so that's where I'm at. I think this is well done. There are a few questions that we need to answer, and I think you're almost there — you know, to where this is something that can get accepted. It also provides us a path to do the same for other sorts of selectors, which I don't know if that's good or bad. And, yeah, I linked it — Dan Winship wrote a KEP on this, kind of more pertaining to a moat around the cluster, called the cluster...
B
I'll cherry-pick a few more of Dan's comments there, because, yeah, thanks for pointing those out — they are good context.
A
Yeah, and that kind of carries me into the last little bit I think we need to have ironed out here, which is: where is this implemented, right? So we talked a little bit about the fact that it's kind of a negative... if that makes sense. Yeah.
B
I
I'm
just
I
I
saw
in
dan's
cap
that
he
mentioned
a
core
dns
plugin.
I
wasn't
sure
what
we
would
need
that
for
explicitly.
A
It would be the root of truth for hostname-to-IP, right? So all the CNI does is listen to — well, yeah, I guess, listen to whatever IPs it's keeping track of for a specific host, right? Okay.
B
You'd feed it the FQDN selectors that you've gotten, and every time it resolves that DNS name it, I guess, responds to the originating querier and also has a side channel out to you, as the CNI, where you can ingest these IPs and then add them to your...
A
Allow
list
right,
okay,
and
the
only
reason
I
I
would
advocate
for
that,
like
it
adds
some
overhead,
I
guess,
but
what
you
don't
want
is
like
super
minut
differences
between
cni's
related
to
the
resolution
of
dns
records
like
the
ttl
constraints
stuff
like
that,
whereas
if
you,
you
know,
still
look
to
the
cni
to
implement
the
rules
at
a
layer,
three
level
but
look
to
you
know
core
dns,
which
is
deployed
in
almost
every
cluster,
to
resolve
those
hosts
to
ip.
Then
it
seems
a
little
more
streamlined.
Does
it
does
it.
B
...to the plugin, right? Well, but the plugin can't control the actual data path allow list, right? Like, even if the plugin tells you a new list of IPs, the CNIs also have to honor those new lists — they have to clean out the old IPs, add in the new ones. So, I mean, I don't disagree with you; I just don't think it necessarily solves our problems. I think those problems are still there.
A
Well,
all
the
cni's
doing
in
that
case
is
is
reconciling
whatever
the
plugin
thinks
to
be
correct.
So
you
know
the
core:
dns
plugin
has
a
ip
for
google.com
at
888
and
its
ttl
on
that
record.
Is
you
know,
30
seconds
after
the
30
seconds
is
up,
the
core
dns
plug-in,
tries
to
resolve
it
again
and
sends
the
new
ip
over
to
the
cni
right
and
then
the
cni
says.
Okay,
these
rules
are
wrong.
Now,
let's
reconcile
let's
program
the
right
ones,.
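As a sketch of how that side channel might be wired up: CoreDNS is configured through the Corefile, usually held in the kube-system ConfigMap shown here. The fqdnpolicy plugin name and its socket argument are invented for illustration; no such upstream plugin exists.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            # Hypothetical plugin: on each resolution, push the
            # (hostname, IPs, TTL) tuple over a side channel so the
            # CNI can reconcile its allow list.
            fqdnpolicy /run/cni/fqdn.sock
            forward . /etc/resolv.conf
            cache 30
        }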
D
I think that's going to be advising that — which is essentially what Calico does today, I think. They essentially have an Envoy to do the egress FQDN policies, I think, which is a full, you know, proxy — so essentially it intercepts the DNS traffic and tries to apply policy on top of that. But this FQDN policy for Calico is closed source.
D
So
we
cannot
tell
you
know
how
they're
essentially
doing
the
fkdm
policies,
but
my
I
guess
my
point
is
you
know,
because
we
were
talking
talking
about
cnn
implementations,
the
the
one
thing
that
in
terms
of
this
cap
right,
so
we
are
we're,
just
you
know,
doing
a
gap
where
we
don't
actually
necessarily
care
about
how
all
cni's
trying
to
implement
this.
But
essentially
we
do
need
a
fi.
At
the
end
of
day,
we
do
need
a
sort
of
like
criteria.
D
I
guess
a
sort
of
a
set
of
ete
test
cases,
for
example,
to
make
sure
that
you
know
the
intention
of
this
api
type
is
getting
correctly
implemented
by
cnns.
So
I
guess
my
question
will
be
is:
did
you
guys
have
any
thoughts
on
how
you
know
the
e2e
test
cases,
for
example,
would
look
like
for
for
such
api
types.
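As one possible shape for such a conformance case — reusing the hypothetical field names from the proposal-two sketch above — an e2e test might apply a policy like this and then probe from the selected client pod:

    # Hypothetical e2e fixture: allow egress only to github.com, then assert
    # reachability from the client pod. Field names are the same assumptions
    # as in the proposal-two sketch, not upstream API.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: e2e-allow-only-github
      namespace: e2e-fqdn
    spec:
      podSelector:
        matchLabels:
          run: client
      policyTypes:
        - Egress
      egressFQDNs:
        - toFQDNs:
            - fqdn: github.com
    # Expected assertions, roughly:
    #   kubectl exec client -- curl -sm5 https://github.com    # succeeds
    #   kubectl exec client -- curl -sm5 https://example.com   # times out (denied)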
D
Sorry, I was on mute. I think, from what I saw, they also do have data path things, but they also have an Envoy proxy, I guess, for other layer-7 things, which was a little bit weird to me the first time I read it. But I feel like that was the case for Cilium.
D
If
you
guys,
if
you
can,
you
know,
get
in
touch
with
any
of
the
google
guys
who
are
actually
in
the
studio
team.
Maybe
they
can
confirm
this,
but
I
I
vaguely
remember
last
year
I
was
looking
at
the
city
of
implementations
and
they
do
actually
have
like
a
layer
four
thing
where
they
do
actively
maintain.
You
know:
host
names
to
ips
and
enforce
those
ips
in
the
data
pass
you're
right.
B
Right
that
is,
that
is
what
they
do.
They,
they
intercept
dns
requests
and
then
the
proxy
is
in
charge
of
telling
the
data
path
the
list
of
allowed
ips.
I
don't
know
how
they
do
ttl
reconciliation,
but
the
proxy
does
do
the
populating
of
a
policy
right.
A
I
I
think
that's
that
was
just
an
open
question
that
I
I
thought
the
answer
is
going
to
be
no
mainly
because
of
that
that
most
of
time
and
in
most
implementations
today
we're
just
keeping
a
mapping
and
secondly,
that
I
know
that
we've
asked
sig
network
about
layer,
7
policy
and
the
overarching
answer
has
been
like
heck
no
like
we
don't
want
to
get
into
layer,
7,
basically
or
pure
layer,
seven
right,
yeah,
okay,.
A
But
yeah,
I
think,
generally
you're
on
the
right
track.
If
we
can
get
some
of
these
open
questions
like
a
little
more
absolutely
answered
in
this
google
doc,
it
would
definitely
help.
I
I
think
you
know
going
back
to
the
implementation
like
yang,
brought
up
a
really
good
point.
You
either
have
a
standard
core
dns
plugin
responsible
for
doing
the
resolution,
or
you
have
a
good
suite
of
e2e
tests
and
I
think
either
could
be
argued
right
in
terms.
A
Thanks for doing all this work, though. I guess we'll just keep on chugging through, and, yeah, since we talked about some of these questions today, feel free to — or I can — go through the doc and resolve some of the questions I had open, because I know we answered a few of them here. Yeah, sounds good.
A
Around FQDN, it's another one of these funny things: every CNI already supports it, but the process upstream is so intense that we can't figure out where to fit it in. And, you know, I wish we could make that easier, but anyway — yeah, there's absolutely no easy way to do that. Anyway.
A
Cool
well,
thanks,
everyone,
who's
working
on
that,
let's
keep
on
chugging
after
we
get.
You
know
some
user
stories
and
some
of
these
questions
answered.
I
think
where
you
can
go
ahead
and
start
looking
at
a
kep.
If
you
don't
want
to
take
the
time
to
start
writing
a
cut
before
we've.
You
know
gotten
some
loose
approvals
from
from
the
power,
the
main
players
in
sig
network.
I
totally
understand
that
as
well.
We
can
you
could
give
a
quick
presentation
just
outlining
what
specifically
this
methodology
I'd
say.
B
Yeah,
I
think
it
would
be
it'd
be
good
to
maybe
hash
this
out
before
we
build
a
full
fledge
cap,
100
yeah,
there's
there's
a
lot
of
boilerplate
there.
D
I guess, to just get this merged into upstream — because, as was said, I feel like there's a reason why, for example, CNP and FQDN policies, for that matter, exist in every CNI but are kind of not in upstream as of today. It's kind of like you're just trying to put up a standard that, you know, everyone needs to comply with, and the standard, a lot of times, can only be very vaguely defined.
A
Agreed, but I think you've done a good job of answering some of these questions — like, this was clever, for sure, and I think it has potential. And if not, you know, then we throw it in the bucket and start kicking it down the road to v2. And I'm not saying — I know that that's, like, super daunting, right? And that's what you said at the beginning: you're like, I don't want to take on the whole overhead of looking at creating a new resource to absorb just this.
A
But
if
we
had
some
super
motivated
and
people
with
time,
I
think
it
could
be
done
together
and
cooperatively
and
actually
get
pushed
over
the
finish
line,
because
at
the
end
of
the
day,
all
these
new
selectors
and
desired
features
would
be
easier
to
design
in
a
new
object,
even
though
creating
a
new
object
is
like
pulling
teeth.
That
is
true.
Yeah.
A
No
agreed
yeah.
No,
this
is
like
the
implementation,
for
this,
at
least
in
the
openshift
side
of
things
is,
is
fairly,
I
would
say
extremely
complicated
and
yeah.
There's
a
lot
of
failure
modes
that
we
run
into
constantly,
which
is
you
know,
I
don't
know
the
the
cni,
the
the
standardization
will
be
that
and
how
you
integrate
it
into
network
policy
will
be
the
two
hardest
questions
to
answer:
yeah
cool:
okay.
I
only
have
10
minutes
left,
but
really
quick.
Does
anyone
else?
D
That's why I wanted to, you know, set this up first, so that we at least see what our edits are, you know, and sort of combine those. So, let's see. Thankfully...
A
So that makes things a little easier. Should we just look at — I mean, I don't want to copy each comment — like, a Google doc or something, and say what we're...
D
Doing
no,
I
I
I,
I
honestly
feel
like
it's
a
little
bit
easier.
If
you
let's
say
let's
say
you
have
some
local
changes
already
right,
yeah,
maybe
just
stash
them
first
and
then
you
know
we
just
copy
paste,
the
entire
readme.md
to
a
google
doc,
and
then
you
just
you
know,
apply
your
changes
on
the
google
doc
in
a
common
mode
or
something,
and
then
I
can
do
the
same,
because
I
I
think
I
only
I
only
have
like
five
six
changes
to
to
the
doc
already.
D
So
I
can
see
you
know
if
my
change
and
yours
are
essentially
identical,
so
we
can
just
you
know,
use
one
of
them
or
if
we
do
edit
some
things
together.
At
some
point,
I
just
you
know,
combine
this
on
the
google
doc
and
we
just
highlight
the
the
the
sentences
or
stuff
that
we
edited
and
then
we
just
go
from
there
that
works.
That.
D
I
I
could
I
can
make
that
dog
first
and
if
it,
if
it's
easier
for
you
I'll,
just
I'll
just
copy
paste,
my
already
edited
documents
on
there
and
then
I'll
just
highlight
the
the
sentences
I
added
or
added,
and
then
you
can
just
put
whatever
you
have
added
it
on
there
and
combine
those
in
any
way
you
want.
Just
just.
I
guess,
just
make
sure
to
highlight
anything
that
we
changed
compared
to
the
to
the
current
to
the
to
the
current
version
of
it.
A
Cool,
I
just
send
me
the
branch
you're
working
on,
and
I
can
look
at
doing
that.
The
the
only
other
piece
that
I
will
just
go
ahead
and
take
care
of
is
there
was
a
bunch
of
comments
on
the
diagrams.
D
Okay. I mean, I feel like the only reason that I was proposing this is because, you know, I don't want us to go ahead and basically work on the same comments, where the wording is a little bit different but we are essentially putting up the same work. So, yes, definitely you can work on the diagrams, so that I won't step on your toes in terms of that, and I'll probably do something like — I'll work...
D
Reversely
up
like.
I
will
resolve
the
last
comments.
First,
I
guess,
and
then
let's
see
we'll
just
you
know
open
the
pr
against
this
branch
together
and
see
if
there
are
any
conflicts
that
oh,
we
can
just
resolve
them.
There.
D
Yeah, it's quite some review.
A
Well, that's Dan Winship. I'm glad he did it. At least he said at the end that this is looking good.
D
Well, I can just say that's the hope, like, yeah. At this point — I mean, we're working on it, we want that to happen, but it really depends on, you know, whether the other guys — for example, Dan and Tim — give a thumbs-up at that point. Yeah.
D
I think it's not "merge the KEP" — I think it's "move this to beta" or something. Yeah, yeah.
D
One
output
alpha
two
yeah:
it's
not
it's
not
merging
this
cap
like
we
can
essentially
have
nobody
implement
this,
but
have
this
kept
merged?
Oh,
I
guess,
because
it's
just
a
crd
right,
yeah.
D
And
then,
after
I
think
at
least
two
c,
and
I
have
implementation
for
this,
then
we
can
move
this
to
v1r
for
two.
I
think
that's
the
that's
the
point.
Yeah
got
it
got
it.
A
Well, have good holidays, everyone, and I guess our next meeting will be in 2022. Yeah, it will be in 2022. And actually, I think we'll look at what day that meeting is going to be — I might not be around that day, so I'll let you know accordingly.
D
Cool. What's your — do you have any exciting plans?
A
I'm going out — well, I'm headed out to spend the rest of the winter out west doing some snowboarding, so I guess that's exciting. Yeah. [inaudible]