From YouTube: Network Policy API Bi-Weekly Meeting for 20220829
B
Awesome, hello everyone. Today is August 29th. This is a meeting of the SIG Network Policy API subgroup of SIG Network. This is a CNCF meeting, so let's be nice to each other and have a good day today. Rolling on to the agenda: it's a pretty quiet agenda, as usual. Essentially, one of the main things we're still working on is the AdminNetworkPolicy, getting documentation and implementations on their way. That's been at least my major effort.
B
There is a first draft of the website up and ready to go, and there are also some accompanying issues to actually get that website deployed, but we're going to be looking for reviews. So if anyone's watching this, please go give it a review. And if it merges as it is and there's stuff that's missing, open up a PR.
B
We're looking for some help. In other good news, Yang is coming off of paternity leave, so he'll be back to helping us out regularly, and he's also working on the implementation for Antrea. So things are moving along there. As for the website, it's deployed using MkDocs, and you can actually build it manually on your local machine and just run a localhost deployment of it; Ben went through that today.
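For anyone who wants to try that localhost preview: below is a minimal sketch of an MkDocs config. The file name mkdocs.yml is standard, but the site title, theme, and page layout shown are illustrative, not the actual network-policy-api configuration. With a file like this next to a docs/ folder, `pip install mkdocs` followed by `mkdocs serve` renders the site at http://127.0.0.1:8000.

```yaml
# mkdocs.yml -- minimal illustrative config for a local preview.
# Run `mkdocs serve` from the directory containing this file.
site_name: Network Policy API   # placeholder site title
docs_dir: docs                  # markdown sources live here
nav:
  - Home: index.md              # hypothetical page layout
  - AdminNetworkPolicy: anp.md
theme:
  name: readthedocs             # any installed MkDocs theme works
```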
B
So he's going to throw a comment on there just to explain the dependencies you have to install on a normal Linux box. Cool, yeah, that's all I had today. I know Rahul wanted to talk, and Pierre, good to see you, Pierre, it's been a while. They wanted to talk a little bit about FQDN. For those who don't know, we've had some chats about adding a fully qualified domain name policy in the past.
C
Awesome, yeah, thanks for the recap; yep, that's pretty much where we left it off. So in the interim, Pierre and I have put together a survey that we gave out to a few of our customers who are using FQDN-related behavior, just to try to get a poll of what they're up to. We don't have a massive sample size as of yet; we're hoping more results will trickle in.
C
But what we can probably do is, I think, present some of our findings, and then I'll probably clone our survey and make it publicly available. I'll link it in the meeting notes, and we can maybe brainstorm whether we want to send this to the broader SIG Network mailing list.
B
Yeah, I think having real customer data is huge. I don't think we have enough of that upstream, and if you shared the survey with us, it might be something that Ben and I can try to push out to OpenShift customers as well, collecting the same sort of data points. That could be really helpful.
C
Yeah, that would be pretty good. As a recap of the goals of this survey: this is something we talked through with Tim more recently. Basically, the idea is that we're reasonably sure that FQDN policy as a concept makes sense, given that so many CNI providers have independently created their own CRDs for it. The question really boils down to:
C
Can we come up with a reasonable spec for what FQDN support looks like? Because it's a complicated feature with a lot of edge cases. The goal is to say: can we come up with a consistent spec that every CNI is more or less happy with, that we can then declare as the standard minimum set in Kubernetes? Obviously much easier said than done, but that's the goal at any rate; that's the north star, Pierre.
A
Okay, is there anything showing up on your end? Or... yep, okay, cool. So, very quickly: the goal of this survey was really to get some more data from customers, and obviously our customers were the first target for that survey.
A
At least among the people who responded to the survey, mostly what we're seeing is that people are deploying anywhere from small to large clusters, which obviously has an impact on the way network policy is going to behave overall. And the number of workloads that are actually deployed is also in that range:
A
Amount
like
it's
for
most
of
our
users,
it's
like
more
than
50
workloads.
Some
of
them
are
going
on.
The
high
end
for,
like
a
thousand
of
workloads,
are
deployed
in
the
clusters,
so
just
compiling
the
the
different
things
that
we've
noticed
from
the
survey
and,
more
importantly,
what
we're
observing-
and
that
was
actually
a
surprise
to
us-
is
first
of
all.
A
Network
policy
is
not
necessarily
given
to
application
developers
and
that's
something
that
we
sort
of
do
think
that
you
know
like
application
owners,
service
owners
are
actually
using
or
the
one
defining
authoring
network
policy,
but
it
turns
out
that
it's
not
exactly
the
case.
A
It
may
not
be
the
case
for
every
single
customer
and
I'm
I'm
pretty
sure
that,
with
a
bigger
sample,
we
would
have
like
slightly
different
answers,
but
that's
a
good
data
point
to
see
that
there's
still
like
platform
admins
and
security
engineers
defining
the
network
policies
for
the
different
applications
and,
and
especially
the
namespace
bound
network
application
and
their
work
policies.
A
I
think
another
important
result
is
that
we
were
also
trying
to
understand
how
people
would
use
crds
or
custom
definitions
of
network
policies,
and
why
would
they
need
to
use
that
so
in
kubernetes,
you
have
obviously
the
network
policy
that
is
l304
by
default,
but
then,
if
you're,
using
a
crd
from
whatever
cni,
that
also
means
that
you
need
you're
doing
it.
A
For
some
reason
it
might
be
for
cluster
white
for
sorry
for
cluster-wide
policy,
or
it
could
be
for
other
aspects
and
one
of
them
being
the
fqdn
resources
that
they
want
to
use
and
100
of
them
are
actually
saying
I'm
using
a
crd
or
an
extension
of
the
kubernetes
network
policy
to
have
the
ability
to
define
an
fqd,
an
endpoint
in
my
network
policy
and
besides
that,
what
they're
saying
is
that
when
they
use
an
fqdn
object
in
the
network
policy,
it's
mainly
to
define
two
things:
first,
being
an
api
endpoint
that
is
outside
of
the
cluster,
something
that
is
hosted
by
a
cloud
provider.
A
Some
resources
that
are
available,
that
you
don't
control.
You
only
have
an
fqdn.
The
second
use
case
that
they're
they're.
Seeing
is
they
want
to
define
something
that
belongs
to
a
different
cluster,
and
so
they
have
a
second
cluster
or
many
other
clusters,
and
they
need
to
define
an
endpoint
that
belongs
to
a
different
cluster,
with
a
different.
A
Domain
name
and
that's
the
reason
why
they're
using
an
fqdn
object
in
the
policy.
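To make the shape of those per-CNI CRDs concrete, here is a sketch in the syntax of one existing implementation mentioned in this discussion, Cilium's CiliumNetworkPolicy. The workload labels and domain are made up; the DNS-visibility rule in the first egress block is how Cilium observes lookups so it can map names to allowed IPs.

```yaml
# Illustrative only: one existing FQDN CRD shape (Cilium's), with
# made-up labels and domains. Calico, Antrea, etc. have their own.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api
spec:
  endpointSelector:
    matchLabels:
      app: billing                 # hypothetical workload
  egress:
  # Allow DNS to kube-dns and let the CNI inspect lookups, so
  # resolved IPs can be matched against the FQDN rule below.
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # Allow traffic only to IPs that were resolved from this name.
  - toFQDNs:
    - matchName: "api.example.com"   # use case 1: external API endpoint
```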
A
Other
results
that
were
also
observing
and-
and
that
was
actually
an
interesting
question.
One
of
the
questions
that
we're
asking
them
is:
do
you
prefer
or
do
you
want
us
to
filter
the
dns
request
instead
of
filtering
the
the
traffic
that
is
actually
going
to
the
data
path?
A
They
don't
necessarily
want
to
have
like
a
dns
filtering
mechanism,
but
they
want
to
have
an
ip
traffic
filtering
mechanism
based
on
dns
names.
Okay,
also,
one
important
thing
for
most
of
the
respondents,
if
not
all,
is
that
they
do
also
want
to
filter
the
communication
on
the
ip
and
and
they
do
accept
the
fact
that,
for
example,
if
from
an
implementation
perspective,
you
are
actually
filtering
based
on
the
you
send
a
dns
request.
You
get
a
dns
response
so
that
there
is
an
ip
associated
to
your
dns
request.
A
If
there
is
no
dns
request
that
iep
shouldn't
be
allowed
in
the
data
path,
that
was
actually
an
interesting
question
like
is
that,
are
you
allowing
traffic
to
an
ip,
even
if
the
workload
or
the
application
is
actually
not
sending
the
dns
request
so
based
on,
I
have
a
static
ip.
I
should
be
able
to
send
traffic
to
that
ip
most
of
the
respondents
said.
A
B
A
You
you
wouldn't
and
at
the
same
time
that
shouldn't
match
your
fqdn
right
network
policy
in
this
case.
Okay.
So
if
you
really
want
to
use
like
static
ipvs
in
your
network
policy
definition,
then
you
should
just
use
kubernetes
network
policy
and
not
the
fqdn
policy
right
right,
so
we're
just
like
I
mean
customers,
and
especially
respondents
for
that
are
trying
to
be
consistent
right,
like
if
I'm
using
fqd
network
policy
there's
a
reason
for
it,
and
this
is
because
I
need
to
resolve
the
name
before
being
allowed
to
send
traffic
to
that
id.
A
No, that's fine, that's fine, totally fine. Also, because we wanted to make sure that at least customers were not trying to mess with their DNS, we also asked them: are you tuning the DNS configuration, the DNS client in the pods or the workloads that you're running? Some of them have a very slight configuration change that they like to put in, but most of them...
A
If,
like
sorry,
the
default
bulk
of
customers
do
not
touch
anything
in
the
dns
client
in
the
bots,
so
they
basically
rely
on
whatever
the
cloud
provider
give
them
in
our
case,
in
our
case,
in
gke,
but
the
point
being
is
that
most
of
the
customers
don't
tweak
the
configuration
just
to
you
know:
mess
up
with
the
dns
part.
A
Interestingly,
we're
asking
also
those
customers.
Are
they
using
a
service
mesh,
because
that
could
be
a
valid
like
show
stopper
for
us
saying
like
if
all
of
the
customers
are
actually
using
the
service
mesh?
Why
don't
they
just
do
it
in
istio,
for
example,
some
other
service
mesh
implementations
right,
but
not
all
of
them,
implement
that.
So
that
means
that
there
is
a
real
need
from
customers
saying
like
we
do
understand
the
value
of
a
service
mesh.
A
We
may
go
there
at
some
point,
but
we're
not
ready
yet,
but
still
we
have
some
api
endpoints
that
we
need
to
address
or
filter
and
we
don't
want
to
set
up
a
service
mesh
just
for
having
that
capability
that
l7
capabilities.
So
we
want
to
have
this
straight
into
kubernetes
and
yes,
some
of
them
were
using
a
service
mesh,
but
most
of
them
were
like
vanilla,
kubernetes,
vanilla,
gke
and
still
want
to
have
fqdn
filtering
capabilities
in
the
classroom.
A
Yeah
I
mean
we
obviously
more
customers
and
more
data
points
will
definitely
help
to
refine
the
numbers,
but
this
is
the
trend
that
we're
seeing
at
least
from
among
our
customers.
Right
now,.
B
Cool,
so
then
the
answer
or
the
question
becomes:
how
can
we
take
this
data
to
spur?
You
know
aligning
on
some
sort
of
api
to
represent
what
we're
seeing
here
right,
yeah,
that's
the
question
I
think
raul.
You
probably
are
most
well
positioned
to
try
to
answer
since
we've
already
gotten
a
good
start
on
it
and
just
for
those
listening,
I
posted
the
existing.
B
I was hoping you might have a data point on how many folks are using wildcard host names versus explicit host names.
A
Yeah,
let
me
just
go
back
to
the
survey
results
and
give
you
an
update
on
this.
B
Cool, yeah. Because I think that when Rahul first brought this proposal to SIG Network, the wildcard argument descended into chaos, right? Due to: is it possible? What does the implementation look like? We're calling this a layer 7 policy, but if we're just mapping to IPs, it's really layer 3. I mean, there are so many questions that come up with that. But by the same token, as you all know, a lot of CNIs have their own implementation already.
B
I know OpenShift does, Cilium does, Calico does. So it makes sense to bring it upstream, for sure, and this data would definitely help. I think the argument about service mesh, that customers who don't have a service mesh would still want this policy, is key as well. So, I don't know, what do you all think for next steps? What do you envision doing next to push this forward and actually try to make progress?
C
I think what I'm going to do is take this data to SIG Network just as it is. You know, what is it, don't let perfect be the enemy of good, or whatever. Just take what we have to SIG Network and say: here's where we're at. I think a reasonable interpretation of this data is that there's a set of customers who don't want a full service mesh capability; they do want L3/L4 filtering, just using the DNS data. So that's what I want to do at this point.
B
A good place... I think I definitely agree: let's bring this before SIG Network, because it's actual real data and they don't get to see a lot of that. And I think it was well done, so good job to you all. I know it's a small data set for now, but it's something to go off of. And a good landing place, if we can get them to agree on the use case, might be AdminNetworkPolicy.
B
I mean, it's built in such a way that it's pretty easily extensible, right? We're not going to face all these problems we're facing with NetworkPolicy, and it's still really young. So although folks want this for NetworkPolicy, maybe the fastest way to get it functionally into an API we own is to put it in AdminNetworkPolicy, see how it goes with implementers, and then down the road put it into a NetworkPolicy v2. Right, right. Just a thought.
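A sketch of what that suggestion could look like: AdminNetworkPolicy egress rules already take a list of typed peers, so an FQDN peer could slot in alongside them. Note that the domainNames field below is hypothetical, the extension being discussed here, not part of the merged v1alpha1 API, and all names and values are illustrative.

```yaml
# Hypothetical sketch: an FQDN peer inside AdminNetworkPolicy egress.
# The domainNames peer type is NOT in the merged v1alpha1 API; it is
# the extension this meeting is floating. Other values are made up.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-external-payments-api
spec:
  priority: 10                  # lower number = higher precedence
  subject:
    namespaces:
      matchLabels:
        team: billing           # hypothetical tenant label
  egress:
  - name: allow-payments-api
    action: Allow
    to:
    - domainNames:              # hypothetical FQDN peer type
      - "api.example.com"
    ports:
    - portNumber:
        protocol: TCP
        port: 443
```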
C
Like
how
cni's
would
reject
admin
network
policies
that
they
don't
understand,
did
we
close
that
discussion
like
we
did
a
lot
of
talking
about
it,
but.
B
We
have
some
notes
on
it
and
I'm
gonna
have
to
go
back
and
look
because
it's
super
hazy,
I'm
pretty
sure.
At
the
end
of
the
day,
it
was
kind
of
two-pronged
like
now.
The
api
is
designed
such
that
the
the
implementer
isn't
going
to
do
something
stupid.
If
they
don't
understand
the
api
and
right
our
assumption
is,
I
think
what
we've
agreed
on
is
that
if
they
don't
understand
the
api,
they
need
to
do
something
about
it.
Like
you
know,
they
shouldn't
fail
and
you
know
fail
open
and
allow
traffic
they
should
update.
B
You know, at the end of the day, we can't ensure that every CNI is up to date, but we can help, through API design, to ensure that they aren't going to do something totally stupid, and that was done by taking implicit behavior away: no longer does an empty field mean "select all". Right, right, right.
B
Let me go back and look at the notes. I think there was a lot of discussion on maybe providing an upstream admission webhook, or something of the sort, and on deserializing unstructured data from kube into what the CNI knows to be structured; if that fails, the CNI should fail. Those were the two things I think we were talking about.
C
Makes
sense
makes
sense
yeah,
I
think
the
admin
network
policy
suggestion
is
a
good
one,
because
definitely
no
one's
gonna
spring
for
extending
regular
network
policy.
B
At all, at all. You know, I've seen customers' network policy setups, and it's always a cluster admin who's asking me questions, and guess what, they have hundreds of policies; it's disgusting on a large cluster, right? So, pretty funny. And I would think that for FQDN, if we're talking about external services, or even cluster services, it would hopefully be the admin doing such a configuration, right?
A
I
was
actually
checking
in
the
back
end
and
there
are
half
of
the
respondents
that
are
actually
using
some
patterns,
so
like
regular
expression
to
express
like
a
piece
of
a
domain
yep,
and
most
of
them
are
actually
using
more
than
more
than
10
entries
per
rule.
A
But, interestingly, you know, half of them are using more than 10 entries in their rules.
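For a sense of what "patterns" means here: in the existing Cilium syntax shown earlier, the two styles the respondents describe look like this fragment (domains made up for illustration):

```yaml
# Explicit name vs. wildcard pattern, in Cilium's existing
# toFQDNs syntax; the domains are illustrative.
egress:
- toFQDNs:
  - matchName: "api.example.com"            # explicit host name
  - matchPattern: "*.storage.example.com"   # wildcard entry
```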
B
Yeah, that's definitely a piece of data. The key is to try as hard as you can not to get bogged down in the implementation side of things with SIG Network. It's like, yeah, I don't think that's relevant for the API design; obviously, is it possible or not, but even with this API we just merged, we aren't exactly 100% sure how easy it's going to be to implement for some folks, and the whole goal is saying: let's see, let's figure it out, let's see what happens. Right? So, cool.
B
That
was
really
great
thanks
all
for
bringing
that
to
here
something
to
go
in
line
with
the
idea
about
possibly
putting
it
one
day.
An
admin
network
policy
is
the
issue
about
north-south
traffic.
So
just
keep
that
in
mind.
I'm
gonna
post
it
here.
Oh
that's
right!
We've
already
had
I've
already
had
stakeholders
within
red
hat
talking
about
wanting
this
and
we
knew
it
was
gonna
come
up.
B
We
dished
it
to
the
one
alpha
two
or
a
v,
one
beta
discussion,
but
it's
something
we
need
to
talk
about
and
we
could
add
in
one
of
you
know
if
we
added
north
south
north
south
traffic,
we're
gonna
have
to
be
able
to
select
that
traffic
and
we're
probably
not
gonna
use
ip
blocks.
So
this
could
be
a
selector
right,
cool
anything
else.
Y'all
wanna
bring
up
today
or.
B
Awesome. So you kind of have the action item of just updating that FQDN doc, and then I think you can give a really short presentation, maybe not even talking about how you want to get this implemented yet, but just presenting the data and seeing how folks respond to that. Right? Could be, yeah.
B
As for where we're going to put it, just ask: do we need these use cases or not? So maybe, in line with that data, you already had some use cases, but revamp them to what you've heard and prepare a couple of super simple key user stories you want to try to get together. Yeah, simplification: that's what I've learned after a year and a half of trying to get people to align.
B
Hopefully folks come and watch this presentation; I'm giving it a shout-out, because this was a good one. But yeah, if there's nothing else, we will get about 30 minutes back and keep chugging on what we're doing. Thanks so much for coming today; really appreciate it.