From YouTube: Network Policy API Bi-Weekly Meeting for 20210412
B: Okay, cool. I want to go ahead. I know, I think, are there some new people? If there are new people here, feel free to introduce yourselves to the group. No particular order; just hop in and say who you are and why you're here.
C: Yeah, I've been pretty active in the Antrea project, just trying to coach them along on building an open source community over there, and I got invited to join this meeting today by Jay and Matthew. So that's why I'm here.
B: Intro, okay, cool. No one else will speak up today. Moving on through the agenda: issue triage. I just looked, and I didn't see any issues that were pertaining to us. However, I can share my screen. There are... there is... oh shoot, I thought I was host, but it won't let me share my screen. I've got to go into... this happens to me every week.
B: Oh wait, that's the wrong one! More roles, yeah. So, like, more positions within SIG Network overall. If anyone's interested, they can check this issue out. It's 3258; they're basically just looking for a couple of new roles in the main SIG Network group to be filled, so I thought I would just point that out.
A: Oh wow, these are all new. Are these, like... did we introduce like 10 new roles to upstream?
B: Usually we just look at all the SIG Network issues, but yeah, cool. And if no one has anything else, I will gladly pass the mic over to Steven, Vlad, and Matt to give us an overview of Sonobuoy.
C: Vlad couldn't make it to this meeting. Vlad is tech lead of the Sonobuoy project, in case people in this meeting aren't even familiar with it. Sonobuoy is an open source Kubernetes compliance test suite. It's years old, and it is used as the de facto, and I think official, standard for whether somebody's Kubernetes distribution is compliant for purposes of claiming that you're a CNCF-compliant Kubernetes distribution. Most, but not all, Kubernetes distributions fall in that category.
C: So, I don't know, I haven't looked for a couple of years, but I believe there were over 30 that passed the conformance suite. It runs through tests that check out how well implemented the Kubernetes API is on whatever your distribution is, and if you get a passing grade, then you sort of get qualified for a self-certified badge to say you're compliant. And then there are a lot of plugins and collateral tools for Kubernetes that make a statement that they should work with any compliant Kubernetes. So yeah, that's what it's been used for until now.
C: But it's had kind of an undocumented plug-in interface, where the test suite is really just a pod that runs under Kubernetes, and so it is possible to run other forms of compliance.
C: You know, Kubernetes initially was just Kubernetes, but they've gone with plug-in architectures: the container runtime, the CNI, and more recently the storage plug-ins, which are well underway to being kicked out of tree and moved to a CSI plug-in architecture. And there are a lot of these collateral things with Kubernetes that could call for a compliance test suite. And then, if you even wanted to take it a step further, you could go beyond compliance, in the sense that some of these plugins have optional features.
C: You know, like with CSI. I'm very familiar with CSI and storage, more familiar than I am with networking plugins, but CSI has evolved from just basic functionality in kind of the initial releases. They've incrementally added features that first get enabled in a spec document, then in an API, and they first get enabled in CSI itself, which in theory is a broad storage plug-in that goes beyond Kubernetes.
C: Maybe some of them will choose never to do it, just because they're a plug-in for some form of storage hardware that just isn't capable of supporting the underlying feature. But the storage SIG has been kicking around the idea that there would be great value in putting together a compliance and, you know, optional-feature tester that just gives a "hey, is this feature present or not" kind of thing. And it seems kind of silly for each of these groups to go off and implement their own thing, rather than, you know...
C: I'd rather just have one place to go and get it all done, and have some kind of a consistent expectation of how these things would work and log the results. And if you look at Sonobuoy as an example, this thing is useful to a user just to see: you can use it to test that your distribution started out of the gate compliant, but in many cases it's possible to misconfigure them so that they no longer work right. So there are users who use Sonobuoy just as a daily compliance check.
C: You know, maybe they run it once a day, just in case somebody messed something up or did some update that broke something, and this would give them a warning that, hey, this has happened. And so there's an aspect that it can be used by publishers of open source to make a public declaration of their compliance and have a means to prove they're compliant.
C: Then it can be used later by users, who might run it at their choice, either one time or really just run it on a scheduled basis, because as your production Kubernetes goes on with its life, things could go wrong through an update or a misconfiguration. And then we'd have this one base that would maybe go beyond Kubernetes into networking and storage.
C: We've even kicked around the idea of possibly, you know... the extreme end of this might even be to allow application vendors to write their own compliance test suites. You know, for certain applications this might be overkill.
C: You know, if you look at something, I don't know, like a web server, you could probably ping the thing to do a test. Even then, it should be easy to write a plug-in. But some of these things, like a clustered database, a Consul, Cassandra, MongoDB, typically go out there with multiple nodes behind a load balancer, and, I've heard...
C: It would be way better if there was some kind of a consistent framework for hosting these. Anyway, that's just the background of this idea we were kicking around. I think Matt and Jay and I had talks on Slack with Vlad and a few other individuals. It's still very early stages, so we haven't fully established whether this is feasible, although there are strong reasons to believe it is; Matt already did this without Sonobuoy.
C: Without Sonobuoy, that is. But it just strikes me that having a home for this to live in would be a good thing, and we could take this pretty far. You know, right now, even though Sonobuoy is like the official CNCF standard, I went and looked at the project, and it's open source, Apache licensed, but it goes back so far.
C: Therefore, we should put it under the CNCF and start hosting community meetings and things. This thing has kind of just been living along on inertia, based on the fact that it worked okay before this whole community thing got big, and nobody ever went back and took a look. But I think we've got some broader issues we have to take on with it. So anyway, that's my intro.
C: I'm not really prepared at this stage to go into, like, deep-dive architecture diagrams or any of that, because I think this is work that we still have to do. The talks we had about putting this together with Sonobuoy just started, like, in the last couple of weeks, so it's real early stages. But I am interested in people's thoughts on whether this sounds good, bad, or indifferent, and anything anybody would like to contribute on things we maybe haven't thought of, but should. Yeah.
E: Yeah, I hacked on this a little bit over the weekend, and I got a plug-in put together for Sonobuoy for Cyclonus, so you can run those network policy tests through Sonobuoy. It looks like it's pretty straightforward; there doesn't seem to be anything that needs to change on either end. So I'm pretty excited about that. So now, I guess, it's just sitting there, ready to go, at least that first step of Cyclonus having a plug-in and stuff.
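As a rough sketch, running a custom suite like Cyclonus through Sonobuoy uses the standard plug-in mechanism; the subcommands below are regular Sonobuoy CLI, but the plug-in manifest file name is an assumption for illustration, and this requires a live cluster:

```shell
# Run a custom Sonobuoy plug-in (e.g. a Cyclonus network policy suite)
# against the current cluster. The manifest file name here is hypothetical.
sonobuoy run --plugin cyclonus-plugin.yaml --wait

# Check run status, then fetch and summarize the results tarball.
sonobuoy status
outfile=$(sonobuoy retrieve)
sonobuoy results "$outfile"

# Clean up the aggregator pod and namespace when done.
sonobuoy delete --wait
```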
E: You know, if there are any more steps that people are excited about... I don't know if I'll be able to do those myself, but if you want to leverage Cyclonus or something to do that, I'm definitely happy to help out, or give advice, or whatever. But yeah, it definitely looks like it'll work.
A: You know, we have all the SIG Network tests, right? So when you run conformance alone, you get pretty good verification of your CNI; you've got, as you know, 300 tests there. And then, if you extend past what's in conformance, you get, I think, 150 or something like that SIG Network tests. And there are five thousand total upstream Ginkgo tests that you can trigger through Sonobuoy.
C: Well, maybe these are already in Sonobuoy, but it might be nice to call them out into kind of a network specialty area or summary. Because, I know the storage SIG has talked... the storage testing for CSI is definitely not in there now, but they'd like it to go in there. And, you know, to broaden this out, that would be, I think, a valuable concept, so that users get a breakdown if they find that their installation isn't compliant.
A: So Sonobuoy, what Sonobuoy does is it just bundles the upstream end-to-ends. So the nice thing that Sonobuoy gives you, like Steve mentioned, is that it runs all the end-to-end tests in a pod, and then inside of that pod... you know what, I can just jump in and show you. I have a cluster here, and I'm actually running Sonobuoy right now.
A: Yeah, good. So what we do, and what most people do... you all see my terminal, right? Typically, the way I use Sonobuoy is, you know, you'll do like kubectl create... well, you'll just do something like this. So I'll do, like, sonobuoy run, and what I do is I tell it what Ginkgo tests I want it to run, right? So, for example, this one: "Networking should check kube-proxy URLs", right? And so, say I want it to run in this namespace, s3, let's say, right?
A: So usually I just do this, and then... well, let's run it in a different namespace, because I already ran a different one in that namespace. So let's make a new namespace here. Okay, so now, there we go. Now I do kubectl get pods -n s4, okay, kubectl logs... Now what I can do is I can go and look at this job, dash ns4, and this is running my upstream e2e.
A: So if I just go in here, to the e2e, right, I can just see them, and I can see the tests running. I can grab the logs. Now, the end-to-end tests already do all this, so the question is, well, why use Sonobuoy? Well, the nice thing is Sonobuoy wraps it all in a pod for you, and when the test is done, I can say sonobuoy status -n s4, right, and it's complete, right? And a second after Sonobuoy finishes bundling those results, it's now passed, and I can do sonobuoy...
A: Why is my thing... I, like, lost my terminal connection. Hold on, let me get back in here. Yeah, okay, cool... oh, weird, I can't type in this terminal anymore.
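The workflow demonstrated above can be sketched roughly like this; the namespace and Ginkgo focus string follow the demo, though the exact test names vary by Kubernetes version, and this requires a live cluster:

```shell
# Run only the Ginkgo tests whose names match the focus regex,
# in a dedicated Sonobuoy namespace (s4 in the demo).
sonobuoy run \
  --e2e-focus "Networking.*should check kube-proxy urls" \
  -n s4

# Watch the pods and tail the aggregator logs while it runs
# (the aggregator pod is named "sonobuoy").
kubectl get pods -n s4
kubectl logs -n s4 sonobuoy -f

# Poll until complete, then pull down and summarize the results tarball.
sonobuoy status -n s4
results=$(sonobuoy retrieve -n s4)
sonobuoy results "$results"
```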
G: So I guess my point here is that, if Sonobuoy is essentially an umbrella for running end-to-end tests, then we should look at whether the end-to-end tests need additional network tests, and so on. Sonobuoy will automatically pick them up, but they'll also be available to anybody that's using the end-to-end tests, with or without Sonobuoy.
A: Yeah, exactly, yeah. So that's, I guess, the point: Sonobuoy gives you lifecycle around the whole thing. And I think Matt's stuff is not part of upstream e2e; it's like a separate project called Cyclonus that we just kind of have, and it runs hundreds of automatically generated network policy tests, as opposed to what we have in upstream, which is 30 or 40 of them.
A: So yeah, I guess what we're talking about here is... and here's the thing, though, right? This is where it gets interesting, Sanjeev: it might not be the best thing in the world to put 400 network policy tests into upstream k8s, which is why Steven's idea, well, of integrating this into Sonobuoy as a plug-in, as a good home, is kind of a good idea. Because we could then have a Sonobuoy plug-in that managed these tests, and if we wanted these to, you know...
G: And the other thing is... so yeah, that's one sort of thought to think about: Sonobuoy versus end-to-end suites, and what coverage is already there in terms of just network tests as well as network policy tests. And maybe we can develop an opinion and share that with SIG Network. Definitely, Sonobuoy is a very good project, and it's been very popular for a number of years; everybody uses it. The other point was network policy tests.
G: ...with the goal being some kind of compliance: if you've got a distribution, if you've got a cluster that claims to support network policy, here are the mandatory features it had better support. Maybe, or maybe priority-one versus priority-two kinds of network policies? Some of them may support, for example, ingress policies, but not egress, and so on.
G: Is that the goal: to sort of develop a metric to describe the...
A: That was an interesting KEP, because the whole idea... that was the whole idea behind it. So all this stuff sort of comes together, right, which is why I'm glad you brought it to the table, Stephen and Matt. All this stuff comes together: we've got Cyclonus, which can give you a very precise definition of what NetPol APIs you support, and then you've got this whole other thing of, like, sort of a capability map of supported micro-versioning.
C: You know, it puts you in a specific class, and you're only allowed to have three different combinatorial permutations. You know, ultimately this could be an infinity of different features, which some plug-ins have and some don't, and even if that's not the case now, you should architect it so you could support it if you need to go there. Yeah, I think this is story three.
B: Story three, and I think with network policy too. At least for now, it would be good to have... I know it's somewhat rigid, but it wouldn't be the worst thing to have gates where, if someone wants to use Kubernetes and they have no idea what CNI they want to use, they can look at the features they want to use, look at it based on feature sets. Like, this is a pretty important feature set.
C: And also, you know, it's potentially a little pejorative to classify these as, like, you know, platinum, gold, silver, because there are valid reasons why certain things don't implement every feature. In storage it relates to price, where, you know, it's valid to have a low-cost storage that doesn't implement every feature. The same with Kubernetes itself.
C: I happen to be tech lead of the Kubernetes IoT Edge working group, and when you try to take Kubernetes out to the edge, where you have low resources in compute, you've got things like k3s and MicroK8s. They intentionally leave stuff out, because they want to run on Raspberry Pis and Intel NUCs, and I'd imagine that there might be some sort of a case for a low-featured CNI by design, just because it's lighter weight.
C: I don't know if there is such a thing, but, you know, we definitely don't want to preclude it by putting on this label that one's not better, rather than just taking a step back and saying: look, you have this feature or not, but we're not going to consider it a low-quality implementation just because you don't have this feature. It may well be a conscious choice, and one that some class of users actually prefer, because there are trade-offs where you save resources by leaving out this feature.
B: Yeah, 100%, and I don't think that's what it should ever be, like a gold ranking. It's more just like: these are the options, these are what they support on the network policy side. But yeah, I think it's a really cool idea. And also in this KEP, Jay, the thing I knew about, what I was talking about, is basically a status for network policy.
B: So, right now, when you implement a network policy, the CNI sees that, and then it takes however long to actually implement that policy. That status flag would just allow the CNI to come back and say: okay, the policy's implemented and ready to go. So I guess that's the status part of it. I guess you kind of bundled that together, but it could be useful for us in a couple of other things as well.
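For context, a minimal NetworkPolicy like the one below is accepted by the API server immediately, while the CNI enforces it asynchronously; the status field being discussed here is a proposal, not something the v1 API reports today:

```shell
# Apply a simple default-deny-ingress NetworkPolicy (standard
# networking.k8s.io/v1 API). Today there is no way to tell from the
# object itself when the CNI has actually begun enforcing it; the
# proposed status discussed above would close that gap.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# The object exists immediately, but enforcement timing is CNI-dependent.
kubectl get networkpolicy -n demo default-deny-ingress
```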
C: Another thing that might be useful, and I don't know if this is already built into Sonobuoy, that you know of, Jay: in a lot of cases, some of these things are feature-gated, you know, by throwing alpha/beta flags and things. But there might be some value in having an option to run the test case in all these different versions.
C: ...results. That would be a nice embellishment: to say, hey, we didn't even bother testing this because of the feature gate setting or whatever, so that you don't get false positives or negatives when you wouldn't even expect the thing to be engaged. Just kind of brainstorming here on trying to do this the best way we can.
B: For sure. Sweet. Well, thanks for all that work, and Matthew, feel free to keep posting about it. So, I know you said you already made a plug-in for Sonobuoy; if you could just throw that on the agenda, or, yeah, put the link to it on the agenda, unless it's just built into the Cyclonus link. Sure, yeah, it wouldn't hurt to throw it up there and take a look at it, but I think it brings up some good points that we should keep revisiting.
A: Yeah, I'm happy to help test this, Stephen. If you want, me and Steven can work together. So, Stephen, if you want to just ping me tomorrow, I'm happy to, like, hack on that plug-in and see if it works for us on some of our internal clusters. Just let me know. Okay, I'm down for that, yeah.
B: Cool, great, thank you guys. So let's keep moving on. I just thought it'd be good to... we've been having a lot of discussions in the past couple of weeks about v2, and, you know, looking into stuff like that. But I noticed Abhishek posted a PR for the cluster-scoped network policy, and I think a lot of the stuff they're going to end up doing for that new object is going to be what v2 ends up following.
I: Yeah, guys, yeah. I see Yang and Satish also on the call, so maybe you guys can feel free to interject whenever you want to. Essentially, you know, we hold a separate cluster network policy focused meeting every Thursday, just to make sure that we are all on the same page and continue the work, and, you know, to make sure that the KEP is progressing.
I: ...We had folks from Calico, and Antonin from Antrea, to talk about, you know, getting some perspective from different CNIs, because these are the CNI providers who have network policy CRDs which extend the Kubernetes network policies. So we got a lot of good feedback from all of them, and we've incorporated that. So at the moment, I would say that the main comments to resolve or address would be, first of all: where do these CRDs live?
And
I
guess
there
is
leaning
towards
us
opening
up
a
new
repository
under
kubernetes
sig,
similar
to
the
gateway
api.
How
those
folks
have
are
evolving
those
gateway,
api
crds
in
a
separate
repository
instead
of
you,
know,
directly
merging
them
in
in
the
networking
group.
I: So that is what I guess we will also be heading towards, and I think it makes more sense to have a more generic repository instead of a cluster-network-policy-focused one. Because, you know, since we are talking about v2 network policy, which will be an object in itself and nothing related to the existing v1 policy, it makes more sense to have a generic repository. So maybe we can kickstart that discussion.
I: I just left a comment this morning.
I: So perhaps, you know, we can move in that direction. So that was one of the major comments. The other couple of them are, you know, I think, more towards how to handle the defaulting of policy rules for the cluster versus the strict rules. So, you know, some people are against the two-CRD approach.
I: Some people are for the two-CRD approach. So, you know, this is where we are getting more feedback, and we want to see what makes sense for the users: is too many kinds a lot of new things for them to learn, or are they okay... will users be okay with two separate CRDs, whose job is to define policies for different use cases? So those are the kinds of feedback that we are still trying to receive from the community, and at the moment we still have it as the two-CRD approach.
I: Users who are interested in cluster-scoped policies: I think it would be great if they can come and review the KEP.
I: I think, apart from that, there are a few other technical aspects that we want to hash out. So perhaps Yang and Satish have more to talk about, because, you know, I'm on paternity leave, so I'm not following it as much as these two guys are. They're definitely on top of everything.
G: Hey, Abhishek, so are you saying that, in this KEP for cluster network policy, there are several comments that apply to what you would call network policy v2 and not directly to cluster network policy? I feel like the boundary between cluster network policy and policy v2 gets blurred.
I: So network policy v1 is under the networking k8s API group, right? And since we are introducing new CRDs, our question to the community was whether we want to write these new CRDs, or new resources, as alpha resources under this API group, or should we create a separate repository where, you know, we evolve faster and mature faster, and then, once they're all well matured, we merge them into the networking API group.
B: Yeah, and I think that was kind of the main thing coming into today that us as a group would have a hand in, right? I mean, if you guys, as cluster-scoped network policy folks, decide that the CRD option is the way to go, hey, it would allow you to get your work out there faster. I mean, I think it would be easier to iterate in the first five months.
B: If, you know, you got the CRDs up there, people started using them and implementing them, and then you realized, oh, this isn't going to work, for x number of reasons. But I think it's also important because v2, or an extension of functionality for network policy, is going to follow the same track as whatever you guys end up doing; like, you're setting the precedent.
I: ...in a separate repository, so that we evolve a little faster than we otherwise would.
F: Yeah, I kind of agree with Abhishek. I mean, to have like a separate repository where we can actually have all the different proposals... I mean, once we have the KEP approved, we can actually have an intermediate repository that has all the to-be-matured APIs. I think that kind of makes sense, but I don't know if Yang and Govind have different opinions, probably, since we haven't discussed since last week.
B: Cool, and yeah, some precursor knowledge here: I think the main problem was, Antonio asked how you can add a v1alpha1 to the networking API group, because v1alpha1 has already been deprecated in the networking API group, and that's kind of what led to here. Ricardo, I saw you had a comment basically saying this shouldn't be a problem; I don't really know.
D: So, if you take a look only at the code, it should not be a problem. But you have things that point to the API, like, what was that... like client-go, I guess, or, somehow, when you do vendoring, you do vendoring based on the API version, not only on the API. So it would be like networking v1alpha1 again, and then the networking v1; it might generate some conflict. So, yeah.
D: No one actually did that test, right? No, not even, like, Dan, or our team, or Jordan Liggitt. So we are waiting for someone to make that test. Otherwise, I think we should probably bring this up again and say: okay, who is going to own this? Can you deal with Jordan Liggitt? Do you think we should actually just go to v2? Okay, let's go to v2, yeah.
B: I mean, I think there's merit to both approaches, and obviously, you know, the details of cluster network policy we can leave to commenting on the KEP, if you have opinions and stuff. But I think, when it comes to how it's going to be delivered, it's really important for us to decide how we are going to do this, and who's going to support it, right? So these CRDs, if we did make a repo kind of like the Gateway API, it's essentially, in my mind, provided but not fully supported.
B: CRDs, right? I mean, they're not part of the official API, but they're in the Kubernetes organization, you know, for people to use. And it might help with that problem Stephen talked about earlier, where we have a new feature coming in cluster-scoped network policy, or new features for network policy v2, to ease the implementation.
I: And I think we also need the different CNI providers who provide network policy to be on board with that, because, at the end of the day, they are the ones who are going to be realizing these APIs. So as long as they are on board... and I'm sure, I mean, I can speak for the Antrea community: we will be supporting whatever is decided upstream, and I believe Calico and Cilium are also on board with it. So, yeah.
I: I think the next steps would be to figure out how we move this conversation ahead about getting a repo and setting this up. There are going to be a few things that we need to do, including... I think I saw the Gateway APIs have a website of their own, with, you know, extensive documentation of those APIs, and then examples, along with the API types, and also the validation code. So those are the things that I think we need to hash out a bit.
I: How do we begin, especially? You know, the Gateway APIs are all core-related APIs that they're introducing, so it's like a single goal. But here we have, like, multiple concurrent proposals that are coming through: v2 will have its own lifespan, the cluster policies will have their own evolution. So we need to coordinate a little better, and, you know, probably have a thought-out approach for this.
B: Right, no, I agree, and that work, you know, the work of this repo, putting it together and stuff, is this whole team's; like, we all need to take that on, I guess. Then, I guess, we can talk to some Gateway API folks and ask how they went about doing it. I don't know the workflow off the top of my head, Jay, Ricardo, for, like, getting a new repo set up for something like this, or how we would even get started.
G: Another thing is that I've actually been part of a group where we did this, in a totally different group. So you have to essentially put together a proposal for your parent group, so in this case it's SIG Network. So put together a proposal, unless it has changed recently, in which you make the proposal that you want to have a new repo for essentially everything to do with network policy.
G: ...features, and one of those is cluster network policy, and there could be others, and this could be just multiple CRDs within the same repo. And then your parent group would bless it, and you would get space allocated for your repo, and then you could call it, you know, network policy v2, or network policy...
G: You know, you could come up with a different name or whatever. And your proposal would need to, you know, cover things like: is it backward compatible with network policy v1? Can it coexist? All of those things that your parent group would need to know, and then they will grant you the space in their allocation.
I: Totally agree; I think you make valid points. So maybe, you know, I guess, as part of the team, maybe Satish and Yang, you guys can probably sync up with Jay and others on the group here to figure out the next steps on how we get there.
D: As soon as we justify it, it's doable, because, as we are saying, we are trying to take the approach of CRDs, and for developing the CRDs we can do the same approach as KPNG, right? KPNG is running on kubernetes-sigs, and they have...
D: The bar is lower for the requirements, like approvers, or being a member, or something. So if you think it would help, just let me know. I am pretty free this week, so I can run around and make this happen: ask Tim Hockin and Casey Davenport for authorization to create that repo. It's really doable.
G: I think you might want to put together, like, again, three or four slides to present to SIG Network at their next meeting and say: here's the proposal, here are the KEPs that already exist, here are the comments. So it's your formal proposal to SIG Network. And let me know if this is overkill; I don't want to make it overkill. But this is a process...
G: ...that's been followed in other groups, where you summarize your proposal to your parent in their next working group meeting, with, you know, three or four slides that capture all your relevant KEPs and prototypes and so on. Or maybe that's already been done, because I'm new, so I apologize if I'm speaking about something that you guys have already explored.
D: Yeah, I guess folks already did that with cluster API... sorry, with the cluster-scoped network policy, so the sponsorship for that is going to be, I hope, easy. Usually when I say that something is going to be easy, it's hard; like, I can spend something like a month or two trying. But I think it's something that we can do, and also say: okay, we are going to put everything here, the CRDs, and also the slides and documentation.
B: I just want to see this move forward, so feel free to ping me on Slack about it, and we can move forward.
B: I think I'd like to write... or, I guess, well, now I'm volunteering myself. I'd like to see something written up, you know, that says: here's where we're at with network policy, here's why we need this new repo, this is the way SIG Network has already instructed us to move forward, and here's our actual proposal to do so. And the Gateway API has already done it, just to clarify.
D: Okay, we can use that, like, as cluster-scoped network policy, just so as to not generate too many questions of what we are actually trying to do, and then say: okay, we are extending the scope of this repo to network policy v2, because we are using the lessons learned in cluster-scoped network policy. For this thing, I guess it will make things easier; that's my opinion.
A: I would start with something... I may be in the minority here, but I would start with something that you owned yourself, and then transition it later. I don't really see much point, even though Ricardo is very generous, in running around and finding stuff and getting people to make repos. I'd just get something useful into a personal GitHub repo, or, you know, something like that, and then kind of be like, all right...
I: I think it's not about where we want to, or whether, you know, whether this is hindering our progress in terms of writing those APIs, in the structs, and placing them somewhere. I think it's more about defining in the KEP where this will eventually go. Because, at the moment, you know, we can't just say that it's going to live in my repo, something-something-somewhere. We just want to solidify, okay...
D: I'm gonna... just remind me tomorrow, because it's 6 p.m. here in Brazil; I still need to do some things, and I will forget by tomorrow, but remind me tomorrow. I can open the PR saying: okay, folks working on cluster-scoped network policy need a place to put the CRDs and the documents and, like, the sample controller you will need to create those things, but they need somewhere to put everything.
B: I know we're reaching the end of our time here, so we can go ahead and finish up. The last thing I had on the agenda: I'm still working on, you know, a v2 doc, just sort of preliminary. Why are we doing it? Getting some of our thoughts from our last couple of meetings down on paper. I should hopefully have something around that next week, but that's all we have there. Otherwise, I think that's it. Does anyone have anything else? Yeah.
E: I've got a quick request: does anybody have any proposed YAML for cluster network policies, or something like that? I just wanted to kind of start thinking about that stuff in Cyclonus, potentially.
K: It's on the KEP, which is... I think Andrew just posted it in the chat; I can repost it.
B: Okay, cool, and I think there's some... at the top of our agenda there are some slides, and there are some examples in there, I believe, too.
G: All right, awesome, thanks. I'll just add that the repo will contain more than just cluster network policy, so the name of the repo should be more like network policy v2 or whatever, and cluster network policy is one of the components of network policy v2. Because otherwise there'll be three things: there'll be policy v1, policy v2, and cluster network policy. So this should be one repo for policy v2, which includes cluster network policy as one CRD, and one or more CRDs for other features, and all of that will be in a single repo with a common name.
H: I was wondering if it's just going to be a long-lived repo that any new project would kind of start out in, develop a CRD, and then...
K: That's a good question. That's also what I was wondering; sorry, I was away for a little while. But I guess my question would be: is the intent of the repo that it always hosts the new things that we're proposing, and once we graduate things to v1... for example, can a new kind come back to this repo and stay at v1alpha while we want beta there?
G
I
think
typically,
they
haven't
done
that
they,
the
a
new
kubernetes
sig
repo,
is
for
a
relatively
well-defined
project
like
the
gateway
api
is
basically
usb2
gateway.
Api
will
not
hold
anything
other
than
ingress
v2.
Logically,
so
you
would
want
to
justify
another
repo
for
a
something
that
has
nothing
to
do
with
network
policy.
This
is
generally
in
the
umbrella
of
network
policy,
whether
it's
cluster
network
policy
or
network
policy
we
do.
B: Well, Ricardo's flexible; we can do all that in comments in that. Yeah, sounds good to me. Cool, great! Well, thank you so much, everyone, thanks for coming. Abhishek, I know you're on parental leave; I'm sorry about that, I kind of spaced when I messaged you in the chat, but I really appreciate it, actually.