From YouTube: Network Policy API Meeting for 20230801
A
Awesome. Hey everyone, today is Tuesday, August 1st, 2023. This is a meeting of the SIG Network Policy API subgroup of SIG Network. This is a CNCF-sponsored meeting, so please follow their protocols: be nice to each other and keep the language G-rated. Thanks so much for coming today. We don't have a super packed agenda, so feel free to add items as we're talking.
A
There are quite a few new faces here. I'm not going to go through everyone individually, but I'll definitely open the floor for folks to give a quick intro if they like, if they've never been here before: just say why you're here, or maybe what you're interested in working on around network policy. So I'll open the floor; I'm not calling on anyone in particular, so feel free.
A
Going once... oh, you want to go? Yeah.
B
I tried to join this team before, but the timing was challenging. I'm a product manager for StackRox, or what Red Hat calls Advanced Cluster Security (ACS), and we're doing some interesting stuff with network policies.
B
One of the topics of most interest was admin network policy, or the lack of it, and once we found out that this group is working on AdminNetworkPolicy, it made a ton of sense. We believe it's a super critical requirement for people whose job it is to write network policies.
A
Okay, let's go ahead and get started today. As I said before, feel free to add to the agenda while I'm talking. I posted a link in our Slack, or in our message queue, and I'll go ahead and share my screen.
A
...so you don't have to see my embarrassing number of tabs. Okay, first thing on the agenda: it's something we've been talking about a little bit. Well, actually, no, sorry, this is something new, I think. What is this? Oh yeah, sorry, do you want to spearhead this? This survey.
A
Can you hear me? Yeah, no problem. I was just going over your first item on the agenda regarding status fields. Do you want to go into... okay, this PR has been open for a while. It's a PR to print status as part of the shorthand `kubectl get anp` output, right?
A
So are there two separate problems? One being that it would be really nice to have standard condition types, messages, etc. built into our API, and another being that kubebuilder doesn't provide an easy way to view just the most recent status, or most recent condition, in a list of conditions. Are those two separate problems, or are you kind of lumping them all together here?
D
Yeah, I think they are two separate but related problems. Say I have two different condition types that I define for my implementation, and I want to be able to put one of those condition types in my shorthand output: I have to come upstream and implement that, right? There is no type that we define upstream that is uniform across all implementations.
A
I think it's totally possible. At the end of the day, when we first implemented status for ANP, the understanding was that we would eventually converge on a set of statuses, or at least a set of shared status condition properties. I still don't really see how we're going to solve the shorthand notation like this.
D
You could be trying to find the type, right? Say we have a type called AdminPolicyReady. In that case I can go to kubebuilder and, in the additional printer columns that you see there, say in the JSONPath that I want to print that specific condition type in my shorthand.
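A minimal sketch of the kind of printer column being described, assuming a hypothetical AdminPolicyReady condition type (no such type is standardized upstream today):

```yaml
# Hypothetical sketch: a CRD printer column that tries to surface one
# specific condition type. "AdminPolicyReady" is an assumed name.
additionalPrinterColumns:
  - name: Ready
    type: string
    jsonPath: .status.conditions[?(@.type=="AdminPolicyReady")].status
```

Note this is where the limitation under discussion bites: server-side printer columns evaluate only a restricted JSONPath subset, so filter expressions like the one above generally do not work there, while client-side `kubectl get -o jsonpath=...` does support them.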
A
Nope, not at least not in shorthand, and that was, I think, one of my first comments on here: you can essentially do the same thing with JSONPath for now, right? Or most of the same logic. It would be more complex if you were matching on a specific condition type, but you should be able to do the same thing. So I think Dan has a good point, right?
E
But I mean, you know, if we knew what statuses we wanted, we would already have added them, right? I assumed the idea was that we would figure it out once we had implementations or something. I don't know, I'm not quite sure what we would have, because we kind of agreed with NetworkPolicy status that there was no good way to make it work.
F
Yeah, I just wanted to add, also based on the NetworkPolicy status experience, that if we define that condition we probably all need to agree on what exactly it means for every implementation. Being ready may mean different things in different implementations. For example, some of them may only consider a policy ready once all the existing pods are updated, as peers or subjects, and then another one is maybe similar to NetworkPolicy.
F
One which is "I could parse the network policy", so to say, or "I've seen it". So there are multiple definitions, or actually statuses, that plugins may want to expose, or actually be able to expose. If they are very distributed, they may never be able to set this ready status in a centralized manner, so to say.
G
Yes, I think for Antrea policies we actually do, and to my knowledge it has something to do with data-path realization. So if we do want to have some sort of unified status, to me it should definitely mean something like: if a status is reported as completed or succeeded or whatever, the network policy has been taken into effect, and every traffic flow regulated by that network policy will hit that policy.
G
Basically, that's my opinion on this. I don't know if it's doable for all implementations; for NetworkPolicy, as was added, maybe it's a little bit challenging for them to have it centralized, but for Antrea we do. Honestly, we haven't actually implemented this yet, but I think it should be pretty straightforward for us as well.
A
What do you think, Yang? I look at it this way: it's kind of two steps, right? I think the one thing we could coalesce on upstream is a set of standard types. Literally not set statuses, but a set of standard types that an implementation can report, whereas message, reason, status, and all the other standard condition fields can be left up to the implementation. I'm pretty sure.
A
That's what Gateway API does. I'm not saying we're ready to do that yet, but I feel like that's what we would end up doing one day, and we'd make it so that we can add new types if needed.
A
It could be generic across our whole API, right? Or it could be specific to each object: AdminNetworkPolicyReady, AdminNetworkPolicyProcessing, etc. Or it could be ResourceCreating, ResourceReconciling, ResourceFinished, you know what I mean.
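A rough sketch of what such a standardized condition set could look like on an AdminNetworkPolicy status; the type names here are hypothetical (borrowed from Gateway API's Accepted/Programmed convention) and are not in the API today:

```yaml
# Hypothetical: standard condition *types* defined upstream, with
# reason/message left to each implementation.
status:
  conditions:
    - type: Accepted                    # assumed standard type
      status: "True"
      reason: RuleValidationPassed      # implementation-chosen reason
      message: "policy accepted by example-cni"
      lastTransitionTime: "2023-08-01T16:00:00Z"
    - type: Programmed                  # assumed standard type
      status: "False"
      reason: DataplanePending
      message: "3 of 5 nodes programmed"
      lastTransitionTime: "2023-08-01T16:00:05Z"
```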
G
I also think that, even though we obviously have a list of statuses, there might be some logic we can implement that resolves all of them to a single one. Look at pods, right? Pods have the same thing.
G
A pod has a bunch of status conditions, like ContainersReady, or pod scheduled on the node, or whatever, but at the end of the day, if you do `kubectl get pod`, there's only going to be one status, which is going to be Pending or Running or something. They might have already figured out something underneath that looks at the whole list of conditions and decides: if I were to display just one status in `kubectl get`, what would it be? So, you know, we...
G
This is something we can also go for, maybe, because usually people definitely don't want to see a list of status conditions. They don't care about the history; they just care about the latest status. But I don't know how that is implemented in particular; I'm just thinking out loud here.
F
Yeah, I just wanted to say that it may be a bit more difficult for network policies, and for admin network policies, to define what every status means. Even based on the definition Yang just gave, a network policy may never become ready, so to say. Let's say you create a new peer pod and it's not yet handled by all the network policies that select it as a peer: should the policy go unready or not, because at that moment it isn't actually applied to all the pods it should select?
A
Right, that was our whole rationale at the beginning, because we've talked in circles in this group about this. First it was NetworkPolicy status, which got implemented as a KEP, no one used it, and it got backed out; and now with AdminNetworkPolicy. So it was always: let's allow CNIs to do whatever they want, and then hopefully down the line we'll see what a couple of implementations are doing and maybe there's some overlap.
A
Maybe there's not, you know what I mean. Reporting a centralized status for something like AdminNetworkPolicy is hard, because you're having to report on a data-plane object that's in high flux, so I don't know what the best way to do it is.
F
Maybe, actually, in this condition we could leave the message unset; I don't know if that's possible for a predefined type. So let's say we have one type that will be called the single AdminNetworkPolicy status, and then every implementation defines what exactly it means. We can display that as a single current status, and you can set it or not set it, but what exactly it means will be reflected in the message, and different CNIs may define it differently.
A
What if we take this PR that Surya has already done, possibly shelve it for now, but open an NPEP out of it, and we can just continue the discussion there rather than doing it here? Does that sound good?
D
Yeah, I agree that this PR is definitely not in a state for merging right now, because it's useless: even if you do the JSONPath right now, I don't know what value it can give to users, because it varies from implementation to implementation. So let's do it the right way and not hurry it, like everyone said.
A
Yeah, that works great. And maybe, I was trying to think, we should add a note somewhere that says: we know folks want to report status, and for now jq is the best way to do it in one line. But I don't think we need to do that; I think an NPEP is all right.
A
Sweet, thanks Surya. Yeah, I know this is important, and we've never really figured out how to do this properly. So if we can document it, I think we can come to some sort of conclusion, and that'll be it, right? Hopefully we can figure out how to do statuses for any and all objects we create.
A
Cool, we'll move on. This is an NPEP we've been talking about for quite some time; it's been through a few review cycles beforehand from Dan and Yang. My goal is to get it merged this week, so this is kind of a final call for reviews. I don't think I have much to say about it here, because we've already talked about it quite a bit. Do you want to say anything, Surya, or did I cover it?
D
Yeah, I think this is good. I can see some reviews from Rahul and Andrew, which I will try to address ASAP. I did change it to meet the NPEP documentation requirements; I'm hoping the format looks better now. Also, one thing I wanted to bring up: we spoke a little bit about it in the last upstream meeting, the differentiation between traffic towards a node and traffic towards a host-network pod.
D
It's again one of those things we've spoken about a lot, in cycles, but I want to point it out here, and Nadia raised a great point: there are three distinct cases we can run into. One of them is that traffic towards a node can be selected using a node selector, and a node object in Kubernetes is the entire node, so it includes the host-network pods on the node. That's one case.
D
The second case is where we have the host-network pod selector, which will be an experimental flag that I plan to add, because maybe some CNIs cannot differentiate between node traffic and host-network traffic, which is fine; they don't have to implement that selector. But a host-network pod selector is, as the name implies, traffic towards a host-network pod alone.
D
That is, traffic matched by that selector and not the entire node. But there's a third case which I had never even thought about, and thanks Nadia for bringing it up, which is a node selector excluding host-network pods. If that's what anybody wants, that's not something I'm considering in the NPEP; I'm outlining it as a non-goal, and I want to make sure we're clear on that.
F
On that, I think we can... so we do have priorities now, right? So if you want something to apply to node traffic only, but not host-network pods, in case you have host-network pods implemented, you can just set a host-network pod selector with a higher priority and say what it needs to do. Then everything that falls through will get to the node selector, which will not match host-network pods. So maybe we can just note that under non-goals as one way to work around that case.
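A rough sketch of that workaround, assuming the experimental host-network pod selector lands roughly as discussed; the `hostNetworkPods` peer field and all names here are hypothetical, not part of the current API:

```yaml
# Higher-precedence policy handles host-network pods explicitly.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: host-network-pods          # hypothetical example
spec:
  priority: 10                     # lower number = higher precedence
  subject:
    namespaces: {}
  egress:
    - name: deny-host-network-pods
      action: Deny
      to:
        - hostNetworkPods:         # hypothetical experimental field
            podSelector: {}
---
# Lower-precedence policy then catches the remaining node traffic.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: node-traffic
spec:
  priority: 20
  subject:
    namespaces: {}
  egress:
    - name: allow-to-nodes
      action: Allow
      to:
        - nodes:
            matchLabels:
              kubernetes.io/os: linux
```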
A
You know: allow traffic to a node, but don't allow it to host-network pods. It might just not look like that in YAML, right? It might look like "only allow traffic to, I don't know, port 880 on the host", and then there might be other ports being used by host-network pods.
F
Yeah, I think that's correct, but I think it's also good to mention that in non-goals. When you say, okay, there are all kinds of traffic matched by node selectors, and we also have a separate use case for a subset, which is host-network pods, then you kind of ask: what about the part that's left, which is node-only traffic? And then we say, yeah, we are not going to consider it in this one.
A
Okay, sweet. So, in my opinion, AdminNetworkPolicy has been designed, specced, and implemented in reference to one cluster. As it pertains to multiple clusters, I think that's going to require a different NPEP, one that maybe even presents a new object. I don't know if we're ready to tackle the multi-cluster question just now in AdminNetworkPolicy, but it's really good to know that folks are looking for that.
H
I don't think CIDRs are going to be something we could use. It's possible to identify the cluster itself based on the CIDR, but I'm not sure that namespaces would necessarily be identifiable like that. So one possible idea was that namespaces are synced between the clusters, so namespace one means the same thing in two clusters. Now, exactly how this would work is a good question, and I think it's also a question of how you actually specify a different cluster: based on an IP address or CIDR?
H
As you said, that is one way, or there could be other identifiers. But yeah, it's possible that AdminNetworkPolicy is not the best place for this; I just wanted to raise it to keep this story in mind.
D
I think that's an interesting use case for multi-cluster network policies. My question was: if namespace B means the same thing in cluster one and cluster two, then that's just east-west traffic. It's an implementation detail how the cluster networking is set up and how the policy itself is implemented; as far as selection goes, it's a namespace selector at the end of the day, so I...
G
I think it's not as simple as that, because in Antrea we do have some multi-cluster network policy. I think the problem is that when you define a policy object, it's still going to be in a single cluster. Now, when your subject says "I want to select namespace X", it usually means "I want to select namespace X in this particular cluster". But people who apply this network policy might be thinking...
G
...am I applying this only to the namespace X of my cluster, or to all the namespace Xs across clusters? In that particular case, you would have to find a way to propagate this policy across all the clusters in the cluster set, right? So that's a different story. Now, in the peer, you might have a point: if I select namespace B as a peer, and all the clusters have namespace B...
G
...maybe those should mean the same thing. Then the policy might need to be multi-cluster aware, but in terms of subject, that's a totally different story. I think in the normal sense you would need to apply a policy in each of the clusters for the policy to work. Well, that's definitely under the assumption that we still use the AdminNetworkPolicy object, and, as Andrew has suggested, maybe for the multi-cluster use cases another CRD might make more sense.
A
So yeah, we did a long roundabout answer to your question. Just to circle it back: we haven't thought a lot about multi-cluster use cases in this group yet, but we're eager to explore them. We just need folks to come in and help spearhead it, because we're definitely limited on cycles. That being said, I think we'll move the discussion on, unless there's anything else. I know Boaz had his hand up.
A
Cool, does that sound good?
A
Cool, but I'm going to keep it in mind. The one other thing to think about with those sorts of APIs is that our downstream consumers are no longer only going to be CNIs; in a lot of cases there are multi-cluster network plugins, and so now you'd be looking at an API that's implemented across a couple of different entities.
A
So just keep that in mind: we're most likely going to end up with APIs like that, especially with multi-cluster, and possibly even Gateway API implementations, but it's definitely going to get a bit more complicated. A good point of reference for how some of these problems were solved is the MCS API, the Multi-Cluster Services API, which is part of another Kubernetes SIG; they address namespace sameness and things like that.
A
Thank you. Yeah, no problem. I'll try and find the links to the MCS stuff after this meeting; I don't have them handy right now, but I'll link them in our agenda. Sweet, okay, rolling all the way back around to you on the egress NPEP.
I
Yeah, I made this comment on it, I added it as a comment, but I wanted to bring it up in the discussion as well. I'm pitching my favorite use case, which is FQDN selectors for egress. I think this is as good a time as any to add them.
I
As a user story, I think we've established that they're pretty widely used; most CNIs have some flavor of this in their own network policy CRDs. So, broadly speaking, I don't think it's unimplementable. Obviously the details will have to be ironed out so that everyone's happy, but I'm just curious if people have any concerns or thoughts.
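For context, a hypothetical sketch of what such an FQDN egress rule might look like; the `domainNames` field name is an assumption borrowed from existing CNI-specific policies, not a settled upstream API:

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-egress-to-example        # hypothetical example
spec:
  priority: 30
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: payments
  egress:
    - name: allow-api-fqdn
      action: Allow
      to:
        - domainNames:                 # hypothetical FQDN selector field
            - "api.example.com"
            - "*.trusted.example.org"  # wildcard support varies by CNI
```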
F
Yeah, I do have some thoughts about that, because I was actually going to write a separate NPEP for it. It's a big topic for discussion, so that type of peer would make sense generally speaking, but it may overload the existing NPEP, I think. I actually have a plan to open an NPEP for that soon, and there is a history of discussion around FQDN-based rules for network policies; I will link it.
A
Most of those links are from Rahul trying to push this forward; he was the one, way back then. So I think it would be awesome, and I'm totally open to another NPEP. Maybe you and Rahul can work together on it, even be co-authors, however you want to do it. Does that sound all right? Because I'm a full plus-one for that.
I
Yeah, I don't mind that. I'm just curious what the boundaries of this one are. This says "support for egress traffic control", so I was thinking this makes sense as an addition to that NPEP, but if we make it more granular, I guess, a specific type of egress control, that's fine too.
F
Makes sense, yeah, to have a separate one, and I'll definitely reach out to you. I think I've read some of the documents you created before, and hopefully now we have some more questions, and examples of how this feature is actually implemented by different companies, so we can make the next step.
A
...should do this in upstream, so congrats on that. First of all, thank you. Second, I agree that it totally fits in with this use case, but we've made the NPEP process super lightweight, so if we can have multiple small NPEPs, I think it's going to work better than one big one. Does that make sense? That's the only reason I'm pushing you to open another one.
A
Makes sense. And you know, we've gotten hung up so much before on the implementability of features. I'm going to talk a little bit about that at the end of this meeting: we're trying to adopt Gateway API's method of stable versus experimental release channels, and that should help a lot with this.
A
Okay, anything else on this egress, or the first stab at the egress NPEP? I am going to merge it this week if there is no pushback. It does kind of lead me into what I put next on the agenda: we should probably have different NPEP merge requirements. I don't think it should just be one approver; the approvers right now in the repo are Dan, Yang, and me.
A
Okay, the one thing I probably will do for sure is make it so that two out of three approvers have to give an approve/LGTM before it merges; other than that, I couldn't really think of anything.
A
Right, I'll at least start with the first, which is two-of-three approvers, but then I'll look into how we could possibly add verification for a number of non-official plus-ones, kind of like quorum voting. There are some tools like Mergify, really cool ones that I use in other projects, where you can apply conditions like that to PRs, so I'll take a look at it. I just wanted to soft-sound that idea here, now that some of these NPEPs are kind of...
A
...point: when in doubt, see what Gateway API has done.
A
And at the end of the day, it'll make things easier, because I think all this stuff that we're using from Gateway API will eventually move into its own sort of repository that SIG Network subgroups can follow or use. Cool. Okay, in line with all of this, I opened up a draft for implementing stable and experimental API channels. It's not super useful right now, because the way Gateway API defines it, at least, any object...
A
...that's in alpha is automatically experimental, so everything in our API today is experimental. But I think it could be useful in the future. If, for instance, we add an FQDN selector to AdminNetworkPolicy and we're ready to move AdminNetworkPolicy to beta, but the FQDN selector specifically isn't ready to go to beta, then we can mark it as experimental and still move the entire rest of the API to beta, if that makes sense. So anyway, I'll be working on that more this week.
A
I'll need some reviews, but it should help us out in the long term. And then there's also a concept of extended support resources, I think, that I need to explore within Gateway API, but that's more relevant to the actual levels of support that the SIG provides, so I'm not exactly sure how it will work here yet.
A
...work, that's what I... yeah, I'm probably getting my terms mixed up. That's what I would think, but how that plays into the API channels we're shipping in each release, I'm not really sure. I kind of thought we could overload the channel to mean that, but Shane pointed something out to me earlier in this meeting about also looking at levels of support, so I'm going to go take a look at that and apply anything I find to this PR.
F
Yeah, I just wanted to quickly add that there may also be an optional "feature" field, right? So experimental is the level of its maturity, but I think we've discussed that some of the features or fields may actually stay optional forever, meaning that you don't need to implement them to say that you implement ANP.
A
You don't have to, but yeah. Basically we have the mechanism to do so already in our CI structure, but we haven't had a good level of documentation around it. Again, it's very similar to what Gateway API does, so I'll add that in this PR.
A
I think this is just from Nadia, another poke for folks: if you could, please check out her NPEP on tenancy. Is there anything else you want to say, Nadia?
F
Yeah, actually, because I created this as a rework of the existing API, I wasn't going to add new user stories, so I marked its status as, I don't remember what it's called, the next one, the second level, because we already have the user stories defined. So mostly what I was talking about in it is how I wanted to change the API itself.
A
Yeah, for me at least, when I reviewed it I didn't quite understand what you just said, which is that you were basically just trying to re-analyze the implementation of the existing user stories. I think you need to make it very clear: if you're adding or altering any of the existing user stories, it's going to have to start from the bottom, but if you're just trying to re-analyze how the existing ones are implemented, then what you did makes a lot of sense.
F
Okay, I think I'll leave it like that for now, maybe add some comments in the PR itself, and then we'll see. If we actually decide to change the existing user stories, we also need to actually go and find them, because for the baseline level for tenancy I'm not sure there is an explicit user story; it was possible to do, though, and that is important to decide on in this potential new API. So yeah, maybe I can try to do both in the same PR, I don't know.
A
Yeah, I think... story five on our website, at least, was the one around baseline rules, so I know it's kind of a unique intersection of tenancy and this one. I'm trying to think of the least confusing way to do it, to be honest.
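For readers newer to the tenancy discussion, a rough sketch of the kind of baseline rule being referenced: a BaselineAdminNetworkPolicy that denies cross-tenant ingress by default, which tenants can still override with their own NetworkPolicies. The `notSameLabels` construct shown is the original alpha tenancy selector that this NPEP was reworking, so treat the shape as illustrative only:

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default              # BANP is a cluster singleton named "default"
spec:
  subject:
    namespaces: {}           # applies to every namespace
  ingress:
    - name: deny-cross-tenant
      action: Deny           # baseline only: NetworkPolicies can override
      from:
        - namespaces:
            notSameLabels:   # original alpha tenancy construct, under rework
              - tenant
```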
A
No worries, we'll keep going through it. I think our approach to tenancy was a bit confusing. Yang already left; I really value Yang's opinion on this, and I know he feels strongly, so I'm going to keep prodding him to give more reviews as well. But if there's anyone else on the call who's interested in how we're addressing tenancy with AdminNetworkPolicy, please take a look at this NPEP; it's very interesting.
A
Okay, last one. Surya, I'm assuming this is what you had your hand raised about: Cyclonus, yeah. For folks who are a bit newer here: Cyclonus is something we initially were hoping to use as our conformance test engine, because it had done a lot of great work with truth-table-based testing. I think it was a bit too heavy-handed for our kind of simple first stab at conformance tests, and now it's just a bunch of code sitting in our repo. I'm inclined to get rid of it.
D
Yeah, I actually also have concerns, in the sense that we have started this new conformance test suite, which is exactly like what Gateway API does. But with all these new NPEPs and new features coming in, maybe the truth-table matrix is easier to digest than having individual tests. Having said that, I personally felt that the framework we added recently was easier than trying to find what passed and failed in the test...
D
...output for different combinations. But I could be biased, because I'm the one who wrote the new ones. At the very least, I don't know if we have issues on the repo for it like I do; I just wanted to make a cut-it-or-keep-it kind of call, in case anybody on the call is interested in spearheading this.
J
Yeah, I've actually contributed a bit to the original Cyclonus repo, and my team has found it pretty useful.
A
So, if you are willing to go in, maybe make an issue, or start with very simple things: make sure it works, write instructions for how to run it from our repo. I think that would go a long way to advocate for us keeping it around, if that makes sense.
A
Okay, we are almost at time, and I'm pretty much done for today. Does anyone have anything else they want to bring up?