From YouTube: SIG Network - Network Policy API Meeting 20221107
A: Awesome, hello everyone. Today is November 7th, 2022. This is a meeting of the Network Policy API subgroup of SIG Network. It is a CNCF-certified meeting, so let's be nice and have a good one today. It's the meeting following KubeCon, so I see some familiar faces here who I met at KubeCon and may not have been here before. I think it'd be great to just get a quick intro and say hi while you're here. So yeah, I think I see three: Mike, Ryan, and...
C: Hey, I'm Mike. I work at HashiCorp on the Consul team. I'm primarily involved in Gateway API stuff, but I caught a talk on AdminNetworkPolicy stuff while at KubeCon and I'm definitely interested in it as we start thinking about policy things for GAMMA, which is Gateway API for east-west service mesh stuff. So I'm mostly just lurking; I'll try to learn a bit more.
D: I caught the SIG Network intro meeting or something there anyway, and then just jumped in on a few GitHub issues and got a few PRs in over the last few days, so that was fun. I think today we're going to talk about conformance testing, so that'll be fun, and I think that was my first ever PR to anything Kubernetes. So that was cool.
B: Sure, I'm Dave Lunroe. I work for Illumio, and we've been trying to figure out how to attach our control plane to the Kubernetes network data plane without having to do it 27 different ways in 27 different places. I'm hoping this is going to yield the answer, and that I'll be able to contribute in some way.
A: Cool, yeah. I mean, this group is mostly related to the policy APIs, but I definitely think that's an important part of it. So hopefully we...
A: ...you know, Gateway API, awesome. Yeah, awesome. Well, I'm super excited to see some new folks out of KubeCon, and we already have some new contributions from Brian, so huge shout-out; we need more of that. We need help. So that's great. Hopping straight into it: at KubeCon we did a contributors talk on AdminNetworkPolicy, which was really well received. I know Mike was there, and some others. As soon as I get the link to it, I will post it here.
A: We also spoke about AdminNetworkPolicy in the SIG Network meet and greet, and again Brian was there, and I believe that went really well too, so I'll post those slides as well for folks who weren't there. As soon as I have access to the recordings, I'll share those too for everyone to check out. Another interesting note from KubeCon itself: everyone I talked to was really interested in AdminNetworkPolicy and in policy in general. It wasn't just restricted to that.
A: Rahul, I had many a conversation with folks interested in some sort of FQDN policy, and I pointed them your way. So it was pretty exciting to see that the need is there; it's just about what we can actually accomplish with our upstream time in this group. It was definitely motivating, and I hope it's motivating to everyone here who's been working on all this stuff for a long time. I know Yang and Earl have been with us for a while working on it. A small but dedicated group, yeah.
A: It was a good time getting to meet a lot of people. It was really a great experience, and it just gets you motivated to keep working on it, right? I mean, it was pretty cool: for contributors we have the contributor conference, and it's like 100 people, and then the conference itself was like ten thousand.
A: So you can really see your reach. It's pretty cool to be working on core Kubernetes, where you have all these consumers of it in every different industry, so that should be a motivating factor, I think. Pretty exciting, for me at least. Surya, who is not here, also helped us present the AdminNetworkPolicy talk; she's another Red Hatter, and she is in the EU.
A: So this is kind of a hard meeting for her to make, and that was actually another agenda topic I was going to put on: the possibility of moving this meeting to a more EU-friendly time. But I can do that a little more easily in Slack; I can throw a vote link in there and we can just vote on it. So, does anyone have any questions from KubeCon, questions about AdminNetworkPolicy, questions in general? Happy to take anything.
A: And it goes into some of the day-two stuff we haven't really thought about, such as versioning and stable versus experimental channels, and other things we're going to have to start thinking about. But that all brings us to adding some conformance testing for the API. That's really important, right, as we veer off into creating the implementations. I think Yang is kind of in the process of it, I know Surya is in the process for OVN-Kubernetes, and I'm hoping Google is for Cilium.
A: But anyway, the point being, we have these implementations kind of rolling, and we're going to need a set of tests to basically ensure that those implementations conform to the API. Now, that's pretty tricky, right? It's hard to start thinking about conformance tests before any implementations are done, but I think it's something we need to do, and Brian kind of raised his hand to take this on, and I know...
A: Surya also was looking at it, because she's writing an implementation. So I just wanted to voice it here and ask if anyone has any suggestions or ideas. I think for the first take we're going to roughly try to mimic what Gateway API has done; they've written a bunch of good conformance tests.
C: Yeah, so yes, some of it is just trying to get consensus from at least two to three different implementations on whether the proposed functionality seems reasonable. In our case we have Envoy and a few non-Envoy proxies, so if you have any outliers with a very different implementation, make sure that they have a seat in those conversations. Then, once you reach consensus and write the tests, basically do a second pass...
C: ...once you start getting implementations, updating or adding to the conformance tests as needed, maybe. Okay.
C: At the time there was no Envoy community implementation; there was just a handful of vendors implementing. I have only been involved with the project for the past year, so I wasn't part of that initial phase, but we're looking at something similar in GAMMA now, and we're planning to approach it a similar way: agree across Istio, Consul, Linkerd, and Microsoft OSM on what folks want things to look like.
A: I think that makes a lot of sense, and we're thankful that most of the implementers usually come to this meeting. I don't know; Yang, do you have any thoughts there? I know you've kind of been working on an implementation for Antrea. Do you think this is something we should get rolling with right away, as soon as possible, or are you not even ready to think about conformance testing?
E: We are, actually. What we're doing right now: I've been doing a lot of things in terms of other parts of policy in Antrea, and now I'm devoting some time myself to develop features that are comparable with the AdminNetworkPolicy in Antrea.
E: In Antrea we have our own policy types, and what we're trying to do right now is, first of all, get feature parity into the Antrea policy, I mean our own policy, because we own the Antrea policy CRDs, right? So there are functionalities that are in AdminNetworkPolicy but are not yet in Antrea policy, for example the sameLabels and notSameLabels stuff.
E: So what I'm trying to do is add those functionalities to the Antrea policies themselves first, just so that it's easier to do the implementation and such, and after we actually port AdminNetworkPolicy in, we can reuse the same code, because the logic should be exactly the same. It's just another CRD wrapper that's new and user-facing; the controller logic should be the same.
E: Yes, definitely. When we roll it out, I think we definitely need to have some sort of conformance testing, and it's better, in my opinion, to start earlier rather than later, because otherwise you will have a bunch of implementations already written, and then you're asking each CNI to basically test it manually with all the possible scenarios they can think of, which is going to be really tedious. Right, yeah.
E: Exactly, exactly. Okay, so I'm hoping that any conformance testing scenarios and things like that we do in Antrea could be a reference point for any upstream conformance testing, because we already have a lot of them in Antrea, and a lot of the regular test cases are matrix-based. I think it's really close to what Cyclonus looks like: you spin up a bunch of namespaces and a bunch of pods, then you have a reachability matrix, and then you apply a bunch of policies and define that, hey, from pod 1 to pod 15...
E: ...this should be dropped, and from pod 14 to pod 12 this should be dropped, and everything else in this matrix should be connected. It's like a truth table, exactly, right. So after you apply these policies and run all the connectivity checks, you just verify that the reachability matrix is correct, and I think this is how the conformance test is built right now in Antrea.
E: That's for the other features of Antrea policy, but I think the same concept should apply for AdminNetworkPolicies, because if you think about it, when you want to test something like sameLabels and notSameLabels, you definitely need to spin up quite a few pods to make sure the traffic makes sense. You need at least four namespaces for the results to actually show that, hey, the policy is behaving as expected.
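The matrix approach described above can be sketched in a few lines. This is a simplified, hypothetical model (the pod names, the `ReachabilityMatrix` class, and the drop rules are illustrative, not Antrea's or Cyclonus's actual code): start from a truth table where everything is reachable, flip the pairs the policies should drop, then compare against observed probe results.

```python
from itertools import product

class ReachabilityMatrix:
    """Truth table of expected connectivity between every pair of pods."""
    def __init__(self, pods):
        self.pods = pods
        # Default assumption: every pod can reach every pod.
        self.expected = {(src, dst): True for src, dst in product(pods, pods)}

    def expect_drop(self, src, dst):
        """Mark one (source, destination) pair as blocked by policy."""
        self.expected[(src, dst)] = False

    def verify(self, probe):
        """Run probe(src, dst) -> bool for every pair; return the mismatches."""
        return [(src, dst) for src, dst in product(self.pods, self.pods)
                if probe(src, dst) != self.expected[(src, dst)]]

# Example mirroring the discussion: pod1 -> pod15 and pod14 -> pod12 dropped.
pods = [f"pod{i}" for i in range(1, 16)]
matrix = ReachabilityMatrix(pods)
matrix.expect_drop("pod1", "pod15")
matrix.expect_drop("pod14", "pod12")

# A fake "cluster" standing in for real connectivity probes between pods.
def fake_probe(src, dst):
    return (src, dst) not in {("pod1", "pod15"), ("pod14", "pod12")}

mismatches = matrix.verify(fake_probe)
print(f"{len(mismatches)} mismatches")  # zero mismatches means the implementation conforms
```

In a real conformance run, `fake_probe` would be replaced by an actual connectivity check (for example, an exec'd request from one pod to another), while the expected matrix stays the same across implementations.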
A: Yeah, I think that's a really good place to start, right. Antrea has a ClusterNetworkPolicy object that is probably the most similar to the new API we've created out of the cluster-scoped policy objects. I've seen Cilium also has one, but it's more of just the existing NetworkPolicy at a cluster scope, so it's not really a different API, and Calico is again more similar to the existing NetworkPolicy sort of engine. So yeah, Antrea is probably a good place to look.
A: We just have to make sure, obviously, that OVN-Kubernetes can implement all of the things that Antrea is implementing, and at the core they're both using OVS, so they should be able to, I believe. It's just a matter of assembling a list of tests, or a truth table of tests, that we want enforced by every implementation. That's the goal, yeah.
A: Okay, so we just redo the existing upstream tests. So yeah, I think that's a good place to get started. Maybe, if you want, Brian, you can check out what Antrea does and check out some of their tests, and I think what might be good to do is start either a doc or a GitHub issue and just start thinking through the cases we need to test. Maybe not manually; we could even talk at the truth-table level, but kind of thinking about the design now.
A: The other thing of note that plays into this issue, which Yang mentioned earlier and a lot of folks don't know about because we haven't done a good job of documenting it yet: we have this tool called Cyclonus in our repo today, and Cyclonus is already kind of just that; it's a conformance engine for NetworkPolicy. So our original plan was to reuse the tooling Cyclonus already provides and extend it to AdminNetworkPolicy.
A: On top of that, we need to add more documentation. Right now you come to our repo and we don't talk about Cyclonus at all on the main page; we don't have any reference to it. We have the existing instructions from the repo that we imported, but we need to be clear with users of our repo about where this fits in, and there's kind of a lot to that. So those are two good things to start on.
A: I think, Brian, it's maybe digging into Antrea and digging into Cyclonus: maybe get Cyclonus up and running, maybe get Cyclonus up and running in our CI, because they did have some CI that we aren't running right now, since we just consumed their work. So you might be able to use that; they actually had it running for, it looks like, Antrea, Calico, and Cilium. So that's interesting.
A: Just of note, though, we might not necessarily want all our CI to be running in parallel, like we did for formatting and verification. Some of this we might want to just do in GitHub Actions, because it's easier for us to iterate on, and these are bigger systems, right. But yeah, it's a good place to start, and for anyone on this call who's looking for a good first tackle, I have an issue for this.
A: Yeah, I mean, I would still love to give it a try: creating an eBPF-based AdminNetworkPolicy implementation. Dan and I were going to kind of tackle that, where he was going to do the NetworkPolicy part and I was going to do the AdminNetworkPolicy part. I think our priorities have shifted a little bit for this quarter. They do kind of go together, because if we're going to do AdminNetworkPolicy in eBPF...
A: ...we probably need to do NetworkPolicy as well, because of Pass rules. So yeah, if anyone has free time and wants to learn more about eBPF, I'm totally down to help out with it; I'm just struggling to find the time this quarter. What seems more realistic to get done fast, at least for a first draft, is Antrea's work and the OVN-Kubernetes work. Cilium, like I said: Google is supposed to be doing it, but I haven't had an update for a while.
F: On our side, we're not making any progress this quarter, for sure. There's just been a whole bunch of other stuff that's come up. It's on the radar for early next year, but I'll have more updates as we get closer.
A: Awesome, no worries; thanks for just saying that. I kind of got that sentiment from the folks I talked to at KubeCon. It seems like y'all are really strapped with downstream stuff right now, so I get it. Yeah, cool. Okay, so I think that's a good place to start on the conformance side, and Mike's input on what the Gateway API did and does is really helpful.
A: The other thing Gateway API does that I found interesting, and I think they talk more about it in the presentation I attached: they have a way of specifying which parts of the APIs are implemented by everyone, versus maybe newer parts where we don't know if everyone can implement them. They call it channels; they have, I think, stable, experimental, and something else. That could also be a cool mechanism we could bring in, like for v1alpha2.
A: If we want to introduce some feature that may or may not be implementable everywhere, we'll probably need to introduce a thing like that, and it would play into the conformance testing as well. So it's just something to think about. If you get bored and want to read more, go check out the Gateway API; they're doing exactly this stuff.
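The channel idea mentioned here can be approximated with a simple feature-gating scheme. This is a rough sketch, not the actual Gateway API conformance framework; the test names, feature names, and the `select_tests` helper are all made up for illustration. Each test declares the features it exercises, and an implementation only runs the tests whose features it claims to support.

```python
# Each conformance test declares which (possibly experimental) features it needs.
TESTS = {
    "anp-deny-all": {"features": {"core"}},
    "anp-pass-action": {"features": {"core", "pass-action"}},
    "anp-same-labels": {"features": {"core", "same-labels"}},
}

def select_tests(supported_features):
    """Return the tests an implementation should run, given what it supports."""
    return sorted(name for name, spec in TESTS.items()
                  if spec["features"] <= supported_features)  # subset check

# A hypothetical implementation that supports core and Pass, but not sameLabels:
to_run = select_tests({"core", "pass-action"})
print(to_run)  # ['anp-deny-all', 'anp-pass-action']
```

The useful property is that a conformance report can then state exactly which feature sets were exercised, rather than a single pass/fail bit for the whole API.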
D: They have the channel stuff, which is cool, and some of their CI as well: they actually pass through each channel in some of their pre-commit tests with their PRs.
A: Yeah, and we're probably going to need to do something similar with that, based on what we add. I mean, originally, even before we were worried about implementability, we specified that AdminNetworkPolicy only touched east-west traffic, so only cluster traffic. We were kind of worried that there might be some implementations that couldn't do that, and maybe that's still a valid worry; I haven't heard any voices.
A: I mean, it's probably impossible to verify it exhaustively, but the reasons for the decisions we made are all in the KEP, so we can go back and look. Anyway, it's just a point about what happens if we create something that's not implementable at some level. Thankfully, we haven't run into anything yet, but we're still young. So, cool.
A: That's all I had for today. I'm pretty sure there are some open issues on our repo, and we can check those out really quick.
A: Yang, I know, is working on updating the KEP. I need to go review that. Yang, it's been weeks, I'm sorry; it's on my list.
A: Yeah, no worries, whenever. And for folks who didn't understand what I was saying: we wrote the AdminNetworkPolicy KEP that got merged, and then the actual API that merged here is a bit different. So we need to go back and update the KEP, and Yang is kind of taking charge. That's another good place for folks to get involved once he pushes it up, because you can learn what changed between KEP review and the actual implementation.
A: There are some good ideas here. I know Dan opened a pretty reasonable one regarding sameLabels and notSameLabels namespaces; Surya was taking a look at it. I do think she has that fairly under wraps, and it made a lot of sense, so if you're interested, go read it. We do have some random ideas. This one is about trying to keep an up-to-date table of implementations and what they support, for both NetworkPolicies and any policy API in Kubernetes.
A: That could be cool. Really, I'm willing to merge anything to our repo that helps Kubernetes users understand any policy API, like NetworkPolicy or AdminNetworkPolicy. If we can have tools here that help users, let's put them in here; this is the place for them to live. So there are some good ideas there.
A: It depends on the backend in some cases, weirdly enough, but for the most part I think what you could do here is make some sort of command-line tool that just prints out which pods are affected and what those pods can and cannot talk to. I think Cyclonus does some of that, so maybe an investigation into Cyclonus will turn over some stones.
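The command-line idea could start as simply as matching a policy's pod selector against pod labels. This is a toy sketch with made-up pods and a made-up policy structure, not Cyclonus's actual engine; a real tool would pull pods and policies from the API server instead of hard-coding them.

```python
def selector_matches(selector, labels):
    """True if every matchLabels key/value appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.get("matchLabels", {}).items())

# Toy inventory standing in for a `kubectl get pods --show-labels` listing.
pods = [
    {"name": "web-1", "labels": {"app": "web"}},
    {"name": "db-1", "labels": {"app": "db"}},
]
# Toy policy standing in for a NetworkPolicy's spec.podSelector.
policy = {
    "name": "deny-db-ingress",
    "podSelector": {"matchLabels": {"app": "db"}},
}

affected = [p["name"] for p in pods
            if selector_matches(policy["podSelector"], p["labels"])]
print(f"policy {policy['name']} selects: {affected}")  # ['db-1']
```

From there, the tool would walk each policy's ingress/egress rules the same way to print what the selected pods can and cannot talk to.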
A: Yeah, I know Cyclonus does some stuff, and I haven't poked at it in so long that I've kind of forgotten, but that's why we brought it here: so we can actually keep using it and pick up where they left off. I know Matt, who started it, is still kind of helping out, but he's on to other things. So yeah, if you're interested, just give it a poke and say you're checking it out.
A: No, no worries, you're all good; we're thinking about that for performance testing, so...
A: The link to the contributor guidelines: I had opened that one, and it's already fixed; I just need to close it. "Motivate network plugins to use port names," DNS...
A: This is probably more of an addition to the upstream SIG Network tests, but it's definitely something we can talk and think about here. I'd have to go look at what the tests verify now, because I thought we had a named-port test in upstream CI, but I could be wrong. Yeah, okay.
G: Sorry, so I was going to say, this is somebody who I talked to at KubeCon.
G: Actually, I talked to her at KubeCon in Valencia and then didn't really get back to it, and this probably didn't belong in this repo, but she filed it here, and I was like, well, maybe we'll talk about it at the meeting. So the idea is that lots of people want to be able to write network policies that allow DNS, and it's actually kind of tricky to do that right now, because, depending on the Kubernetes distro, the policy has to specify a different port. In OpenShift, for example, the DNS pods actually listen on port 5353, so they don't need whatever the capability is that lets you bind low ports. If everybody consistently implemented the named-port feature, then we could make sure that CoreDNS used a named port, and then you could write a policy using that, and everybody would be able to use the same policy everywhere. In fact, we could make the tests use DNS again, like they used to before.
G: It's just, you know, DNS is the motivating feature for it, but it's really just "ensure everybody implements named ports."

A: Got it, got it, yeah.

G: And I don't have a good sense of how many people already do and don't. I know it's better than it used to be, but...
E: I know for Antrea, definitely, we do named ports. I think it's just that Kubernetes NetworkPolicy doesn't, in my opinion, have really extensive or good documentation on named ports, because people need to understand that there's a little bit of a difference between writing them in ingress rules and writing them in egress rules.
E: In egress rules the named ports are resolved on the egress peers, whereas when you write an ingress rule that has named ports, the names are actually resolved on the policy's selected workloads rather than the actual ingress peers, because it's always the destination port that gets resolved. This kind of behavior wasn't highlighted anywhere in any sort of documentation. When we tried to implement these features, I checked quite extensively, but I didn't find it anywhere.
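The resolution direction described here can be captured in a tiny model. The pod and port structures below are simplified stand-ins, not real Kubernetes API types: the key point is that for both ingress and egress rules, a named port is looked up on the destination side of the connection, which is the policy's selected workloads for ingress and the egress peers for egress.

```python
def resolve_named_port(port_name, pod):
    """Look up a named containerPort on a pod; the name only means something there."""
    for p in pod["container_ports"]:
        if p["name"] == port_name:
            return p["port"]
    return None  # no container port with that name on this pod

# The workload the policy selects (ingress destination)...
target_pod = {"name": "web", "container_ports": [{"name": "http", "port": 8080}]}
# ...and an egress peer (egress destination), e.g. a DNS pod on 5353.
peer_pod = {"name": "dns", "container_ports": [{"name": "dns", "port": 5353}]}

# Ingress rule with named port "http": resolved on the selected (target) pod.
print(resolve_named_port("http", target_pod))  # 8080
# Egress rule with named port "dns": resolved on the egress peer.
print(resolve_named_port("dns", peer_pod))  # 5353
```

A policy author only ever writes the name, so the same policy works across distros even when the underlying numeric port differs, which is exactly what the DNS use case above needs.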
E: The documentation just says you can use a named port for resolving ports, and that's it, basically. So I remember, not Antrea users, but some downstream VMware-distribution users who actually used named ports in the wrong fashion; they expected the policies to be working, but they were actually not using them correctly.
A: Going once, going twice, sold. Cool. Well, we'll give everyone about 20 minutes back. Thanks to all the new faces that we got to see today, and if there are any questions or comments, please reach out on the Slack channel. Keep doing what you're doing, hope you have a good week. Thanks, everyone; take care.