From YouTube: Kubernetes SIG Network meeting 20210902
A
My name is Tim; I am leading the meeting today. For the record, we are all under the auspices of the Kubernetes best-behavior policies, so don't be jerks. This will be recorded and posted on YouTube with your name. That said, we have a pretty thin agenda today, and we start with triage. Okay, we're done with triage; there's nothing new to triage. I went through it today, and other people went through it this morning or this afternoon.

A
Where are we now? This one's called "SIG Network KEPs"; let's find that window.

A
There we are. Can everybody see that? It's pretty wide; I'll make it... there. Everybody's got that? Yes. All right, so in terms of calendar, the KEP freeze for 1.23 is next Thursday, end of day, the 9th of September, so we have one week to get PRs for KEPs merged.

A
If you have a KEP that hasn't gone through production readiness review, your deadline is today, and you need to make sure your PRR is updated and assigned, so that the PRR reviewers have this intervening week to poke at it; the final freeze will be next week. I know some people have already sent me PRs today and in the last couple of days. First, let's just run through what's on the board, and then we can talk about the spreadsheet.

A
I don't think we have anything in the "evaluated, not committed" column that we're making progress on. Actually, let's start at the other end: those are GA, merged. Let's talk about betas. IPv4/IPv6 dual stack: I believe Khaled is not here today and Bridget is out. The goal is to move this to GA in 1.23.
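For context, the dual-stack behavior being graduated is driven by the ipFamilyPolicy and ipFamilies fields on a Service; a minimal sketch, with illustrative names:

```yaml
# Service requesting both IP families, as the dual-stack KEP defines.
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-svc   # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      protocol: TCP
```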
A
That
cal
is
working
on
a
pr
now
he's
blocked
behind
my
service
refactoring
pr.
But
the
goal
is
to
move
that
in.
If
anybody
has
concerns
about
it
speak
now,
andrew
terminating
endpoints
is
this
going
to
go
for
23.
C
Yeah,
so
this
cap
is
just
the
endpoint
slice
condition
update
and
not
the
q
proxy
logic,
so
I
think
api
change
alone,
I'm
hoping
to
get
that
to
ga
in
123..
Okay,
are
you
going
to
update
the
is
that
one
of
the
caps
you
sent
me
today,
yeah
and
the
the
prr?
C
I
don't
think
anything
changes
like
from
beta.
We
answered
all
the
questions
when
we
went
data,
so
I
think
it's
just
like
the
milestone
update
for
this
one.
A
Okay,
we'll
move
things
in
the
the
dashboard
here
when
the
pr
to
change
the
gate
merges
instead
of
when
the
cap
merges
network
policy,
port
ranges,
ricardo.
A
D
E
Yeah,
so
I
I
I've
sent
to
you,
can
you
hear
me?
Yes,
okay,
cool,
so
I
I
said
to
you
mostly
it's
stuck
in
in
that
graduation
criteria.
I've
sent
in
in
slack
and
I
didn't
get
any
any
apr.
Revealer
remember.
F
A
Okay,
okay,
so
it's
in
my
slack.
A
E
A
E
We
need
we
need
at
least
psyllium
or
openshift
to
to
support
okay.
We
we've
got
tube
router,
chemical
and.
A
E
So,
syrian
folks
they
they,
they
told
me
that
they
will
need
to
change
the
data
path
of
celium.
They
need
to
to
make
some
changes
into
how
ebpf
works.
So
they
don't
think
it's
gonna
be
easy
and
openshift.
I
talked
with
with
then
windshield
in
best,
and
he
told
me
that
duncan
can
say,
but
I
guess
it's
a
policy
from
openshift
just
using
graduated
features.
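For context, this KEP is the NetworkPolicy endPort field, which lets one rule cover a whole port range; a minimal sketch, with illustrative names and ports:

```yaml
# NetworkPolicy using the endPort field to allow a contiguous port range.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 32000      # start of the range
          endPort: 32768   # end of the range, inclusive
```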
A
Okay, all right. Well, we'll maybe have to go back and look at it again from an implementation point of view. If it's really difficult for two of the major implementations, we should maybe talk about why, and what we can do about that. Okay, okay; are we being serious again? Anyway, okay, we won't push on it for now. Namespace-scoped parameters: I saw the PR for that already. Disable node ports for type LoadBalancer, that's the allocateLoadBalancerNodePorts field; Andrew, your name's on it, is that still you?

A
Okay, and loadBalancerClass: yeah, same thing for that one, I think you just need to flip the switch. Awesome; so our beta column will be almost empty, hopefully. That sounds great.
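For context, both features are fields on the Service spec; a minimal sketch of what they look like together, with illustrative names and values:

```yaml
# Service of type LoadBalancer opting out of node port allocation
# and selecting a non-default load balancer implementation.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc   # illustrative name
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # disable node ports for this LB
  loadBalancerClass: example.com/my-lb   # illustrative class name
  selector:
    app: my-app
  ports:
    - port: 443
      protocol: TCP
```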
A
All right. Who's going to move from alpha to beta? Andrew, graceful termination for ETP?

C
Yeah, so this is the one that I'm having the most trouble with, specifically around testing. If we want to promote this to beta, we're going to want some e2e tests, and Tony and I have talked about it a bit, but we need an e2e test that does a rolling upgrade behind a load balancer and validates that we're not dropping traffic against terminating pods. And I don't know how feasible that is.

F
I have a branch with something to work on this. But do you want to graduate this in this release?

C
I think it'd be good, because I know this is an issue that's been reported by a lot of people on multiple cloud providers, and it comes up a lot, so it would be good. I know that we've also had conversations about whether there needs to be a terminating-fallback mechanism for internal traffic as well, which I don't like for internal traffic.

C
So "tracking terminating endpoints" is just the EndpointSlice API update to track the terminating condition for a pod. That enables, I don't know... maybe ingress controllers might want to use it, or other consumers of the API might find it useful. It's separate from the... so the graceful-termination KEP is just kube-proxy consuming the terminating condition and then falling back appropriately.
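For context, a minimal sketch of the EndpointSlice conditions being described, with illustrative names and addresses:

```yaml
# EndpointSlice exposing the serving/terminating conditions that the
# graceful-termination work in kube-proxy consumes.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-svc-abc12   # illustrative name
  labels:
    kubernetes.io/service-name: my-svc
addressType: IPv4
ports:
  - port: 8080
    protocol: TCP
endpoints:
  - addresses: ["10.0.0.5"]
    conditions:
      ready: false        # pod is shutting down
      serving: true       # but it can still serve traffic
      terminating: true   # the condition this KEP tracks
```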
C
Yeah, this one is dependent on 1672, tracking terminating endpoints, but tracking terminating endpoints is already beta and is probably going to go GA ahead of the kube-proxy changes, so I'm not too worried about that.

A
All right: expanded DNS configuration.

F
This is funny. I don't think that... yeah; I mean, it's the thing with glibc, which until some version, I don't know which, only admitted six search domains, and since some version you can allow a lot of them. So the author did a lot of work, and the KEP is good, and all the code and all the tests; but once you put it to work, all the container runtimes have a limitation of only six, so it doesn't work with either CRI-O or containerd.
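For context, the feature raises the limits on a pod's dnsConfig; a minimal sketch of a pod exceeding the legacy six-search-domain limit, with illustrative domains:

```yaml
# Pod using dnsConfig with more search domains than the legacy limit of six,
# which the ExpandedDNSConfig feature gate is meant to allow.
apiVersion: v1
kind: Pod
metadata:
  name: expanded-dns-demo   # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
  dnsConfig:
    searches:               # illustrative domains, more than six entries
      - ns1.svc.cluster.local
      - ns2.svc.cluster.local
      - ns3.svc.cluster.local
      - ns4.svc.cluster.local
      - ns5.svc.cluster.local
      - ns6.svc.cluster.local
      - example.internal
```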
F
I asked him to add as graduation criteria that there are container runtimes that work with the new expanded DNS. So that's interesting: I don't know what the CRI-O and containerd policy is, you know, because we cannot allow clusters with containerd versions that don't allow expanded DNS to run with it.

A
I mean, either way it's not pleasant, because it'll fail way down at the node instead of at the API. Okay. Well, it sounds like we're not pushing this one forward this cycle, then.

F
The person that submitted it sent a comment saying that he wasn't able to work on it. I was checking out this KEP the other day, and the problem is that we requested support from some cloud providers and, as far as I know, nobody among the cloud providers has done that.

C
I know that cheftako, Walter, is working on a KEP for the cloud controller manager to more easily spin up validating admission webhooks, and the reason why it's relevant for this is that then we can create an interface for cloud providers to say what combinations of ports are valid for a Service of type LoadBalancer, which would then make removing the mixed-protocol validation safer and easier. Okay; so I don't know if we want to overlap those two things.
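For context, the validation in question rejects Services of type LoadBalancer that mix protocols; a minimal sketch of what lifting it permits, with illustrative names:

```yaml
# Service of type LoadBalancer exposing TCP and UDP on the same port,
# which the mixed-protocol validation historically rejected.
apiVersion: v1
kind: Service
metadata:
  name: dns-lb   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: dns-udp
      port: 53
      protocol: UDP
```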
A
Until we have some implementations of this, it doesn't make a whole lot of sense to move it forward. I know how to do the Google one, but I don't think anybody has done it, so we'll just leave it for now.

A
Do we have a table of that somewhere? Is that in the KEP? I haven't looked at this one in a while.

C
No, but I can take an action item to try.

A
Okay. Topology hints: Rob?

A
Okay, all right. And then, stuff that wants to move into alpha: it's quite a long list. I saw that there's a gRPC probe PR. I wonder why this one... oh yes, it still qualifies, right; I saw there's a PR for this that's just making its way through CI. Pod host IPs: does anybody know what the status on this one is? Pending on reviews?
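For context, the gRPC probe KEP adds a grpc field next to httpGet and tcpSocket in probes; a minimal sketch, with an illustrative image and port:

```yaml
# Pod using the gRPC liveness probe (alpha behind the GRPCContainerProbe
# feature gate in 1.23).
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo   # illustrative name
spec:
  containers:
    - name: server
      image: registry.k8s.io/example-grpc-server:1.0   # illustrative image
      ports:
        - containerPort: 9090
      livenessProbe:
        grpc:
          port: 9090   # the gRPC health-checking service is probed here
        initialDelaySeconds: 5
```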
A
Okay. Probably a lot of these are under review. I know the all-ports one is not going to make this cycle, if I remember right.

I
Oh yeah, I actually cleaned up that KEP quite a bit, but it doesn't have the production readiness section fully filled out, and there are also two open questions in a separate section. So all that won't be resolved by next week, but those could be resolved afterwards; the KEP itself could be merged.

A
True. Dual-stack API server support: Dan Winship?

G
Yeah, I should probably take the work-in-progress label off that, and people can start looking at it. So yeah, people should review that if they want.

A
Okay. If you want me to look at it, make sure you assign it to me, because I will cycle through my assigned PRs more often than the ones that I've just been mentioned on.

F
Well, I want to comment on something about this, because we had a bug this week about it, and I thought that the kubernetes.default name was mandatory for the Endpoints name, but it seems that it is not. So I think that's a good thing for this PR, because we are not tied to the name for the dual-stack API server; we can create two Endpoints, right?

A
I saw that test case come through. Okay, multiple IPAM: I talked to Rahul already this week. Cluster IP and node port allocations: I don't suppose that we're going to drive that; that is for Kubernetes 1.40.

A
Okay, all right. Well, that's, you know, right around the corner. Kube-proxy architectural stuff: that's not going for alpha unless there's something grave that I've missed. Load balancer behavior: this is the one that got merged and then rolled back.

F
The person commented that he is not able to follow up, but someone else will take over. One thing that he found out, and I think that he has a PR up for it, is that we were mixing the load balancer status, the status.loadBalancer.ingress field, between Ingress and Services.

J
Yeah, sure. There's actually an agenda item later to talk a little bit about some documents we've prepared. Basically it comes down to, you know, coming to consensus over the YAML architecture, and we need help from SIG Network for that; then we can move forward with this KEP. But it's not going to happen for this release.

A
Okay, okay. And Gateway API: that's async to everything else, right, Rob?

A
Okay, cool; that's all of our KEPs. So let me stop my share... where's that... stop sharing. I'm going to add all the ones that we talked about today to the spreadsheet, the tracking spreadsheet.

A
If you have a KEP that is either changing a lifecycle stage or is net new that we want to get into 1.23, I need you to let me know. I believe that this dashboard has all the KEPs on it somewhere, except the one that was reopened, the CNI bandwidth one; we did not talk about that one, and I'll have to find it. I believe that other than that, all of them are on this dashboard, which means there shouldn't be any surprises. But if there are, please let me know ASAP.
F
The problem is that the conntrack entries keep working, and kube-proxy doesn't clean them up, so they are requesting that kube-proxy clean these conntrack entries; but we cannot do it, because otherwise we break graceful termination. And the other new wave of issues that people are opening is about protocols, like FTP, that need this.

F
You know, they embed some information inside the TCP protocol, with the UDP port, or one port needs another port, so they want affinity between the service's ports. And I think that this is a pattern of people using services for more, let's say, stateful workloads, and I think that we don't have a solution right now, so people are coming up with their own solutions. And the risk is of them doing development based on this, instead of us solving the real problem, which is offering people a solution and guiding them: "when you want to do this, you should use this feature". And I was thinking Gateway API, or... because services...

F
Yeah, but the thing is... Lars did a good investigation on that, and the problem is you can only offer affinity if you receive the traffic on the same node, and that's not realistic. I mean, that's what I'm saying: we are offering half-fake solutions. And the problem that they see is that this can grow, because this is the problem that this person has now, and we are solving this particular problem at this particular moment.

F
But I start to see a trend: I start to see people coming from their legacy applications, which they want to keep, and they are applying the same principles. So what I intend to bring up here is: can we, you know, think out of the box and say, well, this is the overall problem that these people are trying to solve, and we offer a good solution, the same as StatefulSets and all those things, but in this case for the network?

A
In that case they already have affinity: even if we assert that affinity is based on the five-tuple, we still can't guarantee it in the face of ECMP.

F
They already have an affinity field, and the only solution is to funnel the traffic. So I thought that... I mean, I don't remember how... if you are able to funnel the traffic to an ingress or a gateway, a single point, I mean, you have affinity there. Because what I see is that people are trying to use the Service API as a load balancer API: they are programming their HAProxies with services. That's the thing, the tendency.
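For context, the affinity field being referenced is Service sessionAffinity, which keys on client IP only; a minimal sketch, with illustrative names, of why it cannot express the FTP-style "two ports, same pod" requirement:

```yaml
# Service using the existing sessionAffinity field. It pins a client IP to
# one backend per node's proxy, but cannot guarantee that the control and
# data ports below land on the same pod, which is the FTP use case discussed.
apiVersion: v1
kind: Service
metadata:
  name: ftp-svc   # illustrative name
spec:
  selector:
    app: ftp
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity timeout
  ports:
    - name: control
      port: 21
      protocol: TCP
    - name: data
      port: 20
      protocol: TCP
```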
F
Yeah, but look at all the iptables and IPVS services: we cannot, I mean, we cannot have everything. Yeah, and that's the thing: how do you move forward now? This is not a problem now, but in one year it's going to be a problem, because, as I'm saying, people are saying "Services is my HAProxy and I want all the features there", and we cannot offer that, or we'd have to duplicate kube-proxy and replace it.

A
So is the question, then, can we teach Gateway to be smarter? Or is the question... anybody have an idea?

F
I was going more toward modeling an API for this, right? You know, it's one layer above: you say you want to do this; are you going to do this with services? So the question right now is: do we want to solve this with services or not, and do we have Gateway API or Ingress to help us here? That's my question: is Service good enough or not?

A
And Gateway also layers on top of Service, although there was some idle speculation about what if it didn't, or what if it had an option not to; I don't know if that was ever pursued.

F
I mean, I think that people have tried that before, and I don't think anybody is going to solve it, so the only solution is to have a single one. And that's what I'm saying: if we are able to model it one layer up, and funnel the traffic, and have this, you know, model these new features, like "you want to have affinity per IP", all the load balancer features; because at the end of the day, what people are doing is like an HA proxy.

A
What we already have is an annotation that lets you say a proxy name, or a field; I forget if it's a field or an annotation, but we have a way to say "don't run this through kube-proxy": service-proxy-name. So maybe that gets to what Lars was suggesting, of saying this one runs through the special singleton proxy; not everything does, but this one does.
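For reference, this exists as the service.kubernetes.io/service-proxy-name label; a minimal sketch, with an illustrative proxy name:

```yaml
# Service labeled so that kube-proxy ignores it, leaving it to a custom
# proxy implementation that recognizes this name.
apiVersion: v1
kind: Service
metadata:
  name: special-svc   # illustrative name
  labels:
    service.kubernetes.io/service-proxy-name: my-singleton-proxy   # illustrative value
spec:
  selector:
    app: special
  ports:
    - port: 8080
      protocol: TCP
```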
A
Okay, well, I'm supportive of us trying to figure something out here. I'm tentatively supportive of the idea that, you know, if there was something other than Service... that time may come. Maybe it's this year that we look at Service and just say we need, not a "version two" in the Kubernetes API-versioning sense, but a newer abstraction that does better, and maybe Gateway is the right vehicle for that.

A
It's a little early to be putting all the eggs in the Gateway basket, but it's the basket we've got.

L
I think, like, the implementation... we have to see, because there is L4 support right now, although we have to get some implementations to implement it to see how that shapes out; but which implementations map to what is actually fairly broad right now.

F
I also wanted to gather the feelings of people, because, I mean, we all work in different scenarios with different environments, and people can say "well, I'm not going to use Gateway API in two years, so it's not a solution for me", and that's fair; or people can say "well, we are okay, and in time, to solve this in a different way".

A
Well, I mean, I'm cautiously enthusiastic about Gateway API as a way to fix some of the API mess around Service, but still building on top of Service: basically reduce the role that the Service API fills by moving more of it up the API stack, but not necessarily up the implementation stack. But affinity, at least in my mind, was one of those things that stays at the lower level. Maybe I'm not thinking big enough, though.

A
Sure, but first we have to model it. Somehow we have to give users a way to express what it is they want; right now they don't even have that.

A
Okay, so let's take it to the mailing list, I think, and see if we can ideate more and figure out what the next steps are in terms of exploring either an API mock-up or something that shows how it might look over time.
F
The thing, to finish: simply, you have a Deployment with two FTP pods, and the traffic of the FTP server needs to go to the same pod, because the FTP server is going to announce the FTP data port. So you need to solve this problem; that's the problem that people are asking us to solve, the other stuff aside.
A
Next and probably last item on the agenda is cluster network policy.

J
Sure; yeah, I didn't really want to go into it today. I spent some time working on this, but it's mostly from Abhishek and Sanjeev; Sanjeev is here, but Abhishek is on holiday today. We planned to present it, or at least give a quick run-through of the various, you know, YAML designs we're looking at, in our next SIG Network meeting, but we thought we should go ahead and share the stuff we've already made. That includes a slideshow and a document.

J
We've basically agreed on the use cases we want to proceed forward with, and there are two main ideologies for implementation in YAML. One is priority-based YAML, which a lot of CNIs already do. The other is non-explicit-priority-based YAML, so lumping... kind of what we do with NetworkPolicy... well, kind of lumping action and priority together.

J
So we need help deciding between those two broad categories, and these documents are supposed to, you know, present them both side by side and allow you all to decide what is easier to use, what you'd like to see.

A
I mean, I really don't... I don't want to put you on the spot. So if you're not ready to talk to it, then that's fine; we can take it async. I just thought we had 15 minutes left and maybe you'd want the slot, but if you don't, that's cool.

J
No, for sure. I feel bad about talking to it without Abhishek here as well, because he's had a huge hand in it.

O
We're all thinking we can do it; like, I can cover his sections as well, and any one of us can, if he can't talk.

J
Made co-host, that's fine, there you go. What do you think, Sanjeev: start with the slideshow, or go into the docs?

O
We can do the slideshow, and then we can occasionally come back to the docs. We'll just give the overview today, right? So, yeah.

J
Okay, yeah. So this is the PowerPoint I was talking about. On the first page we've linked the Google doc that goes with this PowerPoint; it goes a lot more into actual examples of the specs that we are considering. Just for completeness, let me start by saying there are three different YAML specs we're offering: two of them are related to non-explicit priority, and one is related to explicit priority. In the slideshow we only talk about the Empower-over-Deny-over-Allow one and the priority-based one.

J
We don't really go into the middle, compromise solution, so you'd have to check the doc out to see where you land. I think, as a team, the two extremes are A and C, and we kind of met in the middle at B and said we could all live with B. So if the team had to decide on a YAML design today, we'd go with B. But anyway, moving back to the PowerPoint: we really just lay out the user stories here and some global assumptions.

J
Some really important global assumptions to start with: obviously, CNP is no longer going to focus on north-south traffic, so we're not including ipBlock in this. We are assuming that there needs to be another object that's going to act as, you know, a moat around the cluster, whereas this is focusing more on the multi-tenancy use case and east-west controls.

J
We also wanted to take a second to explicitly say what we're defining as a tenant. A tenant here can be one or more namespaces that are surrounded by a hard deny, and CNPs are most often going to be used to enforce sort of a multi-tenancy boundary scenario.

J
We also want to, you know, emphasize what we use the word "delegate" for. This multi-tenancy scenario is a little different from a normal one, solely because we already have an existing NetworkPolicy object, so we have to have a special term to talk about how CNP interacts with the existing NetworkPolicy, and we use "delegate" for that. Delegate just means, essentially, that the CNP doesn't necessarily allow traffic, but it delegates action on that traffic to the NetworkPolicy.

J
So that's just something you should know. Anything else to add there, Sanjeev, Abhishek?

O
No, that's good. Some of the use cases we see, I mean, in addition to enforcing tenant boundaries, include allowing holes for system services and so on, and we'll see those details as we go. Oops, sorry.

J
The first two, I'd say, are pretty self-explanatory: we want to be able to have a strict deny from all pods in all namespaces in a cluster to a specific namespace, and we can do the same thing on the other side for a strict allow.

J
So those are pretty self-explanatory, I would say. "Strict deny but allow exceptions": that's like if we want to implement tenancy but be able to poke holes in those hard tenant boundaries, to allow those tenants to talk to system namespaces such as kube-dns, monitoring namespaces, et cetera, et cetera. And these will make more sense, too, with the associated figures as we scroll through here.
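None of the YAML here is final; purely as a hypothetical sketch of this "strict deny with allow exceptions" story, the kind, API group, and field names below are illustrative, not the proposed spec:

```yaml
# Hypothetical cluster-scoped policy: isolate the tenant's namespaces,
# but poke a hole for kube-dns. Field names are illustrative only.
apiVersion: example.k8s.io/v1alpha1   # illustrative group/version
kind: ClusterNetworkPolicy
metadata:
  name: tenant-foo-boundary
spec:
  appliedTo:
    namespaceSelector:
      matchLabels:
        tenant: foo        # the namespaces forming this tenant
  ingress:
    - action: Allow        # exception: the system namespace may reach the tenant
      from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
    - action: Deny         # hard deny from everywhere else
      from:
        - namespaceSelector: {}
```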
J
The one use case that has caused a lot of confusion in general, I think, is "strict deny but delegate", and how that's different from "strict deny but allow". That's kind of why we highlight what "delegate" means here: when we say strict deny but delegate, we aren't necessarily allowing traffic, but we're saying we're going to let this traffic pass through and be dictated by the existing NetworkPolicies.

J
An example we give for that is basically: if we have a service that's in a tenant, call it tenant "bar", that we want to be globally accessible, but with access still able to be turned on and off via NetworkPolicy.

J
This is where a use case like this comes in, and there are diagrams for that later on. Because of the strict-deny-but-delegate use case, we also have the problem of default disposition: that is, when we create a tenant, we allow some traffic to be delegated, and then NetworkPolicy doesn't ever do anything with that delegated traffic. What happens then? If we didn't define this, it would just fall through and be default-allow.

J
And I think a lot of users wouldn't necessarily want that; that's what we've kind of come to. So we also look at what happens in those scenarios: can we build the YAML so that we can toggle that default behavior on and off, et cetera, et cetera. As for "cluster external", all-or-nothing for external traffic: that one, I don't think, is a great use case here; it just goes back to the assumption that we are not focusing on north-south traffic.

J
Something else to think about with these sample YAMLs is the possibility of future extensibility. Again, we don't go too much into this one; it's just something to think about when reading the specs: we may want to extend this, I don't think for the ipBlock selector, but for other features like logging, a status field, et cetera, et cetera, new types of selectors, et cetera. Number eight is basically a combination of one through five, just looking at how those sample YAMLs look.

J
So, like I said, solutions considered: we have a non-explicit-priority-based solution, and within that there are two sub-tiers. One is where the Empower action is always over Deny, which is always over Allow, and the other is where Empower is always over Allow, which is always over Deny. There are some important differences between those when you overload action and priority together. The priority YAMLs are a little more straightforward and are already well known.

J
Okay, thanks; sorry, my computer went through some struggles there. These are some notes on current vendor cluster network policies that are already implemented; I just put that there for completeness. I don't think we need to go into it right now; I think most of you are working on one of these CNIs, so you kind of know the differences.

J
Yep, okay. So, going down to the actual samples, we have some notes for the priority-based samples specifically. Obviously, we have to make some assumptions in order for the priority system to work, specifically around how the priority numbering system works and how CNPs would be aggregated.

J
Basically, inter-CNP resolution is handled by the priority number: the priorities can range from zero to a thousand, and if you have the higher priority, your rule wins. Then, if you have multiple rules within a CNP, they are prioritized by YAML listing order within the CNP. One special thing to note here is that we have overloaded the priority value of zero to mean that the policy is evaluated beneath existing NetworkPolicy.
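Again purely as a hypothetical sketch of the priority-based flavor just described; the kind, API group, and field names are illustrative, not the proposed spec:

```yaml
# Hypothetical priority-based variant: an explicit priority for inter-CNP
# ordering, rules evaluated in listed order, and Pass delegating to
# NetworkPolicy. Field names are illustrative only.
apiVersion: example.k8s.io/v1alpha1   # illustrative group/version
kind: ClusterNetworkPolicy
metadata:
  name: tenant-foo-priority
spec:
  priority: 10             # 0-1000; the higher priority wins across CNPs
  appliedTo:
    namespaceSelector:
      matchLabels:
        tenant: foo
  ingress:
    - action: Pass         # delegate intra-tenant traffic to NetworkPolicy
      from:
        - namespaceSelector:
            matchLabels:
              tenant: foo
    - action: Deny         # evaluated after the rule above (listing order)
      from:
        - namespaceSelector: {}
```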
J
Something else to note with that: we are going to explicitly define that you can only have one cluster network policy in a cluster with a priority of zero, because there's only one overloaded priority value. That would be for if you want to set a default stance for the cluster: so instead of it being, you know, drop-through-to-allow, it's drop-through-to-deny. We'll show some examples of that later.

J
Another thing to highlight: the action field for the priority-based flavor can have three different values, Allow, Deny, or Pass. Pass is pretty similar to the Empower action in the other solution; basically, Pass just means that if traffic hits that rule, it's delegated to standard NetworkPolicy. Now, the one thing we are going to have to add for the priority-based samples is some more tooling in kubectl, to kind of give the user visibility.

J
So those are some of the, you know, initial things. I'm not going to go into each user story explicitly, I don't think, but, as you can see, as we've moved through these, we have diagrams illustrating every user story, which should help clarify things if you do have any confusion. And, like I said, here we have the user stories and the YAML examples that fit neatly on a PowerPoint; if you want to see the complete set, you'd migrate over to the doc.

J
Yeah, so: are there any questions about the doc and the YAMLs, the slideshow, what we're trying to do here?

J
Basically, we've gotten to the point that, if people feel strongly about how this YAML for CNP is going to look, now is your time to say it. If not, it's going to be left up to the decision of, you know, the smaller SIG Network Policy API subgroup. We want to hear what SIG Network thinks; we want to get this KEP moving forward; we need your input. So we got to present a lot of it today.

J
We can talk a little bit to it next meeting as well, but if we could get some reviews, comments, and questions on this before then, that would be great.

O
Sorry, I'm getting an echo. We'd also like to get some feedback from the different CNI vendors.

J
Yep, and yeah, that's basically all I have. Feel free to comment on the slides and we'll respond to those, or reach out in the SIG Network Policy API Slack channel.

A
And these are all linked in the agenda.

A
Today, yes. Awesome, awesome. All right, well, it's time; so much for having a thin meeting! Thanks everybody for joining us today. We'll post the video as soon as it's ready, and if anybody wants to follow up on anything here, let me know: KEPs are my top priority for the next week.