From YouTube: Kubernetes SIG Network meeting 20210513
C: All right, I apologize. I did not have a lot of time to pre-filter these today, so I'm going to be reading a lot of these for the first time. There are a fair number today; there's 30 open, so it's been a busy week for people. "Kubernetes service HA support"... right now, players, service.

C: Asymmetric routing on bare metal, multi-interface. I feel like we've talked about this topic many times, and it comes down to some of the iptables rules we install. Let's see.

C: If you think it's a bug, go ahead and accept it. And okay, let's see what we need to do: flaky test, we'll change the type and ports of a service. Oh, great.
C: What... okay. I don't know how many times... does somebody want to do a quick follow-up and see if there's something we can address here?

C: That's useful, okay. Table-driven tests for kube-proxy and service proxies. Jay, it's like you read my mind.
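A minimal sketch of the table-driven test pattern being referenced; the `fakeProxier` fixture and `hasRuleFor` helper are hypothetical stand-ins for illustration, not the actual kube-proxy test code:

```go
package proxy

import "testing"

// fakeProxier is a hypothetical stand-in for a service proxy under test;
// the real kube-proxy test fixtures are considerably more involved.
type fakeProxier struct{}

// hasRuleFor answers whether a proxy rule would be programmed for a service
// of the given type and port. Stubbed out purely for illustration.
func (p *fakeProxier) hasRuleFor(svcType string, port int32) bool {
	// ExternalName services are handled by DNS, so no proxy rules exist.
	return svcType != "ExternalName"
}

func TestServiceRules(t *testing.T) {
	testCases := []struct {
		name     string
		svcType  string
		port     int32
		wantRule bool
	}{
		{"ClusterIP service gets a rule", "ClusterIP", 80, true},
		{"NodePort service gets a rule", "NodePort", 30080, true},
		{"ExternalName service gets no rule", "ExternalName", 80, false},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			p := &fakeProxier{}
			if got := p.hasRuleFor(tc.svcType, tc.port); got != tc.wantRule {
				t.Errorf("hasRuleFor(%q, %d) = %v, want %v", tc.svcType, tc.port, got, tc.wantRule)
			}
		})
	}
}
```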
C: Oh, you can't see that, sorry. Put it back over here somewhere.
A: Is this closed now? Time check, Tim: do we want to stop now?

A: Sorry, I gotta pull up the agenda doc again; for some reason I closed that. But I think we're on to KEP review, right? Or anything anybody wants to bring up. Somebody want to share the spreadsheet? Do you want to do that, Tim, since you're already sharing?
C: All right, so here's the sheet. There's not that many; when you scroll through, here's most of our network ones. I think there's a few others, and there's a few more down. So we have endpoint slice API... what are we doing? Are we looking for questions, or...

C: I have one, two, three, four, five, six KEPs, six PRs, that I am tracking at the moment. Four, hopefully, for today: one is storage, one is node, two are storage... one, two, three are network.
C: So I see that yours merged; that's good. Andrew, I think, has the two that I'm otherwise tracking, which have some of the more interesting comments. Andrew, do you want to talk about those?
F: Yeah, I mean, I was trying to figure out what question would I want to ask in order to know whether internal traffic policy is working right. And the answer that I came to was: well, I guess I'd want to know if there's something with internal traffic policy turned on and it was going nowhere, like the feature wasn't actually assigning endpoints to it, right? So that's why I suggested the black holes.
C: I mean, it's a terrible name, but you get the idea, as a possible metric, because it's also not particularly complicated to...
F: Yeah, I think the tricky part is that being blackholed is a desired state in some cases, right? But yeah, I get the point that we don't necessarily have to lean on it; we're just creating a metric, and folks can do whatever they want with it.
C: Well, I mean, at least the way I've used external traffic policy, many of the nodes are black holes on purpose... or not on purpose, but because there aren't as many replicas as there are nodes, right? Like, I might have one replica of a service that I'm exposing externally, and in my hundred-node cluster, 99 of those nodes don't have a local copy of it, and that's okay; that's not a problem. And we have all the health check stuff, the health check node port stuff, to make that okay.
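The health check node port mechanism being referenced works roughly as follows: for a Service with `externalTrafficPolicy: Local`, kube-proxy serves an HTTP health check on the service's `healthCheckNodePort`, answering 200 when the node has a ready local endpoint and 503 when it would black-hole the traffic, so load balancers steer around those nodes. A minimal sketch of such a probe, with placeholder address and port values:

```go
package main

import (
	"fmt"
	"net/http"
)

// hasLocalEndpoints probes kube-proxy's health check server for a Service
// with externalTrafficPolicy: Local. A 200 means this node has a ready
// local endpoint; a 503 means the node is a "black hole" for that service.
func hasLocalEndpoints(nodeAddr string, healthCheckNodePort int) (bool, error) {
	resp, err := http.Get(fmt.Sprintf("http://%s:%d/healthz", nodeAddr, healthCheckNodePort))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// Placeholder node address and healthCheckNodePort value.
	ok, err := hasLocalEndpoints("10.0.0.12", 30123)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("node has local endpoints:", ok)
}
```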
C: But if I'm setting something for internal traffic policy, it seems like the main use case for that is the per-node service, right? I mean, that was the motivating use case.

E: So... but we can report services and local endpoints and non-local endpoints, like a count.
C: Sure, we can return a count of services and we can return a count of endpoints, but those are, like, sums across all of the services. We can't report... go ahead.

C: But you don't want to put... in general, you don't want to put non-constant data in metric labels, because you end up with huge cardinality, which is really hard to deal with.
C: What I've read is they want you to aggregate into something, you know, of finite cardinality. So slicing it up based on, you know, which traffic policy, or something else, is reasonable. But because we know that there will be services that have one endpoint and there'll be services that have 100 endpoints in the same cluster,

C: the total isn't super useful. In one of these threads here I suggested going further... where is it? It was in the other PR. Yeah, I suggested going further and actually logging, you know, fixed percentiles of stuff, but that's a lot more work to think about. Maybe there's a better way to express this in Prometheus, but this is how I would think about it.
C: But to appease the PRR gods, let's figure out what we can do in the relatively short... what we can commit to in the short term, right? So I think committing to a number-of-black-holes metric is a pretty reasonable start.
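A minimal sketch of what that metric could look like, following the cardinality advice above: a single gauge partitioned only by a finite label (which traffic policy is involved), never by service name. The metric name and label are hypothetical, not something kube-proxy actually exports:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// blackholeServices counts services that currently route to zero endpoints.
// The label set is deliberately bounded so cardinality stays constant no
// matter how many services exist in the cluster.
var blackholeServices = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "proxy_blackhole_services", // hypothetical name
		Help: "Number of services with a Local traffic policy but no usable endpoints.",
	},
	[]string{"traffic_policy"}, // bounded values: "internal" or "external"
)

func main() {
	prometheus.MustRegister(blackholeServices)

	// A real proxy would recompute these on every sync from its endpoints
	// snapshot; fixed numbers here just keep the sketch runnable.
	blackholeServices.WithLabelValues("internal").Set(3)
	blackholeServices.WithLabelValues("external").Set(99)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```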
C: And Antonio, I liked your other... or was it your idea? Yeah, I think it was your idea, but up above, about slicing the desired endpoints on the controller. Though I guess this summation problem needs to be dealt with, but that seems like a reasonable, you know, reasonable step.
F: For the terminating endpoints and their graceful handling with kube-proxy, I think the other major blocker was testability and how we're going to automate that, because it requires you to do a rolling update of the pod with the cloud load balancer, and then you have to time the way the health check node port lines up with when the endpoint count goes to zero. And so, you know, there's a whole possibility that it'll be completely flaky and not a good signal.

F: I don't really have a good answer for that. I think we're gonna do our best to write the test, but I think the flakiness is going to be hard to control on this one, I think.
E: I think that... I was working on that the other day, in my week. I have a mock for the load balancer, and the only problem that I have now is to fake the traffic to emulate that it is coming from outside, because the load balancer knows if it's internal or external. So mocking the load balancer is just patching the service and all the stuff with the status.
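A minimal sketch of that mocking approach, assuming client-go and placeholder namespace, name, and IP values: it patches the Service's status subresource to pretend a cloud load balancer was provisioned, which is roughly what is being described:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// fakeLoadBalancerIP patches a Service's status so the cluster behaves as
// if a cloud load balancer had been provisioned with the given IP.
func fakeLoadBalancerIP(ctx context.Context, cs kubernetes.Interface, ns, name, ip string) error {
	status := corev1.ServiceStatus{
		LoadBalancer: corev1.LoadBalancerStatus{
			Ingress: []corev1.LoadBalancerIngress{{IP: ip}},
		},
	}
	patch, err := json.Marshal(map[string]interface{}{"status": status})
	if err != nil {
		return err
	}
	// Patch the status subresource so the spec is left untouched.
	_, err = cs.CoreV1().Services(ns).Patch(ctx, name,
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder namespace, service name, and IP.
	if err := fakeLoadBalancerIP(context.Background(), cs, "default", "my-svc", "203.0.113.10"); err != nil {
		panic(err)
	}
	fmt.Println("patched service status with fake LB IP")
}
```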
B: All right, so can everybody see my screen right now? (Yep, looking good.) Cool, sounds good. So I'll go ahead and update you guys on, you know, the current status of the cluster network policy. So we received a bunch of comments on the KEP, and there are some key, you know, battlefields, or I would say points where people are not...

B: ...you know, agreed. So I wanted to sort of list them out here and see if we can sort of get some consensus as a group in SIG Network, so that we can basically push this KEP forward.
B: Essentially, what we are trying to do is that we are trying to sort of separate the use cases across the network policy from the KEP into a separate user story folder in the new network policy GitHub repo that we created. And for these user stories we wanted to have, you know, some approval or some reviews, and then we can agree:

B: you know, these are the user stories that we definitely want to solve, and in the KEP itself we can basically just focus on the CRD design to solve these use cases. That's the move that we're trying to make now. So some of the key points we listed here are the ones we wanted to have some more feedback on. First of all, do we actually need the cluster default network policy?
B: And this is something that we started out by wanting to have: a default security posture for some of the workloads, but one that can completely be overridden by namespace-scoped network policies. And on the KEP we are seeing sort of contradicting... some contradictory comments, where some people think it's, you know, beneficial to have this sort of default policy, and others think that, for example, in cases like namespace isolation, etc.,

B: these should be, you know, enforced, rather than giving the namespace users the ability to write their own policies to override them. So this is the one field that we're still trying to get some more feedback on. And the other thing is, you know, for the cluster network policy action, we listed a separate slide here: is an explicit deny needed?
B: First of all, I think the group has somehow agreed that, you know, we cannot use a pure allow-based model like the network policy does. So in some circumstances it is useful to enforce enterprise security compliance, so we need an explicit deny for cluster network policy, so we can explicitly say: hey, from this to this should be denied. But the question is: if we have an explicit deny and an explicit allow, at least these two actions, does it make sense to have one action type always have precedence over the other, or should we basically just do the sequence-numbered priority model so that it's more flexible? And is the complexity of the sequence numbering worth it?
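A hypothetical sketch of the two shapes being debated, written as Kubernetes-style Go API types. These are an illustration only, not the KEP's actual types; in the fixed-precedence model the `Priority` field would simply not exist:

```go
// Package v1alpha1 here is a hypothetical sketch, not the KEP's actual API.
package v1alpha1

// RuleAction is the verb applied to traffic matched by a rule.
type RuleAction string

const (
	ActionAllow RuleAction = "Allow"
	ActionDeny  RuleAction = "Deny"
)

// ClusterNetworkPolicyRule matches some traffic and applies an action.
// Selector details are elided; From and Ports stand in for real matchers.
type ClusterNetworkPolicyRule struct {
	// Priority orders rules across all cluster policies; lower values are
	// evaluated first. This field is what the sequence-number model adds,
	// and would not exist in a fixed-precedence (one action always wins)
	// model.
	Priority int32 `json:"priority"`
	// Action is what happens to matched traffic.
	Action RuleAction `json:"action"`
	// From names the traffic sources this rule matches.
	From []string `json:"from,omitempty"`
	// Ports restricts the rule to these destination ports.
	Ports []int32 `json:"ports,omitempty"`
}
```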
B: We listed two very basic examples here to sort of illustrate this problem, because right now we're thinking: okay, we can do something like network policy, right, where the allow action will always have higher precedence than the deny action, for example. Now think about this scenario, where for a certain set of pods we want to drop traffic received from s1 and s2, and at the same time we want to accept only HTTPS traffic.

B: Now, if we have some sort of non-predetermined precedence, or priority numbers, we can always write two different rules for this: one is basically "deny non-HTTPS traffic" at the bottom, and then...
B: So... "allow HTTPS traffic" at the bottom and "deny these two" at the top, so that when we stack those two policies, it will give us the semantics that we want. But if allow always has a higher priority, this becomes very tough to write, because now you need to write the rule as "allow from everywhere except s1 and s2", and this is a very hard selection to make based on the current selectors and mechanisms.
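Using the hypothetical types sketched above, the slide's example reads naturally with explicit priorities; under an allow-always-wins precedence the same intent would instead need the awkward "allow from everywhere except s1 and s2" selection:

```go
package v1alpha1

// denyS1S2AllowOnlyHTTPS expresses the slide's example with the hypothetical
// prioritized types above: drop anything from s1/s2 first, then allow HTTPS,
// then deny everything else.
var denyS1S2AllowOnlyHTTPS = []ClusterNetworkPolicyRule{
	{Priority: 10, Action: ActionDeny, From: []string{"s1", "s2"}},
	{Priority: 20, Action: ActionAllow, Ports: []int32{443}},
	{Priority: 30, Action: ActionDeny}, // catch-all: denies non-HTTPS traffic
}
```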
B: So essentially the point is: without explicit priority numbers, we need a very strong or powerful way to do selections in any combination, like "from certain namespaces, except for certain sources," and that kind of stuff. That way we can get around not having an allow that always has higher precedence than the deny, or vice versa. But does everybody think that's a good idea, or should we essentially go with some sort of priority-number-based approach for the precedence of actions?
B: Yeah, I think so. So those two are essentially the use cases we came up with when thinking about the sort of corner cases for the priorities, right? If you think about very common use cases, like namespace isolation, for example, that could become fairly straightforward, because it's a determined use case, and we can always come up with keywords like "self namespace" or "not self namespaces" to make the selections that we intended.
B: Now, these two we listed here are actually by Sanjeev, who is also on the call. I believe he was listing these out as corner cases, which I believe are also possible scenarios where a customer or cluster admin wants to enforce these, but they would be very hard to express using a cluster network policy spec that has very explicit precedence, like one action over the other. So I understand.
K: So one is, and we invite your thoughts on this, so one is having good cluster egress filtering, right, in a way that the cluster admin can enforce security which namespace-scoped policies cannot bypass, because you don't want pods connecting to... you want to limit how pods can connect outside. The other one is cluster ingress filtering, which has two aspects: one is ingress to the cluster, as well as ingress within the cluster. Now, we'll come back to that, because there's the issue of the source IP thing over there.
K: And here what we're saying is, it appears, at least both from the comments on the KEP as well as our experience, that it's pretty frequent that a cluster admin wants to enforce some kind of tenancy using a group of namespaces, with the selective ability to connect to services in other namespaces. So that's that use case, and then the explicit deny option we just talked about, and then, finally, just better solutions for exceptions; the current ipBlock exception is sort of, kind of, strange.
K: So what we're doing with use cases is: one is these kinds of categories, and one is specific examples like the previous slide, where, you know... and if you go back to that, there's some rationale for those use cases as well. If you go back... yeah, so here this one is saying: okay, you've got malicious sources s1 and s2, and also you only want to accept HTTPS. So that appears to be a fairly reasonable sort of requirement.
K: So the bottom line is that we think these are rational use cases. We want to use these to close a sufficient amount of the KEP to move forward. We think maybe we can defer some of the slightly controversial features, like the default network policy and so on, and that way we can... our whole goal here is to get this cluster network policy into v1alpha1, and then we will always have the opportunity to add changes into

K: v1alpha2. But, you know, Yang and team have been working on it; I kind of just recently got in, so I was just trying to help move this along, and my thought was: let's try to close the mandatory set for v1alpha1, and then we can continue on the other cases, which can bring them into either v1alpha2 or maybe a future release.
K: There are several things here, but maybe we'll take a pause here, and the question for the team is: would the team be open to saying that there is reasonable consensus on a common set of cluster network policy controls, as mentioned in the next slide, and let's go ahead and start with the CRD for some or all of these use cases, and then we can, as we are sort of pipelining that implementation...

K: So the question is: is the community okay with finding sort of a baseline set for v1alpha1 for cluster network policy and then moving on that, and then additional features, which are sort of on the fence, can be either v1alpha2 or the debate can continue, so that this feature, which has been sitting on the sidelines, gets to move forward?
C: I think that seems reasonable. The default cluster network... default network policy is additive; we can really split on those. So that's a good insight. I think the question... I mean, there are some fundamental semantic questions here, right? Like, is it numerical stacking, or is it... what was this? The last I looked at it, it was pretty simply defined as allow, deny, allow, right?
K: Yeah, so our current thinking is that that sort of allow/deny, or authorize/not-allow, is somewhat hokey, and yes, a lot of people are used to priority sequence numbers anyway, so we might as well bite that bullet. And that way we will not have any caveats, because once you have the priority sequence numbers, you can do all these exception scenarios; otherwise we'd be sort of dancing around the issue with all these other alternatives.
J: Oh yeah, my network is bad; I might be cutting out. So I want to object to the idea of "let's rush out alpha one and then we can add to it later."

J: This is how we got network policy how it is: by just throwing in every feature that somebody wanted and not really thinking about how it all fits together. And I think it would be much better to come up with a coherent idea of what we are trying to do, and then come up with a good API for doing that, rather than taking every feature request anybody has had and coming up with an API that can accommodate all those things.
K: I look at it two ways, personally. If you want something quick, or quicker, right, you would just take network policy as-is and change the scope, right? When we start adding use cases, we have to start thinking and talking about maybe making a new version of the APIs. I think that's something that was confusing people with the initial KEP: what set of APIs is it designed for, the existing set, how network policy works now, or a new set of APIs?

K: So that's one thing that we've seen... so that makes sense, but it'll be slightly incomplete. If you go to the previous slide, Yang... actually one more... three. So, at a minimum, we feel that we will need an explicit deny, because the whole purpose of a cluster network policy is to drop packets that are compromising the cluster, and then not have any way for the namespace policy to override that.
K: So the key point is that one of the main goals of cluster network policy is to enforce cluster security, and at a minimum, an explicit deny is needed. Once you have explicit deny, then you have: okay, what's the relative priority between explicit deny and explicit allow, and do we want the sequence numbers? So that starts defining your minimum set, which is explicit deny and explicit allow, possibly with priority numbers.
A: And then you punch holes through that, yeah. That was my question as well, so I don't think I followed that. Yeah, why... why?

A: If we don't have an explicit deny... you know, we don't have one in network policy either, but what we do have for network policy is: when you create a network policy that applies to anything, we have the default deny, and so until you punch holes through something, everything is denied by default.
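The default-deny behavior being described is the standard pattern where an empty pod selector makes a policy apply to every pod in a namespace; a sketch of that object in client-go types (the name and namespace are placeholders):

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultDenyIngress is the standard default-deny object being described:
// the empty podSelector applies the policy to every pod in the namespace,
// and listing Ingress in policyTypes with no ingress rules means nothing is
// allowed in until other policies punch holes through it.
var defaultDenyIngress = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{Name: "default-deny-ingress", Namespace: "default"},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{}, // empty selector: all pods
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		// No Ingress rules listed: all inbound traffic is denied.
	},
}

func main() {
	// In practice this object would be created with client-go.
	fmt.Println(defaultDenyIngress.Name)
}
```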
C: I'm just saying this out loud, but I don't think it's the right way to go either: you could say that everything that's allowed at the cluster level is the bounding box. It's not actually allowing anything; it's just, that's the box, and you can't go outside the box. And then you don't need an explicit deny, because everything that's not... let's call it "empowered"... anything that's not empowered is denied. But then that doesn't provide for explicit allow semantics, right? The, like,
C: "always let the monitoring system through." One of the main reasons, a long time back, that I suggested we pursue this as a different API was because I feel like the semantics are different, and the persona that's interacting with it has a higher capacity for complexity. It's the cluster admin, not the app owner, and so it's okay for this API to be moderately more complicated.
K: Our thought was that it's more likely for cluster network policies to be denying, because they're tracking, you know, malicious sources and things like that. So, yeah, and also limiting: for example, egress traffic only goes through Docker Hub but nowhere else, or only from this namespace, and things like that.
C: I like the tenant ability; I think that's a really interesting use case that I have definitely heard come up over and over again. I'm... I'm torn... go to the next slide, please, really quick.

C: Sorry, go ahead. I'm torn on the allow/deny/allow versus arbitrary levels. On the one hand, I find the fixed three levels to be very expressive; you can express a whole lot of things in that pattern, especially if we generalize the exception condition. On the other hand, the industry at large sort of understands the leveled rules, and so maybe we shouldn't swim upstream against that.
C: I think it's more than a little bit more complexity, though; I think it's actually fairly significant, because we don't have one list, right? You can create any number of resources, and in order to understand the effect of a new change, you have to understand all of the other ones, right? Like, where you insert it into the list... there's no one list, right? Unless we mandate it; I guess we could require it in the API, but we don't have a concept of, like, a singleton in Kubernetes.
B: Yeah, so I guess that's why the other idea we had is some sort of tiers or buckets or enums for the priorities, so that we have a definite set of priorities, which can be, you know, allows and denies, but we don't give the user the ability to specify any sort of sequence numbers. That's what we want, basically, but still, that's sort of...
C: Some other... some other tooling, right. I'm just saying that you can't take a myopic view of a single resource and understand what that resource is going to do. Which, by the way, is the same thing that several people have accused Ingress of having a problem with, right? You can't look at a single Ingress, especially with, like, the NGINX ingress, and understand what it's going to do to the system.
K: One quick note before we stop: we would invite feedback. What we were hoping was... we definitely don't want to rush v1alpha1, but we think there is a reasonable set for v1alpha1, and given that this KEP has been sort of under discussion for close to a year, we would love ideas on getting to v1alpha1.
K: One last thing about the sequence numbers and different teams: because the cluster network policies are written by the cluster team, the cluster admins typically are not operating independently of each other, as opposed to developers, who are developing services independently of each other. So giving developers priority sequences can cause them to override each other or bypass each other, but a cluster admin typically has a unified view across the cluster admin team. So...
J: That's the sort of thing that ought to be in the user stories or use cases, like: "we have designed this under the assumption that the cluster admins who are using this API are working in concert, and so we don't have to worry about coordination problems between them." That's the kind of thing that affects what is a good or a bad solution.
C: So the last I looked at the KEP, it was still the allow/deny/allow model. I look at it now; I see there's been some responses to my comments. If you guys, as the authors of the KEP, are proposing that we switch to a priority-level model, is the KEP updated, or when will it be?
B: So I think the reason we're here in this SIG Network meeting is we wanted to get some feedback on: do you guys think the priority model is the way to go, or should we just not do the priority model and just do allow and deny, two actions, period, and we can figure out, you know, selecting workloads by using some more, you know...
C: So, given these use cases, it would be interesting to see what sorts of things we cannot express with the allow/deny/allow model, or "authorize," whatever you're calling it, that would be expressible in the more general model, and whether those matter. Like, these seem like they're expressible in the allow/deny/allow model, right? Deny all traffic from s1 and s2...
C: I agree with that. I think that might have been my comment, but that's just words; we can find the right words.

K: Okay, so should we invite your suggestions on how we can move the KEP forward to some level of...
C: I guess, yeah, my default position is "simple, unless you can show me why simple doesn't work." And honestly, I'm willing to give a lot of weight to "people already understand this model and you shouldn't go against it; look, this is what Cisco does," and, you know, blah blah blah.
K: So all ACL implementations, whether they are physical switches or, you know, in AWS or all the clouds, they all have sequence-number-based ACLs. So it is not an uncommon pattern; in fact, it is the most common pattern. But you're absolutely...
A: All right, thank you. And we're gonna move Antonio's item for host-network pods to the next meeting. And Bridget, you wanted to highlight the preferred dual-stack fix loop PR addition?
D: Yes, I know that finding anything in your GitHub notifications, when you have all of the GitHub notifications, is challenging, but since Dan and Tim and Antonio were the exact people that Cal was hoping could look at this, I put a link in the notes, and I also just put it in the chat. And basically, we don't need to discuss it at length in this meeting, but this is trying to fix the preferral... preferred dual-stack loop issue, and it doesn't look simple.

D: So Cal is looking for folks to possibly weigh in on that... on that comment.
E: I was talking with Cal two hours ago. So basically, moving from single to dual is okay; moving from dual to single is just... it's complicated.
A: Yeah, it's simple if you're going from the preferred address family in dual stack to only that address family in single stack, right, because then you don't have to touch the cluster IP. But the problem is if you're going from dual stack to the non-preferred address family in single stack; that seems not possible with our current restrictions around cluster IP.
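A sketch of the constraint being described, as a small predicate over the service's IP families; this illustrates the rule (the primary family, and therefore the immutable `clusterIP`, must be kept on a dual-to-single downgrade), not the apiserver's actual validation code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// canDowngradeToSingleStack illustrates the restriction being discussed:
// a dual-stack Service can only drop to single stack cleanly if it keeps
// its primary (first-listed) IP family, because spec.clusterIP belongs to
// that family and cannot be changed in place.
func canDowngradeToSingleStack(current, desired []corev1.IPFamily) bool {
	if len(current) != 2 || len(desired) != 1 {
		return false // only dual-to-single downgrades considered here
	}
	return desired[0] == current[0] // keeping the primary family keeps clusterIP valid
}

func main() {
	dual := []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol}
	// Keeping the preferred family: fine.
	fmt.Println(canDowngradeToSingleStack(dual, []corev1.IPFamily{corev1.IPv4Protocol})) // true
	// Switching to the non-preferred family: would need a new clusterIP.
	fmt.Println(canDowngradeToSingleStack(dual, []corev1.IPFamily{corev1.IPv6Protocol})) // false
}
```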
E: So the repair loop grabs the snapshot, checks if it needs to do something, and sends a patch to the service that needs to be upgraded. So this has to go through all the REST machinery, and it has to touch the allocator there, right? And I don't know if that is going to work, or if it's going to be racy, or it's going to have some kind of problem, because you are going to operate on a snapshot that is older than the one...
C: Thanks, everyone. Don't forget, the KubeCon North America CFP closes in, like, a week or something... a week and a half, something like that. So get your proposals in. Antonio, I can see a proposal on IP allocation coming.