From YouTube: Kubernetes SIG Network meeting 20210527
A
And we're now recording. This is the SIG Network meeting for May 27th, 2021. Does anybody want to start with some issue triage? Sure.
B
Can I share? Window... issues... share. So we had 30 today. I was able to ping a few, close one or two, and assign a few, but the ones I left open were interesting enough that they were worth talking about. So let's burn through a few. Casey, this one was all about Calico, so I assigned it to you. I just wanted to say that on the recording so that you couldn't claim ignorance.
B
This one, this one's about kube-proxy and cleanup, and Antonio reminded me that we used to have this cleanup logic, and then we removed it because it was super problematic. Basically, this user is saying, "I'm switching from iptables to IPVS without rebooting, and there's a bunch of iptables rules that are left over," and the short answer is: yup, there are. We have kube-proxy --cleanup, which tries to clean up stuff, but I tried it myself and it actually seems to fail and terminate early.
B
If,
like
on
my
machine,
I
don't
have
ipset.
So
it
seems
like
that's,
not
perfect.
I
wonder
if
it's
worthwhile,
like
adding
a
dash
cleanup,
equals
iptables
to
force
it
to
only
try
ip
tables
for.
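The mode-restricted cleanup Tim floats could look roughly like this. This is a minimal sketch, not kube-proxy's actual code; the --cleanup=iptables flag, the mode names, and the step strings are hypothetical (today kube-proxy --cleanup attempts every backend, which is why a missing ipset tool can abort the run early):

```go
package main

import "fmt"

// CleanupMode selects which proxier backend's leftover state to remove.
// The mode names mirror kube-proxy's backends, but the flag itself is
// hypothetical — the real `kube-proxy --cleanup` tries everything.
type CleanupMode string

const (
	CleanupAll      CleanupMode = "all"
	CleanupIPTables CleanupMode = "iptables"
	CleanupIPVS     CleanupMode = "ipvs"
)

// stepsFor returns the cleanup actions to run for a given mode. With an
// explicit mode, a missing tool (e.g. no ipset on the host) no longer
// aborts an iptables-only cleanup early.
func stepsFor(mode CleanupMode) []string {
	iptables := []string{
		"flush KUBE-* iptables chains",
		"delete KUBE-* iptables chains",
	}
	ipvs := []string{
		"clear IPVS virtual servers",
		"destroy kube-ipvs ipsets",
	}
	switch mode {
	case CleanupIPTables:
		return iptables
	case CleanupIPVS:
		return ipvs
	default:
		return append(iptables, ipvs...)
	}
}

func main() {
	// Only the iptables steps run; ipset is never touched.
	for _, s := range stepsFor(CleanupIPTables) {
		fmt.Println(s)
	}
}
```

The point of the sketch is just the selection logic: scoping cleanup to the backend the user is migrating away from avoids depending on tools the other backend needs.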
F
Okay, one.
B
It's a good question. I have no idea. Should I just reconvert this to a docs issue, maybe?
F
I just feel like when people ask, you know, "it hurts when I do this," the answer is "well, don't do that." But maybe we should write that down somewhere people will find it.
B
Okay, now we have a couple, three, SCTP issues, which I am ill-equipped to answer. This one is about an SCTP restart. Lars tried to tag it with area/sctp; perhaps we should actually petition to get that label, since we don't have it at the moment. The bug issue is here. Does somebody who knows SCTP and can recreate SCTP issues want to take this?
B
Nobody feels equipped to take SCTP? All right, we'll have to come back to it. Then there's a second one about SCTP, and it's unclear to me exactly what's happening here. I don't know what the relationship is between SCTP and NAT and conntrack.
B
We need somebody to try to confirm whether it's a real thing or not. Lars is all over these, but I didn't want to assign it in his absence. Lars, are you here?
I
Yeah, I cannot take these, because my company will not use the Linux kernel SCTP; we have replaced it, basically. So I'm not sure that it...
B
I would hope so. And then this is another one around SCTP and source NAT. I asked a question here, and I'll follow along, but again, I don't really know SCTP well enough to answer completely. Antonio was also very helpful here.
C
Yeah, this one is not about SCTP, and we had several of these before. This is people starting to create active-passive deployments; we saw this, probably with me, in another similar issue. So they have one endpoint active, and when that endpoint dies, it moves to the other endpoint. So in that window, the conntrack entry is not able to go away, and they keep sending to the same backend.
B
Okay, well, we'll keep an eye on this one, and I guess if we don't get anybody answering, we'll revisit it next time.
B
Yeah, but that's the closest thing, though, I guess. I don't know. Okay, so I'm willing to just leave this in as a feature request, but as I said here, somebody would have to lay it all out and figure out what the API changes and the compatibility story and everything are, if we wanted to consider this at all.
B
Yes, this is KEP territory. So unless you want to actually go ahead and start thinking about a KEP, I don't know if you want to be assigned to this. Tell me no, really.
L
So I think you can add a note here that in the cluster network policy, the possibility of a deny action is being investigated, and we already have a KEP for that. So just point to the KEP. Cool, okay.
N
Yeah, we can also, you know, maybe direct them to the network-policy-api repo now, and maybe have these issues there so that we can discuss further.
B
Cool. Can you guys just jump on this one and link whatever links you want to link? 102127.
B
"Service latency should not be very high." So this is an interesting one. I saw Antonio on it again.
B
They're getting an issue in the repair allocator, or the allocator repair, sorry: "cluster IP not allocated." What that means to me, I think, is that there's a service with this IP assigned and the bit is not set in the bitmap, and that sounds like a bug. Oops.
B
So I responded here, and apparently they've responded too. So I've assigned this one to myself. I think... no, I didn't; I'll assign this to myself for now. If anybody has any clues, or wants to dig into the service allocator, feel free to let me know. I'll tear this one off so I remember to reply.
B
Next was ingress versus network policy. I actually put something on the agenda to talk about network policy a little bit later.
B
I think the issue is that they have an in-cluster ingress implementation in nginx, and they're not enabling the network policy for it. Does that sound right? I've seen this before.
B
This one looks like they have some failed connections which are leaving conntrack records around. They're not closing, because the connections never actually established, and they have some limitations on source ports, so they really want to get rid of those conntrack entries. I don't really understand the limitation.
B
What I think is interesting is whether we really are seeing conntrack records for things that we could clear. Like, if we have a service and we have the endpoint for it, we should probably ensure that the conntrack records are cleared the first time we see it. That seems like maybe a reasonable feature request, but somebody needs to investigate. Casey, you're assigned to this, but I don't see you working on it. Do you want me to see if there's anybody...
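The "clear conntrack the first time we see an endpoint" idea can be sketched as a pure decision function. This is illustrative only, not kube-proxy's real logic; the names staleClusterIPs and endpointsByService are made up for the sketch:

```go
package main

import "fmt"

// endpointsByService maps a service's cluster IP to its ready endpoint IPs.
type endpointsByService map[string][]string

// staleClusterIPs returns the UDP service IPs whose conntrack entries
// should be flushed: services that previously had no endpoints (so any
// existing conntrack entries point nowhere useful) and that now have at
// least one. This mirrors the idea discussed above — clear conntrack the
// first time an endpoint appears — though the real kube-proxy logic is
// more involved.
func staleClusterIPs(old, cur endpointsByService) []string {
	var stale []string
	for ip, eps := range cur {
		if len(eps) > 0 && len(old[ip]) == 0 {
			stale = append(stale, ip)
		}
	}
	return stale
}

func main() {
	old := endpointsByService{"10.0.0.10": nil}
	cur := endpointsByService{"10.0.0.10": {"172.16.1.5"}}
	// Each returned IP would then be handed to something like
	// `conntrack -D --orig-dst <ip> -p udp` on the node.
	fmt.Println(staleClusterIPs(old, cur))
}
```

Keeping the "which entries are stale" decision separate from the actual conntrack deletion is what makes this kind of logic testable without root access to a node.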
A
...else? I don't remember being assigned to that one. I can take a look at it, though. Okay, somehow, yeah, it clearly happened. Yeah, I'll take a look at this one over the next few days.
B
I'm very much with you. I found some other corner case around conntrack, something that we probably should be clearing but weren't, and I'm trying to remember the details of it. I'll have to find my notes somewhere. But basically, I looked at the conntrack code and I thought: oh geez, this whole thing needs to be thought about a lot more carefully.
B
So if anybody feels like walking in the minefield, let me know.
B
Right, okay, cool. Well, please take a look, and let us know if there are things that we can actually do here. It seems entirely plausible that there are real bugs here.
B
I'm sure. Great, and that was about as far as I got in my prep work. Anyway, we're down into issues over a month old. So, last reminder: everybody who has been assigned issues, please go look at the issues that are assigned to you and ping them, or try to figure out if they're real. If they're real, accept the triage; if they're not, go ahead and close them, or, you know, add some notes and we'll bring them back to this group next time. Thanks, Tim.
A
Thank you, Tim. Next up was Antonio.
C
Okay. We were talking about this two meetings ago. The thing is whether a kubelet probe should be able to reach pods on another node, okay? And this was discussed on the mailing list. I created some slides to give more context; I don't know if you read them, or if you want me to go through them quickly. And so the question is...
C
What we found is that the reverse is not possible, or is not straightforward. So if you want to reach a host-network pod, you have to use a different IP. But that's one of the things that we should clarify in the network model. But the question here is about the test that says kubelet should be able to reach pods on other nodes, and my argument is that it should, because kubelet uses the host network namespace.
C
Do you allow host-network pods to reach pods on other nodes? And kubelet? We don't know, because we are talking about processes: the network namespace is the same for kubelet, but for a host-network pod, it's tricky. That's why I did this write-up; if you go through it, it will give you more context with the scenarios. And, I mean, it is a reasonable doubt, and there is a loophole in our documentation that I think we should clarify, hence this discussion.
B
I hadn't seen your slides before, but they look really nice. I'm torn on this one, because on the one hand, I want to give as much implementation freedom as I can. And so, you know, assuming that host-network means the same as kubelet is kind of a dubious assumption; like, what if kubelet isn't in the host namespace? But it also makes assumptions around namespaces, right? Like, we are kind of making assumptions.
O
Yeah, well, basically it spawns two pods. One of them is a web server, and the other one is a pod that has a pre-stop webhook or a post-start webhook, and the issue is that if it's cross-node, it might be problematic.
B
Yeah, I mean, we have a feature that lets you do that, and whether we like the feature today or not, we have it, and we need to figure out what the bounds of it really are.
B
So, Antonio, I will read over your slides in more detail, and I guess I'll offer my opinion, but I really would like to hear other folks' thoughts on this.
L
One thing: it seems like, rather than opening up kubelet reaching pods on other nodes, it would be better to define what the correct networking model for host-network pods is. What should host-network pods be able to reach, and what should they not be able to reach, rather than letting kubelet also open up to other nodes?
B
I see. Okay, well, I know there's a thread on this on the mailing list, so it would be great; getting your take on the vagaries of Windows would be excellent, and I will offer my own thoughts as soon as I get a chance to digest these slides.
E
I can do this in 10 minutes. Perfect. Okay, I'll go through a few slides. So, to give some context for what we're doing here on the kube-proxy subproject: we created a tool to validate the services inside Kubernetes, and we reused some of the tooling and frameworks that already exist to build this tool.
E
Basically, we are using it to validate other efforts that are going on in the subgroup, and we have some examples of issues and things that came up in sig-network. So, as I said, we have some other tooling around; the main one is the table-driven tests that were created for the CNI work, and we are reusing these in this project. So we are using the Kubernetes e2e framework and client-go.
E
We have a few cases covered so far: ClusterIP, NodePort with externalTrafficPolicy Cluster, externalTrafficPolicy Local, ExternalName (there's a DNS name test), and LoadBalancer.
E
So the architecture is something like this: you have a bootstrap schema to bring up the pods; we run the tests as unit tests or integration tests; we probe between the pods, print the results in the output, and clean up the namespace in the after-run. So basically it's a binary that you can run on your cluster, and you take a look at the results and see if your services are running correctly.
E
How do we develop these? We are using our network policy framework for table tests. Basically, you define a reachability initialization, so you say everything in the table is false, and then you enable a few of them, like all the pods in namespace x.
E
That's it for these slides. I can run a demo.
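The reachability-table style described here (default everything to false, enable a few pairs, probe, and diff expected versus observed) can be sketched like this. It is a toy version; the real tool builds on the network policy e2e framework, and these type and function names are illustrative, not the tool's API:

```go
package main

import "fmt"

// Reachability records which source pod is expected to reach which
// destination pod. It starts from a default and lets tests flip
// individual pairs, mirroring the network-policy-style table tests.
type Reachability struct {
	pods     []string
	expected map[[2]string]bool
}

// NewReachability fills the whole from->to table with defaultAllow.
func NewReachability(pods []string, defaultAllow bool) *Reachability {
	r := &Reachability{pods: pods, expected: map[[2]string]bool{}}
	for _, from := range pods {
		for _, to := range pods {
			r.expected[[2]string{from, to}] = defaultAllow
		}
	}
	return r
}

// Expect overrides the expectation for one (from, to) pair.
func (r *Reachability) Expect(from, to string, allow bool) {
	r.expected[[2]string{from, to}] = allow
}

// Diff compares observed probe results against expectations and returns
// the mismatching pairs — the difference the tool prints at the end.
func (r *Reachability) Diff(observed map[[2]string]bool) []string {
	var wrong []string
	for pair, want := range r.expected {
		if observed[pair] != want {
			wrong = append(wrong, fmt.Sprintf("%s->%s: expected %v, observed %v",
				pair[0], pair[1], want, observed[pair]))
		}
	}
	return wrong
}

func main() {
	r := NewReachability([]string{"x/a", "x/b"}, false)
	r.Expect("x/a", "x/b", true) // enable just one pair, as in the demo
	observed := map[[2]string]bool{{"x/a", "x/b"}: true}
	fmt.Println(len(r.Diff(observed))) // prints 0: no mismatches
}
```

The value of the pattern is exactly what is said later in the discussion: a failing run prints the expected-versus-observed matrix, so you can see at a glance which sources cannot reach which destinations.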
G
Yeah, so basically, this will just be like that. This will be the thing where you'll see the patterns, and you'll know, like, "none of my nodes can reach it," anything like that. That way, we can A/B test between KPNG and kube-proxy, because otherwise it's too hard; you have to be super smart to figure it out otherwise.
E
Yeah. So basically, it will take a while to probe this thing, but basically: build your binary, point it to the Kubernetes config on your host, and do your probe to make sure everything's running and the pods are up. I have a kind cluster here.
E
It will try to validate the probes between the pods, and if one fails, like what's happening here on my machine (I don't know the reason, but it's a regular ClusterIP request and it fails), it will try a second time, and in the end, it prints the difference between the expected and the observed. So you can look through all the test cases we have.
E
I have a MetalLB running on my machine that will give me an IP for the load balancer, and I have some results for the load balancer as well. This is for externalTrafficPolicy Local; we expect some failures.
E
This is a very interesting one. We expect that all the pods in the namespace hit this first pod and that every other pod is blocked, but it passes for the hairpin ones. So when I hit one pod using the node port from another host, it fails; on the local port, on the local...
E
The same pod, like the hairpin case, will pass, even if you use the node IP and the port of the service. And then we have this external DNS test, and it passes. So it will end up with a good result and exit status zero.
G
Well, we'd like to use it the same way we use the network policy tests right now, which is that, yes, we use them for verifying CNIs, but we can also use them when individuals have bugs: we can just tell them to go run those e2e tests, and they'll tell them exactly what's wrong. And also for issue triage, and also, once we get node ports and stuff like that working, for doing A/B testing between that and kube-proxy, and so on and so forth.
G
I would like to get rid of the service tests that we currently have in test/e2e. I mean, a lot of them are not easy to use, right? And a lot of them do things like pick a node; they play favorites as far as who's polling whom, and it's like, you know, I don't know what's wrong. So I think it'd be kind of cool.
B
Maybe the right step, then, is to just have a conversation with, like, John Belamaric or something. John, I don't know if you're here... no. Maybe he'll have an opinion on this and how he'd like to see it proceed. He's sort of the steward of conformance right now.
B
Cool. Do you want to take the action to reach out to John and ask him if he's got 10 minutes to see the demo and give us some thoughts about how to integrate this with e2e and conformance?
G
Yeah, sure, all right. I mean, we can sync up offline, maybe, and talk about it. All right, cool.
A
Yep, cool, looks good, guys. Tim, you are next with KEP stuff and then policy.
B
Cool. So I spent some time since the KEP freeze, within the last couple weeks, reading KEPs, and I spent some time this last weekend taking a look at our project board. You know we have a project board? We have a sig-network project board, and we don't really use it for anything.
B
It was sort of a mishmash of ideas, and I thought, let's just make it about KEPs, because, damn, we have so many open KEPs and so many open feature gates that I was overwhelmed trying to keep track of them all. So I fired up the project board. In fact, can I share my screen again? Let's do that: share window, share. So I fired up the project board, I deleted everything that wasn't KEP-related, and I re-titled the columns. You'll see that I have PRs; ignore that one for a minute.
B
We talk about KEPs being alpha, beta, and GA, but there's nothing actually physical that manifests that except the gates. And some KEPs have no gates, and some KEPs have multiple gates, and those multiple gates are sometimes in different stages. Like EndpointSlice: I thought, oh, that's GA, cool, no problem. Oh wait, actually one of the gates in the EndpointSlice KEP is still beta. So I was trying to figure out how I want to annotate this. Bridget says we have to remove the gates.
B
No, we have to remove the gates two releases after we GA them. We have to lock it at GA time, but the gate still has to be there, so anybody who has a command line that still specifies that gate will still operate; they won't get an error. And then two releases later, they'll start getting an error.
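The gate lifecycle described here — locked to its default at GA but still accepted on the command line, then an error once the gate is removed two releases later — can be sketched as follows. This is a simplified illustration; Kubernetes' real logic lives in its featuregate package, and the gate name used below is only an example:

```go
package main

import "fmt"

// stage is a feature gate's lifecycle stage.
type stage int

const (
	Alpha stage = iota
	Beta
	GA      // locked to its default; the flag still parses
	Removed // specifying the gate is now an error
)

type gate struct {
	name    string
	stage   stage
	enabled bool // the locked default once GA
}

// Set mimics what happens when a user passes --feature-gates=name=value.
func (g *gate) Set(value bool) error {
	switch g.stage {
	case Removed:
		// Two releases after GA, the gate is gone entirely.
		return fmt.Errorf("unknown feature gate %q", g.name)
	case GA:
		// Locked: setting it to its default is a no-op, anything else errors.
		if value != g.enabled {
			return fmt.Errorf("cannot set %q: locked to %v at GA", g.name, g.enabled)
		}
		return nil
	default:
		g.enabled = value
		return nil
	}
}

func main() {
	g := gate{name: "ExampleGate", stage: GA, enabled: true}
	fmt.Println(g.Set(true) == nil)  // true: old command lines keep working
	fmt.Println(g.Set(false) == nil) // false: non-default value is rejected
}
```

This is why a command line that still specifies a GA'd gate keeps operating: the flag is accepted as long as it matches the locked default, and only the eventual removal turns it into a hard error.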
B
So there's a ton of gates that say "remove after 1.24," and there are some that say "remove after 1.6" and they're still in there. But that's a different issue. So I went through all the KEPs and added them all here, and then I started verifying them, starting from GA backwards. So everything that's in the GA column, I can confirm: I went and looked at the gates that are associated with these KEPs; they are, in fact, GA, and they are just waiting to be removed.
B
Everything that's in the beta column does, in fact, have its gates in beta and is, you know, potentially moving to GA this cycle, but is currently in beta as of master last week. I went through all of alpha, and the same thing. Now...
B
There are a lot that are still in pre-alpha, or new (not evaluated), that I need to go through. It's entirely possible that I missed some; I only picked up things that were labeled sig/network. And this is, I think, the interesting column: I added all the PRs that are labeled sig/network. Some of these are duplicative of the other two columns, but some of them are, like, edits to existing KEPs.
B
I don't really know what I want to do with that yet, but before I got too much farther with it, I thought I would show the group and see: is this useful? For me, it was useful just getting my head around the sheer number of KEPs and gates that we have open.
B
I found that to be useful too. Maybe a link would be better than the KEP number, but anyway, the question here is: is this useful for people?
F
I think it's going to be very useful, especially if, as you're referencing, you're putting a few other notes here and there. As we go through, we might find it valuable to ask, if we have one, say, that's beta, that we're trying to move to GA: are the docs in? You know, because there's that spreadsheet, the bitly-blah-blah spreadsheet, but I don't feel like we can easily search in there compared to something like this.
B
So, the problem that I have with project boards in general is that they really only support issues and PRs; there's no other manageable entity. So, like, we don't have an issue or a PR per gate, right? Now, maybe we should; maybe we should have a separate subtype of issue, where these are KEP issues and these are gate issues, and one KEP can have one or more gate issues. I'm not sure that we need to go through that level of machinery and, sort of, obfuscation.
B
On the other hand, it was definitely a wake-up call: if you're writing a KEP and you want to add a new gate, make it a new KEP, because otherwise your KEP is going to get stuck in this beta column while we wait for everybody to catch up. And what does it actually mean? I don't know; nothing much, really, right? We can still call EndpointSlice GA; it's just not completely GA, because it's still waiting on one straggler right here.
B
So I thought this was useful, and I wanted to share it with people. It's in this state now if anybody wants to go take a look at it. I could certainly use help verifying the pre-alphas and news.
B
It wasn't super painful, but it was a little bit of work. And then I would love to visit this every meeting, or every couple of meetings, just to see how things are going with all of our various KEPs. Which leads me to my second point: we have a lot of KEPs open right now, and in fact, in reading some of them, I don't think all of them are even complete. Like, Andrew, I don't know if you're here, but there's the KEP around graceful termination for DaemonSets.
B
I think we should probably extend that to just be graceful termination for all services, and that's not even covered by the KEP, so...
B
Well, you have it for DaemonSets, where you'll add terminating endpoints for the DaemonSet, or for policy Local, right? Yeah, for policy Local; it wasn't intended. Yeah, I feel like the same terminating-endpoints trick probably should apply to all services once we prove that it works, and that's not even documented in the KEP; it's a new KEP, right?
B
So anyway, my point was: we have a ton of stuff going on, and it's been very tricky to think about the intersection of many of these. So I wanted to soft-propose, just to float a trial balloon here. I don't know about you all, but I'm feeling a little overwhelmed with these.
B
I would love for us to get those all into GA and be confident in them before we start bringing in a whole host of new KEPs. So I'm not saying we are going to do it, but I'd like to suggest that maybe we should think about it. Yeah, somebody?
B
Bridget says we should finish or officially abandon. I agree with that, and I'm not above backing out changes if we can't get somebody who wants to carry the ball forward on them, right? And in fact, I think we should think hard about things like: do we merge code for things that are alpha but not complete? How do we manage this life cycle better? Clearly, we have some gaps, and, you know, people disappearing is causing us to have stale stuff. Sorry, go ahead, Bridget.
F
Oh, no, I was completely agreeing with you, and just saying that having people plugged into the process would be huge. And I did add the link to this board to the next meeting, but maybe two minutes at the beginning of every meeting, just to make sure that something has happened on some set: you know, maybe one meeting we look through one column, another meeting we look through another column.
B
It has all the same labels as every other repo, right? So maybe "help wanted" is a good one. Andrew, if you want to go shopping for a label, you can just nominate one; "help wanted" is maybe a good...
P
...start. Yeah, I just don't have the context on which ones are actually abandoned and which ones are just kind of in progress.
B
Okay, well, I can tell you everything in alpha and to the right is active. Oh wait, no, that's not true: everything beta and to the right is active. Alpha... I think the mixed-protocols one is stuck in alpha because it's only half implemented, just as an example.
B
Yeah. So if we can get to a place between now and the next meeting where we're sure that everything on this board is in the correct column, then we can go through them and talk through why they're in the columns that they're in, right? Especially the things that are pre-alpha and alpha.
B
Yeah, I wanted to float it here just to see if people were like, "oh, this is stupid, you dummy," but since everybody seems to be at least not complaining, I'll go ahead and announce it on the mailing list, and I'll suggest the slowdown.
F
And if you're worried about people not liking the idea of a slowdown, you could reframe it as: we want to make sure everything on this board has a chance to succeed, so we want to not put more on without evaluating, moving forward, or aging out some, just so that there's...
B
Cool, all right. So everybody, please feel free to take a look at these; I'll send a note to sig-net. And, Andrew, we can talk about questions like the one you just asked on the thread there, with the larger audience.
B
I think I had the next agenda item, but we're just about out of time. Yeah, maybe I'll just punt it; I'll open it here and then I'll leave it. The topic of the intersection of external traffic and network policy has come up again, and I could have sworn that we had sort of decided what to do with it, but I think Dan and Casey have convinced me...
B
...that we talked about it but never actually decided. And so it seems like quite a mess: network policy and node ports, network policy and load balancers, and network policy and ingress are all ill-defined. So perhaps this is something we can come back to and define more clearly, and I'll move it to the agenda for next time.
F
I totally moved it to next time, but I would be thrilled if anyone, especially the people that Cal mentioned, has a chance to review, because it is in that beta column that we want to move to GA, and there is that one prefer-dual-stack network-loop issue that needs a few people to look at. So I...
F
This is the one blocking bug that we know of, but we also would like to see more signal from, say, cloud providers, and I'm working to try to get our team, in our current planning phase, to make sure that we make it as easy as possible for customers to use the 1.21 beta, now that it's going to be available in AKS, and let them test dual-stack. So, yeah, that's kind of it: we want to get more signal.
B
Yeah, there are some related issues, like host IPs, for that, but those are separate KEPs and I think they're async. I don't think they're GA blockers.
A
Well, I think that's the end. Thanks, everybody, for coming. We'll see you all in two weeks.