From YouTube: Kubernetes SIG Network meeting 20210819
C
I have two, if you want. If you want to display the two, or if you want me to display it, it doesn't matter. Number 104394.
C
Okay, so, something about not logging events properly. I didn't pull all the context on it, because I just found it a few minutes ago, but does anybody have any context on this one?
F
Yes. Is it going to merge? It's waiting for somebody to approve it.
G
Yeah, but that is the other one, the one that says Khaled is right, but there are two or three.
C
Okay. Bridget, do you want it? Or I'll just self-assign, since I've got it open here too. I'll assign this, and I will go stir the pot on those bugs, see if anybody gets mad at me for just approving it.
F
So let's do this: I'll spend the day today trying to reproduce this, and I will Slack you with what I find, and then we can see where we can put the solution for it, whatever needs to be our standard. Yeah.
F
Okay, I went and knocked out a few issues, like triage/accepted, and some things were, yeah, on our side, and I moved some into other SIGs, right.
C
A reminder for everybody: there are a lot of bugs open. There are 150-ish, 147, open issues that are tagged sig/network. Many of them are assigned to people here, and I don't feel good about going through 150 bugs pinging the assignees. So please take a half hour to load up the list of issues, especially the issues that are assigned to you, but maybe just all of them, and run through and see if there's anything that you signed up to do that you're not doing.
C
I think it's best effort. If it seems like something that might be a real bug and we can reproduce it easily, then we should try to reproduce it. The one that you're looking at, I think, was like a spurious, you know, error, occasionally, from kube-proxy, and we're like: I don't know what to do with that. So much has changed, right?
A
Great. Bridget, you are next on the agenda.
B
So, a couple of concrete things I need from this crowd here. I put in links, clarifications, etc., and, you know, a little more nuance than we had in the acceptance criteria in the KEP for moving to stable, and I would love to get a yes, no, or change, etc., on that. And then, also, I took a look right before this meeting at the enhancements list, the official list for 1.23, and we would need to get our enhancements issue listed on there, which I believe the current process is, you're...
C
Okay, I haven't yet started to think about which KEPs go on the spreadsheet. I don't have the spreadsheet in front of me, but we should for sure do that. Khaled, we were just talking about this yesterday, this brewing PR for service flattening.
C
I really think we should get that in before we flip the gate off, so I've been working as hard as I can to get that PR ready. I'm fairly confident that we're fine, but the KEP window is first, so we need to get the KEP updated.
C
If you're comfortable with the change as I proposed it, then we could. I think it definitely makes things more obvious.
F
Okay, so I'll wait, I'll wait for you a bit, until the PR you're working on is in a state where we can look at it, and then, side by side, I'll update it.
A
Great. The last item on the agenda was from Rahul. Like, I saw you here.
K
So, this is in regards to the KEP for supporting multiple cluster CIDRs. We've had a decent amount of good discussion, especially thanks to Antonio for chiming in on the KEP. There are a couple of open items that I wanted to discuss, and then hopefully we can get this merged, and I think there's only a couple of things to discuss.
K
The first is something that may or may not be of interest. The first one has to do with changing the actual spec that we're proposing. This is something that Maciej, I think he's on the call, had suggested.
K
Basically, the idea is that we want to specify both families of IPs in a single object: basically say, here's your IPv4 CIDR block, here's your IPv6 CIDR block, and then select a bunch of nodes that that applies to. The hope is that, by being very specific about selecting a set of nodes and assigning them either things from one IP family or two IP families, we'll be able to support upgrades better. Basically, if someone has an existing cluster that's single-stack and they want to go to dual-stack.
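
[Editor's note: a minimal sketch of the "both families in one object" idea being discussed, written as Go API types in the Kubernetes style. All names here (ClusterCIDRConfig, PerNodeMaskSize, and so on) are hypothetical illustrations of the idea, not the KEP's final API.]

```go
// Hypothetical sketch: one object that carries both IP families plus a
// node selector, so dual-stack intent is expressed in a single API call.
package v1alpha1

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterCIDRConfig pairs an IPv4 and an IPv6 range and scopes them to a
// set of nodes. Setting both fields makes matching nodes dual-stack;
// setting one makes them single-stack.
type ClusterCIDRConfig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ClusterCIDRConfigSpec `json:"spec"`
}

type ClusterCIDRConfigSpec struct {
	IPv4CIDR string `json:"ipv4CIDR,omitempty"` // e.g. "10.0.0.0/16"
	IPv6CIDR string `json:"ipv6CIDR,omitempty"` // e.g. "fd00:10::/56"

	// NodeSelector picks the nodes these ranges apply to.
	NodeSelector *v1.NodeSelector `json:"nodeSelector,omitempty"`

	// PerNodeMaskSize controls the size of each node's allocation.
	PerNodeMaskSize int32 `json:"perNodeMaskSize,omitempty"`
}
```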
K
They can, you know, start moving nodes one by one, or however they choose. This obviously has some implications: there will be a period of time where the customers are running, sort of, some pods with single stack, some pods with dual stack, and that sort of thing. But I'm not sure if the actual upgrade behavior is well defined, or if there's some sort of guidance we want to provide on that.
M
Sure. So the other concern is, as well, some corner cases or race conditions of assigning a specific IP family to the node. And the node is not, right, at least that field is not... it is immutable: once assigned, you're stuck with it, right? So if we split the families across two objects, then you must make sure that both of the families are already there before you assign them, right?
M
So
there
is
that
that
as
well
concern
which
then
later
have
to
be
either
handled
by
cue
button
when
you
create
a
cluster
or
some
other
mechanism
when
you
create,
when
you
kind
of
set
it
all
up,
there
was
another
thing,
so
we
had
as
well
a
small
discussion
about
how
would
you
basically
decide
whether
it's
a
single
stack
or
dual
stack,
because
that's
currently
indirectly
assumed?
Oh,
you
provided
two
ip
families
that
matches
it
then
go
assign
two,
but
that
might
not
be
always
the
case
right.
M
Maybe, with this dual possibility of providing those two, we can have some way of defining: okay, I want always dual stack, or single stack, or one of the both, right? So it's more deterministic what's desired by the user when they create a cluster.
C
So, the race argument holds a lot of water for me. I don't think, in general, we prescribe anything about cluster upgrades or node IPAM, so presumably how a node decides to use single stack or dual stack is entirely outside of Kubernetes' ownership right now. And so, like, there's no intent; it's not like a Service, where you can specify: I want these IP families, right? The node is the reflection of reality, not the request for reality, right? So I'm not sure where I was going with that, but, anyway.
C
The
point
is,
I,
I
guess
that's
outside
of
what
communities
can
control
I'm
going
to
go
out
on
a
limb
and
assume
that
somebody
somewhere
is
going
to
want
to
update
a
live
cluster
from
dual
stack
to
from
single
stack
to
dual
stack?
I've
just
spent
the
last
three
days
writing
tests
for
services
to
do
that.
So
I
think
we
do
have
to
handle
the
in
between
state.
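
[Editor's note: the Service-side "statement of intent" contrasted with nodes above is a real core/v1 API. A small sketch of the single-to-dual-stack update being tested, using the actual ipFamilyPolicy and ipFamilies fields:]

```go
// A Service carries explicit IP-family intent; a Node does not.
// These are real core/v1 fields (Kubernetes 1.20+ dual-stack API).
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func dualStackService() *corev1.Service {
	policy := corev1.IPFamilyPolicyPreferDualStack
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			// Updating a live single-stack Service to this policy is the
			// kind of transition the tests mentioned above exercise; the
			// apiserver allocates the second ClusterIP when one is available.
			IPFamilyPolicy: &policy,
			IPFamilies: []corev1.IPFamily{
				corev1.IPv4Protocol,
				corev1.IPv6Protocol,
			},
			Ports: []corev1.ServicePort{{Port: 80}},
		},
	}
}
```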
G
That's what they are suggesting. So, the problem is that, right now, we have one resource for each family, but what they are suggesting is to just have one resource for both families, so you never hit that. So I think that's covering that scenario, but I don't know if there are more scenarios, because we cannot go from single to dual. I think that's, or this is, what I understood.
C
Yeah, you can't change a given node from single to dual, but you can create new nodes. So if there's a race, then you need to have both in one atomic API transaction, which is creating the one CIDR config.
C
I guess I'm agreeing with you. I'm willing to, I think, agree with that change.
F
The reason dual stack might work, especially in a managed environment, is that people, generally speaking, tend to upgrade by creating new nodes and deleting old nodes, which means, with an upgrade, it'll be API server first, and that works. But with an existing node, because you don't know... the new node, even if you set it up, like, on one API server, your new node might connect to an old API server which doesn't have the two CIDRs.
C
No, I have a... I've run out of CIDR space in my cluster. I can't... I go, I create a new node, but I can't allocate it a CIDR, because there's nothing to allocate from; they're all full. Then, an administrator gets an alert, they go and they say: oh crap, let me go create the thing. kubectl create CIDR config, family v4, create that, boom, the node schedules, and the node gets fulfilled.
C
Will that fix it, right? No, bouncing kubelet wouldn't be enough, right? You'd have to have something that re-ran the node IPAM allocator.
C
I guess my feeling is: I'm not against the idea of putting the two into one config. It doesn't seem awful to me.
C
Then we have to explain what people are allowed to expect will happen if they change it. Do you really expect all the pods on the node to restart with new IPs? No, no, I don't; that's why it's immutable in the first place. You can make it additive, but then it's the same thing: you have to have a human who recognizes this error condition and comes along and changes it, because there's no statement of intent that this is supposed to be a dual-stack node.
K
Yeah, that's fair enough. So, yeah, I mean, barring anything else, I think we'll try to go down that path.
K
Yeah, yeah, I'll go ahead and update the KEP to reflect that, right.
K
Yeah, yeah. The other thing that we wanted to talk about was making this a CRD versus a built-in part of the API. I think, Tim, you were especially vocal about making it a CRD, so we could change our minds in the future, or, you know, experiment first and then pull it in-tree later on.
K
So, our upgrade story right now was that we would respect the old flags that get assigned on the kube-controller-manager: we would read those values and, you know, actuate based on that, and also customers, or users, could then go in and make more resources that define additional discontiguous ranges as they want. But if we're running as a CRD with a separate controller, we don't have access to any of those flags, so we can't...
K
We
can't
do
a
seamless
upgrade
and
you
know
we
were
also
piggybacking
on
the
leader
election
implemented.
In
cube
controller
manager-
and
you
know
if
we
have
to
build
our
own
controller-
we
have
to
handle
that
problem
ourselves
again
in
a
high
availability
situation.
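
[Editor's note: the leader-election concern is concrete: a controller split out of kube-controller-manager has to run its own election. A sketch under that assumption, using the real k8s.io/client-go/tools/leaderelection package; the lock name "cidr-allocator" is a placeholder.]

```go
// Sketch: what a standalone out-of-tree controller would re-implement
// instead of piggybacking on kube-controller-manager's election.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runElected(ctx context.Context, client kubernetes.Interface, id string, run func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "cidr-allocator", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,       // begin reconciling when elected
			OnStoppedLeading: func() {}, // stop work; another replica takes over
		},
	})
}
```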
C
Not a resolved problem yet, but notice that there's a distinction between using a CRD to define the schema and necessarily using an external controller. You can define a CRD and still have a built-in controller, right? It's just not... it's not the norm, but it wouldn't be the first time that's happened. I'm fine with making this a built-in, if we think that that's really what's necessary, and if I feel like we've done the homework to figure out why a CRD is not reasonable.
K
Okay, yes. Does anyone else have any input they'd like to share? Otherwise...
K
That would be, I guess, ideal. I'm not sure about, I guess, the feasibility of that, right.
K
Okay, well, I guess that sounds like we can explore keeping this in, or making this a built-in API instead of a CRD.
K
All right, that's all I had. Thanks, everyone.
A
Thanks, guys. That was the end of the formal agenda, but it sounds like, Khaled, in the chat you said you want to talk about a couple of KEPs.
F
No, I just... so, I spent a good chunk of my last week looking at KEPs, and there is a bunch of them that are high value but are paused, not progressing, right.
F
You
find
them
like
the
typical
thing
you
find
like
too
many
issues
open
that
people
are
trying
to
close
and
and
all
of
that
two
of
them
took.
I
took
a
personal
note
off
because
I
I
I
I
want
these
features
like
the
old
port
ones
and
the
traffic
shaping
ones
right.
J
Yeah, yeah. I think there are two major issues right now, from what I can see. One is: yeah, we kind of agreed upon, the last time, that we could introduce this allPorts boolean, or boolean pointer, field and solve that in the alpha API, to make sure all the node versions, like, that we have a supported node version upgraded, before we ever graduate it to beta.
J
So, you know, clients like kube-proxy or DNS have had two release cycles to upgrade, yeah, to pick up that new field, so there, I think, we had a path forward. The thing we are stuck on is, one: how would the implementation work on IPVS?
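
[Editor's note: the "boolean pointer" point is a rollout-safety technique: a *bool field distinguishes "unset" from "false", so components that predate the field keep the old behavior. A hypothetical illustration; the allPorts name and the surrounding type are this editor's reading of the garbled audio, not the KEP's actual schema.]

```go
// Hypothetical: adding an optional boolean-pointer field to an alpha API.
package sketch

type PortRule struct {
	Port int32 `json:"port,omitempty"`

	// AllPorts uses *bool rather than bool: nil means "not specified",
	// so old kube-proxy/DNS binaries that never learned the field can
	// keep applying the previous default until every node is upgraded.
	AllPorts *bool `json:"allPorts,omitempty"`
}

func matchesAllPorts(r PortRule) bool {
	return r.AllPorts != nil && *r.AllPorts // unset defaults to false
}
```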
J
The other open issue is, there was a comment from John Howard, I think, about Istio, or network policy, not being able to implement this, or that we will have to come up with a path for network policy to work on top of this, or to easily implement this field.
F
I don't think anybody is saying you can't. You can PR, but the PR won't merge. It might be a good idea to think through how things will change if you implemented it. That's, like, let's say: okay, the KEP is at 70 percent. Yes, we know the 30 percent remaining are critical and important to clarify, but you can still try the PR and see how things would change, and see all the hidden ghosts that might show up.
C
Yeah, the KEP intention was merge and iterate, right? So maybe the right thing to do is just to press forward: we know that this is unresolved, so we mark the unresolved sections. In fact, we have specific syntax to say this part of the KEP is not resolved, and we should merge it, merge the parts we do agree on. That doesn't mean that it's ready to implement; it just means that we're checkpointing the partial agreements. So, sure. In fact, that should be true for all KEPs.
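
[Editor's note: the "specific syntax" mentioned here is the unresolved-block marker from the KEP template in kubernetes/enhancements; roughly:]

```
<<[UNRESOLVED IPVS implementation approach ]>>
Open question: how the field is honored by the IPVS proxier.
<<[/UNRESOLVED]>>
```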
J
Okay, yeah, we could... I could put the iptables solution in for IPVS and see what people think on the call here.
J
Well, that's fine! I appreciate the question; that helps move it forward. I've been stuck on it for a while, so thank you.
F
I'm just trying to do something different. Sometimes you just need to, like, look at the future, yeah. Yes, I am... I got you, all right, don't worry! It's just been an interesting week, this last week.
F
I know Antonio has his own reservations, like, this might not even be possible to implement, but is it a common requirement? Like, I have... I've been dealing with a lot of customers deploying fairly large workloads, and they come from a one-to-one mapping between workload and machines, and, as they pack things in, they know that: okay, I am okay dealing with CPU and memory this way; it's just that disk and network are also something we worry about.
F
So that's something I've been thinking about as well. Well, there was...
G
No, I know that some people use annotations, to do, yeah, something with annotations, but the traffic control is, is... I think that is the worst thing to configure. I remember a PR, somebody fighting to, to... someone trying to create, you know, the right, the right bandwidth first, and all these parameters that it requires.
C
I've had a lot of discussions. I apologize to Lars and everybody, I haven't reread this KEP since it re-entered the sphere, but I've had a lot of discussions with folks, and it's really fuzzy to me whether what people are asking for is scheduling of bandwidth resources (like CPU bytes, memory bytes, network bytes), or if what they want is, like, prioritized access to bandwidth, or if what they want is a guaranteed minimum with the option to burst.
C
The question is: is that really the requirement? Because I've certainly had people say no, I want this pod to have a hard limit. I don't care if there's idle capacity; this pod bought an ISDN line and they're getting 128k, damn it, and everybody else can share the rest of the bandwidth, but that's it. And I've heard other people describe it as you describe it, right: like, if p0 needs it, then give it to it, but otherwise give it to p1, with some...
F
Between AllPorts and this one, AllPorts definitely has much more people waiting for it, all right. There are people, like, the networking... the bandwidth limits and all of that, people can wait, and there are some solutions out there, like this annotation that you talked about, with people even just monkey-patching.
O
I haven't actually tested this. I was requested to bring the KEP back to life, because the function is already implemented by some CNIs. We have kind of a restriction against using alpha features, and this has been in alpha since 2018, I believe, but the function is there, from what I can understand.
O
Yeah, and bringing it to general availability means that some CNIs are free to interpret these official, sanctioned annotations.
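
[Editor's note: the annotations in question are the documented ones consumed by the CNI bandwidth plugin (alpha since roughly 2018, matching the dates mentioned). A minimal sketch of a pod using them:]

```go
// Sketch: pod traffic shaping via the kubernetes.io bandwidth annotations,
// which take effect only where the CNI chain includes the bandwidth plugin.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func shapedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "shaped",
			Annotations: map[string]string{
				"kubernetes.io/ingress-bandwidth": "1M",
				"kubernetes.io/egress-bandwidth":  "1M",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
}
```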
A
I think that Calico does it, but via the, like, reference tuning plug-in. It's been a while since I looked at that, though; it was probably, like, 2018, the last time I, yeah, paid attention to that.
P
I have one topic to throw out there, on the topic of KEPs. What is our stance on new KEPs going into 1.23? Like, Tim, I know there was that email thread where we were talking about whether we should be holding off on KEPs for longer, until our backlog is smaller, but, yeah, for 1.23, what are we thinking?
C
Sorry, I was just looking at the dashboard. I didn't prep for this today, so I don't have time now. We need to do a run-through of all of the KEPs on the project board and see what's moving this cycle, and decide if we feel we have breathing room in there. There's a ton in the pre-alpha state. There's not many in alpha right now, which is actually great; there's some that have been stuck there forever.
C
We need to figure out what we want to do with those. There's a bunch in beta, and hopefully... what I really want to figure out is how many of those are going to go to GA from beta. Right now, we've got one, two, three, four, five, six, seven in beta, and four in alpha.
C
The only one that I know for sure plans to go GA is dual stack. Jay, you think that port range network policy... or, Ricardo, do you think that's going to go GA?
Q
If the OpenShift folks or the Cilium folks accept that in the CNI, yeah. There is that restriction of network policy with status, but I don't know; we need to discuss with the scalability folks, the production readiness folks, because they are expecting us to put the features as a status in network policy, and I don't think it's as easy as they think.
P
We're definitely planning to get all those to GA, but, of course, like, best effort, right? I don't know what issues we'll run into or whatnot. I also don't know, like, if we want to talk about how many features we should allow to go GA in a single release. Like, these are all beta, so they're on by default anyway, so I think it's just a matter of, like, doing more testing, adding more test cases and whatnot.
C
So I don't have a problem with that. I wouldn't want to move, like, 10 things from alpha to beta at the same time, yeah. So I guess, if we honestly think that we're gonna be able to clear out four or five of these betas into GA, then I do feel like we've got some breathing room. Now, we've got, it looks like, 10 or so in pre-alpha.
P
I mean, is there also a case for putting the KEPs in pre-alpha back into, like, a needs-review kind of label? Because, like, I know a lot of them are old, or, like, don't have owners anymore.
C
And I have zero confidence that they are all what we want anymore. We should definitely... if we think that they're not, let's just, like... if they're dead, if they're stalled, close them. Just take them off the board and untag them, and if people really want to bring them back, they can bring them back.
A
All right, folks, I think that is the end. The next meeting is September 2nd; it looks like two weeks from now.
C
Let's plan, for September 2nd, that we'll run through that project board and see what is or isn't moving, or has or hasn't moved. I challenge everybody here to make things move before then, and, if we're good, then we can let a few more trickle into the pipe.
C
So
so
I
yes
that's
a
great
point.
I
encourage
you
all
to
try
to
get
things
to
move
before
then
ping
me
or
dan
or
casey,
on
slack
or
email
and
let
us
know
which
caps
we
need
to
add
to
the
spreadsheet
for
sure
and
which
ones
are
maybes.