From YouTube: Kubernetes SIG Network meeting 2019-08-08
Description
Kubernetes SIG Network meeting, Thursday, August 8, 2019.
They put the localhost address on the node object, and they're willing to [inaudible] to the node pool. They are using 127.0.0.1, and they're in a pod without host network, so that wouldn't work. Who is putting localhost on the node? I think at one point something was trying to use 127.0.0.1 in that environment, I don't know, yeah.
All right: kubelet start fails streaming, service stopped unexpectedly. Looks like you had this one, Tim, do you want to pass it off? No.
I mean, it sounds like a potentially real feature request. I don't have any thoughts on how to design it at the moment. We can either... you can probably just mark it as kind/feature and remove the triage label. Okay. If anybody wants to help think about how to design this, help is welcome. I do not have the bandwidth in the near future to think about it.
All right, CoreDNS endpoint: maybe API machinery, and also not us. Okay. So that's the full issue backlog, so perfect timing. I want to look a bit at the PRs, because we have a lot of those. How far back should we probably jump?
A good point. I've been going through... I was out for two of the last three weeks, so I'm going through the PRs that are assigned to me, and I know there are many people on this call and many of these PRs in this list that fall into that category, but yeah. Maybe we should start at the oldest ones. Okay, see if there's anything we can just... well, I...
The problem, frankly, is... like, I'd love to see the PR get resolved or closed, and just removing ourselves from it doesn't fix that, but short of getting SIG Cloud Provider to do the same process, there's not much we can do with it. So we can just remove SIG Network, reassign SIG Cloud Provider, and make it Henry's problem. Yeah.
More recently, we've been arguing about what that means, and the general consensus is there's no such thing as a control-plane node, and so Clayton's on a mission to eliminate anybody who uses the node-role label within core Kubernetes. It's totally fine for users who want to use that and make assumptions about their own environment, but we shouldn't have built-in stuff making assumptions about the environment. So one of those users is the service controller.
When it prepares the list of nodes to feed into the cloud provider for load-balancer eligibility, it filters out any node that has the node-role label. So part one of the proposal is: let's just remove that. And then you say, well, how do we filter those machines out? Well, there's already an annotation that was added a while back as an alpha, which was exclude-balancer, and so maybe we just formalize that, move it to GA, and tell people, like, if you're kubeadm...
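To make the mechanism being discussed concrete, here is a minimal Go sketch of the kind of eligibility check described above. It is illustrative only, not the actual service-controller code; the function and flag names are invented, and the annotation key is an assumption based on the alpha exclude-balancer annotation mentioned in the discussion.

```go
// Illustrative sketch: decide whether a node should be eligible as a
// load-balancer backend. Names and the annotation key are assumptions,
// not the real service-controller implementation.
package main

import "fmt"

const (
	nodeRoleMasterLabel       = "node-role.kubernetes.io/master"
	excludeBalancerAnnotation = "alpha.service-controller.kubernetes.io/exclude-balancer"
)

// eligibleForLoadBalancer mirrors the two behaviours under debate: the
// legacy path drops any node carrying the master node-role label, while
// the proposed path relies only on an explicit exclusion annotation.
func eligibleForLoadBalancer(labels, annotations map[string]string, useLegacyRoleFilter bool) bool {
	if useLegacyRoleFilter {
		if _, isMaster := labels[nodeRoleMasterLabel]; isMaster {
			return false
		}
	}
	if v, ok := annotations[excludeBalancerAnnotation]; ok && v == "true" {
		return false
	}
	return true
}

func main() {
	worker := map[string]string{}
	master := map[string]string{nodeRoleMasterLabel: ""}
	fmt.Println(eligibleForLoadBalancer(worker, nil, true))  // true: plain worker
	fmt.Println(eligibleForLoadBalancer(master, nil, true))  // false: legacy role filter
	fmt.Println(eligibleForLoadBalancer(worker,
		map[string]string{excludeBalancerAnnotation: "true"}, false)) // false: explicit exclusion
}
```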
Two more notes on the problem. One is that the service controller is not the only consumer of the node-role master label; there are also a lot of ingress controllers out there. My thinking is that they use the same library to determine which nodes to include behind the load balancer or not, and I think they share that. Can you respond to that on the issue? Sure, yeah.
Yes, that does one thing. And then the second thing is: let's say, for a regular open-source Kubernetes deployment, usually there is no kube-proxy running on the master, right? Usually there's no kube-proxy. And let's say we removed the node-role label and the service controller treats the master node and other nodes the same. That includes the master node into rotation, but there's no kube-proxy running on the master, so that means either we need to standardize on including kube-proxy running on the master, or exclude it.
Is it at least one release where we have the old annotation supported, another release where we have it enabled but basically covered by a gate, and then go in the other direction from normal feature gates? The feature here is the removal of a feature, and so we would start with it enabled, move to it disabled but enableable, and then remove it over the course of several releases.
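A minimal, self-contained sketch of that rollout shape, assuming a hypothetical gate guarding the legacy node-role filtering; the gate name and code below are illustrative, not an actual Kubernetes feature gate.

```go
// Sketch of a gate that flips default across releases and is then removed
// entirely. The gate name is hypothetical.
package main

import "fmt"

// legacyNodeRoleBehavior stands in for a feature gate guarding the old
// "filter load-balancer nodes by node-role label" behaviour.
// Release N:   defaults to true (legacy behaviour on)
// Release N+1: defaults to false, but operators can re-enable it
// Release N+2+: the gate and the legacy code path are removed
var legacyNodeRoleBehavior = true

func nodesForLoadBalancer(nodes []string, isMaster func(string) bool) []string {
	var out []string
	for _, n := range nodes {
		if legacyNodeRoleBehavior && isMaster(n) {
			continue // legacy behaviour: drop master nodes
		}
		out = append(out, n)
	}
	return out
}

func main() {
	isMaster := func(n string) bool { return n == "master-0" }
	fmt.Println(nodesForLoadBalancer([]string{"master-0", "worker-0"}, isMaster))
}
```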
That's the proposal, so we could get all the tooling and everybody else up to the correct exclude-balancer annotation. And there are a few other places internally that use the node role, so he's working through those too. From the networking side, though, I don't have confidence in what the right semantics are. So I laid out, in an issue that I linked off the agenda, sort of an analysis of, like, should X work, should Y work, should it be required or allowed? It's really not clear to me.
This does not necessarily mean there is no service proxy on the node; that is a totally different decision. The intention of this annotation was a hint to cloud providers, saying: if you're doing two-hop load balancing, don't use this node as a general gateway for the load-balancing infrastructure.
The scheduler has no such information, right? And in fact it can change dynamically. Don't conflate this with just masters. The reason the annotation was added in the first place was to be able to decorate other nodes, non-master nodes, and say: don't make this part of the load-balancing infrastructure. Not just for masters. These are really orthogonal questions, like: should you schedule a pod behind a load balancer on your master when your master doesn't have load-balancer access? You probably shouldn't do that, but what about the non-masters?
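As an illustration of decorating an arbitrary (non-master) node this way, here is a hedged client-go sketch. The node name is made up, and the annotation key is the alpha one mentioned earlier, which may differ from whatever eventually goes GA.

```go
// Sketch: mark a non-master node as excluded from load-balancer
// membership by patching an annotation onto it. Node name and
// annotation key are assumptions for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"annotations":{"alpha.service-controller.kubernetes.io/exclude-balancer":"true"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.TODO(), "worker-7",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched node:", node.Name)
}
```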
Yeah, maybe. So I posted it so that we could have the issue on the record and people could argue about it there; I didn't intend for it to be here. Please go take a look at the issue and weigh in, and if we can get to semantic agreement, then I can tell Clayton what to name the annotation and how to document it.
So that was... there are a couple more items, but they're from Lachie and Khaled, who aren't here. I think the main thing is just that the services/endpoints dual-stack phase two PR is ready for review; those who are interested in reviewing that, please take a look at it. And then there's an FYI status document you can look at if you're interested in the overall status of that effort.
One: we have a KubeCon prep coming up. We have until, I think, next week (maybe the end of this week) to decide whether we want to do a maintainer track session again. This is KubeCon San Diego, which is generally the most well attended of all the KubeCons. The SIG Network intro and deep dive in Barcelona was very well attended.