From YouTube: Kubernetes SIG Network 20170907
Description
Kubernetes SIG Network meeting 2017-09-07
A
B
A
I added that block based on the 1.8 burndown meeting that was held earlier this week. There's also another burndown meeting tomorrow, I believe, where they essentially finalize the release and make sure everything is on track to make it happen. But it looks like Chris Lucia actually has the most up-to-date status on that, so I'll let him speak to that in just a second. The other point down there, the stable release proposal: it turns out that is actually shelved for the moment.
A
C
A
C
C
We are missing one in there. This is on the GKE side: is it the network tiers one? Because we had four features, I think: two with network policy, one for IPVS, and then there's one final one. I didn't see it in the release notes, but it is an open feature that we have listed. I forget the specific feature.
B
B
B
C
B
C
B
E
The well-known one is the one about updating the storage policy, and I forget if that's the original issue or the PR to fix the issue. But you can find the linkage between them, and reading through the comments in the PR and, as I mentioned, on the mailing list, there seems to be a lot of confusion over exactly what needs to be done and what effects those changes will have. Yeah.
B
A
C
C
C
B
B
C
G
B
A
B
B
A
That didn't go in. What I'm working on is a GET call that will actually return the existing IP configuration and interfaces of the container that the plugin has set up; or, if it's not correctly set up, when something is broken, then the plugin will return an error, and then we need to make kubelet and the runtimes actually respect that error and do something useful when an error is returned from the STATUS or GET call. This is a per-pod thing.
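The per-pod GET semantics described above can be sketched in miniature. This is a hypothetical Python model, not the real CNI or kubelet code; the names `PluginError`, `cni_get`, and `ensure_pod_network` are illustrative assumptions.

```python
class PluginError(Exception):
    """Raised when the plugin finds the pod's network is missing or broken."""

def cni_get(pod_networks, pod_id):
    """Return the existing IP configuration and interfaces the plugin
    set up for this pod, or raise if setup is missing or broken."""
    config = pod_networks.get(pod_id)
    if config is None or not config.get("ips"):
        raise PluginError(f"network not set up for pod {pod_id}")
    return config

def ensure_pod_network(pod_networks, pod_id):
    """Kubelet-side sketch: respect the error instead of assuming readiness."""
    try:
        return ("ready", cni_get(pod_networks, pod_id))
    except PluginError as err:
        # Do something useful: mark the pod not-ready and trigger re-setup.
        return ("needs-setup", str(err))

networks = {"pod-a": {"ips": ["10.0.0.5/24"], "interfaces": ["eth0"]}}
print(ensure_pod_network(networks, "pod-a")[0])  # → ready
print(ensure_pod_network(networks, "pod-b")[0])  # → needs-setup
```

The point of the sketch is the second branch: today the error is effectively swallowed, whereas here the caller gets a distinct "needs-setup" signal per pod.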
A
We talked about that. I think we're still kind of discussing a general status thing upstream, but you could also kind of hijack the per-container, per-pod status to return a generic plugin error if you wanted, although that wouldn't tell kubelet that the node was unhealthy, if that makes any sense. There are two parts here, but I guess we don't have to dwell on it right now. I just wanted to say that I'm trying to work on a longer-term fix for this. Oh.
B
D
D
B
A
Yeah, the problem with that is that the runtimes, especially on restart, don't actually cache or checkpoint any kind of value about whether the network was set up or not. So if, for some reason, things quit between when you create the container and when you set up the network, then you could run into a situation where the container is started but its network is not ready, and on restart kubelet assumes that that means the network is ready, because there's no saving of any kind of network status.
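The missing checkpoint being described could look roughly like this: persist a per-container network-ready flag so a restart does not silently assume readiness. The file layout and function names here are assumptions for illustration, not any real runtime's checkpoint format.

```python
import json
import os
import tempfile

def checkpoint_path(state_dir, container_id):
    # One small JSON file per container recording network status.
    return os.path.join(state_dir, f"{container_id}.net.json")

def mark_network_ready(state_dir, container_id):
    """Called only after CNI setup succeeds."""
    with open(checkpoint_path(state_dir, container_id), "w") as f:
        json.dump({"network_ready": True}, f)

def network_ready_after_restart(state_dir, container_id):
    """Without a checkpoint, assume NOT ready (the safe default), which is
    the opposite of the behavior described in the discussion above."""
    path = checkpoint_path(state_dir, container_id)
    if not os.path.exists(path):
        return False
    with open(path) as f:
        return json.load(f).get("network_ready", False)

state = tempfile.mkdtemp()
print(network_ready_after_restart(state, "c1"))  # crash before setup → False
mark_network_ready(state, "c1")
print(network_ready_after_restart(state, "c1"))  # setup completed → True
```

With a checkpoint like this, a crash between container creation and network setup leaves no flag behind, so the restarted kubelet would re-run setup instead of serving a half-configured pod.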
A
B
D
I know a little bit about it; I debugged it a little. I don't know how his cluster was able to get into this state. What's happening is that there was a validation error on the API server side that prevents the endpoint objects from being updated, and the validation error says the update operation cannot change the node of an endpoint. So my suspicion is, about the node IP, there might be a bug in the endpoint controller, where it mixed up the pod IP and the corresponding node.
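A rough model of the validation rule being described (an endpoints update may not move an existing IP to a different node) might look like this. The helper is illustrative, not the real apiserver validation code, though the `ip` and `nodeName` field names mirror the Endpoints object.

```python
def validate_endpoints_update(old_addresses, new_addresses):
    """old/new: lists of {"ip": ..., "nodeName": ...} dicts.
    Returns a list of validation error strings (empty means valid)."""
    old_by_ip = {a["ip"]: a["nodeName"] for a in old_addresses}
    errors = []
    for addr in new_addresses:
        prev_node = old_by_ip.get(addr["ip"])
        if prev_node is not None and prev_node != addr["nodeName"]:
            errors.append(
                f"update cannot change the node of endpoint {addr['ip']} "
                f"({prev_node} -> {addr['nodeName']})")
    return errors

old = [{"ip": "10.0.0.5", "nodeName": "node-1"}]
new = [{"ip": "10.0.0.5", "nodeName": "node-2"}]
print(validate_endpoints_update(old, new))  # one error reported
```

Under a rule like this, an endpoint controller that accidentally pairs a pod IP with the wrong node gets its update rejected, which matches the stuck-endpoints symptom described above.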
D
But we have never seen any other issues with it before. Another possibility is that the node IP or the node name changed, like the same CIDR got re-allocated, and whatnot. So then I stopped looking, because the guy mentioned that his cluster was non-gracefully shut down because of some other mis-operation; basically all the nodes got shut down and then came back up. Yeah.
D
This is one of the problems, so I'm not sure what happened. But I want everyone to keep in mind: if we see cases like this, where the endpoints object has the pod IP matched to the wrong node for that pod, then there must be a bug in the endpoints controller. Could you summarize that?
A
B
B
Yeah, kube-proxy sends traffic to the wrong pod.
B
A
A
A
A
C
Good question. I think this is actually, he's saying that it shouldn't ever go to that thing, but if you look closely at his load balancer configuration, or the MQ app, he is specifying that it has a label of ml-services-gateway, so it makes sense that it would hit that one, because he specified it in the selector. Unless I'm just reading this wrong.
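The selector behavior being discussed is standard equality-based label matching: a service selects any pod carrying the labels in its selector, so a pod labeled ml-services-gateway is a legitimate target. A minimal sketch, with illustrative pod names:

```python
def selector_matches(selector, pod_labels):
    """True if every key/value pair in the selector appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "ml-services-gateway"}
pods = {
    "gateway-pod": {"app": "ml-services-gateway"},
    "worker-pod": {"app": "worker"},
}
matching = [name for name, labels in pods.items()
            if selector_matches(service_selector, labels)]
print(matching)  # → ['gateway-pod']
```

So the traffic hitting that pod is the selector working as specified, not a routing bug.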
D
C
B
G
C
It was set to "unknown" versus what it originally was, blank values. So when you go to recreate it, it says "unknown" isn't a valid cluster ID, and there's a late PR to force it back, but it's currently failing the etcd tests, and I'm not sure if that's because maybe there's an explicit e2e test that's targeting this thing. I don't know too much about the strategy, but I think it relates to etcd in some way.
G
C
B
B
A
H
B
I
I
B
B
C
Yeah, and at the community meeting they mentioned some blocking tests. I don't know if we really need to go through these, because a lot of them seem to be related to other things that are currently happening. I don't explicitly see any tests that are really owned by us; it's more that part of a suite is failing and it shows up in our dashboard.
C
B
B
Where SIGs can, you know, give a presentation and an update on what they've been up to, and a separate deep-dive working session, kind of similar to what's been at previous KubeCons. So I'll just get a poll for, you know, who's going to be there and would be interested in going to these things, and if there's enough interest then I will register us for one or both of those.
A
B
So there's an email that went out. Sara says these sessions within the track are an opportunity for SIGs to hold a talk and provide updates to the community, which is about as much as I know about it. I'm guessing, I don't even know how long the slots are, but it's probably similar to the updates that are given at community meetings, with potentially a bit more depth. Okay.
A
F
J
C
B
A
So I added that one. Does the same time as last time work for people? I think we should just set up another meeting for that time, if that works for everybody, and then go from there. Obviously we've got a couple of things we talked about, mostly services, and we just need to keep pushing that forward. I will coordinate with Tim just to make sure he's okay with that slot again.
B
A
B
H
G
H
H
F
F
H
H
F
F
H
F
K
Question for Dan Williams: if you can look in the chat window, I put a link to a GitHub issue, number 948. Oh, this is important for IPv6: we need the 0.6.0 CNI binaries added in. So I was just curious, is there anyone... I guess there's just some hesitation. We don't necessarily need it for 1.8; it doesn't look like that's going to happen, but definitely for 1.9 if we want to support IPv6. Is there anyone in the CNI community that's taking ownership? It looks like there's a... and the PSC that.
A
A
Of CNI, anything. But I mean, it's a question more of where those binaries go, like is it in a Google drive somewhere? I'm not sure where they are. But what I mean is, he says it's too late, and it probably is, but it can definitely happen for 1.9. Okay, if that answers your question. Okay.