From YouTube: Kubernetes SIG Network 20170921
Kubernetes SIG Network meeting from September 21st, 2017.
B
All right, so the 1.8 release is supposed to be next week, Wednesday. That got pushed back a little bit because of some of the test flakes, and also because they wanted to ensure that upgrading worked correctly; that was one of the reasons it got pushed back. That decision, I think, was made last week, or maybe earlier this week, in the release meetings anyway. So currently 1.8 releases next Wednesday, but as for the list, we don't really have anything to do at this point. Is that also your understanding, Kacie and Tim?
B
I mean, obviously, we still need to pay attention to flakes or last-minute urgent things as they come through. I have done a little bit of triage on some of the issues that we have, and I've noticed (these aren't necessarily 1.8-related) that there are a number of kubeadm issues coming in that seem to be multi-node related. So, you know, I think maybe if people start trying the 1.8 release after it gets released...
B
...if there are some issues with the way that kubeadm actually configures things, then maybe we'll start getting a lot more issues about that. I don't think they're SIG Network related necessarily, but it might be flannel related, with respect to how kubeadm does stuff. Sorry, it might be something to look out for in, like, the next two weeks or so, if we start getting a flood of those types of issues.
D
Part of that deprecation was to put those workarounds into the CNI bridge plugin, but after working through that a little bit and getting a little bit of pushback from the CNI maintainers, I sort of agree with them, and I'm proposing instead that maybe we should think about point-to-point as the default plugin instead of bridge, and that may move things along a little quicker.
C
Unregistered netdev, mm-hmm. Do you remember what version of the code we tested it against? Just to see if it hangs on the hairpin stuff; the thing is, I'm pretty sure it does. For completeness, we should just retest it. It shouldn't be that complicated to do. I don't know offhand how to do it, but, well...
B
Yeah, if somebody could do that, that would be awesome, just to confirm that it still exists. The point of me trying to reproduce it was to figure out if there was something, and then maybe I could start digging into the kernel issue and try to figure out where that reference counting was going wrong. But yeah, I wasn't able to reproduce it.
B
So, back to deprecation then. I mean, from the CNI side, it was kind of a question of: if nobody sees this on relatively recent kernels, then maybe we'd like to not keep piling hacks on; but if we can reproduce it on relatively recent kernels, then, yeah, maybe we either do need the bridge ebtables thing, or, you know, we need to go with something else like point-to-point. Oh my.
D
So, I mean, I did some testing: I put ptp in a GCE Kubernetes cluster, and I did sort of the equivalent of the soak test that we have for the bridge plugin for kubenet, and it looked pretty good. I think that it would work. I would obviously test it a lot more if we chose to go this route, but I wanted to sort of get consensus. But I guess first we can figure out if we still see this, and then we can sort of go from there.
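To make the point-to-point proposal concrete, here is a minimal sketch of what a CNI config list with the ptp plugin as the default could look like. This is an illustrative assumption, not a config shown in the meeting: the network name, cniVersion, subnet, and the chained portmap entry are all made up for the example.

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "ptp",
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The ptp plugin gives each pod its own veth pair and a per-pod route on the host instead of attaching everything to a shared bridge, which sidesteps the hairpin workarounds discussed above.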
C
So I think we've got a pretty strong existence proof there, and, you know, if Daniel did the initial testing and it doesn't stink, I think we should press ahead with that and see if we could make that change. We have to think through whether we want to make that just a silent default switch in 1.9, or whether we make it an opt-in thing, or whether we alpha it or something like that. I don't know what guards we would want to put in place for that.
C
And I'm open to hearing suggestions on what we would need for that, but I do think it's important enough to push in 1.9, 1.9 being a very short cycle where we're focusing a lot on stability and cleanup and paying down debt. It is kind of debt: it's a lot if we can delete it, and if we can't, then it's actually debt accrued, right? Yes, exactly.
B
Very important is to get tests for it in, or at least to make sure that we do a very good run of all the tests, because, I mean, the only reason to put it in alpha for 1.9 is to get people to start using it and report bugs, and if we can do a lot of that ourselves before 1.9, even then, maybe we'll be in a stronger position to switch it right after 1.9.
B
Yep. Sorry, I just had to close my window so nobody hears airplanes. So, yeah, Tim found a slot that works for him, so I'm just kind of throwing that out there: does the slot listed in the agenda, which is Wednesday, September 27th at 9 a.m. US Pacific time, work for other people, or most people who are interested in the multi-network conversation? The goal of this next meeting would be to discuss all of the proposals and discussions around services and multi-network, and the impact on the kube API for services.
B
I did just add a quick topic, which is host ports, and I was curious about the state of host ports in the CNI driver. We had talked, I think, long ago (and I probably missed the end conclusion here) about kind of having that split between the runtime and the actual network plugin itself, where the runtime would do the port reservation on the host.
B
But the plugin would actually do all the iptables stuff. Currently, I don't think the CNI driver does anything around that, but it still pushes the host ports out to the network plugin, and I think that happened when the 0.3 spec update happened for CNI. So I was wondering if the runtime doing the port reservations is still the plan of record, and therefore whether we need to update the CNI driver to do some of that.
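For context on how the 0.3-era spec pushes host ports to plugins: the runtime declares which capabilities it honors and injects the actual mappings under a runtimeConfig key when it invokes the plugin. A rough sketch of what a chained portmap plugin might receive on stdin (the network name and port numbers here are invented for illustration):

```json
{
  "type": "portmap",
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "runtimeConfig": {
    "portMappings": [
      { "hostPort": 8080, "containerPort": 80, "protocol": "tcp" }
    ]
  }
}
```

The plugin receiving this is expected to install the iptables DNAT rules for the mappings; nothing in this contract reserves hostPort 8080 on the node, which is the gap being discussed.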
B
Well, also for the record, the other runtimes are a little bit mixed here. The CRI-O runtime, for example, does all the port stuff internally itself, without regard to whether CNI plugins want to handle the ports themselves. So that's something that I'm going to go fix in CRI-O. I'm not sure what other runtimes do there, but it's a little bit confused right now, and so I feel like we probably need to bring some sanity to it in the 1.9 timeframe, right?
C
With simple implementations of host ports, it won't work, yeah. And so, sorry, go ahead. So then that leaves the last part, which is: do we, like, capital-N Need to reserve those ports by actually opening them? No, we don't need to. I think it's polite; it's a way of preventing people from shooting themselves in the feet, and if we're gonna do that, I think it should probably be the plugin's problem to do that. Okay.
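The "polite" reservation being described is just holding the port open so that other processes fail fast if they try to bind it. A minimal sketch of the idea in Python (the helper names are hypothetical, not from any Kubernetes or CNI code):

```python
import socket

def reserve_host_port(port: int, host: str = "0.0.0.0") -> socket.socket:
    # Bind and listen on the hostPort, then keep the socket open.
    # While this socket is alive, any other process that tries to bind
    # the same port gets EADDRINUSE -- an advisory guard against people
    # shooting themselves in the feet, not a hard guarantee.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, port))
    sock.listen(1)
    return sock

def release_host_port(sock: socket.socket) -> None:
    # Closing the listener releases the reservation.
    sock.close()
```

The reservation is only advisory: real hostPort traffic is redirected by iptables DNAT rules before it would ever reach this listener.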
B
I mean, I think the only downside to that is that if you're going to use simple CNI plugins (like, for example, if we do spin kubenet out into a CNI config list), nobody would do those port reservations. Or we'd have to create some kind of plugin for CNI that you'd put into the config list, or enhance the CNI portmap plugin to, like, fork off some kind of daemon and then communicate with it over RPC to do those port reservations, or just do no port reservations at all. And this is exactly...
C
...why we haven't pushed real hard on doing it in the CNI plugins, right? Yeah. But that sort of opens the door to a different question, which is (I'm going to try to phrase this as delicately as I can): how happy are we, overall, with CNI as it currently stands, and would we be open to looking at changes that would make those sorts of things easier, potentially at the cost of making CNI more complicated?
C
To put the question without being cagey: the storage plugin team has been working on a specification for better storage plugins, and after careful examination of all the problems that the current storage plugins have, they decided that it made a lot more sense to do an active plugin, a plugin that is actually alive all the time, takes an RPC call over gRPC, and responds to that...
C
As
a
as
a
way
of
doing
the
storage
mounts
and
attachments,
it
there's
a
hundred
reasons
why
it's
better
and
probably
five,
why
it's
worse,
the
question
I
guess
is
I'm,
sorry
and
also
at
the
same
time,
we
have
people
working
on
device
plugins,
which
is
you
know,
stuff
like
GPUs
and
those
sorts
of
things
well
in
Nix.
Unfortunately,
well
yes,
and
so
we
have
to
jump
the
gun
there.
C
Interestingly,
a
lot
of
these
device,
plugins,
like
GPUs
in
particular,
are
simultaneously
device
plugins
and
plugins,
and
it
would
be
really
nice
if
nvidia,
for
example,
could
ship
one
driver
that
was
one
demon
set,
that
you
ran
on
your
cluster
and
that
demon
set
satisfied
both
the
networking
interface
and
the
device
interface
and
we've
even
heard
talk
of
things
that
want
to
satisfy
networking
and
storage.
At
the
same
time,
and
so
the
question
I
guess
I
put
forth
to
the
group
without
expecting
an
answer
here,
but
you
know
start
to.
Let
it
percolate
is.
B
...a specific conclusion, because I think at this point we didn't have any particular need. The only use case that we had up until now was Windows, where it's a lot more expensive to fork processes or start new processes, and so the Windows people wanted us to just have kind of an RPC interface that they could tunnel that stuff over. So, I mean, we can continue to bring it up in the meetings and start talking about it a little bit more, I mean...
B
Yeah, so basically think about cases where you have DHCP or IPv6 SLAAC addressing for a pod. Either of those mechanisms can change the address assigned to a pod at will, without kube being able to know about it. Currently, with IPv6 routing, you can just get a completely new router advertisement with a completely different prefix, or new DHCP options, same thing on a renew, and currently there's no way to send those options back to Kubernetes. I know kube does not currently expect the networking setup of a pod to change.
I
All right, thank you. So I'll just put my toe in the water: I think actually I've heard of something like that, in addition to being someone who's doing something like that. I think I'm not the only person here who's talking about building an infrastructure-as-a-service type of cloud using Kubernetes technology, and in that world the configuration of your compute units does tend to be more dynamic.
C
Floating the idea, sure. I think we've got a bunch of potential options here. One of the big things that I like about a more active, daemon-based model is that the discovery of it is easier, and, you know, the installation of a driver is easier. It is more heavyweight, so it's got some pros and some cons.
C
Alright, well, we'll table that for another discussion, when we've got a bit more gravity behind actually doing something there. I don't think it's on anybody's near-term plans. We probably should; I think that's variable. That's fair! Who wants to start the conversation? Right, Dan is laughing because he's chomping at the bit to do this.
C
I don't think the plugins, and the infrastructure we've got around plugins, are really well-suited for that right now, given the exact nature of CNI, with it exec'ing a process. Having a process whose only job was to hold a port open would be a start, but we still have to deal with what happens when that process dies, and who pays for that process's resources. I mean, no, yeah.
C
Could do. You know, I think if we're gonna consider a CNI change, I'd rather consider that first. This isn't killing anybody; again, nobody's ever called me to yell at me for not opening a port. And in fact the kubelet does it today, but it does it in a buggy way anyway, so it's only half working even in kubenet.
C
Something for 1.9; I can take a few things. Okay, so one I'll throw out here that came up in discussions on some other email threads (and I apologize if I don't loop all the threads together): Ingress has sort of languished in beta for quite a long time. There's a couple of known issues with the specification of Ingress that we could probably address to make it somewhat more portable, and then there's a very big open question about the plethora of annotations that almost everybody ends up using when they use Ingress.
H
This is Michael; I'm from Poway. Can you guys hear me? Yeah, we hear you. Yeah, so we have a to-do list, and maybe what we need to do is scope out what we want, what we can do, in 1.9. So if you guys can give me the list of things that you think we have to have, then I'll put those as high priorities for my team to implement.
C
And similarly, I still encourage everybody to get involved in code reviews, like some of the bigger code reviews. Just because they get assigned to me or to Daniel or to Casey or Dan, it doesn't mean that we can't use more eyes on them. So, for anybody here who's looking for more technical, concrete ways to participate, those code reviews are a good place; many eyes make shallow bugs.