From YouTube: Kubernetes SIG Network 20161215
Description
Kubernetes SIG Network 2016-12-15 call audio recording
C
What does it actually mean to replace kube-proxy? What is the Services abstraction, really? What is the actual expectation of pod networking? We've sort of hand-waved about that for a long time. We know that folks like Mike are doing interesting things with health checking that fall outside the norm of what most of us are doing. I would like to crystallize that in a spec in early 2017, so that we can tell people that we've at least put thought into these things. When you say "spec", Tim...
C
Okay, so I'm assuming, maybe falsely, that this is our last SIG Network meeting for the year and that we'll regroup sometime in early to mid January. Do we want to start doing task assignment today, or should we leave that until early January, when we can have a little bit more detailed breakdown of the docs that we think people own?
C
If we assign it to everybody, nobody will do it. Would it help to put it on the notes doc as action items, for at least you and me and Dan, to go through the existing docs? Let's generate a list of the docs that we think we own; I bet we'll be surprised. And I'll take the action to invite some of our docs people to the next meeting in January.
A
Cool, so shall we move on to the next thing? Sure. The next item on the list was kube-proxy. I'm wondering if, before getting into some of these more technical ones, we should just jump to the 2017 planning doc and bring people's awareness to it. So, starting from the bottom: I put in a bullet, "2017 planning", where I'm hoping we can start to...
A
You know, flesh out a plan for what 2017 is going to look like. I've laid this out with a high-level goals section, where I've tried to capture, to start, just three kinds of high-level things to improve, those being user experience, testing, and finishing existing features. That's just what I started with, but there's no reason to say there's not more, or less, that we could be focusing on.
E
Just adding an item here, based on my email earlier: IPv6 ought to show up on the 2017 plan somewhere. Yeah.
C
But that's not the same thing as multiple networks. I think these things are all in bounds for 2017, and I think it's good to enumerate them all; then we can try to rough-sort them. I love planning, but if we think that we're going to be accurate beyond a few months with this project, we're probably deluded.
D
Yeah, I agree. On that list, one thing that surprised me is this business about empty list versus nil; I thought we'd already discussed that and decided.
C
Well, we discussed it, but then we went off and tried to actually implement it properly, and we found a bunch of options. So I would like to make that a point for discussion today, so that we make a final decision on it. Dan Winship, are you here? Yes? All right. I would like to discuss the details: you can give a quick update on the experimentation that we've been doing, and then we can make a decision as to what we want the final decision to be. Great.
D
But we do have other topics; we can add them. Do we have enough for GA?
C
I think it's a good doc. When I was reading it over this afternoon, though, I don't think I was thinking of it in the entirety of 2017; I was really thinking of it in the short to medium term. I'm going to take some time to look at it with an eye towards all the crazy crap we want to get done in 2017. Yeah.
B
I mean, I know, for myself, personally, I am going to be working on some of these new features in the first half of 2017. So, you know, I'm certainly going to be trying to do a lot of that work upstream, getting it committed first, or just preparing the way for that. So I think at least some of this stuff is relevant, and I'm sure the others on this call are working on stuff like this, too.
G
So I have a quick question, and forgive me if this is a difficult one; I'm new to the party and only sort of understand things. One of the things I'm looking at here in the 2017 plan, one of the questions I wrote there, was about multi-tenancy support. So can we potentially discuss and include that as a part of the 2017 charter? Yeah.
D
In fact, I've been looking out for multi-tenancy issues, and a couple of things on the roadmap could foul up the ability we already have for multi-tenancy. What we have now does let you, kind of from the outside in, impose multi-tenancy, and I want to keep that possible, not ruin it with some of the other work.
C
So we had decided, sort of in the abstract API space, that the difference in semantics between an empty list of allowed ports and a non-specified list of allowed ports was a correct and potentially useful semantic, and so we spec'd it that way. Then it turned out that that's not the way Go's JSON unmarshalling and proto marshalling work.
C
So, in order to fix the tests, I set out to convert all of our API machinery to no longer assume that nil and empty are the same thing with respect to a slice. The pull request started off really small, to make it basically work, except we ran into a million problems; specifically, protobuf still gets some of the encoding wrong, and it's hard to make that assertion generically.
C
So the case that blew up on us most readily was a map of string to slice of string, and there's no such thing as an optional map value in protobuf. As it turns out, not in protobuf in the native format, not in the protobuf rendering for C++, and not in the protobuf rendering for other languages. So that got us talking. Now, Clayton, who's not here, I think, sort of argued we can fix this, and that the only language you really care about is Go, but I'm not really buying that.
C
So the alternative, which Dan Winship implemented in a pull request, or a gist, or something that I have open somewhere, was to take that slice and wrap it in a pointer to a structure. So now it's very clear, when a field comes through the API, whether the user specified it. It means the API is a little more chatty, you have one extra level of nesting, but it's very unambiguous when you intended to specify something that was empty versus when you didn't specify it at all.
C
So it looks okay to me. It is language-agnostic, so it should render correctly in every language. The problem, of course, is that it still actually sort of breaks when you get down to maps, because, again, there's no such thing as a non-specified map value. We don't have this case yet, but if we had a case of a map of string to port list, we would still not be able to tell whether the user had specified the port list or not. I'm not sure I'm worried about that.
D
Back on the other alternative, where we do keep the distinction: you proposed that in the API we would wrap this potentially-optional slice in a struct and make a pointer to it, so that it could be optional. And you said that's maybe how we would do the API. I guess you're talking about the Go rendering, not the JSON? And yes, it would.
C
That's right. I mean, it's not atypical; we do this a lot when we change something in a subtle way between beta and GA, and implementations just have to know that if they're reading a v1 object, their code will not compile, hopefully, if they were accessing a field. Or it's ugly. Unless you're using Python, in which case, good luck.
C
Yes, I hear that argument. The engineer in me says I agree with you. The practical, lazy person in me says, yeah, I think we can get away without it. So I leave it to the consensus of the group. I don't actually have an implementation here, so I don't really have any skin in the game; I'm looking to the folks who are implementing it as to what you prefer to do.
C
All right. How about this, Dan: you've already got most of this PR done, right? Yeah.
C
You'd have to write the conversion logic between them. Okay, so maybe it's worthwhile to try to flesh it out and actually get it to pass all the API round-tripping. And, I mean, maybe I'm going out on a limb here, but what if we made this the v1 PR, like, this is the PR that moves it to GA and introduces the semantic change in the process? Then we can take any comments back to the PR and get people like Chris Marino...
C
...and people who aren't on the call, obviously, to pay a little bit wider attention to it. What I don't want to do, Dan, is waste a bunch of your time. So maybe we can pull off a really simple version of this first. I don't think it's that complicated; mostly it's copying things around. Yeah, it shouldn't be.
C
What do you think, guys? Should we just try that? I'm not quite sure what your proposal was. In terms of the proposal: make the API change as we move it from beta to GA. I mean, the door is open for 1.6 changes, so why not do it now?
J
So the doc has been out for some weeks now, and I got what I feel is a good amount of feedback. Just a couple of days ago I sent out an email which contains basically what I think are the next steps, and I actually started working on them. So if anybody sees something that doesn't work, or has suggestions, or anything, guys...
C
Yes, so I want to raise the issue of the mistake that we made, well, a mistake, the thing that we didn't do early on with kube-proxy, which was that we started by describing everything in the implementation, and we didn't really describe it in the abstract, like, what is the sort of API that we're trying to construct? It might be worth thinking about whether you can describe it in a way that doesn't include the details of iptables, but says what we are expecting the abstract, node-level service proxy to do. Okay.
J
Yes, yeah. I think the document has quite a bit of that, but it's phrased in a different context. So I can turn that into a sort of real design doc, rather than what it is right now, which is more in the form of a proposal, like jotting down some ideas. Sure.
C
I just want to call out the big issues that I see with it. They're not problems, but they're things that we need to be aware of. You know, the proposal as you've written it traps all traffic, right? I've raised this concern with you separately, but I want people to look at it with that in their eyes.
J
At least in my mind, this was more of a kind of next-level prototyping: basically, have something that gets the job done and see if it hits any unexpected issue, or if there is some piece we forgot. So I do not expect to show up in a couple of weeks and say, okay, this is it, the proxy is done, I've beaten on it, it's over, on to something else. So, thanks.
D
Along those lines, let me also point out, and I have to apologize, I got distracted or somehow missed it and didn't comment on this yet: one big flaw I see, remembering multi-tenancy, is that putting up a proxy per node runs into the same grief that kube-proxy runs into today for multi-tenancy, which is that the nodes are not dedicated to a given tenant. Yep.
J
So, just to address the second part as well, which is the per-pod case: in that case the situation is quite different. Again, based on what I have been experimenting with, I believe that we can do what we want just by having a few iptables rules within the pod's network namespace. So at this point, in this case, the question becomes: how do we inject the proxy container into the pod, and how do we program the iptables rules?
J
The thing that seems to make the most sense to me is to use a clever admission control plugin that does this job, possibly in a generalized way, so that it ends up being useful not just for this use case but for more. And so, along those lines, what I've been experimenting with is to have, basically, an init container which comes up before the application container and the proxy container.
J
It just programs iptables and then exits, and it has to run with privileges for that. After that, the proxy container and the application container come up; the network interface has already been programmed to do the routing in the right way, so that, from that point on, we don't need to make any more changes. So we don't need any external component, at least for what I've seen so far.
J
Likely, including the fact that it seemed to be a little bit higher risk, I had planned to start from the per-node case, if anybody wants to help. But if anybody has reasons why we should maybe work on the per-pod case first, I'd be happy to hear them, and not just eventually: I think that, at least at the beginning, they're both important, and we want to have both going.
J
There is an extra problem in the per-node case, which I haven't looked into, but Brian has been looking into it: it may be useful, or even required, to make sure that we preserve the source IP address of the traffic. That will be doable if the proxy works in fully transparent mode. Do you want to comment on that, Brian? I sent an email today. Yeah.
K
Yeah, so I have been doing some prototyping manually, myself; a proxy works in full transparent mode, with some quirks, and I'm using some pretty hairy iptables rules to make it work. I'm also curious if there are any use cases other than the outbound case. The inbound case, I haven't thought through yet, and I'm not sure how that plays into some of the other solutions.
M
Because he is the owner of the original GitHub issue. Okay, so: in our CRI testing there's a bunch of test cases failing, because the CRI currently doesn't support host port. So I wanted to enable it, get the CRI testing going, and then prevent further disruptions, which is what I'm currently doing.
B
That's currently in the kubelet, for opening the host ports and keeping those reservations. I don't have anything to post in a PR yet, because it depends on some of the other PRs in flight, but I'm happy to sit down and talk with you a little bit more about that, or write up on the list what I was planning to do there. All right, sounds good.
A
Next topic: yeah, Chris, was there anything more you wanted to say on IPv6?
E
So IPv6 is obviously a very broad topic. I sort of volunteered to capture and document the scope of what IPv6 support actually means. I mean, there's the control plane and there's the data plane, as two very separate and distinct buckets. And again, I can't commit to actually devoting development resources to this just yet, but I think it's something I'd like to understand, and the people I work with would like to understand, a little bit more specifically, about, you know, rollout and capability.
C
There was some discussion of this last week, or the week before, after re:Invent, on Twitter; Jovita and a couple of other people were agreeing that we should get together to talk about the meaning of IPv6 in Kubernetes. I think you've got the right of it: there are a number of topics to figure out, and they're largely not tied to each other, so we can actually move forward in many different directions.
E
And if OpenStack is any indication of how deep these tentacles extend, I think this could be a very long effort.
I
Let me try to make this practical, because I think we're actually not far off, at least to get partial support in, given that we have a prototype working. What would seem the logical next step to actually get stuff into the Kubernetes codebase? First of all, the ability for a pod to expose both v6 and v4.
C
I think there were two issues there, Thomas. One is: how can we just allow an interface to have both v4 and v6 addresses? That, I think, is going to require some amount of plain plumbing. The second part is BPF in kube-proxy, which I'm afraid we're going to run out of time to talk about today, but everything I've seen about the BPF stuff looks like a panacea, frankly, so I'm still waiting for the catch, but it looks fantastic.
C
And that's, I mean, that's what we did with the transition from userspace to iptables, right? We checked to see if iptables was present and allowed, and if it was, then we used that, so we can totally do the same trick. I think BPF is the most promising technology we've seen for, like, the kube-proxy that we actually want, so I would love that.
B
The only concern I have there is that IP addresses are sort of tied to multiple networks. So, I mean, we could do two things. We could add additional fields, kind of like a primary IPv6 address, like we currently have for IPv4; we could do that, and then add multiple-network support later. So that might be one path forward, but, you know, I'm not sure. I'd like to get a v6 working...
C
...group together. And, really, I mean, I'm greedy here, so I want to see the BPF stuff sooner, in purely non-v6 mode; like, I'd like to decouple those. But I think it's worthwhile getting a v6 working group together, people who actually know which end is up with respect to v6, to make some recommendations about what the right changes are, because I honestly do not know what the answer is.
I
Again, since we have five minutes left, I have a question for Dan. You have been measuring the latency of reconfiguring thousands of iptables rules. What is the actual requirement or use case driving this? Is it because you're introducing delay to your services, or do you care about the latency of reconfiguration? What's driving this?
B
Yeah, we do expect probably a bunch of service churn, but I don't think, at this point, we would expect 10,000 services to change every couple of seconds. It would be, you know, I mean, eventually you add five or ten services every couple of seconds; maybe you would get up to 10,000 or more services at some point, but the churn would certainly not be that great.
C
Can I propose, then, if we're looking at 1.6 as a stabilization release, that we make this a good topic for that? We're out of time today, but let's define what we think the important metric and goal here is. Maybe it's measured in terms of CPU usage of kube-proxy over a one-second window, or something like that. Then we can focus on making sure that we've got a test, one that proves there is a problem, checked in, and then we can work on the various...
C
I mean, I'm not sure; there are a million things that could have gone wrong in the, you know, manual testing that I did, and I think it's worthwhile. You're not the only people who reported this, so if this is a real problem, we should really try to spend some energy to reproduce it. We're going to have some new people come online in early Q1, so, you know, this might be a good starter project for people, those sorts of things. Yeah.
B
Something on my to-do list is to collapse the endpoint and service updates before we even get to the point of syncing iptables rules, based on our discussion last week, Tim. So I was going to do a second PR for that, which was largely independent of the iptables-rules PR. Okay, so, I mean, that's kind of attempting to do your multi-layered mitigation suggestion. Yeah.
C
I mean, I love fixing problems, but I want to convince myself that this doesn't unfix itself somehow, and, you know, without a test to capture it, we'll just be back here again. Yep. All right, I guess with that we're out of time. I wish everybody happy holidays, and we'll see you all in early January. I guess... what day is our next meeting?