From YouTube: Kubernetes SIG Network 20171130
Description
Kubernetes SIG Network meeting from November 30th, 2017
C: Do you guys see the presentation now? Yep? Okay. So, the idea behind this tool: I have been wanting to debug applications, so there is an application running in a Kubernetes cluster. As a user, it's always a challenge to go in there and get all the iptables rules if a pod is not able to dial out, or not able to talk to any other service inside the cluster. So with that in mind, I have been working on this tool. It's not complete yet, but I have it at a point where I can do a PoC and get some feedback: whether it's really helpful, or whether it's of no use, so that I can think of something else, and get more feedback on how I can improve it if it is helpful. I have only two slides to cover, which will give the flow of what I am doing. So in here, I have four main components.
C: Now, kubectl doesn't support attaching like that, so I had to go this route and use the Docker API directly to attach to the running container. So I attach the worker in there, which just sends traffic to the destination provided, and the collector is sitting on the master, or somewhere in the cluster itself, and gets all the logs from the inspectors. The collector can send them to different back-ends to view those logs; right now it's just displaying on standard out.
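The worker's job as described, dialing each destination and reporting back, could be sketched roughly like this (an illustrative sketch, not the tool's actual code; the `targets` format and function names are assumptions):

```python
# Sketch of a worker probe loop: try each TCP destination and collect
# results. A real worker would send these results to the collector over
# the network; here they are simply returned as a dict.
import socket

def probe_tcp(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_probes(targets):
    """Probe every TCP target and key the result by destination:port."""
    return {
        f'{t["destination"]}:{t["port"]}': probe_tcp(t["destination"], t["port"])
        for t in targets
        if t.get("protocol") == "TCP"
    }
```

The ICMP case would need raw sockets or a ping subprocess, so it is left out of this sketch.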
C: The steps are: I'll just deploy a controller and a collector, and then I create my CRD. In that CRD definition I have the pod I need to debug and the destinations to check, and then I just trace the logs where the collector is. Then, to clean up, we just delete the controller and collector, and delete the tracer CRD, and everything is cleaned up, basically. So I'll just start with the demo.
Well, good.
C: A controller: the controller is just watching for the CRD objects. Once those CRD objects are created, it will create an inspector. So I will deploy a controller right now, and a collector, which will collect all the logs from the inspectors. So once I start this... excuse me. So the collector and controller are up. Now we can go ahead and create the CRD, and I can show the definition of my CRD as well.
C: Create a tracer. So this is where I tell my CRD that I want to debug orders: I name a pod which has the name orders, so that's the selector I provide, and these are the target destinations I want to debug. Like, I want to see if the pod can get outside (ICMP), or if it can reach a payment service within the cluster on TCP port 80.
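The tracer object described here might look something like the following sketch (the field names `podSelector` and `targets` are assumptions, not the tool's actual schema):

```python
# Hypothetical Tracer custom resource, expressed as a Python dict.
# Field names are illustrative guesses, not the tool's real schema.
tracer = {
    "apiVersion": "tracer.example.com/v1alpha1",
    "kind": "Tracer",
    "metadata": {"name": "debug-orders"},
    "spec": {
        # Select the pod to debug by name.
        "podSelector": {"name": "orders"},
        # Destinations the worker will probe from inside the pod.
        "targets": [
            {"protocol": "ICMP", "destination": "8.8.8.8"},
            {"protocol": "TCP", "destination": "payment.default.svc", "port": 80},
        ],
    },
}

def target_summaries(t):
    """Render each probe target as a short human-readable line."""
    out = []
    for tgt in t["spec"]["targets"]:
        line = f'{tgt["protocol"]} -> {tgt["destination"]}'
        if "port" in tgt:
            line += f':{tgt["port"]}'
        out.append(line)
    return out
```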
C: So there is no tracer yet, and I can create one. Once I create a tracer, the controller will get this information and find out where the orders pod is running, on which node, and it will deploy the inspector on all nodes. But on the node where the orders pod is running, it will also deploy the worker.
C: So the inspector will connect... looks like the inspector is coming up now. So the inspector gets deployed on all the nodes, and once the inspector comes up it brings up the worker as well. The worker immediately starts sending traffic, the inspector captures that and sends it to the collector, and the collector, which collects all the logs, will tell us: on this host my orders pod is running, that's the source IP for my orders pod, and that's the destination we wanted to debug. And it will give me every iptables rule the packet has passed through. So every iptables rule it went through is shown down there right now, but this can be optimized, and it can also give information about other things from that host.
E: Would love to hear the use cases for it, and we could discuss them. The target that we're focusing on with the debugging stuff was really the case of: I want to deploy my pod without any tools in it, for security reasons, and then, when I need to debug it, I want to attach the debugger as a separate container. But that was intended to be sort of a transient thing, not a production-level thing, so it'd be interesting to talk about those other use cases.
C: Sure.
C: The way I'm doing it in the inspector is: when the inspector comes up, it runs the iptables -S command and parses all the rules, so all the rules are there in memory. Then, as packets go through, the inspector adds the NFLOG policies, so all the packets going through those rules are handed to the inspector, and the inspector matches each packet against the existing rules in memory, which were parsed at the time the inspector came up.
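The parsing step described here could be sketched like this (assuming plain `iptables -S`-style output; the sample text is illustrative, not captured from the demo, and the real parser is surely more complete):

```python
# Minimal sketch: parse `iptables -S`-style output into per-chain rule
# lists, as the inspector is described to do when it comes up.
def parse_iptables_save(text):
    """Group `-A CHAIN ...` rule lines under their chain name."""
    chains = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] in ("-N", "-P"):
            # Chain declaration (user-defined) or policy (built-in).
            chains.setdefault(parts[1], [])
        elif parts[0] == "-A":
            # Append the rule body under its chain.
            chains.setdefault(parts[1], []).append(" ".join(parts[2:]))
    return chains

sample = """-P FORWARD ACCEPT
-N KUBE-FORWARD
-A FORWARD -j KUBE-FORWARD
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT"""
```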
C: Yeah, but the other challenge I saw with that was that there were so many traces, and then you have to go and parse that syslog from there.
E: That's true. Oh, it traces everything all the time, yeah.
C: So what I tried first was adding those log rules in the raw table, and then I was getting traces for all the packets going through. But in this case I have filters, so I can get only the logs for the targets I'm sending.
C: No. So these... sorry, these iptables rules are copied from the root network namespace; they're not put into the network namespace of the inspector, they're just in memory. Then, when the packet goes through the root netns of the instance, it hits the NFLOG entries, and based on those NFLOG entries we get the packet, and we can match that packet against these rules.
G: Oh, I see.
C: Trace, yes, and then I match that packet against all of those. So let's say it came in... I can take an example. It came into the filter table, the FORWARD chain; then I will match all the chains in that filter FORWARD against those policies, and whichever rule matches, I follow it down.
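A toy version of that chain walk might look like this (a sketch only: real iptables matching has many match extensions, and this reduces rule matching to "follow the first jump"):

```python
# Toy chain walker: follow -j jumps through parsed chains until a
# terminal target (ACCEPT/DROP/...) is reached. It only illustrates
# the traversal the speaker describes, not real packet matching.
def walk(chains, chain, path=None):
    """Return the ordered list of chains a packet would traverse."""
    path = path or [chain]
    for rule in chains.get(chain, []):
        tokens = rule.split()
        if "-j" in tokens:
            target = tokens[tokens.index("-j") + 1]
            if target in chains:          # jump into a user-defined chain
                path.append(target)
                return walk(chains, target, path)
            return path                   # terminal target, stop here
    return path

chains = {
    "FORWARD": ["-j KUBE-FORWARD"],
    "KUBE-FORWARD": ["-j ACCEPT"],
}
```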
G: I see. But the thing is: let's say you see the packet, but then somehow, because something happened, it went somewhere you weren't expecting. This is synthetically kind of evaluating the rules, so how do you know it actually went through all the rules that you evaluated?
E: Evaluating? Are you, like, evaluating yourself all of the conditionals in all of the rules? So if there's a rule that says: if the source is, you know, in a particular source range, then jump to some chain, are you evaluating that yourself, or are you just waiting for something to tell you that you hit that chain?
C: No.
E: It's very interesting. Related: we have someone who's been looking at something internally here to also diagnose network failures, and basically they took this doc that we wrote a while back, called Debugging Services, and started to codify it into a program that you run. It would then tell you whether it's your Service that doesn't work, whether you're missing endpoints, whether your selectors don't match, or whether kube-proxy is not running, and it tries to go through and make all those sorts of diagnoses.
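Those checks could be sketched as a small decision procedure (the internal tool mentioned here is not public, so the structure below is an assumption; the inputs are plain dicts standing in for API objects a real tool would fetch from the API server):

```python
# Sketch of "Debugging Services"-style checks as a decision procedure:
# report the first likely cause of a broken Service, in the order the
# speaker lists them (no selector, no matching pods, no endpoints,
# kube-proxy down).
def diagnose(service, endpoints, pods, kube_proxy_running):
    """Return the first likely cause of a broken Service, or 'ok'."""
    selector = service.get("selector") or {}
    if not selector:
        return "service has no selector"
    matching = [
        p for p in pods
        if all(p.get("labels", {}).get(k) == v for k, v in selector.items())
    ]
    if not matching:
        return "selector matches no pods"
    if not endpoints:
        return "no endpoints (are the matching pods Ready?)"
    if not kube_proxy_running:
        return "kube-proxy is not running on this node"
    return "ok"
```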
B: Okay, let me see if I can share my screen here. (Yeah, just share the desktop, that'll be easiest.) So I wanted to just mention some of the stuff I've been working on. The larger project, I think of it as using pieces of Kubernetes to build something that's not Kubernetes. In my context, we're interested in building an infrastructure cloud that can serve VMs and virtual networks, and I just wanted to show a piece of the work on virtual networks here.
B: So I put up a brief outline; it's linked from the agenda, so you can see it. The idea I'm going to focus on right here is just the basics of building some virtual Ethernets, and making network interfaces that attach to them, using Kubernetes API machinery to build a distributed control plane. We use OVS for the data plane, and we're virtualizing using VXLAN. So we're introducing two new kinds of API object: network and network endpoint. Actually, NetworkDefinition is what we call the first one.
B: There are NetworkDefinition and NetworkEndpoint, and the central problem here is that on each node we're using the local control plane of OVS, which is limited: it doesn't know anything it hasn't been told. The issue is, when you make a Linux network interface on a virtual network on one node, then on all the other nodes that also have Linux network interfaces on the same virtual network, you have to do some local work in the OVS on that node.
B
So
there's
this
control
plane
distribution
problem,
so
we
did
it
in
just
obvious
way
using
kubernetes
api
machinery,
we
introduced
this
type
of
object
or
a
kind
of
object
called
network
endpoint
when
there
is
a
linux
network
interface
made
there's
also
a
network
object
made
in
the
kubernetes
API
servers
on
H,
no
there's
a
net
agent.
That
is,
has
an
informer
on
these
objects
and
it
invokes
the
appropriate
local
operations
on
the
local
OBS
to
to
update
it
so
that
the
thing
I'll
get
stitched
together
and
works.
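The net agent's reaction to endpoint events could be sketched as a pure reconcile step (a sketch under assumptions: the `host`, `vni`, and `mac` field names are illustrative, and the real agent programs OVS directly rather than returning action strings):

```python
# Sketch of the per-node logic a net agent might run when its informer
# delivers NetworkEndpoint events. Instead of programming OVS, it
# returns the flow actions it *would* take, so the logic is testable
# without a cluster. Field names are illustrative assumptions.
def reconcile(event_type, endpoint, local_node):
    """Decide the local OVS action for one NetworkEndpoint event."""
    if endpoint["host"] == local_node:
        return []  # local interfaces are handled at creation time
    remote = endpoint["host"]
    vni = endpoint["vni"]
    mac = endpoint["mac"]
    if event_type in ("ADDED", "MODIFIED"):
        # Teach local OVS to reach this remote MAC over the VXLAN tunnel.
        return [f"add-flow: vni={vni} dl_dst={mac} -> tunnel to {remote}"]
    if event_type == "DELETED":
        return [f"del-flow: vni={vni} dl_dst={mac}"]
    return []
```

An informer would feed `(event_type, object)` pairs into a loop calling this function for the local node.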
B: No? Okay, well, I'll just go ahead and show the thing. Right, let's see here; so I've got a shell.
B: Right, let's see here. I just cobbled together a bit of scripting here, so I'm going to go ahead and make a network definition, and you can see here what's in a network definition. It's very simple right now: all the user specifies is an IP subnet and a name, and there it goes. We have a simple controller... oh right, so we're using VXLAN, so there are virtual network identifiers, OVS tunnel IDs, involved. So there's a simple controller that assigns each network one of these things: an int, a network number.
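That assignment step is simple enough to sketch (an assumed shape: the real controller presumably records the allocation through the API server rather than mutating dicts):

```python
# Sketch of a controller pass that gives each NetworkDefinition a
# unique VXLAN network identifier (VNI). Objects are plain dicts here;
# a real controller would read and patch them via the API server.
def assign_vnis(network_definitions, first_vni=1):
    """Give every definition without a VNI the lowest free one."""
    used = {nd["vni"] for nd in network_definitions if nd.get("vni")}
    next_vni = first_vni
    for nd in network_definitions:
        if nd.get("vni"):
            continue  # already assigned, leave it alone
        while next_vni in used:
            next_vni += 1
        nd["vni"] = next_vni
        used.add(next_vni)
    return network_definitions
```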
B: Okay, so here we go: here is the network interface created by that CNI plugin. We also make a bridge, a Linux bridge, for friendliness to QEMU, and that's the thing that actually got the IP address; but this is the thing created by the SDN. Let's see here, and now we can actually look at the NetworkEndpoint object for that. The name here is just the MAC address in hex.
B: So there's the NetworkEndpoint that got created for that. Again, it's pretty simple: it's got just some basic stuff identifying the host and where the network interface is on that host. We can make another one and show that they can ping each other, for example. So I can do, you know, kubectl exec... let's have the new guy ping the old guy.
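The NetworkEndpoint object just shown might look roughly like this (reconstructed from the description; every field name and value below is an illustrative assumption, except the convention that the object is named after the MAC address in hex):

```python
# Hypothetical NetworkEndpoint, reconstructed from the description:
# the object is named after the interface's MAC address in hex, and
# it identifies the host and the interface on that host.
def endpoint_name(mac):
    """Derive the object name from a MAC address: hex digits only."""
    return mac.replace(":", "").lower()

endpoint = {
    "apiVersion": "network.example.com/v1alpha1",
    "kind": "NetworkEndpoint",
    "metadata": {"name": endpoint_name("0A:58:0A:F4:01:02")},
    "spec": {
        "host": "node1",                 # node where the interface lives
        "interface": "veth-net-a-0102",  # illustrative interface name
        "mac": "0A:58:0A:F4:01:02",
        "ip": "10.244.1.2",
    },
}
```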
B: It works. So, you know, Kubernetes knows the IP address; of course, we reported it, and there it is. Let's see... oh right, I can also go on and show a little bit about the stuff we're doing for VMs. For VMs we actually want a dynamic set of network attachments, and we're using a more sophisticated syntax for that. So here's the specification of the dynamic set of network attachments for this VM, and here is a command that's going to, using a different API, add something to that.
B: It's a little bit strange for a CNI plugin: you know, you'd normally think it was doing a local operation, but it's reaching out and doing all this remote stuff. So it feels a little bit inverted in some sense, but set against the basic concept of the SDN it seems very natural. In fact, I have people who know more about OVN saying this is just a much better implementation: OVN is kind of doing a similar thing, but they're starting from a database foundation rather than a Kubernetes API machinery foundation, and they've just got a much tougher row to hoe. You know, I'm told that the Kubernetes API machinery just makes this more direct and efficient.
B: Okay. In fact, I want to argue, you know: I have kind of a long history building systems, and I really like this state-based management approach. I've been advocating it since before I heard about Kubernetes, but the Kubernetes API machinery is a really nice foundation on which to build that style of management. So I'm trying to argue, in general, that this is actually a good thing on which to build whatever distributed system you want to build.
A: So the only other failing on here is the Calico set of tests, which I think is on my plate. There was some progress made; I think it's an infrastructure problem with kubernetes-anywhere, and I think this is with me to investigate at the moment, but it shouldn't block or concern 1.9 at all.
J: Dane is leading that. Dane, are you on? Yeah.
L: I'm here, yeah. We're still in the design phase, trying to get some end-to-end tests running with that infrastructure. But we have a test suite picked out; we just need to create some containers that run DinD, so we can run multi-node in a virtualized environment.
E: Somebody here had a really interesting idea recently that I thought might be interesting for us to try to use to test six-on-four; tell me if you think it would fly or not. What if we could bring up, in the regular surroundings of most of the test suites on GCE... what if we could bring up the GCE VMs, set up six-to-four tunnels on the VMs, and then treat the GCE Kubernetes cluster like a v6 cluster? I did some really rudimentary testing and it seems to work.
E: You'd need to do some custom setup to make it work, but if we could do that, then it would just be a regular old test run, like any other test run, with a custom setup. I wonder if that would give us the ability to test all the v6 work effectively, in perpetuity, on GCP, even though GCP doesn't support v6 natively.
L: That's worth considering. I'm not sure if we're going to have a problem with the webhook: if GitHub is sending us webhooks, I'm not sure if that only works over v4, but that needs to be handled in whatever component captures the webhook. If that's in a v6 cluster, I don't know if we'd have to translate that from a v4 packet to v6.
J: I put it in the chat window: this is the kubeadm DinD project we shared with the team a few months back, and yeah, I believe [inaudible], who I think is on the call, did a lot of work to add IPv6 support there. It's what we've been using for testing on GCE as well as locally, and it's worked out really well.
I,
don't.
E
Mean
to
take
anything
away
from
that.
Certainly
I'm
just
was
thinking
of
if
it
would
make
more
sense
to
be
running
without
any
of
the
DMD
trickery,
but
we'd
be
doing
substituting
for
a
different
set
of
trickery.
So
if
you
guys
think
you
can
get
test
frameworks
up
here
enough,
then
that
is
fine.
Once
we
get
to
dual
stack,
we'll
have
to
maybe
think
about.
Maybe
it
makes
sense
to
only
run
the
test
Suites
once
and
just
run,
everything
in
dual
stack:
yeah,
yeah.
J: Since we were talking about v6 testing: I put a link in the chat window to my PR, five six two four or five, and I think it's failing tests because test-infra doesn't support the 0.6.0 CNI binaries. So I opened up an issue, which is the last thing I provided. Is there any way to get confirmation, or any kind of guidance, on how to inspect what version of the CNI binaries test-infra is running when it's doing its integration tests?
E: Can you tag [inaudible]?
F: They look fine to me. Is anybody on this call doing any talks at KubeCon? Should we meet up at some point, networking-involved people, to just kind of chat or hang out or develop a rapport before the deep dive, which is like the last possible day? Just kind of all things KubeCon.
A: I'll be there. I don't have any talks other than the... my name is against the update session, so we've got some slides for that. I think I still want to review those a little bit more and get a feel for what our presentation format is going to be: is one person going to be presenting all of that, or should we split it up into a bunch of different sections?
B: Also, I'll be going. I tried to collect suggestions on the mailing list, and nothing came in during the expected time, so somebody put forward some suggestions; then a couple of things came in on the mailing list, so we have a couple more ideas. But we decided last meeting, though we didn't record it, that we'll just self-organize at the start of the deep dive. Okay.