From YouTube: Kubernetes SIG Network meeting 2018-10-04
Description
SIG Network meeting from October 4th, 2018
B
Good, thank you. Hi everybody. I want to quickly introduce myself: I am the 1.13 release lead, so this is about the release cycle for 1.13. If you haven't seen it already on the kubernetes-dev channel, it kicked off this Monday, October 1st, and it runs through November. The anticipated release date is set for December 3rd, the first week of December.
B
Code freeze is currently set for the 15th of November, which gives us a little more than six weeks to get all the coding, tests, and documentation in before the freeze. We'll start our formal feature collection next Monday, October 8th. I just wanted to meet with you and ask you to factor in the release timeline while planning your enhancement flow for this release. While we are not formalizing it, we are aiming for this to be more of a stability release.
B
In a sense, our request is for you to try to land whatever might have been deferred in previous releases, or to focus more on stability, bug fixes, or increasing test coverage and testability, as opposed to introducing something risky or large in the short timeframe. So that's just something to think about while you plan your feature load. Any questions around that before we quickly talk about the two features that we see on the features board already?
B
So of the two features that are right there, one is switching the default DNS to CoreDNS. This is something that was attempted in 1.12 but was deferred due to the out-of-memory issues that we were hitting. So I just wanted to get a sense of: is it still planned for 1.13? If so, what kind of pending work are you thinking about?
D

B
Would it be possible to just update the feature issue with your level of confidence, and also what is pending? That would be great, thanks a lot. And the same for feature 563, which is adding IPv6 support: I see that this is planned to graduate to beta in 1.13. So again, the same request there, in the sense of what is pending in terms of code or tests or docs. That would really help us make it.
F
Okay, yeah, so just to clarify: there is a beta status for the v6-only feature, which was put in 1.9 as alpha, and I think beta for that has a chance for the 1.13 timeframe. But for dual stack, this 563, I really don't think there's any way we can get it done in ten weeks' time, although we do want to keep it as a high priority and get, let's say, half of it, or a big chunk of it, done in the 1.13 timeframe.
B
What I could do, as part of tracking: we generally take the milestone off if it's not targeted for the next release, and you can go ahead and put a future milestone on it so that it's still tracked for whenever you think it's going to be done. So for our purposes, I would just drop 1.13 from it for now. Okay.
A
I'm Zoey. I'm seeing your circle, or your square, have a green line around it, but I don't hear anything, so I'm not sure if it's a problem on my end or what. Okay, we're good? Okay, cool. Just wanted to make sure it was a Zoom UI issue and not something else. So I think that's all that needs to be said on that for this call. Please go and comment and make suggestions, and I will address all of that.
H
Okay, so yeah, I wanted to get some input on, and share, some work we've been doing here at Google regarding running a node-local DNS cache, to address some of the latency problems and conntrack issues. I have some results here and a design proposal; I have a few slides. So what is it that we want to do? We want to run a node-local DNS cache on all cluster nodes as a DaemonSet and serve DNS for all the pods on that node. Why do we want to do this?
H
First, latency in DNS communication. We want to skip conntrack for the pod-to-local-DNS-cache connection using NOTRACK rules, which prevents conntrack entries for UDP DNS queries and also hopefully alleviates some of the race conditions that have been reported in open source. Also, we have the opportunity to upgrade the cache-to-kube-dns connection to TCP, maybe in future.
H
Then, visibility into metrics: more information on what particular nodes are doing, since we would now have node-local metrics. We would also be able to re-enable negative caching, depending on what we run as the caching agent; we've not been able to do that with dnsmasq. Hopefully that reduces a bunch of the bogus queries, at least the ones that keep hitting NXDOMAIN. There's also been significant interest in the community for a solution similar to this, so hopefully we can make it take off. So yeah, that's it.
H
That's the motivation for the feature. Some more details on how we're trying to do this: we want to run the caching agent as an add-on, as a DaemonSet in the kube-system namespace, and make sure it's not evicted by setting the priority class. We want it in the host network namespace on the node to listen for DNS requests, create a dummy interface on the node, on the host, and assign a service IP to that, so all the pods can now reach out to this service IP.
It needs to be in the privileged security context, since we want those custom iptables rules to skip conntrack, and also the dummy interface to be created. An init container might be an option if you don't want to periodically re-ensure them. Like I said before, we want to expose this as a service for DNS queries.
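The per-node setup described here can be sketched with standard `ip` and `iptables` commands. This is a rough illustration, not the actual add-on: the interface name "nodelocaldns" and the 10.0.0.10 address are made-up placeholders.

```shell
# Rough sketch (assumptions: the interface name and 10.0.0.10 are placeholders).

# Create a dummy interface on the host and bind the DNS service IP to it,
# so pods resolving against that IP reach the local caching agent.
ip link add nodelocaldns type dummy
ip addr add 10.0.0.10/32 dev nodelocaldns

# Skip connection tracking for DNS traffic to the local cache (raw-table
# NOTRACK), avoiding the UDP conntrack entries and races discussed above.
iptables -t raw -A PREROUTING -d 10.0.0.10/32 -p udp --dport 53 -j NOTRACK
iptables -t raw -A OUTPUT     -d 10.0.0.10/32 -p udp --dport 53 -j NOTRACK
```

These commands require root, which is why the DaemonSet pod needs the privileged security context.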
I
Might be worth talking a little more about the service IP thing. Okay, okay.
H
So first we tried to see if we can hard-code an IP address and then get that populated in the pods' resolv.conf. But a better idea that we could do is pass a service name to the kubelet, which can determine the IP address assigned to the service; that avoids conflicts with a static IP that you would otherwise have to assign. So we can pass a service name to the kubelet as a flag, and it determines that IP address and populates it into the pods' resolv.conf.
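To make the idea concrete, this is roughly what the kubelet would render into each pod's resolv.conf once it has resolved the flag-supplied service name to an IP. The 10.0.0.10 nameserver and the search path are made-up examples; a temp file stands in for the real path.

```shell
# Illustrative only: the nameserver address and search domains below are
# made-up examples of what the kubelet might write to a pod's /etc/resolv.conf.
resolv_conf=$(mktemp)
cat > "$resolv_conf" <<'EOF'
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF
```

Every pod on the node would then send its DNS queries to that single per-node IP instead of directly to the cluster DNS service.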
J
I think for the most part it probably will just work. Something to keep in mind is that some network plugins might have expectations that this... "violates" is not necessarily the right word, but this kind of thing, I think, was fairly underspecified with respect to Kubernetes anyway. I can't recall having any kind of either written or unwritten rule that said that network plugins definitely need to be able to send service traffic from the cluster network to the host.
I
Probably in practice this will work for most users, right, but where it doesn't, it gets interesting. There are other options we'd consider here, if anybody has other ideas. Another one that we sort of toyed with, that we didn't decide on, and I don't really remember why, but maybe we could revisit in light of that comment, was to just assign a link-local address.
J
Yeah, what we do for OpenShift, at least, is for a lot of the systems we run dnsmasq in the host namespace, and that's got some iptables rules to make sure that you can't hit up the node for DNS from outside the node. But because we expect pod connectivity going out of the node, pods are able to access the node's IP address, so that's what gets populated into resolv.conf. So it's really close to this; I think the only real difference is the addition of the service IP in this particular proposal.
J
I'm not saying that this is the wrong way to do it. Just that, yes, having a local caching agent is something that is good and seems to work fairly well, and there are obviously different ways to do it, and I think this one seems okay. So let's, I'd say, go for it, see what problems crop up with respect to network plugins, if any, and see what happens. And one thing to do, maybe, is to explicitly specify what the expectations are. But you were just talking about that.
H
More details here. Yeah, just detail on the connections: the pod talks to the local node's service IP, which reaches the caching agent, and those connections are not tracked. We have NOTRACK rules to skip conntrack for those; an init process adds the applicable rules, and a health check uses the iptables check command to make sure the rules are present, periodically.
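The periodic check described here maps naturally onto iptables' check mode. A minimal sketch, assuming a made-up 10.0.0.10 service IP (this is not the add-on's actual code):

```shell
# `iptables -C` exits non-zero if the rule is absent; re-append it if so.
# The unquoted $rule expansion is intentional: it word-splits into the
# individual iptables arguments.
rule="-d 10.0.0.10/32 -p udp --dport 53 -j NOTRACK"
if ! iptables -t raw -C PREROUTING $rule 2>/dev/null; then
    iptables -t raw -A PREROUTING $rule
fi
```

Run from a periodic health check, this re-installs the rule if something flushes the raw table out from under the agent.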
H
Just a subset of the results here. For the same CPU limit, we found the QPS with unbound to be higher than what we got from CoreDNS, so based on the resource requirements, the current plan is to run unbound. But I did see that after the recent releases of CoreDNS, the reported QPS has increased quite a bit, so I'll revisit that comparison. And yeah, the defaults are based on those results.
H
Right now the CPU request is at 50 millicores and the memory limit at twenty-five megs. The prototype I ran did not include metrics, so resource usage might change a little bit after we've added the metrics and the periodic iptables check. As for metrics: unbound provides an unbound-control program that queries unbound for metrics, and you can choose from a variety of commands to issue to read the metrics from the cache. CoreDNS needs to do the same thing.
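unbound-control prints its counters as key=value lines, one per counter. A small sketch of reading them, with made-up numbers standing in for a live `unbound-control stats_noreset` call:

```shell
# On a real node this would be: stats=$(unbound-control stats_noreset)
# The counter names are real unbound counters; the values are made up.
stats='total.num.queries=1042
total.num.cachehits=900
total.num.cachemiss=142'

# Look up one counter by key from the key=value output.
get_stat() {
    printf '%s\n' "$stats" | awk -F= -v k="$1" '$1 == k { print $2 }'
}

hits=$(get_stat total.num.cachehits)
misses=$(get_stat total.num.cachemiss)
echo "cache hit ratio: $hits/$((hits + misses))"   # → cache hit ratio: 900/1042
```

A per-node exporter could run something like this on a timer and publish the counters, which is the kind of node-local visibility the proposal is after.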
F
Yeah, thanks, Tim, for getting that doc out. It sounds like that discussion is winding down, and so I was just putting in a plug to get more eyes on the KEP, if possible, especially on the endpoints API; we're going to start coding that up soon. There's a shortcut that I'm taking, and I just want to make sure that's okay.
J
I posted a PR a while back that basically just puts a couple of versions of that v1 spec in the kubernetes/community repo. I think we talked about this maybe two and a half months ago. We just kind of wanted somewhere for it to live, and there were a couple of options for where that could be, and we just decided to try the kubernetes community repo first. So I'm just kind of looking for some feedback on that.
J
It's not that we want feedback on the spec itself, since this is the v1 spec that the working group kind of agreed on back in August, but just feedback on: is this the right place to put it? If it is, is it okay to just merge it? Otherwise, if it's not the right place to put the spec, we can find somewhere else to do it.
K
So to introduce myself, my name is Pradesh, and I work in a company called Aloom. I have my colleague Tom here with me as well. Some of our clients are facing some issues with Kubernetes. They were deploying a few things, and the iptables rules from kube-proxy and the kubelet are being prepended, so they are seeing some issues with our products, and with some other products in the field as well.
K
So we wanted to know if there is any config we can toggle to get it to append instead of prepending, especially for the firewall rules. Even if there is no firewall policy controller behind it, Kubernetes still aggressively forces its rules ahead of everything; I think the resync timer is set for 10 minutes or something. So I wanted to know if there is any way to change the config. If not, we have a patch locally, and what is the procedure to submit it?
M
There are a couple of pieces here. There's the make-iptables-util-chains flag on the kubelet that you can disable, but that also has some other implications, so that wasn't sufficient. The kubelet, from what I can tell, ensures the KUBE-FIREWALL chain is at the top, and then kube-proxy ensures that the KUBE-FORWARD chain is at the top. So it's a bit scattered across the two, but they both prepend certain chains at the top of the firewall.
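For reference, the kubelet flag being discussed is `--make-iptables-util-chains`, which defaults to true. Disabling it looks like this, with the caveats just mentioned:

```shell
# Keep the kubelet from creating and re-ensuring its iptables utility chains
# (KUBE-FIREWALL and related). Defaults to true; disabling it has the other
# implications discussed above, so this is illustrative, not a recommendation.
kubelet --make-iptables-util-chains=false
```

Note this only covers the kubelet's side; kube-proxy's KUBE-FORWARD handling is separate, which is the "scattered across the two" problem.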
N
So this is Minhan speaking. I believe kube-proxy and the kubelet do prepend some rules, but I'm not sure which ones get prepended into the filter table; most of them are in the NAT table, I believe. And for the filter table ones, the firewalls that you're referring to, are those like the ones in the service spec, where you can specify the load balancer source ranges? Is that the firewall you're referring to, or some other firewalls?
J
One of the ways that we fixed it for some things in OpenShift was we created an administrator-specific firewall chain that everything jumped to first. So then you could add rules to that chain before traffic actually got sent along to the OpenShift stuff, but our use was a little less complicated than Kubernetes'. I guess I'd rather see a patch, or at least a gist or something, that calls out the areas that are problematic, and maybe the right answer is to do something like what Dan just suggested, which is to provide explicit hooks so that, if you're running on a kubernetes node, you can rely on these hooks existing and being in the right places, rather than sort of arm-wrestling for who gets in first.
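The hook-chain pattern described here can be sketched as follows. The chain name ADMIN-HOOKS and the sample rule are illustrative assumptions, not actual Kubernetes or OpenShift chains:

```shell
# Create an admin chain and make FORWARD jump to it first. `-I FORWARD 1`
# inserts (prepends) at position 1; `-A` would append at the end instead,
# which is the prepend-vs-append distinction being discussed.
iptables -t filter -N ADMIN-HOOKS 2>/dev/null || true
iptables -t filter -I FORWARD 1 -j ADMIN-HOOKS

# Administrators add their rules to ADMIN-HOOKS instead of racing the
# platform for position in FORWARD itself. Example (illustrative range):
iptables -t filter -A ADMIN-HOOKS -s 192.0.2.0/24 -j DROP
```

The point of the hook is that the jump's position is owned by one party, so admin rules and platform rules stop fighting over who sits at the top.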
O
Yeah, so we're trying to understand: there are two ways we could do this. And I'm not a developer, I do product management, so I'm just working with the customer who is having the trouble. We could either, if there's no strong reason for prepending, just change it to append by default. That would be one option which would solve the problem. Or we could add a flag.

I
I'm wary of blanket turning all prepends into appends, because I would have to sit and stare at this for a long time to figure out what the implications of that would be, and what could possibly go wrong with it. I'm not a hundred percent sure, but it's going to require a certain amount of inspection to figure out.
I
This is Tim. I'd say that the right thing to do would be to write a doc, or maybe even a KEP. A KEP feels a little heavyweight for this, but maybe just a doc that says: this is what we're proposing in terms of extending the iptables rules with hooks. The thing to think about is that it wants to apply to both the iptables and the IPVS proxies, because they both use iptables for certain things, and so we want to make sure that they're equally capable here.