From YouTube: Kubernetes SIG Network meeting 2018-09-20
Description: Kubernetes SIG Network meeting from September 20th, 2018
B
Cool, thanks. Yes, so thank you. First of all, thank you to all the plugins and networking maintainers who joined the call. We've really, really tried hard for a while to add Windows CNI, our own Windows CNI, to the containernetworking/plugins repo. Really excited about getting this merged; thanks for reviewing it so quickly.
B
So, following up from that, perhaps it would be good to understand the next steps, as it was a pretty big PR. We've tried to address all the major concerns in a timely manner, or as best as we could, as quickly as possible, but if there are any outstanding gaps that are particularly urgent or that you would like to highlight, it would be great if we could go over them. I know we have two PRs that are scheduled to go out as well. I think Madonna has joined the call as well.
D
Okay, I mean, one of the kind of cross points here between Kubernetes and the CNI stuff would be to figure out a slightly better way of pushing that information down from Kubernetes to the plugin. So maybe that's a general point for the group: how we can improve and make the DNS bits in the kubelet be communicated better down to CNI.
F
And the reason I ask this is because there's a large emphasis being placed from SIG Architecture on conformance tests, and we're trying to figure out, for every feature in the entire system, whether it is a conformance thing or not. As soon as I picked up on the scent of that not being the same, we have to figure out if this is or is not eligible for conformance.
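[Editor's note: for context on the mechanics behind this exchange, a test in the Kubernetes e2e suite opts into conformance by carrying the [Conformance] tag in its spec name, which is what framework.ConformanceIt adds. A minimal sketch of how a sig-network test would be marked, assuming the e2e framework as it existed around this time:]

```go
package network

import (
	. "github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = Describe("[sig-network] DNS", func() {
	f := framework.NewDefaultFramework("dns")

	// ConformanceIt appends " [Conformance]" to the spec name; the
	// conformance suite and the docs generator select tests by that tag.
	framework.ConformanceIt("should provide DNS for the cluster", func() {
		// A real test would create a probe pod via f.ClientSet and assert
		// that kubernetes.default resolves; elided here.
		_ = f
	})
})
```

[So "correctly marked" later in this meeting means exactly that the presence or absence of this tag matches the feature's conformance eligibility.]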
D
I mean, it could be the case that, not to go too far into implementation details here, but you know, we could create some capabilities that the Windows plugin could advertise. The CNI driver, like in dockershim, could stuff some of the DNS settings in as runtime capabilities or something like that. That would be 100% within the current framework.
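[Editor's note: a rough sketch of what that could look like from the runtime side, using libcni's existing capability-args mechanism, the same one dockershim uses for port mappings. The "dns" capability key and its payload shape are assumptions for illustration (CNI had no settled DNS convention at this point), and the paths are placeholders:]

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Load the network config list; a plugin opts into a capability by
	// declaring it in this config, and the runtime injects matching data.
	conf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-mynet.conflist")
	if err != nil {
		log.Fatal(err)
	}

	cninet := &libcni.CNIConfig{Path: []string{"/opt/cni/bin"}}

	rt := &libcni.RuntimeConf{
		ContainerID: "example-pod",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
		// CapabilityArgs is how dockershim passes portMappings; a
		// hypothetical "dns" capability could carry the kubelet's
		// per-pod DNS settings down to the plugin the same way.
		CapabilityArgs: map[string]interface{}{
			"dns": map[string]interface{}{
				"servers":  []string{"10.0.0.10"},
				"searches": []string{"svc.cluster.local"},
			},
		},
	}

	result, err := cninet.AddNetworkList(context.Background(), conf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```

[On the plugin side, the payload shows up under runtimeConfig in the config the plugin reads on stdin, so only plugins that advertise the capability ever see it.]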
B
Okay, yeah, that's helpful. And, you know, what is the process for adding maintainers for the Windows CNI plugins?
D
Yes, so there is a process, sort of, for that in the community. I believe it's documented somewhat in the CNI docs. It takes a little bit of, you know, being on top of things: the issues, doing PR reviews, and things like that. So it's definitely possible. You know, just keep doing good stuff, and that, hopefully, will turn into maintainership at an official level. Does that make sense?
E
Hello, I just wanted to warn that we are right at the release of 1.12 and, unfortunately, we had a scalability memory issue with CoreDNS, and we could not make it in time to have a fix. So, since everyone relies on scalability, they decided to keep kube-dns as the default when you deploy with kubeadm, so CoreDNS as the default is delayed to the next release.
F
A really unfortunate situation. I appreciate everybody who is digging into it and trying to figure out what was going on and coming up with the best, safest answer. I am pushing hard here on our work around an autoscaler for things like DNS, which is designed to be sort of low-dependency for system components.
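[Editor's note: the node-proportional behavior discussed throughout this meeting is what the cluster-proportional-autoscaler provides for kube-dns. Its linear mode sizes the deployment from whichever of cluster cores or node count demands more replicas; a minimal sketch of that calculation follows, with illustrative parameter values rather than the project's defaults:]

```go
package main

import (
	"fmt"
	"math"
)

// linearReplicas approximates the "linear" control mode of the
// cluster-proportional-autoscaler: take the larger of the cores-driven
// and nodes-driven replica counts, clamped to [min, max].
func linearReplicas(cores, nodes, coresPerReplica, nodesPerReplica, min, max int) int {
	byCores := int(math.Ceil(float64(cores) / float64(coresPerReplica)))
	byNodes := int(math.Ceil(float64(nodes) / float64(nodesPerReplica)))
	replicas := byCores
	if byNodes > replicas {
		replicas = byNodes
	}
	if replicas < min {
		replicas = min
	}
	if max > 0 && replicas > max {
		replicas = max
	}
	return replicas
}

func main() {
	// A 100-node cluster with 8 cores per node, illustrative parameters:
	fmt.Println(linearReplicas(800, 100, 256, 16, 2, 0)) // 7, driven by node count
}
```

[Note that nothing in this formula looks at actual query load, which is exactly the gap the rest of the discussion circles around.]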
E
We are stuck with the issue, and there is, first, understanding where the memory is coming from, and second, understanding what the constraints are, because at the end we were finally not meeting some of the constraints on memory. And I want to make sure that we merge all these changes early, so we can really get the returns from the end-to-end tests. Probably so.
F
I mean, my main point being: hopefully you guys find the real problem, so you don't get stuck behind this for 1.13, but I'm re-igniting the fire to make this a non-problem in the future. Because, honestly, had we had a proper scaler, it would have been "just tweak the numbers in the scaler and move on." Mm-hm.
G
Another thing I think we should look at, and I know Chris kind of brought this up, is that with kube-dns now, it's scaled just proportionally to the number of nodes. That's not really necessary with CoreDNS in the same way, in that we can scale by, say, CPU utilization or something like that, which correlates directly with the load. But the question then is: we don't know. We have no tests, no visibility into the actual mix of queries and the actual load in a real cluster, right? Yes.
E
Well, we have one, we have one, but it's not a real one. The question is, for the other scalability tests, we talked with the scalability guys, and they did not want to exercise real DNS scalability because of the cost, and they keep the scalability suite to what they call load and density tests. But those are not exercising DNS. In fact, right now we have a memory issue, but there are no queries, right?
G
We have to look closely at what those parameters are, though, because, and this is what I'm wondering about, maybe Tim: do we have any visibility in existing clusters of what the query mix is? Because we know that, for example, queries to external services will issue other things, and I mean, it's gonna vary wildly based on the customers, I guess. Maybe we just have to characterize the edges. Yeah.
G
I think we could characterize that. Like, okay, given the QPS, we can say what the autoscaling proportion should be if we're doing 100% external queries, for sure, or if we're not having to do a fully qualified lookup.
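[Editor's note: a back-of-the-envelope version of "characterize the edges": measure per-replica QPS capacity at the two extremes, all cluster-internal names versus all external names needing upstream recursion, and interpolate by the observed mix. Every number in this sketch is an invented placeholder, not a benchmark:]

```go
package main

import (
	"fmt"
	"math"
)

// replicasForLoad interpolates per-replica capacity between the two edge
// cases by the fraction of external queries, then sizes the deployment.
func replicasForLoad(totalQPS, externalFraction float64) int {
	const (
		internalQPSPerReplica = 10000 // hypothetical: cached/local answers are cheap
		externalQPSPerReplica = 1500  // hypothetical: upstream recursion is expensive
	)
	perReplica := externalFraction*externalQPSPerReplica +
		(1-externalFraction)*internalQPSPerReplica
	return int(math.Ceil(totalQPS / perReplica))
}

func main() {
	fmt.Println(replicasForLoad(20000, 0.0)) // all-internal edge: 2 replicas
	fmt.Println(replicasForLoad(20000, 1.0)) // all-external edge: 14 replicas
}
```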
F
To the assertion that you don't need to scale based on the number of nodes: the idea behind scaling by node count was that nodes were just a reasonable proxy for how many services you're gonna have, and that fundamentally your memory is gonna be bound by how many services and endpoints you're watching, since they just sit in your watch cache, right? So we used that as a poor man's proxy for the number of services and endpoints. Maybe, right?
F
Well, well, the resources, of course, come in multiple dimensions, right: CPU and memory. Memory is always the one that's more persnickety, because it's not elastic; if you run out of it, you run out, whereas CPU just degrades. So we sort of focused everything around scaling around how you manage memory, but CPU is actually there too, and that's only a function of how many queries you're going to take, for which you could use the number of cores in your cluster as a proxy.
A
So that was the end of what we had on the agenda, but I was curious to know if people are interested in having kind of a step-back discussion about what we want to try to tackle in 1.13, and maybe looking at some of the things we said we were going to tackle in the past couple of releases and seeing where those are right now. That's a great idea. Nice, I will share my screen, then; hopefully the right one.
F
The fun thing with dual stack is that it has spawned quite an involved conversation around how to properly evolve API fields from singular to plural, for anybody who hasn't followed along. That conversation has dragged on for months as we're working on and refining the guidance here. I feel like we're pretty close. I owe a doc to Daniel and Clayton and Bryan and the other API reviewers, which I will hopefully be working on tomorrow, if I get to it. So I think that's sort of the major blockage on that KEP, right?
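[Editor's note: for anyone catching up on the singular-to-plural debate, the shape under discussion looks roughly like the sketch below. It mirrors the pattern that eventually landed in the core v1 API (podIP alongside podIPs); treat it as an illustration of the guidance, not the final API:]

```go
package v1

// PodIP wraps a single IP so the plural field can grow extra properties
// later without another breaking change.
type PodIP struct {
	IP string `json:"ip,omitempty"`
}

type PodStatus struct {
	// PodIP is the original singular field, kept for compatibility.
	PodIP string `json:"podIP,omitempty"`

	// PodIPs is the new plural field. The compatibility rule being
	// debated: when both are set, podIPs[0] must equal podIP, and old
	// clients that only write the singular keep working because the
	// server back-fills the plural from it.
	PodIPs []PodIP `json:"podIPs,omitempty"`
}
```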
I
Yes, yes, and there were a couple more things that we introduced recently. One is keeping the server-side piece single-family versus dual; there was a proposal, and we think that makes sense. You know, there were some trade-offs, but we think that makes sense. And also the endpoints: we have to make those dual stack, so that adds a little complexity and adds another API change. So we're gonna need the big architecture review for that change as well.
I
We started from the coding side, and there were a couple of people in other organizations that volunteered; I haven't seen that they've started. Somebody volunteered for the kube-proxy change, and I forget what the other one was. But yeah, we started, and I was waiting for the KEP. Given the volume, I don't think we have a good chance of coding this up by 1.13, and 1.13 is a tight sprint as well. That's true, because 1.13 is for sure short, yeah.
F
Bowei's not here today, but I saw a doc come through my mailbox with exactly that, and I haven't read the doc yet; it was his pre-draft before he sent it out, so I probably shouldn't say that. He will hopefully have something out real soon. So I think this is still a reasonable goal. I mean, it's unfortunate that it got punted by two releases, but there it is; I've got no real good excuse. I think we keep this one. Anybody disagree?
B
So I think, actually... so I'm not exactly sure what the item is there; I haven't added it there. But kube-proxy is scheduled to go not just beta but, I think, GA, or is scheduled to be there at least with 1.14. Okay.
F
Would you do me a favor and let me know what parts of conformance you're running into with respect to Windows and networking? As part of the overall conformance stuff, I'm trying to figure out which bits of networking are correctly marked as conformance, which parts are incorrectly marked as conformance, and which parts are incorrectly not marked as conformance. Okay.
F
Ownership means you do the work, you know? Like, there are OWNERS files within the code base that you can put yourself into, in the kube-proxy Windows submodules. You can put yourself in the reviewers of the appropriate pieces where you think there's a significant amount of Windows-specific stuff, or you can just hang out, be present, watch the bugs and the mailing lists, and make sure that anything that says Windows gets either acknowledged or triaged or something. Okay.
B
Great, all right, I'll look into that a little bit more. Awesome. Is there anyone I could reach out to, perhaps, to get help in case I miss something? Or, like, what are all the different resources that should be on my radar, that I should be looking out for? Right now it's mainly just GitHub issues that I'm tracking. Yeah, for sure.
F
If you want to drop me a note, you can; this is Tim, and I'm happy to sort of guide you through figuring out where all the bodies are buried. Okay, thank you, I appreciate it. Of course.
H
So, this is... I spoke to him, so I think he's handling it as part of the containerd effort. So when containerd goes to alpha soon, from within SIG Node, they will be shipping another, non-kubenet plugin with it, which is based on ptp. So once that graduates to beta, I can't speak for him, but I think that can be backported to whatever the default of Kubernetes is.
F
I wanted to throw in at least one more, which is the DNS stuff. We've got... she is also not here today; where is everybody? She's working on a DNS proposal for how to make a lot of the problems that we're having with DNS, sort of the low-grade pain with DNS, better. So she's been talking to John and others about how to address some of those concerns.
F
We at Google are spending a lot of time trying to address things like DNS, which are sort of long-standing pain points: things that users struggle with, and have for a long time, but that aren't exploding. I would like to encourage anybody who's got time to look at those sorts of issues. I don't think we have a good enough list of what those issues are.
F
Issues that are in kubernetes/dns, or issues that are just declared against kubernetes/kubernetes, that people are experiencing around networking, whether that's ingress stuff, service stuff, or endpoints scalability. Like, we've got long-standing bugs around reverse records that we need to figure out properly, and we've got issues around endpoints: when you set hostname in a deployment, it does the wrong thing. So we've got a lot of these sort of long-standing bugs that aren't killing anybody, but they hurt everybody a little bit. Okay.
F
Let's,
let's
true
and
that's
fine,
let's
put
it
on
the
agenda
for
like
mid-october
early
early
to
mid,
October
I
said
some
a
month
month
from
now
and
check
in
on
these.
What
do
you
think
yeah.