From YouTube: Kubernetes SIG Network Meeting 2018-08-09
Description: Kubernetes SIG Network Meeting for August 09, 2018
A: All right, we are recording. This is the SIG Network meeting for Thursday, August 9, 2018. As usual, I will post the agenda into the Zoom chat if anybody doesn't have it. It looks like we have a pretty light agenda today, but feel free to add more agenda items to the bottom of the agenda document if you have anything else. I see we have a new CoreOS agenda item open there, but anyway, let's get started.
A: The first item is the test dashboard, and actually we probably don't have to do much here, because I took a look earlier today and the testgrid looks pretty good. The only red tests that we have that are significantly failing: the first one is all the kubeadm ones, and that is actually already being handled and may already be fixed today, so we don't care about that one anymore. The other one is the e2e CI GCE netd tests. Those are, I think, fairly recently added tests, and they are failing. If anybody wants to take up debugging that test, that's fine; otherwise we can let it go for another week or two and see. I don't think I'm going to be able to get to it beyond the debugging that I've already done. But overall I think we're looking pretty good: a lot of the tests that we had either got removed or got fixed, so I don't think we have much to worry about here from the testgrid at this point.
B: Okay, I just moved over to the doc; I was just taking a quick look to see what was open that was network-related. So basically, where we're at is: we've been working on Windows support for a little over two years now. It initially started and reached alpha towards the end of 2016, and then we reached beta last year.

B: So, from a networking standpoint, the current progress right now is that all the main features that are relevant to Windows are pretty much done. We're not going to implement things like SELinux, of course, because we're not running Linux processes. So we're just sort of fixing a tail of bugs, and we're in the process of getting our upstream tests merged; those will be coming in within the next few weeks here, after a few discussions.

B: So basically, what I'm looking for is whether or not there are any questions or areas of concern from SIG Network, and whether or not you want to designate a reviewer to work with us on this process, because this is the first time anyone has ever added support for another OS's nodes to Kubernetes.
A: Yeah, one of the questions I had was: what are the targets for Windows support with respect to Windows versions? Because I know from some of the reviews I've done, and some of the other stuff that I've looked into on the Windows side, the containerization and behavior, especially on the networking side, heavily depends on which Windows versions you have and whether Hyper-V and other features are being used or not. So I'm kind of curious: are you cutting off support for some of the older releases?
B: So basically, for the supported version, we want to focus on Windows Server version 1803 and newer. I think most customers are going to want to actually use Windows Server 2019, because that's the one that Microsoft is going to support for a full five-year lifecycle rather than 18 months, and that has support for the network compartments.
B: You'll be able to get your typical pod and service networking topology and kube-proxy running on Windows. You'll be able to do that using plugins with either VXLAN encapsulation, or without encapsulation, just doing the programmed routing topology, if you want to do that using something like Calico or Flannel. And of course we're going to have the OVN-specific CNI plugin.
B: Yeah, and that of course will work with Open vSwitch underneath. But basically the focus is around 2019 and newer. The older OSes are going to have some missing capabilities, in particular around storage; there are also problems where, for example, secrets don't work on Windows Server 2016 and there's no way for us to fix that, but it does work in the newer releases.
B: That's a good question. Let me discuss that a little more with SIG Windows, to see if there is anybody that would like to use the older versions at a reduced capability. My preference would be to cut off and switch to 2019, so that it could align with Kubernetes 1.12 or 1.13, whichever one meets the criteria that we land on.
A: The only concern I had here was maintainability. In some of the code in the patches that I've seen, especially on the networking side, there have to be some, I guess I won't call them hacks, but there are some significant behavioral differences from the Kubernetes network code. So maybe trying to harmonize those would be useful for the maintenance burden on people who work in the network code. But then again, those differences may already exist and already work, so yeah.
B: And I also think that once we move over to CNI 0.4, or whatever the next version is, that'll give us a chance to clean some of that up as well. Specifically, in the case of things like the DNS configuration: the way that's done on Linux today is that a resolv.conf is created and placed in the pod sandbox.
B: Well, Windows doesn't have that file, so we're basically using the capability arguments to pass the needed DNS information in. That's a PR that's going to be coming through in a few days here. But that's an example where, by putting that back into the runtime config in a new CNI spec, we're going to be able to get rid of some of those workarounds.
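As a rough illustration of the mechanism B describes (passing DNS settings through CNI capability arguments instead of writing a resolv.conf file): this is a sketch following the general CNI conventions for capability arguments, not code from the PR being discussed; the plugin name, server address, and search domains below are hypothetical examples.

```python
import json

# Network configuration as an administrator would write it. The plugin
# declares that it can consume the "dns" capability.
network_conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "type": "example-plugin",          # hypothetical plugin name
    "capabilities": {"dns": True},
}

def inject_runtime_config(conf, dns_settings):
    """Return the config the runtime feeds to the plugin on stdin:
    capability data is merged under the reserved runtimeConfig key,
    but only if the plugin declared the matching capability."""
    merged = dict(conf)
    if conf.get("capabilities", {}).get("dns"):
        merged["runtimeConfig"] = {"dns": dns_settings}
    return merged

# Example DNS values a kubelet might derive from the pod's dnsPolicy.
dns = {
    "servers": ["10.96.0.10"],
    "searches": ["default.svc.cluster.local", "svc.cluster.local"],
}

plugin_stdin = inject_runtime_config(network_conf, dns)
print(json.dumps(plugin_stdin, indent=2))
```

On Linux the same values would typically end up in a resolv.conf inside the sandbox; routing them through runtimeConfig is what lets a Windows plugin consume them without that file existing.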
B: One of the difficulties we've been having with containerd is that there are still some areas that are sort of being redesigned, and so we've had a few setbacks in terms of getting some of that done. At this point it is working for running containers, but the work on that portion has not really started yet. Looking over the CRI code recently, there are still a lot of things doing Linux-specific path handling, things like that, that need to be worked through, so I don't think we're going to see a CRI containerd...
E: Sorry, this is Tim. So there's a whole lot of really interesting stuff around Windows and compatibility with the Kubernetes API that we don't need to go into here, but that we (the royal "we") have to go into at some point, knowing full well that there's a bunch of Linuxisms that have made their way into the pod API. That's sort of unfortunate, but I guess it was a known trade-off at the time.
B: You can't necessarily reconfigure it using the same network APIs from inside the pod, but it would be feasible to have something like a networking controller connect using what's called WinRM. You can think of it as, well, I guess SSH isn't really a good analog, but it's an RPC-based management framework; of course it also works with RPC over HTTP. Basically, the way I can see that happening is doing a connection back to the host, with certificates or stored credentials, to be able to go and make those changes.
B: ...one that is able to have enough Kubernetes access through RBAC to get those additional things that are needed, and then talk back to the Windows host. The other thing we can do (I'd have to check; I don't think it's hooked up yet) is named pipes, which are sort of like a Unix domain socket. So if, instead, you wanted to have an agent running on the host...
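The named-pipe point can be made concrete with a small sketch. This is an illustrative helper, not code from kubelet or any runtime: it only shows how an endpoint string might select between a Unix domain socket on Linux and a named pipe (which lives under the \\.\pipe\ namespace) on Windows. The endpoint values are made-up examples.

```python
def parse_endpoint(endpoint):
    """Split a URL-style endpoint into (protocol, address)."""
    if endpoint.startswith("unix://"):
        # Linux: the address is a filesystem socket path.
        return "unix", endpoint[len("unix://"):]
    if endpoint.startswith("npipe://"):
        # Windows: named pipes live under the \\.\pipe\ namespace,
        # so convert the URL-style slashes to backslashes.
        return "npipe", endpoint[len("npipe://"):].replace("/", "\\")
    raise ValueError(f"unsupported endpoint: {endpoint}")

print(parse_endpoint("unix:///var/run/example.sock"))
print(parse_endpoint("npipe:////./pipe/example-agent"))
```

An agent written this way can keep one code path for "connect to the local daemon" and let only the transport differ per OS, which is the role B describes named pipes playing.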
B: I think the most likely one, and I don't remember whether it was Calico or Flannel, but getting Felix running would probably be a good use case. That's where we'd probably have the Windows-specific code still wrapped up in a container running in a DaemonSet, and then doing a loopback connection back to the node to handle the network plane config.
B: And so I'd like someone to go through it and at least review this; I don't know whether you or Tim would be the best person for that. But then I'd like to respond to those comments, make sure it's clear in the doc, and then send that to the mailing list and get an acknowledgement saying: okay, the doc is closed, we understand what's there, and once all these things are met, then we're okay with it.
A: Not specifically, but thinking in terms of what Tim mentioned earlier: there are a number of guarantees that Kubernetes kind of makes right now. Some of those may end up being Linux-specific just because of history and whatnot, but we should make sure that, for as many of those generally accepted guarantees that Kubernetes provides as possible, the Windows code also provides those guarantees, so that people aren't confused.
E: There are two things there. One is conformance, which was actually the topic, or one of the topics, at SIG Architecture today, so I suggest maybe starting to attend that to make sure you keep track of the evolution there. The other part is sort of the evolution of the API. It's clear that there are a bunch of facets of the API, the pod specifically, that just won't make sense on Windows. I haven't seen an analysis of what those are and what it means to start ignoring those.

E: I don't know if that analysis has been written; I certainly haven't been tracking the SIG Windows work very closely myself. But at some point I would like, or we will need, to do a deeper review of the implications of that: whether there are any fields that are marked as required that are being ignored, or whether we can ignore only optional fields, and what the evolution of the API would look like to stop making that so horrible.
A: ...the CRD spec, and I was wondering if the kubernetes-community SIG Network pages is one place that could live. I also thought about the CNI repos, or a new repo in the CNI organization, because the spec's namespace itself sits sort of underneath CNI; but at the same time the spec tries very hard to not be specific to CNI, and to be implementable by plugins that do not necessarily implement all of the CNI bits.
E: ...that idea, versus whether CNI should be part of the runtime or not part of the runtime. So I'm really pulled in about eight different directions on this idea, which would at least initially lead me to say it probably shouldn't get any official nod until we've got some clear answers to those things. Putting it with CNI seems reasonable as a starting place.
C: Yes, just for precision: in fact, two weeks ago I saw that CoreDNS right now is GA, but it's not the default DNS server of Kubernetes; that is still kube-dns. And we said in a KEP that we'd wait to have some feedback before moving it to default, also because it needs to deprecate kube-dns, to be honest. But we have two processes happening at the same time [inaudible].
C: Okay, so I was not sure which one to follow. I sent an email last week (I think, Tim, you were a target of that email as well) but I had no reply; I just wanted to let you know what happened. Right now I am thinking to still follow the KEP, which expects us to have some feedback before graduating it as default. So we built a survey to get some feedback; right now we have about 20 people replying to that survey, and I'm still trying to push to get some more.
C: We need a certain number (to be defined) of clusters of a significant size (also to be defined) adopting and running CoreDNS. So right now I'm collecting the information, but I don't know what the real criteria are. I don't know: do you think we should push for 1.12? That means we need, in the two coming weeks, to have kube-up defining CoreDNS as the default, and we would also switch the end-to-end tests to CoreDNS. That's what I...
E: Honestly, if it's in my mailbox, it's buried with 300 or 400 other messages right now. You can hit me on Slack, or you can ping me on Hangouts or something; or, honestly, if you respond to the email, it'll probably pop to the top of my mailbox, since I'm trying to handle them in more of a priority order. That way maybe we can bring more publicity to it: retweet it, or scream from a louder mountain, or something.
A: ...with CNI. But I think one of the long-standing opens that we've had is to allow kubelet to consume DNS information that might be passed back from CNI. How that interacts with CoreDNS and that stuff, I'm not quite sure yet, but currently it is not part of anything CNI-related.
E: I'm going to try to take a look at that ASAP, so that I can weigh in on anything that I've been ignoring for the last six months. And if we want to put it in SIG Network, I don't really have an objection to that, but the last time I looked at it, it was pretty coupled to CNI, so it might make more sense in the CNI space. That's what I want to go back and reevaluate, yeah.
A: And that's fine. We did change a lot of that to be less CNI-specific. Basically, the base spec itself is not CNI-specific, and there are sections that are CNI-specific that do not need to be implemented by a plugin that does not call other CNI plugins. So for the v1 spec, or whatever version it currently is, as a CNI plugin...

A: Yeah, the config field is still CNI-specific configuration, but it is not required. The config field is not required, and the language around plugins and all that kind of stuff was taken out a month or two ago, so it's possible to implement the spec and the network objects without having CNI-specific stuff in them.
A: All right, thanks, everybody. The next meeting will be the 23rd of August, and note that I will not be able to make that meeting, so Tim, either you or Casey will have to run that one. ("Roger.") All right, thanks, everybody. Thank you. Thank you. Bye.