From YouTube: Kubernetes SIG Network Meeting 2019-01-10
B
Sure — so, happy new year everyone. We're trying to bring in the overlay support for Windows, and we started working on it. We have been working with the partners, and there was one specific discussion. One of the requirements of Kubernetes is that all nodes can communicate with all containers without NAT. That makes sense for L3-based networks, but we're starting to question that requirement. We started this discussion with Tim, and Tim suggested bringing it to sig-network so that we can all discuss it here — specifically, the requirement that any random node can access
B
the pods. That is the biggest concern we have, and it kind of breaks some of our overlay networking requirements, such as bring-your-own-IP and the expectation that no arbitrary host can reach into the VXLAN network. Tim was on the call — (Tim: hello, I'm here, go ahead) — and Tim was saying that localhost-to-pod is something that is required, but any-random-host-to-pod is something that can be relaxed; that has to be discussed here, though.
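The distinction being drawn here — localhost-to-pod as required, any-node-to-pod as possibly relaxable — can be sketched as a toy model (all names are illustrative; this is a sketch of the two rules, not a real conformance check):

```python
# Toy model of the connectivity requirement under discussion.
# "strict" is the classic Kubernetes model: every node reaches every pod
# without NAT. "relaxed" is the proposal: a node must reach only the pods
# scheduled on itself (localhost-to-pod).

def strict_ok(nodes, pods_on, reachable):
    # Every node must reach every pod in the cluster.
    return all(reachable(n, p) for n in nodes
               for ps in pods_on.values() for p in ps)

def relaxed_ok(nodes, pods_on, reachable):
    # Each node must reach only its own local pods.
    return all(reachable(n, p) for n in nodes for p in pods_on[n])

# Simulate a Windows overlay where only the local node can reach a pod's
# VXLAN address.
nodes = ["node-a", "node-b"]
pods_on = {"node-a": ["pod-1"], "node-b": ["pod-2"]}
local = {("node-a", "pod-1"), ("node-b", "pod-2")}
overlay_reach = lambda n, p: (n, p) in local

print(strict_ok(nodes, pods_on, overlay_reach))   # False: node-a can't reach pod-2
print(relaxed_ok(nodes, pods_on, overlay_reach))  # True: localhost-to-pod holds
```

Under the relaxed rule the simulated overlay passes, while the classic rule fails — which is exactly the gap being debated.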
B
The biggest question, especially for the overlay network, is this: in an overlay network the pods are on a different, overlay, network, and the nodes are not in that same overlay space — they're on the physical network, right? The ability for any node to access the VXLAN network, and the pods on it, without NAT — that is the question here.
D
Because you guys are working in an environment with Windows, where you know your users — [inaudible] — I'm mostly a network guy, but yeah. This means I deal with a lot of people who are dealing with operational what-the-hell-has-gone-wrong problems in networking, right? And I would expect it to be kind of surprising for users who are accustomed to being able to ping whatever pod from a node that they now have to go pick the right node to be on for that to be true.
B
I mean — yes, I agree with that statement, if they are used to diagnosing failures by going from any node to any pod. The other thing: with an overlay network, Windows customers are at least known not to count on the host's ability to access the pods — that's a well-known thing. But I understand the requirement for things like log collection, where you want the local node to access the pod so that you can collect the logs from the pods and send them out through the node.
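For the log-collection case just mentioned, one common pattern on Linux nodes sidesteps pod-network access entirely: the kubelet writes container logs under /var/log/pods on the node, so a node-local collector can mount that path from the host. A minimal sketch (the collector image name is hypothetical):

```python
# Sketch of a DaemonSet that collects pod logs from the node's filesystem
# rather than over the pod network. The kubelet writes container logs under
# /var/log/pods, so localhost-to-pod *network* access is not strictly needed
# for this pattern. The image name is a placeholder.
log_collector = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "log-collector"},
    "spec": {
        "selector": {"matchLabels": {"app": "log-collector"}},
        "template": {
            "metadata": {"labels": {"app": "log-collector"}},
            "spec": {
                "containers": [{
                    "name": "collector",
                    "image": "example.com/log-collector:latest",  # hypothetical
                    "volumeMounts": [{"name": "pod-logs",
                                      "mountPath": "/var/log/pods",
                                      "readOnly": True}],
                }],
                # hostPath mounts the node's log directory into the collector.
                "volumes": [{"name": "pod-logs",
                             "hostPath": {"path": "/var/log/pods"}}],
            },
        },
    },
}
```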
C
I would agree — and in fact I did agree by email — except the case of host network popped into my mind later, so I'm glad whoever brought it up today brought it up right away, because that's the interesting case. There are reasons to have pods running in host network. Some of the reasons are specious and some of them are legit, and I —
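For reference, "pods running in host network" means a spec like the following (the name and image are placeholders); such a pod shares the node's network namespace, so its pod IP is the node's IP:

```python
# Minimal sketch of a host-network pod spec. With hostNetwork: true the pod
# runs in the node's network namespace, which is why host-network support
# drags node-to-pod reachability into the conformance question.
host_net_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "node-agent"},  # hypothetical name
    "spec": {
        "hostNetwork": True,
        "containers": [{"name": "agent",
                        "image": "example.com/agent:latest"}],  # hypothetical
    },
}
```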
B
The VXLAN interface was put in a different namespace in Windows for a specific reason, and those are the two biggest requirements of our VXLAN design: bring-your-own-IP, and the host not being able to sniff the packets. Those are the two basic things, so changing that would involve much bigger changes in the platform. That's why I am trying to push back on that. So —
J
This is Jason — I just want to interject and make a comment. I think the way that we're approaching a lot of these scenarios is based on what Windows customers are used to seeing; it's kind of the expected behavior. I think this point was made earlier, but to the comment about infrastructure roles and needing access to the host network: that hasn't really been a focus of the Windows container space to date.
J
A lot of these workloads that we are containerizing and running are kind of lifted and shifted from VMs into a container, and so I want to make sure that we're not levying additional requirements just because some use cases from a Linux perspective are being looked into, when they don't necessarily make sense there — they haven't been the scenario focus that end users expect from a Windows perspective. If that makes sense.
C
If there isn't one, there will be one in ten minutes — or there won't be. That is actually the discussion here, right: is host network actually required for conformance? Because if the answer is no, then nginx as an ingress deployment, for example, becomes a lot less tenable. But people —
C
Not sure — I think it was undecided whether Windows is a profile, or whether there's actually, like, OS as an orthogonal concept to profiles. Regardless, if there's an out here where you can be conformant as Windows without host network support working, then we're back in the territory we talked about.
B
So — okay, let me restate things, right. "Every node can access every pod without NAT": I completely agree with that for an L3-based network. It is a flat network, Windows also honors that, no questions asked, right. The only concern we have is where we are trying to bring in overlay networking support. Windows as a platform has been supporting network virtualization for Microsoft Azure and Azure Stack, and we are now bringing that same platform —
B
this feature — to containers as well, and one of the main requirements there is to support this bring-your-own-IP thing: not for containers, but for VMs and other scenarios, right. That was the reason why, in Windows, the VXLAN interface is put in a non-default compartment, and that's what is breaking this main requirement — the ability for a host on the L3-based network to access the VXLAN-based network, the one the pod is on, the one that is created for the pod. Now —
B
And you can have, like, the same IP on two VMs on the same host, or you can also have a pod IP that conflicts with the host IP. Anyway — we are not discussing that as a feature in Kubernetes. I'm not bringing that up, please, let's not. I just explained why it is hard for us to move that interface into the default compartment. I'm not bringing that as a feature requirement or anything like that; I'm not doing that. Okay.
B
But I just want to state that further — I agree with that, but I don't know the Linux community, and I don't know whether, if there is such a requirement, there have been such discussions. I don't want to divert this discussion into that discussion; I just want to make progress on this. Just for the overlay network: is there a way we can relax the requirement, so that it isn't "any node can access any pod on the VXLAN network"?
C
Kubernetes itself doesn't have a concept of overlay versus non-overlay, right — there's just the network, which is what makes this challenging. Yes, if we're able to say that Windows exists without host network, and, say, sig-arch is willing to certify that, then it seems reasonable to say that, without host network support, there's simply no such thing as a host-network pod.
C
Part of host network — when you say "I'm adding host network to this platform" — one of the required features of, quote, host network is that pods in that host network can reach all other pods. (Yes.) That's where that feature comes from, that's where that functionality comes from: it falls out of the host network feature. So if you take host network out, then we kind of side-step the problem. I —
D
The suggested rephrasing is: we're sort of quibbling about whether nodes should be able to reach all pods, and if we look at it this way we can basically say, look, there is not any requirement that nodes be able to reach all pods — but if you support host network, that is a consequence of that support. Yes.
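That rephrasing can be made concrete with a toy model: once a host-network pod exists on a node, pod-to-pod reachability from that pod *is* node-to-pod reachability, because the pod shares the node's address (a sketch with illustrative names, not real cluster code):

```python
# Node-to-all-pods reachability is not a free-standing requirement here; it
# falls out of pod-to-pod reachability the moment a host-network pod exists,
# since that pod uses the node's own address.
def node_reaches_all_pods(node, pods, pod_to_pod_reachable):
    # A host-network pod on `node` has the node's IP, so pod-to-pod
    # reachability from it is exactly node-to-pod reachability.
    host_net_pod = ("hostnet", node)  # shares the node's address
    return all(pod_to_pod_reachable(host_net_pod, p) for p in pods)

pods = ["pod-1", "pod-2"]
# In a conformant cluster, every pod (host-network or not) reaches every pod:
always = lambda a, b: True
print(node_reaches_all_pods("node-a", pods, always))  # True
```

If a platform cannot provide the node-to-pod half, the consequence is the one stated above: it cannot support host-network pods.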
C
— allowed to hurt themselves, I agree. And the topic of whether this is watering down conformance for Windows, or whether it's actually meeting users where they expect to be, is ongoing, I think — and sig-arch seems to be leaning in the latter direction: this is what users expect, so it's okay for it to be different from Linux. And frankly, if you want to run an nginx deployment, you should spin up some Linux nodes in your cluster and run it there, because there it actually works. Yeah.
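The "run nginx on Linux nodes" advice is expressed in a mixed cluster with a node OS selector — at the time of this meeting the label was beta.kubernetes.io/os; kubernetes.io/os is the current form. A sketch of such a deployment manifest:

```python
# Deployment pinned to Linux nodes in a mixed Windows/Linux cluster via the
# well-known node OS label. Replica count and image tag are illustrative.
nginx_deploy = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "nginx"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "nginx"}},
        "template": {
            "metadata": {"labels": {"app": "nginx"}},
            "spec": {
                # Schedule only onto Linux nodes:
                "nodeSelector": {"kubernetes.io/os": "linux"},
                "containers": [{"name": "nginx", "image": "nginx:1.17"}],
            },
        },
    },
}
```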
B
— are trying to bring in the support for Windows, and what we have seen is that the cloud providers are not allowing the VM to send traffic with a different MAC address, I think — they chose to do that, I don't know why. But my question is: that's also violating the requirement. Is that something that [inaudible]? That doesn't sound like expected behavior to me.
J
Yeah, I don't know exactly — I think it's leaning towards a hybrid cluster model, but I'm not sure. I mean, I definitely know, like, Azure, our public cloud, is gonna be doing it that way — not only because we need Linux nodes, but we use Linux nodes to run, like, the API server and scheduler.
K
The other parts are fairly minimal — those are just updating the dependencies and, yeah, adding the flags. And I had to change some of the config tests, because there are new flags. So I think that's the only part that you would have to really take a look at, and that should be minimal. I can split those up into different commits so that it's easier for you to review.
C
Okay — I mean, if you guys are okay with that, I don't have a problem making changes to it. I'm just really afraid of them, and so I don't want to be on the hook for reviewing those changes. That said, this change wasn't too egregious — it wasn't changing the actual proxy's port-connection behavior; it was really around the wiring, so it wasn't too bad. But still, if I had my druthers I'd delete that code. So, does —
C
Cool. I'll throw one more thing out and see if anybody else has had similar experiences. We have had a couple of customers who have bumped into problems with Services with very large sets of endpoints, and we've known for a long time that the Endpoints resource was — let's just say — badly designed, and I can say that because it was me. But we were starting to look at how we could possibly evolve that to be less of a scalability nightmare.
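The scalability problem with the monolithic Endpoints object can be put in rough numbers: every single-pod change rewrites the whole object to every watcher, whereas the slice-based design this effort eventually produced (EndpointSlice, default of 100 endpoints per slice) resends only the affected slice. A back-of-envelope sketch with illustrative numbers:

```python
# Why one big Endpoints object hurts at scale: a single pod churn event
# rewrites the whole object to every watcher (e.g. every kube-proxy), while
# a sliced design resends only the ~100-endpoint slice that changed.
def endpoints_bytes(n_endpoints, per_endpoint_bytes, n_watchers):
    # Monolithic Endpoints: the full list goes out on every update.
    return n_endpoints * per_endpoint_bytes * n_watchers

def endpointslice_bytes(per_endpoint_bytes, n_watchers, slice_size=100):
    # Only the slice containing the changed endpoint is resent.
    return slice_size * per_endpoint_bytes * n_watchers

n, unit, watchers = 10_000, 100, 500            # illustrative numbers
full = endpoints_bytes(n, unit, watchers)        # 500,000,000 bytes per update
sliced = endpointslice_bytes(unit, watchers)     # 5,000,000 bytes per update
print(full // sliced)  # → 100: amplification avoided by slicing
```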
C
You know, there's some tumult over there, so I'm not sure what the future of that effort is right now. That's actually a great thing to bring up — thank you, Francois. IPv6 experts needed: that project could use some extra wood-chopping and water-carrying to make forward progress. There's a wonderful KEP on how to do dual stack — it's actually a really well-thought-out KEP — and I know the guys who wrote the KEP are sort of committed to trying to move it forward, but reality often intrudes on our best plans.
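For context, the core API change in the dual-stack KEP is a plural podIPs field carrying one address per family alongside the legacy singular podIP; the addresses below are illustrative:

```python
# Shape of a dual-stack pod status: `podIPs` lists one address per IP family,
# while the singular `podIP` stays equal to the first entry for compatibility.
import ipaddress

pod_status = {
    "podIP": "10.244.1.5",
    "podIPs": [{"ip": "10.244.1.5"}, {"ip": "fd00:10:244:1::5"}],
}

families = {ipaddress.ip_address(e["ip"]).version for e in pod_status["podIPs"]}
print(families)  # {4, 6}: one address from each family
```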
H
Right — so I was at the contributor summit for the SIG update, so if you guys saw the slides: well, nothing new, just what we already had, but even more summarized. It seems like things are moving. Some SIGs are using GitHub projects to manage their tasks, and I'm thinking we should probably see if it's possible to take the spreadsheet that we keep for the release stuff and long-term planning and just use that automation. That's something I'll poke at to see what it involves — put it on the agenda for next time.