From YouTube: Kubernetes SIG Network 2019-04-18
Description
Kubernetes SIG Network meeting, April 18th, 2019
Okay, this reminds me of another one that I started to look at. Maybe I should try to take this one. Okay.
Yeah, I know you saw on the SIG Network mailing list, the external-dns team, which has been a kubernetes-incubator project, needs to move out of kubernetes-incubator since that's not a thing anymore, and so they were asking if SIG Network would sponsor them. For what it's worth, my opinion is that seems very reasonable. They would go under kubernetes-sigs, and it just means they would become part of our domain, so to speak, but I do want to get people's opinions.
There are some concerns with it, because it talks to many different vendors' APIs, and, you know, the community CI doesn't have access to all of those different vendors' equipment, so there's a limit: not all of those providers can be part of the CI. It also needs... right now the images are in a Docker Hub registry, so they're looking to graduate to, you know, an official place to live.
Well, I don't think it's graduating in any sense; incubation is going away, so it needs a place to live. So it's just, kind of, does it get to live under the Kubernetes organization or the kubernetes-sigs organization, and if it does, we have to decide that we're okay with that, because that means that if there are bugs with it, they're probably going to show up in our, you know, bug reports and that sort of thing, and so it's adding load to this team.
I'm here on behalf of Rafael, since he hasn't been able to be here, but I don't think they're saying they're gonna pull out or be uninvolved, so they would still be involved. I guess maybe what our take-away here should be is that we should decide if there are criteria they need to meet, or a commitment they need to make, or something, to make it make sense for us, but...
Any other comments? Do they need a KEP, or is that orthogonal? That's another question, since it's not an existing project. So it's...
Yeah, not a lot to say on the subject. We had a meeting the other week to kind of hash out some rough outlines of what we're gonna do. I have a very v1 version of an intro presentation up, basically going through the basic network stack and tracing what the service components are and how to debug it a bit. I don't know who's taking on the deep-dive stuff; Tim and Bowei both sounded like they might be doing something, but I don't wanna go and speak for them while they're absent.
Yes, we can, yes. So we have been cooking up this proposal for quite a while, and right now we wanted to share it with the community and start getting feedback. So the main goal... so there are multiple problems with the existing Endpoints API: it basically has a scalability limit bounded by etcd's object size limit.
So, and this is happening at scale, when there are many nodes and there are many endpoints, and then a rolling update happens, and then the aggregate data transmitted through the network is quite high. So we wanted to propose a new API that can solve these kinds of problems, in order to support like tens of thousands of endpoints on a cluster of thousands of nodes, and leave some room for foreseeable extensions. So that's the main API part.
The main idea is to shard the big Endpoints object into multiple objects, so each EndpointSlice will contain part of the endpoints, only a portion of the endpoints, up to a hundred. So if you have more than a hundred endpoints behind a service, they will get sharded among multiple EndpointSlice objects. By whom? By the controller: there will be a new controller, an EndpointSlice controller, that basically implements this API and translates the pods and the service label selectors into multiple EndpointSlice objects.
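As a rough illustration of the sharding described above, here is a minimal Go sketch. The Endpoint and EndpointSlice types and the maxEndpointsPerSlice constant are simplified stand-ins for this example, not the actual types from the proposal.

```go
package main

import "fmt"

// Illustrative types only; the real proposal carries richer fields
// (ports, conditions, topology) per endpoint and per slice.
type Endpoint struct {
	Address string
}

type EndpointSlice struct {
	Service   string
	Endpoints []Endpoint
}

// maxEndpointsPerSlice mirrors the "up to a hundred" cap mentioned above.
const maxEndpointsPerSlice = 100

// shardEndpoints splits the full endpoint list for a service into
// EndpointSlice objects of at most maxEndpointsPerSlice entries each.
func shardEndpoints(service string, endpoints []Endpoint) []EndpointSlice {
	var slices []EndpointSlice
	for start := 0; start < len(endpoints); start += maxEndpointsPerSlice {
		end := start + maxEndpointsPerSlice
		if end > len(endpoints) {
			end = len(endpoints)
		}
		slices = append(slices, EndpointSlice{Service: service, Endpoints: endpoints[start:end]})
	}
	return slices
}

func main() {
	eps := make([]Endpoint, 250)
	for i := range eps {
		eps[i] = Endpoint{Address: fmt.Sprintf("10.0.0.%d", i)}
	}
	// 250 endpoints shard into 3 slices (100 + 100 + 50).
	fmt.Println(len(shardEndpoints("my-service", eps)), "slices")
}
```

Running the sketch with 250 endpoints yields 3 slices (100 + 100 + 50), which is the sharding behavior the speaker describes.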
So in a later section of the doc, we basically say the existing endpoints controller will keep running, because there are always third-party consumers we don't know about that watch the Endpoints object, right. Yes, so it will be kept running, but starting from beta we will start throwing like error or warning events, and then eventually probably capping the number of endpoints in one Endpoints object to some number, let's say 500 or even 100, and then if you have more endpoints, use the new API instead.
So the simple case is 20,000 endpoints on a 5,000-node cluster, right. So on creation, the EndpointSlice API will have a write amplification, because it has to write like 200 EndpointSlice objects instead of one Endpoints object, but the total amount of data should be roughly the same, because they contain the same number of endpoints. But after the service is created, on any endpoint update, the data transmitted and the write QPS would be much better than with the existing Endpoints object.
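For reference, the creation-time write amplification in that simple case works out as below; the 100-endpoints-per-slice cap is the one quoted earlier in the discussion.

```latex
\left\lceil \frac{20{,}000\ \text{endpoints}}{100\ \text{endpoints per slice}} \right\rceil
  = 200\ \text{EndpointSlice writes} \quad \text{vs.} \quad 1\ \text{Endpoints write}
```

The total bytes written are roughly the same either way, since the same endpoint entries are simply spread across more, smaller objects.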
So there are estimations: for instance, if we use the existing API and there's a single endpoint change behind the service, we will end up transmitting around 10 gigabytes of data, versus with the new API it's like 50 megabytes. And then if we consider a rolling update, which means that every single endpoint gets updated and then recreated, it's even more: it's gonna transmit more than 200 terabytes of data if we keep using today's API.
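A back-of-the-envelope reconstruction of those estimates, assuming roughly 100 bytes per serialized endpoint entry (that per-endpoint size is an assumption; the 20,000 endpoints, 5,000 nodes, and 100-per-slice figures are the ones quoted in the discussion):

```latex
\begin{align*}
\text{Endpoints object size} &\approx 20{,}000 \times 100\ \text{B} \approx 2\ \text{MB} \\
\text{single change, old API} &\approx 2\ \text{MB} \times 5{,}000\ \text{watching nodes} \approx 10\ \text{GB} \\
\text{single change, EndpointSlice} &\approx 100 \times 100\ \text{B} \times 5{,}000\ \text{watching nodes} \approx 50\ \text{MB} \\
\text{rolling update, old API} &\approx 20{,}000\ \text{changes} \times 10\ \text{GB} \approx 200\ \text{TB}
\end{align*}
```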
So mostly the proposal is to address the scalability and performance issues coming from the original design of the API, and there are some caveats which we wanted to... there are also some other problems we want to address with the new API. One is that the port is no longer a required field, because if we require the port field, that means we have to basically honor the port remapping for the service, right. So what if one day we have a Service v2 that doesn't support port remapping, or only optionally supports it, right.
So that's one minor change. The second change is that, instead of having two lists of endpoints, a ready list of endpoints and an unready list of endpoints... that basically translates to: each endpoint can only have two states, either ready or not ready, but in reality you have the pod lifecycle.
There are more states than ready and not ready, like graceful termination and things like that, so it basically blocks us from adding more states to each individual endpoint. So instead we added a list of endpoint conditions associated with each endpoint, and initially we only have one condition type, which is ready. It can be either true or false, and it basically inherits the current behavior to describe whether the endpoint is ready or not ready.
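A rough sketch in Go of what a per-endpoint condition set could look like; the field and type names here are illustrative, not the exact shape in the proposal.

```go
package main

import "fmt"

// EndpointConditions is an illustrative per-endpoint condition set.
// Initially only "ready" exists; more condition types (for example,
// a graceful-termination state) could be added later without changing
// the overall shape.
type EndpointConditions struct {
	Ready bool
}

// Endpoint carries its own conditions instead of living in separate
// "ready" and "not ready" lists on the parent object.
type Endpoint struct {
	Address    string
	Conditions EndpointConditions
}

func main() {
	ep := Endpoint{Address: "10.0.0.7", Conditions: EndpointConditions{Ready: true}}
	fmt.Printf("%s ready=%v\n", ep.Address, ep.Conditions.Ready)
}
```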
We would like to switch the internal consumers, especially kube-proxy, because it runs on every single node, to consume the EndpointSlice API instead of the Endpoints API, and keep the existing classic endpoints controller running to produce the Endpoints object, up to a certain limit. So that's basically the proposal; I know it's only a short amount of time.
Yes, and so I have also included another, more advanced idea, basically dynamic subsetting. The idea is that, instead of always sending all the endpoints to all the nodes, we only send a subset of the endpoints to a group of nodes, so that basically further reduces the overhead. But since our goal is only to support like tens of thousands of endpoints, not millions or tens of millions of endpoints, we basically determined that this is too complex, and at this point the complexity outweighs the gain.
So, some background: we have seen this tons of times, and every time it's a different problem, and the kernel always throws out the same error message. It just means that the kernel has lost a reference to one of the... like, yes, and this can happen anywhere inside the kernel. So it's sort of a generic symptom for different causes, so sometimes upgrading to a different Linux kernel would just solve the problem.