From YouTube: Kubernetes SIG Network 2018-02-22
Description
Kubernetes SIG Network Meeting, February 22nd, 2018
F
So I've been looking at the failures, all the failures that have kubeadm in the name. We got one fix merged and it unblocked a bunch of tests, but there's a secondary problem now with some of the remaining tests. It seems that the pull jobs work; those are the tests that run before the merge, on the PR. But the post-merge jobs, the CI or periodic jobs, are failing. What looks like is happening is that for each job there's a prerequisite build job that's run first, and then the test run follows it. It looks like the build job is building everything and putting it in one bucket on Google Storage, but then when the test runs, it's pulling from a different bucket: it's pulling from ci/latest instead of from the results of the previous build.

So I talked about that with test-infra, and I showed them a line in the kubeadm e2e Python script for that scenario, and they said, yeah, it looks like this test is just very confused. So I'm going to try to come up with a fix; it should just be a few lines of change in that scenario. Okay.
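The bucket mix-up described here comes down to which location the test step resolves its artifacts from. A minimal sketch of that logic and of the few-line fix, with purely illustrative bucket names (the function, the marker key, and the paths are assumptions, not the real job config):

```python
# Illustrative sketch of the bug described above: the test stage resolves its
# binaries from a global "ci/latest" marker instead of the bucket the
# preceding build job wrote to. All names here are hypothetical.

def resolve_artifacts(markers, build_output=None, use_build_output=False):
    """Return the location the test stage pulls Kubernetes binaries from."""
    if use_build_output and build_output:
        return build_output          # intended behavior: consume the build job's output
    return markers["ci/latest"]      # observed bug: always falls back to ci/latest

markers = {"ci/latest": "gs://example-release/ci/latest"}
build_output = "gs://example-release/builds/pr-1234"

# Buggy path: the periodic job ignores its own build step's bucket.
assert resolve_artifacts(markers) == "gs://example-release/ci/latest"
# The fix: a few lines to honor the preceding build's location.
assert resolve_artifacts(markers, build_output, use_build_output=True) == build_output
```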
G
We had an internal meeting here just this morning about a different topic, and this sort of came up. We were trying to sketch out some possible ways that we could build some alignment between this topic, Istio, some of the multi-cluster work, and some of the possible ways of rebooting Ingress that are sort of circling each other. I'm hoping to write a little bit about possible ideas and share that, probably early next week. Unfortunately, that means that the proposal just has to sit on hold until we get something written out.
H
[Audio unclear.] The cluster administrator can configure which services are local and what topology they prefer. We define a service policy: it references services by name and namespace, and there is a strict or optional setting. We can configure the policy and the topological preferences per service, but the problem is [inaudible] and it fails. With a strict service policy there are two fields: one is the service and the other one is the topology [inaudible].
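The audio above is rough, but the proposal sounds like a per-service topology preference with a strict versus optional mode. Purely as an illustration of that idea (the function signature and field names are assumptions, not the actual proposal):

```python
# Hypothetical sketch of strict-vs-optional topology-aware endpoint selection,
# as (roughly) described above. All names are illustrative, not a real API.

def pick_endpoints(endpoints, node_topology, topology_key, strict):
    """Prefer endpoints matching the client's topology; fail or fall back otherwise."""
    local = [e for e in endpoints
             if e["topology"].get(topology_key) == node_topology.get(topology_key)]
    if local:
        return local
    if strict:
        # Strict mode: no matching endpoints is a hard failure.
        raise RuntimeError("no endpoints satisfy the required topology")
    return endpoints  # optional mode: fall back to all endpoints

eps = [
    {"ip": "10.0.0.1", "topology": {"zone": "a"}},
    {"ip": "10.0.0.2", "topology": {"zone": "b"}},
]
assert pick_endpoints(eps, {"zone": "a"}, "zone", strict=True) == [eps[0]]
assert pick_endpoints(eps, {"zone": "c"}, "zone", strict=False) == eps
```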
G
The direction actually aligns pretty well with what we were talking about just this morning. So, I'm going to try to write this up. There are a couple of things that I don't think I understand quite yet about how it might work, that I need to just bend my brain on, and I will have the proposal out. Hopefully, I'll commit to getting it out by mid next week.
G
Right. I will comment on it here. I'm probably going to have to write something new, because I don't want to hijack your doc, and there's a bunch of other topics that need to be covered sort of in the same discussion. So I'll comment on your proposal and link it off to the separate discussion, and we can figure out how to refold those together.
B
Yeah, I took a look at it too. I feel like there are still some unanswered questions around, I think you also mentioned, Tim, discovery, things like that. I mean, if you look at the existing kubelet CNI driver code, a lot of it is built around: how do we find the network configuration file for the plugin?

You know, how do we figure out whether it's ready, that kind of thing. If all that stuff got moved to, say, even a gRPC interface, all of it would still have to live somewhere, because that functionality still needs to be there: discovery needs to be there, and you still kind of need to ping the plugins.
G
There's a bit of work that we can piggyback on with the storage and GPU/device-plugin folks here. They're working on a joint proposal for plugin discovery, registration, and a handshake, so that they can use the same system across both of those plugin types, and I asked them to keep in mind that if networking were to follow suit, whatever they design should probably be appropriate for networking. Okay.
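A registration scheme like the one being discussed often starts with discovering plugin sockets under a well-known directory. A minimal sketch of just that discovery step, assuming socket files in a shared path (the directory layout and plugin names are hypothetical, not the joint proposal's design):

```python
# Minimal sketch of directory-based plugin discovery, in the spirit of the
# joint device-plugin registration proposal. Paths and names are hypothetical.
import os
import tempfile

def discover_plugins(plugin_dir):
    """Return the names of plugins that registered a unix socket in plugin_dir."""
    if not os.path.isdir(plugin_dir):
        return []
    return sorted(f[:-len(".sock")] for f in os.listdir(plugin_dir)
                  if f.endswith(".sock"))

# Demo with a throwaway directory standing in for the well-known plugin path.
demo_dir = tempfile.mkdtemp()
for name in ("cni-driver.sock", "gpu-device.sock", "notes.txt"):
    open(os.path.join(demo_dir, name), "w").close()

found = discover_plugins(demo_dir)  # only the *.sock entries count as plugins
```

Registration and the readiness handshake would sit on top of this, for example by dialing each discovered socket.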
H
[Audio unclear.] I think, as was said, we can remove some more low-level things from kube-proxy and move them [inaudible], and then we can make kube-proxy and CNI much closer, as we always said, on the topic of combining CNI and the services. And I think this [inaudible] is the fastest path to do that work.
B
One thing I wanted to see a little bit more of was what some of the integration points between CNI specifically and kube-proxy might be. I think there were some comments in the doc around kube-proxy wanting to know what the interfaces are for a given pod, or the address CIDRs and things like that, so fleshing that out would be interesting to me.

I mean, as far as I know (and Tim, you probably have a much deeper historical knowledge of kube-proxy), kube-proxy tries to be fairly generic, and that does mean that a lot of the configuration options sometimes have to be duplicated and, you know, sent both to the kubelet and to kube-proxy for some things, exactly.
G
I think the coupling there would be dangerous, but, like we've talked about, having a flag to kube-proxy that was, you know, "give me the name of the prefix that your CNI driver is going to use for its virtual interfaces": even that is a level of coupling that is sort of weird, because I can't actually guarantee that whatever CNI driver you're using is going to handle interfaces in the same way, and so it didn't seem palatable to me. In fact, I think this idea overall came out of that question.
G
The thing is that these things are really at arm's length from each other. Kube-proxy can't rely on CNI, because the kubelet can at least ostensibly work in driver modes that aren't CNI; I mean, other runtimes have modes that are not CNI, right? So it can't really know anything about CNI, and CNI doesn't really want to know anything about kube-proxy, because it's just an abstract specification that runs on, say, Mesos, and doesn't know anything about kube-proxy, I mean.
B
I guess what I'm saying is that it seems like it might be another case for storing whatever network result, whether that's a CNI result or something else, somewhere in the kube API, so that other things that want to use it, like kube-proxy, might be able to get at it.

Oh, I see what you're saying.
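One hedged way to picture "storing the network result in the kube API" is serializing it onto the pod object, for example as an annotation that consumers like kube-proxy could read. A sketch under that assumption (the annotation key is a made-up placeholder; the result shape is loosely modeled on a CNI result):

```python
# Sketch: attaching a CNI-style network result to a pod object so that API
# consumers could read it without talking to the plugin directly.
# The annotation key below is a hypothetical placeholder, not a real key.
import json

ANNOTATION = "networking.example.com/result"

def attach_network_result(pod, result):
    """Store the serialized network result on the pod's annotations."""
    pod.setdefault("metadata", {}).setdefault("annotations", {})[ANNOTATION] = json.dumps(result)
    return pod

def read_network_result(pod):
    """Return the deserialized network result, or None if absent."""
    raw = pod.get("metadata", {}).get("annotations", {}).get(ANNOTATION)
    return json.loads(raw) if raw else None

pod = {"metadata": {"name": "web-0"}}
result = {"ips": [{"address": "10.1.2.3/24", "interface": "eth0"}]}
attach_network_result(pod, result)
assert read_network_result(pod) == result
```

Whether this belongs in annotations, pod status, or a dedicated object is exactly the kind of question the proposal would need to settle.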
G
We've had some people adding themselves to OWNERS, which is good. More and more of the PRs I look at are already LGTM'd by the time I get there, which is great; it saves me a fair amount of work. I'm still finding some amount of comments that feel like they should have been handled. I don't mean this in a nit-picky sort of sense, but it would be great if PRs were reviewed before they got to me, because I'm so constrained right now. So...
G
I like to look at things also, but it's nice if a lot of the initial comments are done. Concretely, I'll call out Rohit here for helping me out a ton, because I didn't have time to review the IPVS PRs in great detail, and he jumped on and had a bunch of really useful comments that made the PRs easier for me to review when I got there. Okay.
G
The problem is, if you ping me in the bug, I probably won't even see it, because a lot of GitHub is just lost on me. I have a cycle during which I open up my PRs, and unfortunately GitHub just doesn't show me, "hey, this one was most recently commented on"; it just shows them to me in creation order, and my email is absolutely hopeless, so GitHub notifications are just sort of lost in there. So I have no signal of urgency or neglectedness.

So if you email me and you say, "hey, this PR has been neglected," I will definitely make it a priority, and people should not be shy about that. Slack is okay, although it's easy for me somehow to clear a Slack notification, not realize that I cleared it, and then lose it. I feel like a Luddite, like I can't figure out how to use a computer, but don't feel shy about pinging me, especially if something has been neglected for, like, literally months. Okay.
G
Right, and if there are other people who want to, and who we think should be, reviewing kube-proxy PRs overall, I'm happy to promote those things. Although [inaudible] in the OWNERS on the IPVS side of things, there's been a relatively small number of people who have contributed to the iptables side of things, but I'm happy to expand that list. I feel bad drafting people for it, because it feels like I'm signing them up for work with very little advantage, but if people are willing to do it, I would love that.
K
So, to give a little background on this problem: back last November, at re:Invent, we introduced the VPC CNI. Basically, we give pods VPC-native IP addresses, so a pod becomes a first-class citizen on the cloud. But one of the problems we have is that, depending on the node type where the pod gets scheduled, each node has a limit on the number of IP addresses. Say, for example, if you have a t2.medium, you may only be able to have a maximum of 15 IP addresses. Today the scheduler is not aware of this particular constraint, so the scheduler can continue scheduling pods onto a node, maybe 40 or 50 of them, onto a t2.medium.
K
Well, the node itself cannot support that many pods. So that's the problem. The proposal here is basically that we want to use the Kubernetes extended resource mechanism: for any pod, we will specify an extended resource which we create, with a name like vpc.amazonaws.com. That means, for IPv4, you are using one IP address for this pod, and the scheduler uses this information to schedule the pod onto the nodes.
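Extended resources like this are typically advertised by patching the node's status subresource with a JSON patch. A sketch of the body such a controller might send (the full resource name, here `vpc.amazonaws.com/ipv4`, and the count are illustrative extrapolations from the talk):

```python
# Hypothetical sketch of the JSON-Patch body an IP controller could send to
# /api/v1/nodes/<name>/status (Content-Type: application/json-patch+json)
# to advertise per-node IPv4 capacity as an extended resource.
import json

def ipv4_capacity_patch(count):
    # A "/" inside a JSON-Pointer path segment must be escaped as "~1",
    # so "vpc.amazonaws.com/ipv4" becomes "vpc.amazonaws.com~1ipv4".
    return [{
        "op": "add",
        "path": "/status/capacity/vpc.amazonaws.com~1ipv4",
        "value": str(count),  # resource quantities are strings in the API
    }]

body = json.dumps(ipv4_capacity_patch(15))
```

Pods would then request the resource in their container `resources` stanza, and the scheduler does the rest of the accounting.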
K
So here is a basic overview of the workflow. To solve this problem, we introduce a new component called an IP controller. What it does is basically watch the node objects. Whenever a new node joins the cluster, we get notified, and based on the node's instance type, we basically program the API server to say that this particular node has whatever number of available IPv4 resources.

This way the API server knows the number of IP addresses for the node. Then later, when there's a pod that needs to be scheduled (a user or a program says, "I need to add a new pod"), the scheduler knows, okay, this node has, for example, 15 IP addresses, schedules the pod onto this node, and then reduces the number of available IP addresses to 14. So this is basically the overall solution.
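The bookkeeping described in this workflow (15 addresses advertised, one pod scheduled, 14 remain) can be sketched as a tiny simulation:

```python
# Tiny simulation of the workflow above: the controller advertises per-node
# IPv4 capacity, and the scheduler decrements it as pods land on the node.

class Node:
    def __init__(self, name, ipv4_capacity):
        self.name = name
        self.available_ipv4 = ipv4_capacity  # set by the IP controller

    def can_fit(self, pod_ipv4=1):
        """Scheduler predicate: does the node have enough IPs left?"""
        return self.available_ipv4 >= pod_ipv4

    def schedule(self, pod_ipv4=1):
        """Place a pod on the node, consuming its IP allocation."""
        if not self.can_fit(pod_ipv4):
            raise RuntimeError(f"{self.name}: out of IPv4 addresses")
        self.available_ipv4 -= pod_ipv4

node = Node("t2-medium-1", ipv4_capacity=15)
node.schedule()
assert node.available_ipv4 == 14  # the 15 -> 14 step from the walkthrough
```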
K
We're relying on using the secondary IPs on an ENI, and that resource has a maximum limit that depends on what kind of instance type it is. So if you have a large one, say a c4.xlarge, I do not know the exact number; there's a website with a table that will show you. The large instances have a lot more IP addresses versus the smaller ones. For a t2.medium, basically, you can have three interfaces, and each one can have five addresses.
B
I feel like I can see how a number of different plugins, I mean even ones like the default kubenet plugin, could run out of IP addresses as well. So I can see how there could be a general need or use for IP addresses as a resource at the node level, but I know that not all network plugins are going to work like that, and some will be able to dynamically add more addresses to the node once the node runs out.
K
If it is small, you may run out. If you have a /30, for example, you only have 4 available IPs, but if you schedule more pods onto it, you'll run out of IP addresses as well. So that's my question: yes, I'm sure people have probably seen this problem before; I'm just curious how they solved it. Here's our proposal, and what's the next step? Should we generalize it to make a general-purpose network resource?
B
That is a good question. Does anybody else have thoughts on whether they've encountered this problem before, due to their IPAM scheme? I know at least in OpenShift land we currently allocate a certain subnet to a node, but when the node gets to that point we don't do anything special or specific around stopping pods from being scheduled on that particular node; you're expected to size correctly.
B
Well, if nobody else has comments on this call: if you want to make more comments, read the proposal a little bit more; the issue is linked in the agenda doc, so please take a look at that. Thanks. And yeah, let's see if there are more comments on the agenda. If you have more changes based on those comments, please update the issue, and then we can circle back around to this proposal at the next meeting. Sure, okay, thanks, all right.
E
CoreDNS, yes; I'm responsible for DNS. So, my issue right now: we are ready for CoreDNS to go beta, but that needs the PR that is linked here on the document to be checked in. It has been ready for one month. Last meeting I asked, and Bowei said he would dive into it, and thankfully that helped: we addressed his comments and applied some changes to this PR. But for one week now nothing has been happening, and I am fearing.
E
Now we are in code slush, and the code freeze is on Monday. First, I would like this PR to be flagged for v1.10; the issue is flagged, but not the PR itself. I don't know how it works, really, but who can flag this PR for v1.10, so it has visibility and maybe some priority to go in for v1.10? Then, for approval, I need another review; unfortunately, I understand that Bowei's review alone doesn't allow it to merge.

The thing is, I feel very unable to push that PR along; I don't know how to make it move. Yes.
B
The rest of the process is going to be that, again, like with the kube-proxy stuff, when you see the comments around "needs approval from an approver of each of these files" pointing at the Godeps owners and vendor owners, somehow we, or you, need to harass the people listed there and try to get those approvals.
B
Yep, yes. So the way it works is that the more directories you touch, the more people you need to approve stuff. Timothy St. Clair approved the changes to cmd/kubeadm, but the changes to Godeps and vendor need other approvers, and apparently Timothy is not one of those. Okay. So...
E
Another thing I wanted to say: I saw that for the integration tests for kubeadm there is one that is red for CoreDNS, depending on what it is, but I see that there are two tests that are red, DNS [inaudible]; I'll look into them. You said it would be similar to CNI Calico, but I'm not sure for these two, so I will dive into this job.