From YouTube: Kubernetes SIG Network 2018-04-19
Description
Kubernetes SIG network meeting from Apr 19, 2018.
A: So the first thing on today's agenda was to talk about KubeCon, yep, the upcoming KubeCon. I had one topic to talk about there, but I suspect we also just want to cover who's going to be there and what opportunities we have to meet up. So I wanted to ask; I'm actually not going to be there myself, but...
A: Cool, as long as it's being looked at, yep. And then we had, probably related, the kube-proxy upgrade and downgrade daemonset tests, which both seem to be failing somewhere in the setup phase.
The
key
proxy
upgrade
and
downgrade
daemon
set
tests
are
both
singing
to
fail
in
the
kind
of
setup
phase.
A: Okay, that sounds better. Yeah, I'm not sure who actually owns that one; I don't think it's me, but I'm happy to look at it. I think if we can't get this working by the next meeting, we should just get rid of it, because I don't think it's adding a lot of value. But I will set aside some time, or anybody else who thinks they own that, come chat to me and we can figure it out.
J: I know some have already approved; I guess the big approval will be the one from Tim, and I guess he's not here this week. I would like us to be okay with what's inside, and then I would like to go ahead; everything else is waiting. My issue is that it is very, very slow to get the PRs checked in, merged, or even reviewed. There is one on the autoscaling, the DNS autoscaler, which needs to work with CoreDNS.
J: That will help to have the end-to-end tests run completely for CoreDNS. I need this first one to get in first. The second one is the perf test that was asked to be extended. That is the criteria, the gate I mean, not for the approval but for the KEP, for the feature to get in. The PR has already been out for a week, and then I will have to get this one into CoreDNS. So my question is: what do I do to push these PRs?
J: Yeah, just send me an email and we will look at that. Okay, so one thing: CoreDNS GA is planned for v1.11, on the old feature mechanism, the issue that was opened two months ago, or no, two quarters ago. One thing that will happen, and we talked a little about that last time, is that if we want to verify that all the criteria will be good for GA, we need to commit at least this second commit, making CoreDNS the default with kubeadm. That means we make CoreDNS the default, at least for a while.
J: So we can verify the end-to-end tests are running, the scaling ones. The end-to-end scalability tests for two thousand or five thousand nodes run only after check-in, that is, they do not run on PRs, so I validate with a hundred nodes on PRs, but for the bigger clusters we have to check in. So I would like that as soon as possible, maybe next week: as soon as we have the approval, and Tim can give the approval on the KEP, we check in.
J: There is the hosting of the image on gcr.io, and Tim started an email thread last week, before going on vacation, where I wanted a fully organized process to push that image to the Google container registry, GCR. Not everyone agrees on what has to be hosted there or on how to do things, so that may be the long pole for CoreDNS, but I would like it as soon as possible.
J: So if I show you here, this is the planning for these three days. Everything in yellow is a PR and can be merged as soon as someone reviews and agrees. The long pole, in red, is the image being hosted on GCR. I don't know how long that takes, but as soon as it is done I need to update all of kubeadm, minikube and so on, so that we take CoreDNS from this image.
J: Sorry, sorry. But if we can go ahead with the yellow ones, that will help, and I would like to have something checked in at the beginning of next week, or during next week, so we can test the end-to-end scale. All right, okay, and that is all I have to say. So thank you. If you can, I will send an email, and if you can push on these PRs...
L: He was making changes for that for CoreDNS, and so he separated things out into v4 and v6 test cases, and we sort of found out that it works with CoreDNS and is failing with kube-dns. From what I can tell, what's happening is that one of the requests takes the pod's IP address, converts it to a name, and then submits that with dig to be resolved. When kube-dns gets that name, it tries to convert it back to an IP address.
L: Hang on, yeah. I guess what the question was is: should we fix it, or should we skip that test for kube-dns, or what? Now, I did do some more testing, and there's another pointer probe that they do. So there's one probe they do for a PTR record, and another one where they take the dotted IP address and try to probe that, and that's failing also, so it's clearly a bug. I think it's been fixed, right?
I: So I think there's a response; there's an email thread going on in the working group. And I think Mike's present question is particularly about how to implement that: how to basically feed the network policy state into pod ready++. Is that the question, right? That's what you're trying to ask. So I think there's a response from Thomas regarding how they already have similar constructs in Cilium, where they consider both cases.
I: One is per pod, whether the related policies are enforced, and then per node, whether that node has already completed all of its programming for the policy. But they haven't hooked that up with pod ready++, obviously. So, any other questions on this? Why is it hard, or why is it different? What's not enough about pod ready++ for network policy, right?
M: So I haven't had a chance to read Thomas's last reply, but the difficulty is that there's just a lot to quantify over. So, the relevant network policy objects for the readiness of a given pod: well, you realize a network policy refers to objects, to your set of pods, in two different ways. The policy object has this pod selector that sits kind of high up in the structure.
M: That selects the pods the policy applies to, and then it has a bunch of rules which, in turn, can have more pod selectors, or address selectors, or namespace-based selectors, that select other pods, and the policy affects traffic between those two sets of pods. So to find all the relevant network policy objects, you have to consider both ways that a network policy object can refer to a pod, and you have to test whether all the relevant network policy objects have been implemented.
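[To make the two-way lookup described above concrete, here is a rough Python sketch, not anything from the meeting. The dict shapes loosely mirror NetworkPolicy objects, only matchLabels is handled, and the helper names are invented:]

```python
# Illustrative sketch only: simplified objects with matchLabels-style selectors.
# Helper names (matches, relevant_policies) are made up for this example.

def matches(selector, labels):
    """True if every key/value in the selector's matchLabels appears in labels."""
    return all(labels.get(k) == v for k, v in selector.get("matchLabels", {}).items())

def relevant_policies(pod_labels, policies):
    """Return policies that refer to the pod either via the top-level podSelector
    or via a podSelector inside an ingress/egress rule (the two ways described)."""
    relevant = []
    for pol in policies:
        spec = pol["spec"]
        # Way 1: the policy applies to the pod directly.
        if matches(spec.get("podSelector", {}), pod_labels):
            relevant.append(pol)
            continue
        # Way 2: the pod is selected as a peer in some ingress/egress rule.
        peers = []
        for rule in spec.get("ingress", []) + spec.get("egress", []):
            peers += rule.get("from", []) + rule.get("to", [])
        if any(matches(p["podSelector"], pod_labels) for p in peers if "podSelector" in p):
            relevant.append(pol)
    return relevant

if __name__ == "__main__":
    policies = [{
        "metadata": {"name": "allow-frontend"},
        "spec": {
            "podSelector": {"matchLabels": {"app": "backend"}},
            "ingress": [{"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}],
        },
    }]
    # The frontend pod is relevant only through the ingress rule, not the top selector.
    print([p["metadata"]["name"] for p in relevant_policies({"app": "frontend"}, policies)])
```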
I: Right, so, yeah. So all the problems that you just mentioned are caused by the label selector, right, the implicit membership between pod and network policy objects. Okay, because you say all of this is caused by label selectors, so it's an existing problem for anything that relies on label selectors to define group membership, like services or network policy.
I: This is an existing problem. So one solution, right, like Thomas Graf points out, is based on a policy revision where everything becomes flat: you fold all the network policy versions into a single revision, and then you just check whether that revision is the latest across all the nodes and whether it is implemented for all the pods, right. I mean, that's the implementation detail. What pod ready++ gives you is an extension point, right.
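[A tiny sketch of that revision idea, again just illustrative: assume each node agent reports the highest policy revision it has fully programmed, and the check is whether every node has caught up to the latest revision. All names here are invented:]

```python
# Illustrative sketch of the "flatten everything to one revision" idea: each node agent
# reports the highest network policy revision it has fully programmed, and the policies
# count as implemented once every node has reached the latest revision.

def implemented_everywhere(latest_revision, node_reports):
    """node_reports maps node name to the highest revision programmed on that node."""
    return all(rev >= latest_revision for rev in node_reports.values())

if __name__ == "__main__":
    reports = {"node-a": 7, "node-b": 7, "node-c": 6}  # node-c is still catching up
    print(implemented_everywhere(7, reports))          # False until node-c reaches 7
```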
I: It's an extension point for external systems to influence the pod readiness. So you make the workloads aware that, okay, there's an extra feature that the pod readiness relies on; that's it. But how that information is fed back in depends on each feature, or whatever implements it; that's an implementation detail behind it.
M: I'm sorry, I didn't quite follow the last remark. Clearly there's a difference, a separation, between things that contribute to pod ready++ and things that consume pod ready++. My concern here is the things that contribute, specifically the network policy implementation: how to make it contribute correctly to pod ready++. (Mm-hmm.)
I: I mean, again, this is an implementation detail of what exactly you do. So, okay, look: underneath you need some kind of aggregation, right. You need to add some aggregation to get global knowledge of whether a certain network policy is already implemented across the cluster, right. You need an aggregator; that's the minimum. And then that aggregator gets the global knowledge and then injects the pod conditions, right, so that it influences the corresponding pods' readiness states.
M: So the setting of that condition for a particular pod depends on enforcement that's done adjacent to that pod and adjacent to every pod that it might communicate with (yes), so it depends on the implementation of both the network policy objects that select that pod in their high-level pod selector, and also the ones that select that pod in their ingress and egress rules (mm-hmm, yes). So we are talking about a set of network policy objects.
I: In fact, right, there's an egress rule that covers the pod, and there are ingress rules on the potential receiving end (yes, right), and you need to collect both of them. So basically the egress rules can be collected just from the node that is running the pod itself, and the ingress rules need to be collected from all the nodes, right.
I: Just something to simplify: the egress part is very simple, right, because, depending on the network policy implementation, most of them will only enforce the egress rules on the node that is running the pod, right. So that piece you need to retrieve from the node that is running the pod, but for the ingress rules you can't tell, right; it's spread all over the cluster, so it has to be aggregated across the cluster. I don't...
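[As a sketch of the aggregation being described, assuming invented names and a hypothetical condition type (this is not the actual pod ready++ design): an aggregator combines per-node enforcement reports and turns them into a single pod condition.]

```python
# Illustrative sketch only: an "aggregator" that combines per-node enforcement reports
# into one pod condition. Every name here (NodeReport, POLICY_CONDITION,
# aggregate_condition) and the condition type are invented for the example.
from dataclasses import dataclass, field

POLICY_CONDITION = "example.com/network-policies-programmed"  # hypothetical type

@dataclass
class NodeReport:
    node: str
    programmed: set = field(default_factory=set)  # policy names fully programmed on this node

def aggregate_condition(relevant_policies, pod_node, reports):
    """relevant_policies: names of the policies that select the pod (top level or in rules).
    Egress only needs the pod's own node; ingress needs every node's report."""
    by_node = {r.node: r.programmed for r in reports}
    egress_ok = relevant_policies <= by_node.get(pod_node, set())
    ingress_ok = all(relevant_policies <= done for done in by_node.values())
    ready = egress_ok and ingress_ok
    # Shaped loosely like a pod status condition the aggregator would patch in.
    return {"type": POLICY_CONDITION, "status": "True" if ready else "False"}

if __name__ == "__main__":
    reports = [NodeReport("node-a", {"allow-frontend"}), NodeReport("node-b", set())]
    print(aggregate_condition({"allow-frontend"}, "node-a", reports))
```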
B: So I guess I don't get why you're so concerned about all of these specific X-Y pairings. At some point your implementation, whatever it's doing, has to have applied all of the rules that map, and that's really all that matters. Like, once you've updated the network to reflect all of the currently active network policies and all of the currently active pods, then you can say that the pod is ready, and...
M: Thanks, yeah. That's a little more than necessary, but that would be a way to do it. I mean, that's asking a little too much, right, because pod ready++ is specific to a pod, so it would actually be wrong to insist that all the network policies are implemented (yes); you must check whether all the relevant network policies were implemented.
H: If you think of it this way, you apply a network policy by restricting, maybe, the permissible ingress and the permissible egress for a particular pod. Then the point at which you act, where you have a single point of control around that, happens at the time of deploying that pod, the network namespace at that moment. I don't...
M: Yeah, tell me again, what are we trying to do in this doc? I mean, I don't have a good solution, so I'm not going to be the right one. Yeah.
M: Does that make sense, or does this make sense? Not sure. I can have a doc: I'll write the problem statement and let people put in various solution approaches, I guess. I think I understand what Casey outlined. Maybe I'll let Casey, you know, actually write the description, if you want, or, Casey, I could try to write it. Yeah.
A: Cool. So the last thing on the agenda is something I added; I didn't have much, but I figured we should start thinking about our goals for the 1.11 cycle and make sure we've got them clearly defined. And there are a couple of requests for updates on some feature issues which, until we think about what we're going to do in 1.11, we can't quite answer.
K: Yeah, so there's a PR that's been out for review for two weeks and has gone through a bunch of changes. Now, yeah, I think it's close, but it's not approved and not merged yet, so I don't know if it qualifies as having a chance of being declared ready for 1.10. We might have to push this into 1.11.
A: Of course; I think that puts it in that category, yep. We wanted to have a proposal for supporting multiple pod IP addresses; I don't know that that was actually started.
K: Yeah, yeah, I think we should start looking at the dual-stack requirements, and I think that will drive the multiple-IP spec, multiple IPs per pod. So I think we should at least come up with a design spec for dual stack and start working on the multiple pod IP part of it, and possibly the multiple service IP part of it as well.
I: I guess that would be... So the first PR is out, but it's waiting for review from the node team, and from API machinery, sig-api-machinery. And I consulted with sig-node, because most of the changes are kubelet changes related to the pod status manager; it's very hard to feature-gate, so they basically recommend the changes be beta grade at least, like, production-ready.
M: It seems to me there's at least one bit of staging, or at least one staging thought, that's relevant here. So in the pod ready++ proposal, we're talking about introducing a new pod condition that mirrors the existing ready condition, and then changing how the existing ready condition is set. We could do the first before the second, and in some sense it seems like that may actually be necessary.
M: Absolutely, and to add one more thing, right: stage one would be to add the new condition, plus update the schema so that the set of readiness gates can be listed; and then stage two, once we have controllers actually, you know, setting those gates, would be to update the way the existing condition is set.
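[To illustrate the two stages described above, here is a rough Python sketch using plain dicts shaped like a pod object. The readinessGates field name and the condition type are assumptions for illustration, not necessarily what the proposal ends up with:]

```python
# Rough sketch of the staging: stage one adds the schema (a list of extra conditions that
# gate readiness) and the new condition type; stage two has controllers set that condition
# and the Ready computation take it into account. Field names and the condition type are
# assumptions for this example.

pod = {
    "spec": {
        "readinessGates": [{"conditionType": "example.com/network-policies-programmed"}],
        "containers": [{"name": "app", "image": "nginx"}],
    },
    "status": {
        "conditions": [
            {"type": "Ready", "status": "False"},
            # Stage two: an external controller (like the aggregator sketched earlier)
            # patches this condition in.
            {"type": "example.com/network-policies-programmed", "status": "False"},
        ],
    },
}

def gates_satisfied(pod):
    """Ready would only flip to True once every listed gate condition is also True,
    on top of the usual container readiness checks."""
    conds = {c["type"]: c["status"] for c in pod["status"]["conditions"]}
    gates = [g["conditionType"] for g in pod["spec"].get("readinessGates", [])]
    return all(conds.get(g) == "True" for g in gates)

print(gates_satisfied(pod))  # False until the controller sets the gate condition to True
```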