From YouTube: Kubernetes SIG Network 2018-04-05
Description
Kubernetes SIG network meeting, April 5th 2018
B
The goal here for this KEP, for GA, is to define it as the default DNS server. Right now, in beta, you can set up kubeadm with CoreDNS, but you have to give some parameters to the tool you use to deploy your Kubernetes. So the goal is to have it as the default DNS, and to have the CoreDNS image posted somewhere on gcr.io, as of today.
B
What is not in the goal is going back — translating back to kube-dns. So you can still install kube-dns, but we will not take the ConfigMap of CoreDNS and translate it into a kube-dns ConfigMap; that is not in the goal. That also means that if you have a tuning of kube-dns that is not defined in your kube-dns ConfigMap — that is, something set on the command line of the internal components of kube-dns — it will not be translated.
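For reference, a minimal Go sketch (using the k8s.io/api types) of the kind of CoreDNS ConfigMap being discussed; the object name, namespace, and Corefile contents follow common kubeadm-era defaults and are illustrative, not an exact copy of what the migration tooling writes:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // corednsConfigMap sketches the ConfigMap that holds the CoreDNS Corefile.
    // Only tunings expressed in a ConfigMap (on the kube-dns side) are candidates
    // for translation; flags passed directly to kube-dns's internal containers
    // are not, as noted above.
    func corednsConfigMap() *corev1.ConfigMap {
        corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
    `
        return &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "coredns",     // name and namespace follow kubeadm's convention
                Namespace: "kube-system",
            },
            Data: map[string]string{"Corefile": corefile},
        }
    }

    func main() {
        fmt.Println(corednsConfigMap().Data["Corefile"])
    }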
B
That means: make them run and verify. We still need to reach the performance of kube-dns and extend the existing tests. So there is a set of performance tests for DNS that exists right now, but only for kube-dns; that would be extended for CoreDNS, and there is someone else taking care right now of having these perf tests run in the end-to-end tests.
B
So we can monitor degradation, or the good performance, of this component — not only the DNS, but all the perf tests. So, I don't know — any comments? So now what I'm waiting for is an approval on this one. We can go ahead, and maybe talk a little about the process: how do we manage to pass these graduation criteria? How do we do it?
A
My guess is that we would need to do this early in a cycle — yes, enable it and make sure that we can get sufficient test runs throughout that cycle with that configuration, so that we're confident. I mean, it's interesting, because part of it is just designation. So one part is getting enough runs that we're confident designating it as GA, and then the other part is switching some flags in, sort of, installer projects, which can be done sort of independently of the designation as GA.
B
So
what
will
happen
is
when
it's
ga
ga
means
okay
communities
say
it's
all
these
default
DNS
servers
is
Cory
honest,
but
it
is
true
only
on
the
paper
as
long
as,
in
fact
there
is
no.
The
kubernetes
does
not
deploy
cube
DNS
directly,
so
it
is
up
to
the
the
tool
around
communities
to
deploy
the
right
DNS
server.
So
we
will.
We
have
already
updated
some
of
these
tools
or,
like
you,
Benigni,
but
some
other
are
not
a
better
than
maybe
a
bit
later.
B
So, is it cleared up? Yeah, I think so. To test it, I had to use one tool called docker-in-docker, so that is working on my deployment. For docker-in-docker I have a PR submitted since January for that, but it was never approved — I don't know exactly why — but now I need to rebase, and docker-in-docker is not working. But yes, I updated it.
F
Casey, I also added a comment on the KEP, but I think, not just for this one but for all graduation criteria: can we try to be more specific? As in, not just saying "passing all e2e tests", but can we say, like, passing the tests for the last 10 runs, 15 runs? That way it's more specific what numbers need to be hit, and it's not ambiguous as to whether the actual criteria were met or not.
B
Sure
that
sounds
sounds
good
to
me.
The
good
idea,
I
think
so
we
are.
We
are
back
to
my
initial
question
in
that
case,
so
we
get
an
approval
for
the
gate,
for
what
is
the
graduation
criteria
and
then
we
merge
quite
early
because
we
want
to
see
if
everything
will
work
for
everyone
and-
and
that
means
that
at
the
time
of
the
code
freeze,
we
need
to
verify
that
the
graduation
criteria
are
met
and
if
not,
we
rewrote
this
match.
G
Was it to solve all the transition problems, or was it, like, mostly for hooking up — making the workloads aware, like, network or other features aware? So, I addressed most of the comments and then I opened up a KEP for the proposal, and there are two minor points where I received a lot of comments. One is that, for pod ready++, the readiness gates are not allowed to be updated after they are specified at initial pod creation time — so they cannot be updated.
G
Okay, that's one minor point, and the other is that I received a lot of requests to have an additional pod condition to capture what today's pod "Ready" basically means — that the containers are ready — in order for other use cases to have a better signal. So I just want to throw these two minor points to the community and see if there are any, like, proposals. And, also related to the second point: having an additional pod condition, called "containers ready", to replace today's ready condition.
G
Behind that — like, let's say, having the endpoint readiness equal to "containers ready" instead of the ready++ gives, let's say, some use cases. Like, take kube-proxy, right: I wanted to make kube-proxy say "all the programming has finished", and then mark the pod as ready for kube-proxy, right. And that makes it possible, because kube-proxy only watches endpoints, right, and it's not scalable for all the kube-proxies to watch all the pods, right.
G
So, basically, the endpoint readiness can be a signal for kube-proxy or any load balancer that wants to start programming, and then the feature — or whatever touch point on those features, like kube-proxy — can aggregate the programming progress and then say "okay, I'm done", and then feed that back into its own pod condition, yeah. So that opens up that gate. Any, like, suggestions or comments?
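A minimal Go sketch of the shape being discussed here; the ReadinessGates field and PodReadinessGate type follow the pod ready++ proposal's corev1 API, while the condition type and the external controller behaviour are hypothetical, for illustration only:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // lbReadyCondition is a hypothetical condition type owned by an external
    // controller (for example a load-balancer programmer); the name is made up
    // purely for illustration.
    const lbReadyCondition corev1.PodConditionType = "example.com/lb-ready"

    func main() {
        // A pod opts into the proposed "ready++" behaviour by listing an extra
        // readiness gate: the pod is only reported Ready once the normal
        // container checks pass AND the gate's condition is True.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "web-0"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "web", Image: "nginx"}},
                ReadinessGates: []corev1.PodReadinessGate{
                    {ConditionType: lbReadyCondition},
                },
            },
        }

        // The external controller (kube-proxy, a load balancer, ...) would later
        // patch the pod status with its own condition once its programming is done.
        pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
            Type:   lbReadyCondition,
            Status: corev1.ConditionTrue,
            Reason: "ProgrammingFinished",
        })

        fmt.Printf("readiness gates: %+v\n", pod.Spec.ReadinessGates)
    }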
E
Yeah, I think that's appropriate. I think we might want to think a little bit about the name for the current condition — we've been saying "containers ready". I think the real distinction here is that ready++ is not so much about the pod; it's about the rest of the whole system being ready for traffic to flow from the clients to the pods.
G
Yes, I consulted this with the workloads team, and it's basically almost impossible to ask all the workload controllers to... let's say we come up with another pod condition called, let's say, "ready plus plus", right, to represent that the pod is actually ready; but to make that take effect, it needs all the workload controllers to —
E
— go in there, yeah. I was not going there. Okay, and just addressing the question of — you talked about introducing a pod condition, but defining a new pod condition that means what is currently "pod ready", right. Yes, we need a plain old "pod ready" condition — I'm just addressing the question of what to call it. Okay. The name of this condition should not just focus on the containers; the name should ideally emphasize that what we're talking about is the pod, not also the rest of the system.
G
Endpoint, right now, is a Kubernetes-internal construct, and it depends on what the third-party — like, whatever third-party controllers — are doing; but even though we switch to "containers ready", it should not, like, interrupt their current behavior, because it's basically today's behavior. It's mostly to facilitate kube-proxy, or other components like load balancers, that need a better signal — like you commented, like Mike commented on the KEP. Yeah.
G
But okay, that goes back to the other — like, the other part of the comments I have, right. So, does this proposal aim to solve all those kinds of transition problems within Kubernetes? No, right. What you just described is one type of transition problem in Kubernetes. This proposal aims to seal the gap between workloads and the network — okay, that's it — and it does not aim to solve, like, all sorts of transition problems. And then, like, for instance, one example I got is: how do we use this feature?
G
Whether we want to do it or not — because that gives kube-proxy, or another load balancer, like, a better signal. Or, kube-proxy... kube-proxy sort of doesn't have a choice, because it's just not scalable for kube-proxy to watch all the pods; but for some random third-party controller that used to watch only endpoints, you can easily just watch pods instead.
E
Introducing
right
ordering
for
pod
readiness
is
really
so
we're
making
we're
splitting
one
concept
of
pod
readiness
on
youtube.
What
do
you
think
of
also
splitting
the
one
concept
of
endpoint
readiness
into
two,
and
so
we'll
define
two
concepts
of
endpoint
readiness?
One
is
tested,
as
the
current
condition
is
tested
today
will
reflect,
let's
say,
ready,
plus,
plus
and
one
will
be
tested
in
a
new
way
and
will
reflect
the
plain
old
ready.
G
Or we can say: okay, with this proposal we keep the definition of endpoint readiness mapping to "Ready", right; and then, if we have a strong use case — like, other than kube-proxy wanting to have a better signal to program endpoints — we start from there; or, if we had use cases like topology-aware routing, or other use cases that can basically reuse the ready++ condition, we can proceed from there. How's that?
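For context on where that split would surface, a minimal Go sketch of today's core/v1 Endpoints object, which already carries separate ready and not-ready address lists; the object name and IPs are made up:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The Endpoints object already splits addresses into "ready" and
        // "not ready" buckets; the question above is which pod condition should
        // drive that split (plain Ready vs. the stricter ready++).
        eps := corev1.Endpoints{
            ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
            Subsets: []corev1.EndpointSubset{{
                Addresses:         []corev1.EndpointAddress{{IP: "10.0.0.5"}},
                NotReadyAddresses: []corev1.EndpointAddress{{IP: "10.0.0.6"}},
                Ports:             []corev1.EndpointPort{{Port: 80}},
            }},
        }
        fmt.Printf("ready: %d, not ready: %d\n",
            len(eps.Subsets[0].Addresses), len(eps.Subsets[0].NotReadyAddresses))
    }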
G
The endpoint is ready when the containers are up and they're ready to serve traffic, right; and then it's just that, like, maybe the network policy is not ready — and that can be solved by other, like, tricks, like having a label selector: having an additional label selector on the service, and then, when the network policy programmer finishes, it adds a label to the pod's labels, right. Then the endpoints automatically show up — like a network-policy-specific ready++ workaround — yeah, just for the endpoint, like, for having the endpoint show up online.
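A minimal Go sketch of the label-selector workaround just described, assuming a hypothetical gating label that a network-policy controller would add to the pod once its programming is done:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // gatedService selects on an extra label that only the network-policy
    // programmer adds to a pod when it finishes, so the pod's endpoint only
    // shows up in the service after that point. Label key and value are
    // hypothetical, for illustration only.
    func gatedService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{
                    "app":                   "web",
                    "example.com/netpol-ok": "true", // set on the pod by the policy controller when done
                },
                Ports: []corev1.ServicePort{{Port: 80}},
            },
        }
    }

    func main() {
        fmt.Println(gatedService().Spec.Selector)
    }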
E
Sure — I mean, you're saying that the problem of knowing whether the network policy has been applied to the endpoint is different from the problem of knowing that the network policy has been applied to the pod. But it's the same kind of problem, right, and the question is: do you want a solution that's unique to network policy, or do you want a generic solution? All right — I mean, you came in here saying we should have a generic solution, and that makes sense, yes.
E
Look, I'm also remembering Kelsey Hightower at a KubeCon — several KubeCons ago — saying: look, all this kube-proxy stuff, that's just training wheels; you don't have to use that, right, you consume the endpoints directly. So I'm saying, yeah, workloads might be consuming the endpoints directly, but —
G
It's
advertised
with
recommended
thing
to
do
but
like
in
reality,
no
workloads
like
I,
don't
like
maybe
maybe
this
but
like
99%
like
Oh
at
least
the
core
workloads
all
only
worth
of
work
with
parts.
They
don't
think
they
don't
care
about
endpoints,
don't
care
about
service
and
then
there's
no
way
to
enforce
it
like
force
all
the
workloads
to
actually
consider
network
contracts.
With
this
you
are
sort
of
forcing
them
to
consider
it
so
that.
E
Basically, what I'm trying to say is: we've got an existing API, okay, and we're focused on putting in a stricter readiness condition — like, tightening up the readiness condition that's in the API, right — because we have an unknown population of workloads out there that's consuming the existing API. So I'm saying, if you want to tighten up the readiness condition, it makes sense to me to tighten up the readiness condition for both pods and endpoints, because those are both what's consumed right now by this unknown population of code out there. Yes.
D
It had to be changed to a map, and so that change was made to kubernetes-anywhere, but now we're waiting for the version of kubernetes-anywhere to get bumped up in the version that's used in the test containers, and there's a PR for that, but it's just sitting there.