From YouTube: Kubernetes SIG Network 20170615
Description
Kubernetes SIG Network meeting 2017-06-15
There's this PBM issue. Yes, okay, so I believe that someone will be rebuilding the container VM in order to ameliorate this problem, and if it's not zml, can you do me a favor? Can you follow up and make sure that CML is actually going to do it? And if he isn't, then we need to find probably King, St. Clair, or Michael Thapa, I think.
Cool, next on the list is number 47453, which is with me currently. This is just related to standardizing CNI directories in some of the requests to spin up code; there's a PR open for that. It's passing tests and just needs to get merged. I think it's been approved to this point; I'm not certain there's any action there.
Yeah, it's got a pretty good diagnosis, and it looks like there's no question on that. Somewhat related, also note that, and I will talk about flakes next, there was talk on, I forget which community group, about having a label for GCE, so that things that are specific to the Google infrastructure can actually get tagged with that. I feel like we've been getting some things tagged sig/network that are actually more GCP issues. So, something to remember when you're taking your try at triaging flakes.
Sig testing. And then, I mean, for example, with this particular issue, the last comment there: theoretically sig networking is responsible for some of the cloud provider stuff, but I'm wondering where the boundary is between what sig network does with respect to cloud providers and the specifics of what something like the e2e tests need on GCE to actually do their thing. Like, what do we...
Generally do. I like to copy sig storage, because they've been around and they've been dealing with these things for a bit, so I'm more familiar with that; it may or may not be appropriate. I believe in storage we still run a bunch of these tests in kind of two environments: there's a Red Hat environment and there's a Google environment; I don't think Azure or Amazon are running them yet. And I think what we tend to do is keep them on the sig/storage label.
Because, you know, at the end of the day, what you're trying to do is make sure that the right people are looking at it. And so we may or may not make them blockers for the release; if we feel that, no, they're not blockers, then, you know, it depends on what the goal is, whatever seems appropriate.
I think that's very reasonable. I also think, though, especially as networking is getting bigger, take it at least from the people that I know in this room: I look at Daniel, who knows, let's say, network policy really well, but even then, well, he does know a lot of things.
Let's just say he's kind of in the same boat, in that the area is big and there's a little bit of triage, even within the sig, to find the right person, right? I'm going to admit my ignorance: who's the official sig lead for sig network right now? It's probably Tim, so, yeah, I don't know who's listed, and with Tim gone we should definitely find a good official triage person.
Along with Tim, Casey is listed, I believe.
But I do know, at least for me internally: when there are DNS issues I go look at Bowei, and when there's network policy I go ask Daniel, and, you know, when there's kube-proxy, I tend to sometimes look at the sig overall and ask who knows more about these things. But I don't really know, across all of the sig, who the experts are, and who are the people who both want to be in the know, or would be held accountable, when things act funky.
Now, this one I've got some context on. So, okay, here's the thing: there's a PR open, and I reviewed the PR, and it's touching some very old and stable pieces, so I feel like we should not rush the PR into the 1.7 branch. But I want to get it merged as soon as possible and let it soak, and maybe cherry-pick it later. Yeah, sounds good. Yep.
So, in general, I like to, you know, not use them as much in the networking area, because I'm not as familiar with it as some of the other areas. So, if possible, if you guys want to move things off to the next milestone, that would be great. I kind of did a few of them earlier this week, just because of the 1.7 deadline. If anyone feels like I'm misusing that, please let me know; I'm trying to do this generally as a last resort.
I mean, I could be wrong, but the problem is that there isn't any logging for the Ginkgo BeforeSuite, and while the BeforeSuite stuff does have failure messages, it appears those don't get echoed into the log in a good way. Again, I could be wrong on that, but I couldn't see any of that happening. So it's kind of a question of, well, first, there isn't really a lot of information in the logs or anything like that to tell us where it's going wrong.
Quickly, the next one is in the same build. It doesn't fail in the same place, but it's just one of those random ones where something broke somewhere and cascaded down, with a little bit of DNS mixed in. It's currently assigned to you, I think. Wait, which one are you referring to? 46185.
Yeah, I'm looking at a different one, on the upgrade-master job, and that does have a little bit more information, perhaps. It's talking about how the fluentd pods are not running correctly, and when I was looking into some of the artifacts on the GCI build failures, I noticed a lot of fluentd errors with respect to not being able to get credentials, and so it looks like it was continuously being restarted.
So maybe one of the next actions for these ones that are in the gray areas is for someone to reach out to sig testing and see what their advice is on how to go forward with this.
I'm having trouble really diagnosing what I'm seeing in the logs, and I could use some help; I'm just new to this. I posted in the kubernetes-dev Slack channel and got no response. I don't need to take the group's time, but if someone could just work with me to diagnose and figure out whether my fix is suffering from another flake or actually has a problem, that would be great. Sure.
So, the PR number, I'll see if I've got it around here somewhere. Oh.
Hi, this is about 45915, which I was working on. There's a PR that's been open for like 15 days, but when I go to the PR page, it says the milestone is for a future release and it cannot be merged into the tree, yet it still shows up in the list.
So I think what you're waiting for on the CNI side is probably a 0.6 release, which I think I promised in the maintainers meeting last week. Well, not promised, but I promised to bring it up in the maintainers meeting, so we can still bring that up again and try to get a CNI release out, and then we can get it added.
For your testing, there's an easier way to test out your changes. I assume that you will start from the kubelet. With regard to the IPv6-related changes, you can try them out in a node e2e test. You can find the commands in the Makefile under the root directory of the Kubernetes repo; there's a node e2e test target in there, and then you can find an entry point, and there's a bunch of flags available. And then you can run it remotely or locally.
For remote, you basically will provision a VM, with a flag defining it, on Google Cloud. But if you want to run it locally, then you can select a local run, and a local run by default uses the docker0 bridge; it doesn't use the CNI setup, so you have to override some of the test config to ask it to use the CNI network plugin. Yeah, that's it.
And I think that's what Tim was alluding to two weeks ago: using GCE, and maybe some other flavors of GCE. And if it is a cluster-level test, like the GCE ones and some of the other tests, not so much the node ones, then some of these other test suites, like AWS or GCE or that third tester out there, those are all good reference points, of course.
And then I think the ideal thing would be to run tests on GCE the same as we run all the rest of them. Unfortunately, it doesn't support IPv6, and we'd need to make some kind of overlay test. Maybe that's a little, you know, too complicated, but it's valuable to have these kinds of CI tests running somewhere with infrastructure like GCE or AWS.
Okay, unless there is more discussion on that, I think that's the end of our agenda for today. If anything... yeah.
I was going to ask about the multi-networking. I know that there were some discussions going on, and some decisions about whether annotations should be used or the Kubernetes API. So I saw your comment on the network object, that maybe the best way to go about it is through a third-party resource, but my question is: there are a lot of components involved, such as the service logic, the network...
Yes, like, whether everything has to be either in annotations, or on the network plugin side, or whether we want to have some of them implemented in the Kubernetes API and some of them through annotations. For example, here we have the network object, which could be done using a third-party resource, and we also have, let's say, the network object we want to use for the pod, which has to go into the pod annotation. So that's basically where I'm a bit confused: what was the consensus?
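A toy sketch may make the trade-off being discussed concrete. All names here (the `networks` table, the `networks.example.com/requested` annotation key) are invented for illustration and are not from any actual proposal: a third-party resource would give you named, typed, cluster-level network objects, while per-pod selection via an annotation is just an opaque string that every component must parse and validate on its own.

```python
import json

# Cluster-level "Network" objects, as they might look if defined via a
# third-party resource: named, structured objects the API server knows about.
networks = {
    "net-a": {"cidr": "10.1.0.0/16"},
    "net-b": {"cidr": "10.2.0.0/16"},
}

# Per-pod selection, as it might look via an annotation: an opaque JSON
# string that kubelet, the network plugin, and the service logic would each
# have to parse and validate themselves.
pod = {
    "metadata": {
        "name": "demo",
        "annotations": {
            "networks.example.com/requested": json.dumps(["net-a", "net-b"]),
        },
    },
}

def requested_networks(pod):
    """Resolve a pod's network annotation against the Network objects."""
    raw = pod["metadata"]["annotations"].get("networks.example.com/requested", "[]")
    names = json.loads(raw)
    missing = [n for n in names if n not in networks]
    if missing:
        raise ValueError(f"unknown networks: {missing}")
    return [networks[n] for n in names]

print([net["cidr"] for net in requested_networks(pod)])  # ['10.1.0.0/16', '10.2.0.0/16']
```

The annotation path works, but nothing in the API server validates the string; the typed-object path pushes that validation into the machinery, which is roughly the tension behind the consensus question here.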
I have two items; is it okay? They're pretty quick. One of them is just making sure everyone's aware of the ingress load balancer issue that showed up in 1.6.4, 1.6.5, and 1.5.8. Basically, we're kind of making the recommendation, at least in Google, that no one ever uses those releases.
There's an oversight in that. I think this actually is something that we should add as an agenda item for a later date, about ingress in general, because, at least from our perspective in Google, it looks like there's a lot of work to be done there. The backend of the ingress object has a lot of different ways to set up health checks or readiness handlers; I'm looking around the room at the people who understand this better than I do, and, you know, the folks on the conference call too. And it looks like what happens
is that there's a way to set a health check with manual settings, and in 1.6.4, 1.6.5, and 1.5.8 we clobber those settings. If the customer has them, they're gone and they don't come back, at least in ingress. And so the result is that you stop forwarding traffic to backends, because the backends will always look unhealthy during an upgrade process. And so, you know, you can set the settings on a new cluster and it's not a problem, but we've lost them.
If you upgrade to those releases and you have manual settings set, then you're going to lose them. So, yeah, be aware. And there's an issue for this, even though it exists in, what's the name of that repo, the one for the ingress stuff again? Oh, it's just called the ingress repo; I mean, we name things what they should be named. Okay, cool. So in the ingress repo the issue is there, but I'll actually ask for it to also be brought into the kubernetes repo, so it's a bit more obvious and easier to track.
So, if you want, we can send that to sig networking. But if you're in sig networking, you should be aware of it. It really does bring to light that we have a lot of debt with ingress overall, in both the functionality and the execution, and also just kind of the API and the spec itself. And I'm hoping that in 1.8, I know at least over in Google,
we've got some people that will start focusing on this, and we invite anyone else who's interested to work with us, so I'm excited to see what will come about. So that's thing number one. Is there any controversy on this? I mean, am I wrong? Are there people who feel ingress looks great as it is, and, you know, we shouldn't touch it? If so, that's cool too; I'd love to hear that.
Cool. So, good, although it's hard to know on VC. And then the other item, and this is coming up in a lot of the SIGs, is I think we may need something like, what do we call it, a build cop rotation, whereby, you know, we have these dashboards now for all these e2e tests, but who watches them? In Federation at least, and in storage, we had a problem where we had all these tests,
and when they went red, you know, nobody knew, or, kind of even worse, everyone knew, but you sort of thought somebody else was going to be on it. Do we have a rotation now? I don't think it needs to be a 24/7 sort of thing, but, you know, someone comes in in the morning and goes, oh look, this is broken, we should file an issue, or look into it, or triage it appropriately.
There's some dependence between, well, we have PR submit blockers that should let people know beforehand. I don't know if we have any tooling now that says a given e2e test is busted and it could be from this group of folks. That's a good question, but I think the big thing, though, is just knowing when it is busted, at least having the signal; then I can do it. I know, at least in Federation, they have to, at this point, dive into it and find the offending PR themselves.
Actually, I'm thinking that, for the next meeting, I will get a talk together about, sort of, since we're at, hopefully, after the 1.7 release, and certainly into the 1.8 cycle, I'll just put together some ideas about the tests, and cleaning them up, and sort of populating this dashboard, and that sort of stuff. So I'm thinking about your bandwidth to get
all these pieces, this discussion, started off next meeting. Okay.
There was sort of a sentiment, which I thought was in line with this, and is where some of this is coming from, that seemed widely held: that in many ways Kubernetes is very successful because of the functionality of what it's doing today, and that, while there is definitely room for... oh, by the way, Christopher says he can help you with that stuff from earlier. Oh, thank you.
I know that the storage sig definitely went through this period about six to nine months ago, and it has sort of gone through it; in many ways it still has work to do, but it's a lot more stable than it used to be, definitely a lot more stable than it used to be. And so this is something else that came up internal to the Leadership Summit; I can't remember, I was kind of really tired. Okay, there. But you wanted to comment on this instead of me talking about it?
I mean, there was kind of an overwhelming sentiment I felt from the discussions that, you know, we should be looking to bring all the stuff we've got to GA: polish it off, make it unflaky, and smooth out all the rough edges and corner cases that might not be being addressed today. And so, you know, there's a conflict there between a lot of people wanting to add all kinds of new features and the kind of less glamorous work of polishing off what we've got. But there was certainly agreement.
Everybody else I spoke with, and the group discussions as well, agreed that the best way to kind of continue Kubernetes' success at this point, and to make sure that it stays as successful, is to spend some time polishing and stabilizing what's currently there. So, for APIs like ingress, which are beta and haven't seen some love in a while, we're now putting in the effort to bring them to GA and decide what needs to be done in order to do that. So.
So I guess I'll just finish asking my two questions, right; I asked the first one. The second one is: as far as I understand, the Kubernetes API machinery simply does not have a way to maintain referential integrity. That is to say, if one object references another in a way that it really depends on it, and we don't want the second object to be deleted as long as the first object is referencing it, the API machinery doesn't really have support for that.
There's kind of some support for that, but there are some difficulties; now I'm just realizing I haven't got it all swapped in. But I asked recently about the example of a pod and a PVC, right: if a pod is using a PVC as a volume, you don't want the PVC to be deleted while the pod is there. I think I asked in the API machinery sig, and they said to go ask in the storage sig, and they said, yeah, we're talking about it, we haven't really got a solution.
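For context, the partial support in the machinery is the finalizer mechanism: a delete request only sets a deletion timestamp on the object, and the object is not actually removed until every finalizer on it has been cleared by whichever controller owns it. Here is a minimal in-memory sketch of those semantics, not real API-machinery code; the store, the `example.com/pvc-in-use` finalizer name, and the function names are all illustrative:

```python
from datetime import datetime, timezone

class Object:
    """A toy API object with finalizer-style deletion semantics."""
    def __init__(self, name, finalizers=()):
        self.name = name
        self.finalizers = set(finalizers)
        self.deletion_timestamp = None

store = {}

def delete(name):
    """Request deletion: the object is only removed once no finalizers remain."""
    obj = store[name]
    obj.deletion_timestamp = datetime.now(timezone.utc)
    if not obj.finalizers:
        del store[name]

def remove_finalizer(name, finalizer):
    """A controller clears its finalizer once the dependent is gone."""
    obj = store[name]
    obj.finalizers.discard(finalizer)
    if obj.deletion_timestamp is not None and not obj.finalizers:
        del store[name]

# A PVC "in use" by a pod: some controller holds a finalizer on it.
store["my-claim"] = Object("my-claim", finalizers={"example.com/pvc-in-use"})

delete("my-claim")
print("my-claim" in store)   # True: deletion is blocked by the finalizer

remove_finalizer("my-claim", "example.com/pvc-in-use")
print("my-claim" in store)   # False: now actually removed
```

The missing piece being discussed here is exactly the controller that would add and remove such a finalizer on a PVC as pods start and stop using it.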
So anyway, I just wanted to know. My question was not to pretend that this group is the API machinery group; my confusion came from the fact that they sent me to the storage guys, who said there are discussions. I don't know where these discussions are, and I'd like to catch up with what they're talking about. So my question to this group is simply: who do I talk to?
Brian and company are probably the people to tag with that, and, you know, if I were in your shoes and I really cared, I would send mail to kubernetes-dev and say, you know, this feels to me like a sort of cross-sig question, and see if I got a response there. All right, thank you. I can try to bring it to Brian, to see if he can look at this also.
You know, you may want to... I have no memory, because, you know, that's just who I am, it's very sad. Feel free to poke me: send that mail, and if you want, send it to me personally and say, hey Mike, you know, this might be something Brian's interested in, because I may forget. Okay.
So my other question, then, was about the successor to third-party resources. I hear there is one in the works, so I just wanted to get a reading of the status. I feel like I'm kind of missing something; I don't really know how to track what's going on overall in Kubernetes, and in particular, I don't know what the status of the successor is.
You are also voicing something I'm hearing from a bunch of people, and I feel it myself, which is: how do you keep track of all of the stuff going on in Kubernetes? And, you know, I was just talking to Sarah Novotny about this in the community meeting today, and there seem to be even more avenues and channels than I can shake a stick at, so I think she's probably a good person to ask this question.
Well, I've got a quick comment on your first question. So, regarding the interaction between objects, I asked the same, a similar question, to Daniel on Slack, and he said that right now it's not doable with the current API machinery, but probably you can look into admission control. Maybe, but it depends on your use case.