From YouTube: Kubernetes SIG Network meeting 20200514
I don't think so either. I think it's copied in, not linked in, right? Yeah.
How about I assign this to you, yeah? Is it copied in every time a pod is started? I think it's bind-mounted in when the pod is started, from a copy, because we have to — we can't use the one that's in the host network. I don't know, we'll have to go look at it. I'm gonna give it to Kyle, yeah.
I'm gonna remove the triage label and leave it as a feature. I'll assign it to you, but it doesn't sound like it's — I can't type and talk at the same time — it doesn't sound like it's super high urgency, so cool. 'Allow setting...' — this one is heavier, so we'll just table it for now. 'Internal error occurred: failed calling webhook'...
A reminder for everybody watching: while we're assigning some of the regulars to go and actually, like, address these issues — many of these issues, what we need is somebody to volunteer for triage, which is, you know, asking questions to figure out if this is real or not; it's not 'go solve the problem.' So please don't hesitate to volunteer. Yeah, you can round-robin me in, then. Got it — just speak up when you see one that looks tasty. 'IPv4 Kubernetes pod MountVolume failed' — there's this pod...
It's really a big change, and he sent me to the pull request, and I don't know. All right, I'll leave this one assigned to me and I'll take a look at it. The truth is, you know, the CIDR allocator is pretty dumb, and it assumes that all the CIDRs are the same size. We could make it smarter. All right: 'Reduce cost for Kubernetes node' — this is a feature request.
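For context, a minimal sketch of the fixed-size carving the allocator does today — IPv4-only, no bounds checks, names invented here. Every node gets an equally sized slice of the cluster CIDR, which is exactly why mixed-size node CIDRs don't fit the current scheme:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// nodeCIDR returns the index-th /maskSize subnet carved out of clusterCIDR.
// All slices are the same size: the allocator only tracks an index, so
// "smarter" (per-node sizes) would need a different bookkeeping model.
func nodeCIDR(clusterCIDR *net.IPNet, maskSize, index int) *net.IPNet {
	_, bits := clusterCIDR.Mask.Size()
	base := binary.BigEndian.Uint32(clusterCIDR.IP.To4())
	offset := uint32(index) << uint(bits-maskSize) // 2^(bits-maskSize) addresses per slice
	ip := make(net.IP, 4)
	binary.BigEndian.PutUint32(ip, base+offset)
	return &net.IPNet{IP: ip, Mask: net.CIDRMask(maskSize, bits)}
}

func main() {
	_, cluster, _ := net.ParseCIDR("10.244.0.0/16")
	for i := 0; i < 3; i++ {
		fmt.Println(nodeCIDR(cluster, 24, i)) // 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24
	}
}
```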
Does RHEL 8 have a kernel 5.0? No, it's got a 4.x kernel, but most of the network stack is backported from later kernels, so it may — RHEL may well have that fix. But if there is a link to a kernel patch in there, then we can try to get that backported to RHEL, potentially, I think.
All right, I'm gonna stick that one on Jay — see if you can make heads or tails of it and come back. Oh, this doesn't even have — this is just opened as a bug. Interesting; wonder how I clicked on it. All right, well, see what you can figure out. Obviously we've got a little bit more. These ones are still fresh — this was eight days ago — so this was a not-so-great two-week cycle, or we did not get through the whole triage in two weeks.
So as we move towards public cloud, and on GKE, like, it would be great if we can keep this — like, I think this is more about interoperability, especially with all the software that has been developed, like, years ago on traditional systems, not thinking about a service architecture or, like, Kubernetes' way of exposing things as a service and not caring so much about ports, which is kind of, like, a recent model, right? Yeah.
So it's mostly, like — I wanted to get feedback from you guys: what you think about this feature, like, the chances that this is going to fly, like, how we can approach it, how to code it. Like, Tim already mentioned that it's probably better — like, today — that's another thing: today we do this as a patch to the kubelet.
I am — I really don't have a view on the change; I think it's okay. The one thing I want to bring your attention to is the Windows part of the story, so it might be a good idea to share this KEP with SIG Windows and get people to actually comment on whether this is breaking or non-breaking on their side, because I think it might be the same.
I thought it was worth discussing with the group because it's an interesting change, because it's not exactly safe, right? Because — for the context — the kernel's hostname field is only 64 bytes long, and 63 bytes is the same limit for pod names, namespace names, and service names. So it's possible to craft names that overflow that 64-byte kernel limit, and the failure mode is kind of ugly, right? Like, pods will just fail to create because you can't set their hostname.
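A quick sketch of how easy that overflow is to hit, assuming the usual pod FQDN shape <hostname>.<subdomain>.<namespace>.svc.<cluster-suffix>; the names below are invented, and each is a valid label on its own:

```go
package main

import "fmt"

// sethostname(2) rejects hostnames longer than 64 bytes, but Kubernetes only
// validates each DNS label (<= 63 bytes) individually, so a perfectly legal
// combination of components can still overflow the kernel limit.
func podFQDN(hostname, subdomain, namespace, clusterSuffix string) string {
	return fmt.Sprintf("%s.%s.%s.svc.%s", hostname, subdomain, namespace, clusterSuffix)
}

func main() {
	fqdn := podFQDN(
		"my-rather-descriptively-named-pod", // 33 bytes, fine on its own
		"my-headless-service",
		"team-a-production",
		"cluster.local",
	)
	fmt.Printf("%d bytes: %s\n", len(fqdn), fqdn)
	if len(fqdn) > 64 {
		fmt.Println("setting this as the pod hostname would fail; the pod never starts")
	}
}
```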
This thing cannot actually — I think, whatever, yeah — but to fix it properly is enormously complicated, because it crosses all the different layers of workload deployment. Anything that might modify the name of a pod might, you know, add to or subtract from that total of 64 bytes. So I was initially, personally, very down on this, because there's just — there's no safe way to do it, but I'm talking to folks and thinking about it more.
So you'd have to do it — the problem is there's an arbitrary number of top-level workload controllers, right? Like, there are workload controllers that wrap Deployment, and there are workloads that just use ReplicaSet and bypass Deployment, and so, like, in the limit, you have to fix all of those to understand all of the layers below them, and to consider the size of the namespace name that you're being written into, and to consider the size of the cluster suffix that you're using, right?
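The same layering problem in numbers — a toy helper, invented for illustration, showing how the fixed components eat into the one shared budget that every controller up the stack would have to respect:

```go
// hostnameBudget returns how many bytes are left for "<hostname>.<subdomain>"
// once the components the workload owner doesn't control are accounted for.
// Every wrapping controller that lengthens a name spends from this same pot.
func hostnameBudget(namespace, clusterSuffix string) int {
	const kernelLimit = 64
	return kernelLimit - len("."+namespace+".svc."+clusterSuffix)
}

// hostnameBudget("team-a-production", "cluster.local") == 64 - 36 = 28
```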
You need node, probably — no, it might be interesting to see what SIG Apps thinks about this, as owners of Deployment and ReplicaSet and StatefulSet. They might be interested to think about the failure modes, but I think that, ultimately, the decision sits between SIG Network and SIG Node, and probably leans towards SIG Node. So.
That's a win, so, like, yeah. So I guess, like, one question I had: they put some bugs on the — it seems to be, like, different systems use different underlying runtimes. So, you know, like, I don't know if you had time to check that — the other thing they have with the kubelet — because they changed the sandbox, and the runtime will be the same regardless of how we pass the variable there. But I don't know if there are places that I missed.
Yeah, so I came, I think a couple of months ago now, to kind of drop the multi-cluster services KEP, basically. In SIG Multicluster we've been working on extending the service concept across clusters, and it's kind of evolved, and a bunch of people here have commented on that PR for that KEP — really appreciate all the input — but I think where we're getting to now is, like, actually figuring out the real implementation.
So we've got a demo set up right now where we have, basically, a forked kube-proxy that runs alongside the regular kube-proxy, and it just handles multi-cluster services that have been imported into a cluster. Which functions, but doesn't seem like a great way forward — you basically have these two separate kube-proxies fighting over iptables, and it seems kind of hairy, and I think we also want to be able to build more on top of this multi-cluster services concept.
Yeah, so, yeah, our kind of take was that the idea of having a forked kube-proxy that needs to do all the things kube-proxy does is probably not good. That said, you know, looking at kube-proxy: basically, internally, multi-cluster services is designed to feel and act as much as possible like regular cluster-local services, so the concepts really map well to, basically, the existing kube-proxy implementation, where there are, basically, two structs that we care about before programming IPVS or iptables,
and what have you — and it's the proxy ServicePort and the proxy Endpoint. So the thinking is: if we could make it easier to plumb the ServiceImport — which is basically the multi-cluster service — into a ServicePort, and it still uses EndpointSlices, we could take advantage of all the work that's already been done in kube-proxy, and these things would just work.
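A rough sketch of the plumbing being described — these are not the real kube-proxy types, and ServiceImport's final shape isn't settled; the point is only that an import reduces to the same per-port data a Service does:

```go
package mcsproxy

import "fmt"

// servicePort loosely mirrors what the proxier tracks per service port.
type servicePort struct {
	clusterIP string // for an import: the VIP ("supercluster IP") allocated to it
	port      int
	protocol  string
}

// serviceImport is a stand-in for the proposed multi-cluster resource.
type serviceImport struct {
	Name, Namespace string
	IP              string
	Ports           []struct {
		Port     int
		Protocol string
	}
}

// toServicePorts adapts an import into the proxier-facing map, exactly as a
// regular Service would be; endpoints still arrive via EndpointSlices, so the
// iptables/IPVS programming downstream doesn't need to change.
func toServicePorts(si serviceImport) map[string]servicePort {
	out := make(map[string]servicePort, len(si.Ports))
	for _, p := range si.Ports {
		key := fmt.Sprintf("%s/%s:%d", si.Namespace, si.Name, p.Port)
		out[key] = servicePort{clusterIP: si.IP, port: p.Port, protocol: p.Protocol}
	}
	return out
}
```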
So the other option is a CRD, where kube-proxy basically watches for the CRD, and if it gets created, multi-cluster services work. Taking it further — and definitely more effort — is: what if kube-proxy had some kind of external API, like a gRPC extension or something, that let you hook up other feeds? This, of course, means designing.
Endpoints — yeah, so endpoints would basically just — we'd probably make it work just like EndpointSlice. We're really targeting Kubernetes with EndpointSlice support, though, so EndpointSlice is the target; I'm not sure what we — yeah. So — so you just want to understand, like, how an EndpointSlice maps to a ServiceImport? Or — yes.
It will be available within the clusters, by kube-proxy, within all of the importing clusters. Now, I think there are still some kind of open questions of whether every cluster imports the service or some are allowed not to, but — but yeah, it would be available within those clusters, basically like a cluster IP, only — we've been calling it the 'supercluster IP'.
And — and supercluster services: the idea being that you can still actually access — like, if you're an exporting cluster, you can talk to a supercluster service, which might have backends in multiple clusters, but you can still talk to your cluster-local service, which is just within your own cluster. But it also means that we don't run the risk of accidentally changing existing service behavior because somebody else exports the service. Okay.
I'm not sure I understand that answer. Would it suffice to simply add some labels or fields to the existing services, or do they introduce new kinds of services?
So that — I guess the thing here is, too many things currently interact with Service, so the thinking was that it would be safer to have a new resource than to try to say 'services with this label should be treated differently' — like, 'services with this label should also be exposed at the multi-cluster level.' And then the service name is important too: if we wanted to preserve the original service's name, we'd have to, you know — maybe we have some well-known pattern.
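Purely illustrative — one shape such a well-known pattern could take, keeping the original service and namespace names and pushing the multi-cluster scope into the zone; neither the suffix nor the helper below is anything the KEP has settled on:

```go
// importedDNSName composes a hypothetical multi-cluster DNS name for an
// imported service, parallel to the familiar <svc>.<ns>.svc.cluster.local.
func importedDNSName(service, namespace string) string {
	return service + "." + namespace + ".svc.clusterset.local"
}
```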
Yeah, that's — that's fine. So we're not really looking at services; we're looking at how the endpoint is modeled. Kube-proxy doesn't really care if that endpoint points to something inside or outside the cluster — it just writes the rules — and that's the same across all the implementations. So we figured out a way where we can model that endpoint in a certain way that says: okay, this endpoint is from an external provider. That way the EndpointSlice and Endpoints controllers can, like, ignore it, sort of.
If it exists, they'll just leave it, and then that will solve all your problems. It doesn't need any new API — it doesn't need, like, any new API for the sake of the data path. It may need an API for the sake of control — control being, like, the controller and so on — but the realization of it doesn't require any change, I mean.
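One concrete way to mark endpoints as externally managed, along the lines being discussed: EndpointSlices already carry a managed-by label that the in-tree controller checks before reconciling, so a multi-cluster controller can write slices the built-in one leaves alone. The slice contents and manager name below are invented:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func externalSlice() *discovery.EndpointSlice {
	proto := corev1.ProtocolTCP
	port := int32(8080)
	return &discovery.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "imported-foo-abc12",
			Namespace: "team-a",
			Labels: map[string]string{
				// Ties the slice to the (imported) service name...
				discovery.LabelServiceName: "foo",
				// ...and tells the in-tree controller not to reconcile it.
				discovery.LabelManagedBy: "mcs-controller.example.com",
			},
		},
		AddressType: discovery.AddressTypeIPv4,
		Endpoints: []discovery.Endpoint{
			{Addresses: []string{"192.0.2.10"}}, // a backend in another cluster
		},
		Ports: []discovery.EndpointPort{
			{Port: &port, Protocol: &proto},
		},
	}
}

func main() {
	fmt.Println(externalSlice().Name)
}
```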
So, first of all, there's a — there's a KEP on this, and I encourage everybody to go read it; it's a really good KEP. The thing that I think gets weird when you start to, like, shoehorn this into services is that all of these cluster.local names are no longer local to your cluster, which, to me —
Right, so getting back to what Kyle said, I think: is it true that kube-proxy doesn't just look at endpoints — it also looks at Service? So it's not enough to just stuff things into endpoints, right? Okay, but is this a pretty small matter, then? We introduce a new service object — the supercluster service object — and, you know, let kube-proxy opt into it as, you know, a way to hint it to look at the right endpoints, and yeah.
Yeah, exactly — so that would be this CRD approach that we have. We've been calling it ServiceImport, but we have this new CRD that kube-proxy can watch, and it maps to the existing endpoints — like, the same EndpointSlice struct — but basically there's another way to pick up the service ports. That —
The JavaScript JIT needs to, like, catch up. Yeah, I think some of this is, like, a software engineering and maybe versioning issue, because kube-proxy is in core, and then the ServiceExport — right now it's evolving outside of core — and, like, what happens if you're trying to pull the CRD into core with kube-proxy? Like, how do we manage that, if you wanted to teach it about types other than the core types? So.
This isn't the first time we've crossed this — storage has dealt with this too — and so we should consider these things as independent decisions of, like: should it be a built-in or a CRD, versus should that CRD live in the core or out of the core? I will say that I think the likelihood of getting a new built-in type is very, very low. We've — you know, putting my other SIG Arch hat on — we're asking everybody who's bringing up new types, you know, why can't you do this as a CRD?
Multi-cluster is definitely not something you need for every single cluster, so having a CRD makes sense — you can opt in easily. You know, we have the mechanism where kube-proxy could watch CRDs and just basically turn the feature on when you install the ServiceImport CRD. Like, yeah, I don't think it's actually a big change; in fact, I've — I've done a very hacky prototype of it, and, you know, we have a demo that's basically doing it right now.
In theory the name can be changed, and lots of customers do change the name, but an order of magnitude or two more customers don't change the name, and I find it personally very odd to say that foo.cluster.local accesses things that are not local to my cluster — like, the principle of least surprise, to me, is violated. Even —
The point I was trying to make earlier is: as a consumer, as a client, right — I'm a pod, and I just want to go to a service endpoint, and I'm done, right? Do I really need to worry about internal or external? I just need to worry about the name that somebody assigned to me to call, and I'm done. The fact that this name lives inside or outside the cluster is somebody else's business. Yeah.
Right, we will have to deal with gateways to island clusters, and crossing over between segments, and those sorts of things, and we will need some sort of topology support if we're going to allow multiple clusters across regions, right? You don't want to go to your Australia cluster from your US cluster just because it happened to be there. So, you know, we're gonna need something to fix topology there, but I think for now we're making the assumption that — let's just assume you've got multiple clusters in the same region and the cost metric is one. So.
Back on the topic, then, about clients just wanting to get to a service: how many clients actually have cluster.local hard-coded in there, right? Don't clients usually say, 'give me a domain name and I'll open a connection to it,' and it's, you know, some higher-level thing's problem to say what the service name is, what the domain name is?
So it seems to me like it should not be a problem to say: okay, we've got some names — some domain names — that are really, you know, in plain English, cluster-local, and some are regional, and the higher-level thing that is choosing what domain name to feed into a pod, or whatever, you know, is responsible for making the decision appropriately. Yeah.
History shows here that the building of these new, you know, plugin APIs — like CRI or device plugins — takes months and months and months, and so I don't think we want to deal with that. The xDS one, I think, is an interesting proposition. It's pretty wildly different — I don't — it may warrant its own, like, hour-long discussion on sig-net: like, what if we taught kube-proxy to consume xDS? I don't know how familiar everybody else is with xDS; it's the RPC protocol that the Envoy proxy introduced for consuming endpoints from a controller.
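For anyone unfamiliar, a rough sketch of what 'consuming endpoints over xDS' looks like at the wire level — here using go-control-plane's generated v3 EDS stubs, with the management-server address and cluster name invented; a real integration would also handle ACK/NACK versioning and reconnects:

```go
package main

import (
	"context"
	"log"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	discoveryv3 "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/service/endpoint/v3"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("xds-server.example.com:18000", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	stream, err := endpointv3.NewEndpointDiscoveryServiceClient(conn).
		StreamEndpoints(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Subscribe to endpoint assignments for one cluster; the server streams
	// back ClusterLoadAssignment resources as they change.
	err = stream.Send(&discoveryv3.DiscoveryRequest{
		Node:          &corev3.Node{Id: "kube-proxy-node-a"},
		TypeUrl:       "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
		ResourceNames: []string{"team-a/foo"},
	})
	if err != nil {
		log.Fatal(err)
	}
	for {
		resp, err := stream.Recv()
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("got %d endpoint resources (version %s)", len(resp.Resources), resp.VersionInfo)
	}
}
```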
That's a different story, actually — I got assigned that bug from SIG Arch; I dropped it — but in this case I don't even think it needs to be loaded. I think, like, if kube-proxy just added a watch on custom resources, then if it's not present — if we don't find a CRD that satisfies this kind — we don't need to; we can just skip that path. And if we do find it, then we install a watch on that resource, right?
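A minimal sketch of that probe-then-watch idea, assuming the ServiceImport CRD lives at a multicluster group/version (the GVR below is illustrative): check the discovery API for the resource, skip the whole path if it's absent, and start a dynamic informer if it's there:

```go
package proxywatch

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
)

func maybeWatchServiceImports(cfg *rest.Config, stop <-chan struct{}) error {
	gvr := schema.GroupVersionResource{
		Group: "multicluster.x-k8s.io", Version: "v1alpha1", Resource: "serviceimports",
	}
	disc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	// Probe the API surface: if the CRD isn't installed, skip the whole path.
	if _, err := disc.ServerResourcesForGroupVersion(gvr.GroupVersion().String()); err != nil {
		fmt.Println("ServiceImport CRD not found; multi-cluster path stays off")
		return nil
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(dyn, 10*time.Minute)
	informer := factory.ForResource(gvr).Informer()
	// Real code would add event handlers here to feed the proxier.
	go informer.Run(stop)
	return nil
}
```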
Yes, please — I implore everybody: please go take some time to look at triage, and if you've been assigned issues, please go and work on those and ping them. I'm gonna run through from the back, and I'll try to close out some ones that have been idle for more than a month. But there are some new ones in the last two weeks that we didn't get to today.