From YouTube: Kubernetes SIG Network meeting 20200521
B
So I'd like to introduce this topic. We have some interesting issues on the Multus side, and I'd like to share this stuff. I'm also happy to discuss a little bit about what the fix should be.
This issue is about a corner case. Actually, it happens while etcd is changing at that time: the kube API just returns a 503, so you get Service Unavailable error messages at that time.
B
I mean, if it lasts one second or a little bit less and we retry several times, it would be okay, but of course the Service Unavailable response does not guarantee that it resolves within five seconds, so that is the issue. This is a pretty, yeah, it's a pretty corner case, and it is a little bit complicated to debug. So I'd like to introduce this stuff, and if you have some ideas or comments, I'd like to hear about them.
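The retry behavior being described here can be sketched roughly as follows. This is only an illustration, not Multus's actual code; the exception type, attempt count, and delays are assumptions:

```python
import time

class UnavailableError(Exception):
    """Stand-in for an HTTP 503 (Service Unavailable) from the kube-apiserver."""

def retry_on_unavailable(call, attempts=5, base_delay=0.5):
    """Retry an API call that may transiently return 503 while etcd recovers.

    Waits between attempts with exponential backoff; re-raises if the
    server is still unavailable after the last attempt, since 503 gives
    no guarantee of recovery within any fixed window.
    """
    delay = base_delay
    for attempt in range(attempts):
        try:
            return call()
        except UnavailableError:
            if attempt == attempts - 1:
                raise  # never recovered within our retry budget
            time.sleep(delay)
            delay *= 2  # exponential backoff
```

As the speaker notes, this only helps if the outage is short; a bounded retry budget still fails when etcd takes longer to recover.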
C
Awesome, thanks for bringing this up. I guess I see that there's a possibility we may do a spec update; this is, I guess, a little bit different from what we have in the spec. I think the spec says something along the lines of, you know, if an attachment fails, the whole thing should fail, and in this case the failure happens before we know whether there's an attachment or not, right? It's when we're querying the API. So anyway, I just wanted to bring that up.
D
It
does
feel
to
me
like
it's
really
bad
if
you
try
and
create
a
pod
with
multiple
networks,
and
it
comes
up
with
only
one
network.
There's
gonna
be
quite
a
painful
thing
to
debug.
I
could
see
the
argument
that
says
well.
In
most
cases
there
will
only
be
one
network
and
why
stop
but
I
think
I'd
rather
have
it
error
out?
If
it
can't
do
the
whole
job
is
meant
to
do
properly.
C
Yeah
I
think
that
makes
sense,
I
think.
Maybe
we
should
create
some
kind
of
guideline
that
it
says
you
know
you
should
take
and
number
of
tries
or
attempts,
or
some
X
amount
of
time
trying
to
get
the
annotation
before
you
fail
out
and
just
use
the
default.
I
guess
is
maybe-
and
you
know
I
guess
it
could
be
one
of
those
things
where
we
put
it
in
it's
like
we
have
a
section.
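The guideline being floated (poll for the annotation for a bounded time, then fall back to the default) might look something like this sketch; the function and parameter names are made up for illustration:

```python
import time

def get_network_annotation(fetch, key, timeout=5.0, interval=0.5,
                           default=None, clock=time.monotonic, sleep=time.sleep):
    """Poll `fetch()` (returning the pod's annotation dict) for `key`.

    Retries until `timeout` seconds have elapsed, then returns `default`,
    per the proposed guideline of a bounded number of attempts. `clock`
    and `sleep` are injectable so the behavior can be tested.
    """
    deadline = clock() + timeout
    while True:
        annotations = fetch()
        if key in annotations:
            return annotations[key]
        if clock() >= deadline:
            return default
        sleep(interval)
```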
B

C
Right, cool. So I've got the proposal in the state that I believe it should be in. Basically, all the change here really is that we say the status annotation should include the namespace reference, and then we update the examples to just include that. I just used namespace-a and namespace-b as the example names there.
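A namespaced reference of the kind being proposed is usually written `<namespace>/<name>`, with a bare name defaulting to the pod's own namespace. A minimal sketch, using the namespace-a / namespace-b example names from above (the helper name is invented):

```python
def parse_network_ref(ref, pod_namespace):
    """Split a network-attachment reference into (namespace, name).

    "namespace-b/net1" -> ("namespace-b", "net1"); a bare "net1"
    defaults to the pod's own namespace, e.g. "namespace-a".
    """
    if "/" in ref:
        namespace, name = ref.split("/", 1)
        return namespace, name
    return pod_namespace, ref
```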
C
A
Yeah, I also agree that it's in a state ready for a vote, and I think that's what we decided last time, so I'm kind of good with that. Has everybody had a chance to review it, or at least have a number of people had a chance to review Doug's proposal? Thanks. It's been on the agenda for a couple of meetings already, so it shouldn't be a surprise.
A
All right, so the way we do this is usually that a majority of the people who show up to the meeting get to vote on it. Please don't pack the meetings, but for something like this, let's just do a normal voice vote. So, all in favor of adopting Doug's proposal as worded, with the exception of fixing up the InfiniBand keys to be InfiniBand GUID instead of MAC, please say: aye.
E
There is also information that has to go to the CNI from the device plugin. I thought of this use case; maybe someone here can think of other cases that match this description. I found that in the SR-IOV CNI, it has to do different things depending on whether the device is in DPDK mode or not. That information is...
E
The
CNI
has
to
them
like
we
scan
the
device
and
the
cysts
of
s
in
order
to
find
out
what
driver
is
currently
bound
to
the
device.
So
there
is
information
there
that
the
device
plug
in
already
knows
and
already
knows
how
it
entre
knows
how
how
the
device
is
going
to
be
exposed
and,
and
yet
the
CNI
is
redoing.
Some
of
that
logic,
so
I
thought
that
it
would
be
useful
for
the
CNI
to
have
a
more
rich
way
of
getting
device.
E
The guest PCI address is determined by Kata, by the runtime, and the devices are hot-plugged, but the pod still sees an environment variable that is pointing to a non-existent PCI address. So Kata would also benefit from a standardized way of exposing that information; at least it would be able to do the remapping, or a rewriting, of that information accordingly.
F
There
are
only
two
things,
I
thought
of,
and
it
was
more
recent
or
I
would
have
put
them
in
the
document
yeah.
So
for
the
device
information
from
the
device
plug
in
to
the
CNI
I,
don't
know
if
it's
worth
adding
but
I
mean
today
with
the
way
SRV
works
is
the
device
plug-in
gets
the
PCI
address,
but
the
CNI
needs
it,
and
so
Malta's
is
kind
of
intercepting
it
and
passing
it
down
through
the
CNI
arc.
So
that's
already
a
piece
of
information.
F
E
Is this enough? I think it is, but I think it's not explicit. I think the fact that it is a PCI address is, let's say, accidental. I mean, it's not accidental, but there are no semantics that actually express that it's a PCI address. I think it's just the device ID, which is what the device plug-in returns.
C
F
Yes, thanks. So I guess the point was, though, that it is a piece of information that is getting passed, one way or another, from the device plug-in over to the CNI. So I mean, you don't have to put it in; it was just a thought. And then the other piece of information is from the CNI to the pod: there is some config data.
F
It's passed into the CNI, and we're trying to get it into the container: specifically DPDK information, like server/client, you know, other stuff, maybe the number of queues on an interface. There's just config data that needs to go in, and I think the CNI is the only one that knows it today.
F
And I would put it as a different bullet, because it's outside of the user-space CNI. I mean, SR-IOV, when it's in DPDK mode, needs to get the IP address into the container. It's doing that through the network status today, but I would just kind of lump it in as: there is config data that the CNI knows that needs to get into the pod. That may not be able to pass through a kernel interface, because the interface isn't there in DPDK mode.
H
Is that an improvement? Much, much better, thank you. Yes, that was my laptop microphone. Yeah, with SR-IOV, the VLAN tag can become essential once you've got more complicated network topologies and you don't want to create device pools for every single network you've got going. So we've seen use cases where passing the VLAN tag into the pod is needed.
A
E
D
A
Yeah, I'm just wondering: if Multus knows that, you know, PCI device A on the host is for this network attachment definition "blue", even if it could pass that information into the VM somehow, that's going to be useless to the VM, because, as you point out in the document, the PCI topology is going to be completely different inside the VM. But Kata knows what the mapping is there. So does this just not work today?
E
A
E
A
E
D
F
A
E
Okay, so I just wrote this proposal very quickly; I mean, I have not thought about it a lot, but I think it is a good way to start. I also noticed that Dan wrote an equivalent proposal on today's agenda. Looking at that, I see that my original thought was that we could use the annotations in the device plugin: the device plugin can add extra information in the annotations, as long as that information is indexed, or can be accessed, by the device ID that it returns.
E
A
E
A
Well, I guess what I'm getting at is the device ID itself. I don't think we have... you know, that's kind of the crux of the problem: we don't really have a good definition of exactly what it is right now. I mean, we have some environment variable names, and we have some structured values for what those should contain, but what I'm trying to get at is that it's not clear to me. I think I know what it is, but it's not clear from the document or any of the other discussions.
A
F
A
E
Because the Allocate request, the payload of that request, is a list of device IDs. For each pod that requests devices, the kubelet determines what devices it's going to allocate, and it sends a list of those devices to the device plug-in. So I think we could start from there as our definition of a device ID.
D
A
F
E
Okay, perfect, yeah.
A
E
Yes. So one option, I think, is to try to convey the maximum amount of information in the value: try to format it in a way where you can put, for example, in the vhost case, the path and the mode and, you know, more information. Or have something as the value that you can then use to either query the downward API, or look into a path that you have mounted in the container where you can find more information, or even split...
E
F
Yeah, that was going to be one of my questions: whether we were going to try to preserve that existing name, or, for these environment variables, is it easier to have a common prefix? When you're searching through them, you can kind of say, okay, here's my subset that we care about. So either a common prefix in front of them, or limit it to two or something, so that you're not having an unlimited supply of these names.
A
Maybe one difference between my attempt at this and Adrienne's was that I was trying to encode more information in the value itself, because of the different types of things like vhost and vDPA. I don't know if that's a good idea or not. I didn't put any examples down in my proposal section, but I just added some for what I was proposing there, so your comments on those would be useful. And again, I don't have a ton of experience with vhost or vDPA there.
A
F
And I've one other question: assuming all of these, are we leaning towards paths? Anywhere we say "path", are we talking about paths on the host? And are we trying to do more of a fixed location within the container, or are we going to have to try to get that into the container, to tell the container where it should look? I mean, mine is more of a what's-common-practice question, or what's the right way to do it.
F
A
F
At least on the host, I don't... right, you can have fixed paths, and then a lot of it has to do with how you install OVS; it all has to do with the installation, and as part of the installation you tell it, you know, where some of this stuff goes. So I think there is some configurability, but it's all at install time of OVS. I think it's safer not to have fixed paths on the hosts, to have a little variability on the hosts. But within the container, what's the best practice there?
F
A
I mean, there are two options for that, and, well, you just gave those two options. We could have, for example, a hard-coded, or not hard-coded, a specified path for all of the network status... excuse me, all of the network attachment definitions and associated device-ID environment variables, or whatever else, that could then be read. That's kind of where I was going with mine.
A
You know, as in: if we decide to specify this, saying the information you need to configure this will live at, you know, /var/run/networks (that's a bad name, so let's not use it, but say /var/run/networks) in both the container and any VM. As long as that's fairly well standardized, and as long as we have good clarity on what those values should contain.
A
I think that would be acceptable, and it needs one less level of redirection, or indirection, that things inside the container or VM would need in order to access that information. Because, at the end of the day, you need something: either you're going to have an annotation on the pod, the pod annotations, and you're going to need to know exactly what that annotation is, or it's going to have to be a directory. And either one of those things can be...
A
You
know,
namespaced
you
can
I
they
should
be
named
spaced,
or
at
least
we
should
have
the
ability
to
say.
Okay,
you
know
in
v2
of
this
API
we're
going
to
use
this
other
directory.
That's
just
that's
common
practice,
but
other
than
that,
you
still
need
some
key,
that
everything
can
look
for,
whether
that's
a
path
in
the
file
system
or
an
annotation.
E
A
Yeah, that's a good question. It depends a lot on the cluster and how you've set up the credentials and which kubeconfig gets passed around. So yes, you could certainly use admission controllers to get at that information, or even mutate that information when it's added to the API, and that would require that each client use a distinct...
E
I
H
I
H
F
D
A
I
F
That's funny; that's kind of where I ended up on some of the user-space stuff. It was kind of chicken-and-egg. You know, I was more on the CNI side, but I could never figure out who knew what, where, and make it all work. So I ended up... I wanted to do something that had a context, like a container ID or something, but I always ended up doing it randomly.
A
Okay,
all
right
I'll
take
that
into
account
and
see
if
I
can
update
my
strongman
proposal
here
for
the
next
time,
thanks
good
points
and
that's
we're
here
for
poke
holes
and
everything
alright.
So
we
have
about
two
minutes
left
I.
Think
it's
good
discussions
so
far
and
we've
got
some
things
to
do
coming
out
of
it
to
consolidate,
at
the
very
least,
the
what
information
around
how
environment
variables
should
look
and
then
continue
iterating
on
the
proposal
here.
A
All right. And we can also continue discussion on the plumbing Google Group slash mailing list, just so that everybody knows. If you're not on that list, or you're not subscribed to that Google Group, we have a link at the top of the plumbing working group agenda, so you can just go request subscription and you will get approved, yeah.
F
So
I
had
two
things:
one
I'm
think
it's
I'm,
thanks
for
spending
the
time
in
the
network
plumbing
to
talk
about
this
stuff,
I'm,
fine,
keeping
this
stuff
at
the
end
of
the
agenda,
so
that
we
can
cover
all
other
business
because
we
tend
to
eat
up
the
rest
of
the
time.
So
I
think
it's
good
practice
to
keep
this
at
the
end.
So
we
can
just
let
it
go
until
unless
someone
has
something
urgent,
the
need
to
bring
up
second
quick
question
purely
just
random,
with
the
with
these
Google
Docs
like
this.
F
A
Then you're good, and I think that's best practice. At least, if you're going to be commenting or editing, you know, just make sure that you're logged in. At least for the plumbing working group CRD stuff, that requires a login, I think, to comment or suggest, just so that we can keep track of who's changing what and, you know, have some kind of attribution trail. So yeah, if you're going to leave comments, or you're going to leave suggestions or edits, please make sure you log into a Google account. Good points, all.