From YouTube: Network Plumbing WG Meeting 2018-07-05
Description
Network Plumbing Working Group meeting for July 5, 2018
A
Sure, okay, we are recording. This is the Network Plumbing Working Group meeting for Thursday, July 5th, 2018, and, of course, as always, if people don't have the agenda document, I will paste that into the Zoom chat. Okay — yeah, Tomo did add interface status to the spec updates, so that should be there. All right, we will kick it off, and so: spec updates.
A
Because of the questions that have been brought up by Pong and some others, I did spend some time making the spec a little bit more generic. I still think it's important to keep the defined behavior for CNI-based plugins, like Multus and CNI-Genie, so I defined a concept of what I called a "CNI delegating plugin" — which is basically Multus or CNI-Genie — that is, a CNI plugin that itself delegates to a couple of other CNI plugins.
A
So basically, what most of us had been thinking of before is now called a CNI delegating plugin, and I took the sections of the spec that dealt specifically with CNI — how to call CNI and how to, for example, interpret the CNI result object and push that into a status annotation — and pulled those out into specific subsections that are marked "CNI delegating plugin". So, again, there are no real additions or changes in this particular rework.
A
So I think after this, if you ignore anything that says "CNI delegating plugin" and any of the subsections underneath something that says "CNI delegating plugin", it should be a fairly generic spec, and it should allow what we had sort of been calling a thick plugin to do whatever it needs to do, if that thick plugin is not going to call out to other CNI plugins. So hopefully that addresses some concerns. I don't really think this complicates the spec.
A
So we added a section to the spec about what to do in that case, and what we had sort of agreed on, or talked about, last time was that it would be okay for the plugin to block pod attachment and detachment — well, I guess attachment in this case — until the cluster-wide default network was available, at least for a period of time. I seem to recall we had talked about a maximum two-minute timeout, so I just kind of put that number in.
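The blocking behavior described here can be sketched roughly as follows; `is_ready` stands in for a hypothetical default-network readiness check, which is not an API from the spec:

```python
import time

def wait_for_default_network(is_ready, timeout=120.0, poll_interval=1.0):
    """Block pod attachment until the cluster-wide default network is
    available, giving up after `timeout` seconds (the two-minute figure
    discussed in the meeting). Returns True if it became ready in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_interval)
    return False
```

The timeout value is explicitly tunable, per the following remark in the meeting that "that number can be changed as well".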
A
But, you know, that number can be changed as well. And the last thing was that there were some comments on the spec doc about failure handling — I think that came from Kural — and so I added a section there about what to do on failures. I had thought it was in the document already, but I checked and did not see it anywhere. So there's a new section, 7.4, about what implementations should do on the failure of one attachment, which is basically: fail the entire pod network setup at that point. But for detachment...
A
Well, the way that kubelet currently works is that it allows the ADD to fail — things can be half-configured at that point — and then kubelet will come along and garbage-collect the pod and will actually call teardown. So at this point, kubelet more or less guarantees that if ADD fails, there will be a subsequent teardown from the runtime, and then the plugin would be expected to clean up at that point. So, I mean, we can go both ways; most plugins that I know of right now...
A
...actually do attempt to clean themselves up internally before returning the ADD failure, and then they are just kind of permissive on delete. So there are some things they expect to exist, and if on delete they don't exist, they just continue on. So I don't know if that addresses your concern. — Okay, yeah.
A
It will, yep — it'll garbage-collect the container. The only issue there might be that it could garbage-collect the container up to, like, 30 seconds after the ADD, or maybe a little bit later. So if, for some reason — I mean, you could have some resource exhaustion there, where, if you start a lot of containers at once and they all fail, you might run out of some of those resources before the garbage collection kicks in.
A
So, okay — if that's okay, I think it is okay. For teardown — let me see what I said for teardown, because teardown should be a little bit different, kind of following what the CNI spec says for teardown. I said: "failure to detach a network" — let's see — yeah, okay, so it should eventually fail...
A
...the pod network teardown. I mean, as with most errors, the error should end up getting returned, but the plugin — or the implementation — should continue to tear down all of the attachments that were set up during the ADD, and then return the error at the end of all of those. So just because one of the attachments fails on detach does not mean that it should quit and return to kubelet; it should attempt to clean up all the attachments and then return that final error back to kubelet. Does that make sense?
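As a rough sketch of that "keep tearing down, report the failure at the end" behavior (the function and parameter names here are illustrative, not from the spec):

```python
def teardown_pod_network(attachments, detach):
    """Tear down every attachment set up during ADD, continuing past
    individual failures, and re-raise the last error only after all
    attachments have been attempted."""
    last_error = None
    for attachment in attachments:
        try:
            detach(attachment)
        except Exception as err:  # keep going; report the failure at the end
            last_error = err
    if last_error is not None:
        raise last_error
```

The point is that one failed detach must not short-circuit cleanup of the remaining attachments before the error is returned to kubelet.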
C
That's the right decision, I have to say, but the first sentence is a little bit misleading: saying, in plain English, that it fails the pod teardown is not as precise as saying that it continues the pod teardown and returns the failure at the end. All right — but reading the rest of it makes it clear, right.
C
Yeah, there was — I'm not sure that's here — oh yes, it is, later: concurrency, yep. But the first one here, about the more general statement: I've been trying to argue for even greater generality, which is to say that at some point in the future we want the implementation to not necessarily be a plugin at all. I mean, Kube projects in general are thinking about going to a daemon model, and I think we want to allow a dynamic set of attachments, which is going to demand a daemon model.
A
I mean, I would hope that it would be as soon as possible; I think we're pretty close at this point. We still have to bikeshed about the name. You know, in the absence of any further votes on the name, I was basically just going to pick one — I think I said that last meeting too — but it seemed like there were enough larger spec updates in this round that I just kind of punted on that.
A
I think, personally, my frontrunner for the moment, even though it's a little bit longer than I'd like, would probably be "net attachment template" or something like that. But we'll see. Yeah, it looks like — I guess Tomo voted. If anybody else wants to vote at the top of the spec, say what they would prefer there. I think, for a lot of the reasons — if...
C
On that one, only Tomasz — oh, I'm sorry. Well, okay, I'll just say the reason I didn't like "network attachment" is essentially the same as the reason I don't like "network": we're implementing an SDN using Kube API machinery, and we have networks as well as network attachments. Ours are actually called "network endpoints" at the moment — we might change the name in the future — but "network attachment" and "network endpoint" are pretty synonymous, and it's going to be confusing to people.
D
Well, to me, the simple name is preferred, and the shorter name is also preferred — those are the most important things to me, I think. So the "network attachment" style names are a little bit longer, I think, so maybe I'd like to keep it under ten characters or so. That's the most important thing to me. So I agree.
A
Yeah, we can do that. I mean, you and I can also talk to Doug in between meetings, just go over it and see what Doug thinks too, either way. As was suggested, I don't think we should let this drag out too long; at some point, we're just going to make a decision on that, and that'll be kind of final for the moment.
D
So currently, the network spec is not only a de-facto standard on its own; it also builds on the CNI spec. So the current idea is that we are ready to discuss the comments in the spec, and the CNI spec as well. We hope to have the network status annotation, and I found that there is no description, no information, about the interface status — whether it is up or not. From the container's point of view, the interface should be up; however, sometimes the container is up but the physical link goes down.
A
And I think one of the things that explained it a little bit better for me was the SR-IOV case: the interface would get moved into the pod, but then, if something like a userspace stack such as DPDK was being used, that interface actually isn't fully configured — it isn't usable — until the pod has begun to run and DPDK has set up the network stack. Is that correct?
E
It's correct for some cases. So one case — which one of our customers told me about — is that they have a virtual router VNF application in which the primary network is a Flannel network and the rest of the interfaces are SR-IOV VFs, and they want all of those VFs to be in the down state, so that the virtual router VNF application, whatever it is...
E
...will take the interface and bring it up and down depending upon the application configuration. So that is the use case. But the challenging part is that, when I'm doing the coding for the network status, I will get the network status from the result of the CNI, where I'm not sure whether the result reports whether the interface is up or down. I want to check that one, but I'm not sure about it, actually — whether CNI will send back the result for a down interface or not. But the use case is real.
A
I mean, it does not seem to me like a huge addition to the spec. I would not go as far as, you know, encoding Linux-isms in it — as in, like, "up" or the other kernel network interface states — but I think it would be okay to have just something like "interface ready" as a key, and true or false as the values. Does that make sense, and would that cover the use cases?
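For illustration only, one entry of the network status annotation with such a key might look like the following; the `interface-ready` key name, and this exact shape, are hypothetical — nothing here is defined by the spec as discussed:

```json
{
  "name": "net-a",
  "interface": "net1",
  "ips": ["10.0.0.5/24"],
  "interface-ready": false
}
```

A simple boolean keeps the spec free of kernel-specific interface states while still covering the SR-IOV/DPDK "down until the application brings it up" use case described above.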
D
I'm just thinking about — sometimes, as I say, the interface signal state, the line state, and the interface administrative state are different, I think. I mean, yeah — sometimes the user tries to do "ip link set up"; however, the Ethernet cable is disconnected at that time, so the user sets it administratively up, but the signal is down. At that time, we'd like to identify the...
D
...difference between the user setting it down and the signal going down. I'm just thinking about it — so I suppose there are two options: one is two values for a single status field, or maybe two interface status fields. Well, let me summarize after the meeting in the Google Doc, or this agenda; I will update it with a summary of my thoughts. Is that okay? — Yep, yeah, that sounds good, thanks.
A
Okay, okay — next up: a follow-up to the concurrency discussion from last time, in which we had talked quite a while about parallel plugin execution and things like that. So it turns out that the CNI spec does have language around parallel execution — we didn't think it had any last time, but it actually does. Let me pull that language up really quick.
A
If
I,
oh
there
we
go,
it
says
the
container
run.
Time
must
not
invoke
parallel
operations
for
the
same
container,
but
is
allowed
to
invoke
parallel
operations
for
different
containers
yeah,
and
so
we
also
discussed
that
in
the
CNI
maintainer
meeting
I
think
two
or
three
weeks
ago
and
the
issue
there
is
that,
while
most
of
the
operations
probably
could
be
done
in
parallel,
there
are
some
things
like
default
route
and
other
stuff
that
might
conflict,
because
those
are
obviously
shared
between
well
I'm
sure,
before
the
entire
container
they're,
not
interface,
specific.
A
I mean, I think, with the timeline that we have right now and the fact that we probably want to make this happen sooner rather than later, we may be left with just doing kind of serial execution for v1, and then, for any kind of v2, working with the CNI maintainers and this working group to try to figure out exactly what we do here.
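That serial-for-v1 approach might look something like the following minimal sketch; the per-pod locking shown here is an illustration of serializing attachment operations for the same pod, not wording from the spec:

```python
import threading

# One lock per pod: operations for the same pod never overlap, while
# different pods may still proceed concurrently (matching the CNI spec's
# "parallel operations for different containers" allowance).
_pod_locks = {}
_locks_guard = threading.Lock()

def attach_serially(pod_id, attachments, attach):
    """Invoke `attach` for each requested attachment strictly in sequence,
    holding a per-pod lock for the duration."""
    with _locks_guard:
        lock = _pod_locks.setdefault(pod_id, threading.Lock())
    results = []
    with lock:
        for attachment in attachments:
            results.append(attach(attachment))
    return results
```

Serializing per pod sidesteps conflicts on pod-wide state such as the default route, which is shared across all of a pod's attachments.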
B
Well, here I'll give you a short description of Kuryr. Kuryr is a project of the OpenStack community. The purpose of Kuryr-Kubernetes is to use Neutron, which is the OpenStack networking component, as the networking backend of Kubernetes. So it means the Kubernetes pod can utilize the infrastructure built by OpenStack.
B
So you can get some benefits, such as having a unified network for both virtual machines and containers; and second, you don't have double encapsulation in the case where you deploy the container inside a virtual machine. So there's only one encapsulation in that case, which is the encapsulation done by OpenStack, by Neutron. So I'll start...
B
No — yes! Yes! So, the thing is that I put some comments in the de-facto standard, and I would like to have this configuration field be more flexible, more generic, because currently it says that this should contain the configuration of a CNI plugin as defined in the CNI specification. So I use the "name" and "type" fields, but actually I didn't use them in my code; only the subnet ID is used. The subnet ID here, you can see, is a UUID — the UUID of a Neutron subnet.
B
Here you can see that I define subnet A and subnet B, and they have different IP ranges, so the additional interfaces will attach to those two subnets, and the default one would not be in this — this one here, this is the default network, which is not controlled by the network objects I define. So, well, here I show you the...
B
This field is what Kuryr uses: Kuryr puts the interface details into this Kuryr VIF annotation, so that the Kuryr CNI can read the annotation and attach the pod to networks accordingly. So here you can see eth0, which is the default network — which is not controlled by the network CRD — and you can see here, this is the data for the network; now, each one...
B
Kuryr gives all the ports the same name, but it doesn't matter for OpenStack, because OpenStack uses UUIDs as identifiers. Also the pod name — "default" slash the nginx pod name — you can see that this one is the default one, this one is the one in subnet A, and this one is the one in subnet B, and going back to the container, you can find...
B
This one is the architecture of Kuryr. So basically, there are two components: the Kuryr controller and the Kuryr CNI. The Kuryr controller is a daemon which is running all the time, and it watches the Kubernetes API to monitor events on Kubernetes resources — events such as pod creation and deletion, and also service and endpoint creation and deletion — and after it catches an event, the Kuryr controller will do the Neutron resource management. So it will, in our case...
B
Yes — as I mentioned, this part, these two fields, I didn't use at all. I put them here just to keep compliance with the requirement in the de-facto standard. So I can do something here, but without using them. Is it a good way to use the configuration section, or should I actually use the annotation part, as was suggested? Yeah.
A
Yeah, two points on that. To support the thick plugin use case, based on some discussions that we had, I believe, a month ago in this meeting: in the latest spec iteration, I did end up removing the "plugin" field from the network attachment template object, and that got rolled into the "config" field. If you were a CNI plugin calling other CNI plugins, you would have to define some...
A
This config would have to define some minimal things, like the "type" at the very least, but then the implementation, if it is calling further CNI plugins, is responsible for injecting the "name" field into the config before sending it to the plugin. So, as the spec currently stands, you do not need config to be a 100% valid CNI configuration, but the implementation needs to make it a valid CNI config before it actually calls a further CNI plugin.
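A minimal sketch of that injection step, assuming the behavior described above; the function name and the default version chosen here are illustrative, not the normative spec:

```python
import json

def make_valid_cni_config(network_name, partial_config):
    """Take the Network object's possibly-minimal config (e.g. only
    {"type": ...}) and inject the "name" field (and a default cniVersion)
    so the result is a valid CNI configuration before the plugin is
    invoked."""
    config = dict(partial_config)
    config["name"] = network_name        # taken from the Network object
    config.setdefault("cniVersion", "0.3.0")
    return json.dumps(config)
```

This keeps the authoring burden on the Network object low (just "type") while still handing the delegate plugin a complete CNI config.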
A
If you don't want to — well, it had to get moved in there; the plugin stuff had to get moved into the config, because we do need to allow for CNI versions and CNI runtime config, so that we can pass some things down into the plugin if we need to. But I don't think any of that affects you or the Kuryr stuff; that would only be for things like Multus and CNI-Genie.
A
Feel free to comment in the spec. I personally would like to keep config as a CNI configuration field, just so that plugins that implement the spec know what they can expect there; and then anything that is not actually a CNI delegating plugin, I would recommend using annotations to store their — what's the word for it — any of the private data that they want. I think, at least in your case, it looks like Kuryr is already using annotations, and I don't think it would be too much harder to do that. I think it's something we can definitely think about for v2 of the spec, but at this point, you know, I'd like to just keep it CNI config, so that everybody knows what to expect out of that field, as opposed to having to try to detect what's in there — whether it's valid CNI config or not. Does that make sense?
B
Yes, exactly — these are just used by the Kuryr controller. So, in this diagram, this portion is the process of pod creation and how Kuryr handles it. And the patch — the thing I showed you — has not been merged into Kuryr yet, so it's a primitive Kuryr implementation. All the other things I have done are on the controller side, not in the Kuryr CNI, which means, in the Kuryr...
B
In
order
I,
do
we
when
it
got
the
part
creation,
even
as
we
integrate
the
for
annotation
and
the
network
CRD
and
create
the
neutron
thought
accordingly,
so
that
is
the
the
meter
different
I
think
career
gap
different
from
other
CI,
because
I
do
we?
It
has
to
monitor
the
community
resource
change.
Then
it
will
create
the
object
in
of
the
stacks.
Add
first,
then,
let
the
career
say
I
to
use
that
resource,
such
as
the
neutron
power.
E
So I want to reiterate the same statement about the current spec. I saw in the documentation that you removed the implementation-like "plugin" field — so that's kind of the thick plugin use case — but currently I see that you added a different configuration, "config", and there you added the cniVersion and the type. I was a bit confused by that, because it's again like a thick plugin — actually, it's not really the thick plugin case.
E
Actually, because if you look at Kuryr, most of the work is done by the controller; on the CNI side, there's almost nothing to do with the network at all. So in the future, we may want to do the same thing with daemons — that kind of thing — if someone comes up with some idea. So I don't know whether this "type" approach is very suitable or not, because the CNI will invoke the binary all the time; with such an implementation, we invoke that "type" all the time, but it won't reflect...
A
...if the CNI back end could not return, say, the new result structure that's in the CNI 0.3.0 spec. So the issue there is that we need to have a little bit more information, or at least the ability to pass a CNI version that that thick plugin supports. And so that's why it was kind of rolled into the config field — but again, to keep that as simple as possible...
A
If all your thick plugin supports is 0.1.0 of the CNI spec, then you don't have to put the cniVersion field in there. You can just leave "type", and essentially your CNI config for the Network object would just be a JSON structure with "type" as the only key in it. It would then call out to your thick plugin binary, and the implementation would be responsible for inserting the "name" field from the Network object into the CNI config before it called that thick plugin binary.
E
So, in the current implementation I did, it's like: if we have this kubeconfig or Kube client information, then by default I'm writing the network status. But if something failed on the network status, do I want to send an error message, or do I just need to skip the network status? That's my question, like...
E
...if something errors — like the spec changes or something — because, when I was working on that: if you have a CNI plugin which is, like, 0.2.0 or 0.1.0, it won't populate enough result information, so you can't get any information out of those plugins. So the CNI version should be 0.3.0; only with those plugins will I get enough network information from the result. So I'm thinking it should be optional, or — yeah.
A
We should update the spec to say that plugins should try to put as much information into the status as they can from the CNI results, so that even if it's a 0.2.0 or earlier result that doesn't have all that detailed information, it still fills in as much as it can, because it still should be able to get at least the IP address, you know.
A
It won't get some of the deeper details out of it — for example, it won't be able to get the interface name — but, that said, the implementation knows, or at least in the Multus case knows, what interface name that attachment should be using, and so it could fill in the interface name itself if it wanted to. Okay.
Let
me
see
where
that
is
really
quick,
that
is
in
the
spec,
and
that
is
section
three
point:
three
network
object
naming
rules,
so
it
has
to
all
object.
Names
in
cube
have
to
be
units
of
DNS,
1,
1,
2,
3
label
format,
and
we
thought
that
that
was
and
I
think
Doug
was
had
brought
this
up
a
number
of
meetings
ago.
He
thought
that
that
might
be
unnecessarily
restrictive
to
require
that
the
network
object
to
have
the
same
name
as
whatever
external
network.
A
...the network object is actually an attachment template for. And so the spec has language that strongly recommends that the network object name be the same as whatever that external network happens to be — whether, for a thick plugin, that's something like a name in a Neutron database, or whether, for Multus, for example, that is the "name" inside the CNI JSON configuration.
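The DNS-1123 label rule mentioned here can be checked with a small sketch like this; the regex mirrors the pattern Kubernetes uses for label validation (lowercase alphanumerics and '-', starting and ending with an alphanumeric, at most 63 characters):

```python
import re

# DNS-1123 label: [a-z0-9]([-a-z0-9]*[a-z0-9])?, length <= 63.
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_network_object_name(name):
    """Return True if `name` is a valid DNS-1123 label, i.e. a legal
    Kubernetes object name for a network object."""
    return len(name) <= 63 and DNS1123_LABEL.match(name) is not None
```

This is why an external network name (say, one stored in a Neutron database) may not be usable verbatim as the network object name, and hence why the spec only recommends, rather than requires, that the two match.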
A
Yeah, sorry, I'll clarify that. The problem is that race condition, because, if you're unable to — well, the wording there is supposed to say that it's recommended that you configure kubelet with a specific CNI config directory for your plugin. So, in the Multus case, you would pass --cni-conf-dir equals, I don't know, /etc/cni/multus.d to kubelet, and then Multus would write its config into /etc/cni/multus.d when it was ready, and then Kubernetes would find that, as opposed to trying to write everything to /etc/cni/net.d.
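A sketch of that kubelet setup, using the example directory from the discussion (the exact directory name is illustrative, not mandated by the spec):

```shell
# Point kubelet at a plugin-specific CNI config directory instead of the
# default /etc/cni/net.d; the directory name here is only an example.
kubelet --network-plugin=cni \
        --cni-conf-dir=/etc/cni/multus.d \
        --cni-bin-dir=/opt/cni/bin

# Multus (or any other implementation) then drops its CNI config file into
# /etc/cni/multus.d only once the cluster-wide default network is ready,
# so kubelet does not see a usable CNI config prematurely.
```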
Yeah
section
6.1
in
the
specification
in
six
point
one
point:
one
attempts
to
address
that
and
I
think
six
point
one
point:
one
does
have
a
more
or
less
viable
alternative,
even
if
it's
not
preferred
because
it
does
indicate
node
readiness
before
the
cluster,
where
I
default
network
is
actually
ready
which
is
sort
of
what
we
want
not
to
happen,
but
just
probably
not
a
good
way
around.
It.