From YouTube: Network Plumbing WG Meeting 2018-02-01
A: Yep, the recording is now on and we will kick off the meeting. So again, I've pasted a link to the agenda document into the Zoom chat, so you can kind of follow along with us. We basically had a pretty short agenda, but we can also go over the changes to the document. I sent out a mail earlier today talking about the changes that I had made based on last week's — excuse me, two weeks ago's — meeting, and I think the biggest changes were...
A: Okay, alright, so those changes were: I clarified that the runtime meta plugin must always attach the pod to the cluster-wide default network, keeping the existing kube behavior, and then any network specified through this specification and the annotation would be additional sidecar networks. I have a link to the video there where that was discussed last time, and the timestamp, so you can go back and look at that, or listen to that and the reasoning for it.
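For illustration, a minimal sketch of how a runtime might combine the always-attached cluster-wide default with the annotation-requested sidecars; the annotation key and network names here are placeholders, not the spec's final names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Annotation value as it might appear on the pod: a JSON list of
	// network attachment object names (key and values are hypothetical).
	annotations := map[string]string{
		"example.cncf.io/networks": `["macvlan-net", "sriov-net"]`,
	}

	var sidecars []string
	if raw, ok := annotations["example.cncf.io/networks"]; ok {
		if err := json.Unmarshal([]byte(raw), &sidecars); err != nil {
			panic(err)
		}
	}

	// Per the decision recapped above: the cluster-wide default network
	// is always attached; annotation networks are additional sidecars.
	attachments := append([]string{"cluster-default"}, sidecars...)
	fmt.Println(attachments) // [cluster-default macvlan-net sriov-net]
}
```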
A: I removed the bits about the default-network stuff — we had also discussed that quite extensively last meeting. Just to recap: that was a question of what network gets reported to Kubernetes, what network gets the default route, and what network gets used for health checking. Because of the change to always attach the cluster-wide default network, those questions mostly disappear, because the cluster-wide default network will obviously be the one that gets reported back to Kubernetes, and currently that would be the one that gets the default routes and so on. Question there? Yep.
D: Like, on a completely side note — I don't know, Kate was telling us, like, you know, there's this "17 minutes of extending Kubernetes" or something like that that we should look at before we do this. Did anything come out of it? Because I don't remember, like, a pointer to that. Yeah.
A: I don't think there was a pointer, because some of the extension points for network-type stuff either aren't there or aren't well defined. You know, Mike, actually, you had talked in the past about using — what is it called, an overlay API server or whatever that's called — for some of your stuff. (Yes, yep.) So that is one extension point, but some of the other things that people are really interested in, like kube-proxy and obviously network stuff like this, don't really have extension points yet.
A: ...and also the scheduler. I think there are some ways to do scheduler plugins, and that's a pretty big one for some of this stuff, because you might not be able to run a particular pod on a given node if it doesn't have some of those, you know, resources. Most of that stuff is explicitly left out of this document for the moment, and...
D: ...the thing that we need to pass some checklist before we can take it upstream into kube — that was more my concern, right, because the way Kate spoke, like, made it sound to me like: oh, you want to take this upstream into kube? Then you have to tell us why these other things don't work. Yeah, maybe I understood that wrong, but that's the kind of subtext I saw in what he said. Yeah.
A: What I got out of that conversation was generally that Kubernetes upstream is trying to take a philosophy of adding extension points, rather than adding a bunch of code and new API and stuff to kube itself. It's kind of a way to manage the project and keep things that aren't of extremely general interest to all of Kubernetes out, as, you know, kind of third-party plugins or other things people could just add in.
A: And I mean, I think that was also kind of an encouragement to start thinking about this, since all of us have some common use cases, but then we all have our individual use cases. Extensions do make some sense, because they allow us to do the stuff that we want to do that not everybody else needs. So figuring out where those are is definitely something that SIG Network should be involved in.
A: Also, getting back to the default route discussion: I did add two sections at the bottom of this document, as we talked about last time — open questions, which we could discuss today if we run out of things to discuss, and then previously discussed topics. One of those was the defaultness question, and we made a decision last time to always attach the default kube network; all networks specified by the CRDs are sidecars and should not refer to kube, and thus the default gets...
A: Trying to think of other big things that changed in the document from last time... oh, I did add — ah, right, this, Mike — I added the status stuff. I think you had requested that last time, or something like that, and so I added a kind of example in for what that would look like, and changed it to a dictionary, I believe.
A: That does bring up another question that maybe we want to talk about, which I think Mike had brought up, and which is also in the open questions section below in the specification: pod-attachment-specific properties. For example, if you want to be able to specify the IP address and/or the MAC address for a given pod network from the kube API — obviously that is pod-specific. It is per pod, per attachment; it is not per-network.
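A hedged sketch of what such per-attachment properties might look like if the annotation entries grew from plain names into objects; the field names (`ips`, `mac`) are assumptions for illustration, not settled spec:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AttachmentRequest is a hypothetical per-pod, per-attachment entry; the
// "ips" and "mac" field names are placeholders, not the spec's names.
type AttachmentRequest struct {
	Name string   `json:"name"`          // which network attachment object
	IPs  []string `json:"ips,omitempty"` // requested IPs for this attachment
	MAC  string   `json:"mac,omitempty"` // requested MAC for this attachment
}

func main() {
	reqs := []AttachmentRequest{{
		Name: "macvlan-net",
		IPs:  []string{"192.0.2.10/24"},
		MAC:  "02:23:45:67:89:01",
	}}
	data, err := json.Marshal(reqs)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```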
A: And there is a CNI convention for specifying a static MAC address that is currently only honored by the host-local plugin. But since it's a convention, we could theoretically repurpose it for any other IPAM plugin, which means that, you know, if you're writing your own CNI plugin for this, then you could consume that value.
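As a rough illustration of that kind of convention — free-form, optional data carried in the network config that plugins may honor or ignore — here is a sketch; the keys under `args` are placeholders, not the exact convention being referred to:

```go
package main

import "fmt"

// netConf sketches optional pass-through data in a CNI network config.
// Plugins may consume keys like these or silently ignore them; the
// "cni"/"ips"/"mac" keys here are illustrative only.
const netConf = `{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "ipam": { "type": "host-local", "subnet": "192.0.2.0/24" },
  "args": {
    "cni": { "ips": ["192.0.2.10"], "mac": "02:23:45:67:89:01" }
  }
}`

func main() { fmt.Println(netConf) }
```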
D: There's some work ongoing that I'm kind of kicking off, to make sure that, like, DHCP can hand out MAC addresses, like, on a temporary basis, but it won't be done anytime soon — this just kicked off. So there are people doing some work to figure out, like, you know, how we do this.
D: Like, you know, we can have a UUID as the identifier for the DHCP client, and then it makes it easier to just get a MAC address at that point. But there's still, like, what to do — I can probably share an initial draft sometime in March. (How do you plan to do this?) Like, there's the idea of meeting in London, but I'm gonna discuss this with, like, Cisco and a bunch of other people, so I can share some early thoughts, probably at the end of March. Okay.
B: So, in general, Kubernetes tries to follow the style — it says, for any map, the keys are defined at, you know, developer time, not with runtime data in the keys. So in this case, right there, it would actually be a list of objects with a name field rather than a map. And if we follow that approach — that it's a list of maps — well, you know, at least as long as we're just slinging JSON here, we can say we'll take a list of either a map or a string.
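A minimal sketch of that "list of either a map or a string" idea: decode each entry as raw JSON first, then interpret it. Names and fields are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Attachment struct {
	Name string `json:"name"`
	// other per-attachment fields could live alongside Name
}

// parseNetworks accepts entries that are either bare strings ("net-a")
// or objects with a name field ({"name": "net-b"}).
func parseNetworks(raw string) ([]Attachment, error) {
	var items []json.RawMessage
	if err := json.Unmarshal([]byte(raw), &items); err != nil {
		return nil, err
	}
	out := make([]Attachment, 0, len(items))
	for _, item := range items {
		var name string
		if err := json.Unmarshal(item, &name); err == nil {
			out = append(out, Attachment{Name: name}) // bare-string form
			continue
		}
		var att Attachment
		if err := json.Unmarshal(item, &att); err != nil {
			return nil, err // neither a string nor an object
		}
		out = append(out, att)
	}
	return out, nil
}

func main() {
	nets, err := parseNetworks(`["net-a", {"name": "net-b"}]`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", nets) // [{Name:net-a} {Name:net-b}]
}
```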
B: Yeah, I think we could separate it. So whether it's a map or a list of objects with a name field is, I think, a separable question — it's merely a design style. But I have seen that somewhere in Kubernetes there is a strong expression of this design style, and I've seen things getting changed to follow it, so, yeah, we might as well do it, because it's the style that Kubernetes prefers in general.
B: Instead of a map from whatever kind of key you want to the rest of the information — or, I'm sorry, I'm sorry — right, for the spec you're proposing just a list of strings. So instead of — oh, I see, I see, okay. So for the status, right: the status is contrary to the usual design, because it's putting user data in the keys of a dict, right. So the status should rather be a list of objects that have a name or reference field, or something that gets collated back to the spec.
B: Okay, so I've been calling that the spec, right. So the current design here is that it's a list of strings, each string being a reference to one of these objects. I would suggest that instead it's a list of — again, the JSON term is "object", unfortunately, but it's a dict or a map, right — a record, in ordinary terms — and one of the fields of that can be the string that refers to the network attachment factory object, and that gives us the option to have other fields.
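A sketch of that list-of-objects style for both spec and status, with status entries collated back to spec entries by name; all field names here are illustrative:

```go
package main

import "fmt"

// NetworkRef is a spec-side entry: it names a network attachment factory
// object and leaves room for other per-attachment fields later.
type NetworkRef struct {
	Name string
}

// NetworkStatus is a status-side entry: user data ("net-a") lives in a
// field, not in a map key, and is collated back to the spec by name.
type NetworkStatus struct {
	Name string
	IPs  []string
}

func main() {
	spec := []NetworkRef{{Name: "net-a"}, {Name: "net-b"}}
	status := []NetworkStatus{
		{Name: "net-a", IPs: []string{"192.0.2.10"}},
		{Name: "net-b", IPs: []string{"198.51.100.7"}},
	}
	for i, ref := range spec {
		fmt.Println(ref.Name, "->", status[i].IPs)
	}
}
```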
A: Alright, next: specifying the interface name for the CNI plugin. So what happens is, when you call a given CNI plugin or a CNI config list, you are required to specify an interface name, and the plugins are required to honor that interface name for the interface that they actually attach in the pod. Technically, those are the requirements of CNI.
A: ...because the user currently cannot request those names for an interface. So what happens currently is that kubelet actually passes eth0 as the requested interface name for the single cluster-wide default network. I'd imagine that doesn't change, but that brings up the question of what we do for the sidecar networks that are specified by this document, right?
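To ground that, here is a rough sketch of a meta plugin invoking a CNI plugin once per attachment with the standard CNI environment variables, handing each invocation its own CNI_IFNAME (eth0 for the default, net1/net2/... for the sidecars); the binary path, container ID, and naming scheme are placeholders:

```go
package main

import (
	"os"
	"os/exec"
	"strings"
)

// cniAdd executes a CNI plugin binary the way the spec describes: config
// JSON on stdin, parameters in CNI_* environment variables. ifName must
// be unique per attachment within the pod. Error handling is minimal.
func cniAdd(plugin, containerID, netns, ifName, confJSON string) ([]byte, error) {
	cmd := exec.Command(plugin)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS="+netns,
		"CNI_IFNAME="+ifName,
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = strings.NewReader(confJSON)
	return cmd.Output() // the plugin prints its CNI result JSON on stdout
}

func main() {
	conf := `{"cniVersion":"0.3.1","name":"net","type":"macvlan"}`
	// eth0 for the cluster-wide default; net1, net2, ... for sidecars.
	for _, ifName := range []string{"eth0", "net1", "net2"} {
		_, _ = cniAdd("/opt/cni/bin/macvlan", "ctr-123",
			"/var/run/netns/ctr-123", ifName, conf)
	}
}
```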
A: I mean, that said: currently there is nothing that prevents the plugin from not using the given interface name in CNI — CNI doesn't care; it doesn't actually check what the interface name is coming out of the call. So you can sort of get away with using a different name and just reporting that in the spec — or, sorry, in the CNI result. But there are some plugins that do use that interface name, especially the default reference plugins for CNI, and so we do need to figure out how that should work, I mean...
B: So I think we should have independence between the — you're calling it the meta plugin; I'm thinking of a daemon process; we probably don't want to get into that at this point. Your point is, there's some kind of code that's invoking CNI plugins to make these individual attachments. (Yep.) We want some independence between that code that we're kind of designing here and the individual CNI plugins themselves. (Yeah.)
A: So, some background on that: I think that we might be able to change that in CNI in the future, because I think that was mostly historical. Before CNI was able to report the interface names it had actually configured back to the runtime, this was actually the only way that a runtime would know what the interface name was.
A: Well, I mean, in the end it does currently look like the interface name is required. What I'll do is just update the specification — the CRD document here — to say that the meta plugin is responsible for ensuring that the interface name sent to each CNI config list invocation is unique for that pod. Which isn't that hard, because to actually run multiples of these — if you want to run every single network that's specified in the pod annotation — you're most likely going to be doing that in a single executable invocation anyway.
A: So, and I think, as I'd said in the comments, you can solve this by creating additional network attachment descriptions that have a separate network name, and then selecting all of those. (Yeah.) And I think in the comments you had said that the pod author may not be able to create these — the pod creator may not be able to create additional network attachments.
B: Sorry — so you propose to enable multiple attachments to the same network through the hack of making multiple network attachment factory objects (yep) for the same network. That much you could do. But the spec says that when the CNI plugin is invoked, the network name that is given is the name of a network attachment factory object (yep), so it will be given two different network names when it's invoked (correct) — but I thought our goal here was to enable multiple attachments of the same network. (Yeah.)
A: The description — excuse me, the network description — that we have in the object can be the same for multiple objects. You know, for example, you could include a UID or something like that in the CNI configuration for this given network that would be the same for multiple objects, even though they have different names. That's one way to potentially get around it: then in your plugin, you would identify that network through the UID, if you have a long-running plugin or a thick-type plugin.
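A hedged illustration of that idea — two attachment objects with different names whose embedded CNI configs carry the same UID, letting a long-running plugin collate them as one network; the "uid" key is a convention a plugin could adopt, not part of the CNI spec:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type conf struct {
	Name string `json:"name"`
	Type string `json:"type"`
	UID  string `json:"uid"` // convention only; not part of the CNI spec
}

func main() {
	a := `{"name":"net-blue-1","type":"my-thick-plugin","uid":"c0ffee-1"}`
	b := `{"name":"net-blue-2","type":"my-thick-plugin","uid":"c0ffee-1"}`
	var ca, cb conf
	if err := json.Unmarshal([]byte(a), &ca); err != nil {
		panic(err)
	}
	if err := json.Unmarshal([]byte(b), &cb); err != nil {
		panic(err)
	}
	// Different CNI network names, but the plugin can treat them as the
	// same underlying network via the shared UID.
	fmt.Println(ca.Name != cb.Name, ca.UID == cb.UID) // true true
}
```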
A: If you wanted multiple attachments to the same network using, kind of, the reference CNI plugins, then you would essentially just have the same CNI JSON with a different network name. Now, obviously this wouldn't work for host-local, because host-local actually uses the network name as part of its data store for figuring out what the IPAM should be.
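A small sketch of why that host-local caveat matters: host-local keeps its IP allocation state on disk keyed by the CNI network name (the path below is the plugin's conventional default location), so configs that differ only in name get independent IP stores:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// hostLocalDataDir mirrors host-local's default on-disk layout: one
// allocation directory per CNI network name.
func hostLocalDataDir(networkName string) string {
	return filepath.Join("/var/lib/cni/networks", networkName)
}

func main() {
	// Same CNI JSON, different names => two disjoint IP allocation stores.
	fmt.Println(hostLocalDataDir("net-blue-1")) // /var/lib/cni/networks/net-blue-1
	fmt.Println(hostLocalDataDir("net-blue-2")) // /var/lib/cni/networks/net-blue-2
}
```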
B: There was some place in this document — I forget exactly where — where it is said: when the CNI plugin is invoked, the network name passed is the name of the Kubernetes object, no ifs, ands, or buts. I think maybe you meant that the name is the name of the Kubernetes object if it's not otherwise specified, but it can be otherwise specified. No?
A: ...that is named something completely different from the actual network name that is sent to the CNI plugin when the CNI plugin is invoked. And also, due to this, you would not be able to use on-disk Kubernetes — excuse me, on-disk CNI JSON — with a different network name for multiple objects. If you... my point is probably not very clear.
B: Okay, so when the meta code gets around to invoking the CNI plugin that makes attachments to network one for this pod, it's gonna want to make two invocations. Let's talk about the first one. When it makes the first invocation, there is — on stdin, right — it's given some JSON, which includes a network name field. I forget whether it's called "network" or "name". (It's just called "name".) Okay, but it's intended to identify the network. (Correct.) So what value is in there?
A: And obviously that's what you're getting at, and I understand that that is a little tricky. The problem that we have here is that CNI defines a unique invocation of a config — or what have you — as network name plus container ID. And this is through the arguments — the environment arguments — that's CNI_NETWORK, I think is what it's called, and CNI_CONTAINERID.
B: With CNI in general — and here we're talking about an expansion of CNI semantics to some degree (yes, yeah) — and this is exactly — this is part of it, right. The degree to which we're expanding the semantics is: before now, a given CNI plugin knows that it's going to make the one and only attachment (well, okay), and we're changing it so it's going to keep making several.
A: Yeah, I would say that's not entirely accurate. We are changing — or we are discussing changing — how CNI identifies uniqueness of invocations. Previously, CNI has always defined that to be the network name and the container ID, and so, on disk, you technically could not have two files with the same name field in them and ever expect an invocation for the same container.
A: I would point out that CNI was explicitly designed with multi-network-ness in mind, and rkt has always allowed that to happen — that's why there's a directory for CNI config files at /etc/cni/net.d. The way that CNI originally worked, before kube came in and sort of restricted how it works, was that rkt would actually execute every single CNI config file in /etc/cni/net.d for a given container.
A: So you were able to do that with rkt, and, you know, since the beginning that required an individual network name for each of those configs. What we're discussing here is — and actually, I'll back up slightly and point out: I don't think that rkt was really thinking about multiple attachments to the same network in the way that we are. So, yeah, that is something that CNI is a little less flexible on.
B: So, I stand corrected: we're restoring some of the generality that rkt had and Kubernetes took away. (Correct.) That is the first and primary statement of this whole thing. (Yep.) And right now, with this multiple-attachments-to-the-same-network, we're talking about an additional level of generality that was not even present in rkt. (Yes.)
A: Yep. And originally, CNI was — I believe it was the rkt network code that then got split out into its own project and evolved independently on its own for, like, the last three or so years; so, yeah, that's why rkt enters the discussion. So, I mean, the two solutions to this currently are: somehow explore changing the CNI spec to add an additional point of uniqueness, whether that is the interface name or something else; or, the other option is, if your plugin wants to support...
A: Well, that's not quite the right way to explain it, but CNI requires an ADD and DELETE pairing. So you can add a container once, and the next ADD is technically supposed to be an error, because CNI ADDs are not idempotent — because it's harder to write that kind of plugin, and so, to keep things simple, you can only do an ADD and a DELETE and an ADD and a DELETE; you can't do ADD, ADD, ADD, DELETE.
A: We did redefine DELETE to be idempotent, so you should be able to do ADD, DELETE, DELETE, DELETE, DELETE — that was mainly because kubelet was kind of brain-dead and did that a lot, and so that's another reason the restriction comes in. But even if that restriction wasn't there and you called ADD with the same network name and container ID, then most CNI plugins would assume that was for the exact same network and probably return you the same result, as opposed to doing an additional attachment. So, yeah.
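A toy sketch of the pairing rule as described here — a repeat ADD for the same (network name, container ID) key is rejected, while DELETE is idempotent; the key structure is an assumption for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

type key struct{ network, containerID string }

// tracker enforces the ADD/DELETE pairing described above: a second ADD
// for a live (network name, container ID) pair is an error, because ADD
// is not idempotent; DELETE of an unknown key succeeds, because DELETE
// was redefined to be idempotent.
type tracker struct{ live map[key]bool }

func (t *tracker) Add(k key) error {
	if t.live[k] {
		return errors.New("ADD is not idempotent: attachment already exists")
	}
	t.live[k] = true
	return nil
}

func (t *tracker) Del(k key) { delete(t.live, k) } // idempotent

func main() {
	t := &tracker{live: map[key]bool{}}
	k := key{network: "net-blue", containerID: "ctr-123"}
	fmt.Println(t.Add(k)) // <nil>
	fmt.Println(t.Add(k)) // error: ADD, ADD is not allowed
	t.Del(k)
	t.Del(k)              // fine: DELETE, DELETE is allowed
	fmt.Println(t.Add(k)) // <nil> again after DELETE
}
```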
A: So let's keep that as an open question. I mean, I can bring that up in the CNI maintainers' meetings — there should be one next week — so I can try to get some more information on that, or see what other people think about this problem; maybe they have some other ways to solve it. (Okay, please do.) Will do. Okay, so the last option there was "resources" instead, and that was brought up by Dan Jenningsburg.
A: I don't think he's on this call, but Jeremy is, and he's worked a lot with resources — maybe he has additional ideas there. I thought it looked like an interesting idea, but I feel like it would be something that needs more exploration, and not something that we would be including in this particular spec for the moment, because it would require further changes to Kubernetes — and I think you had also brought up some of those changes, Mike. (Yeah.)
A: Dan had left a comment on the doc, and he said: how about specifying required networks as required resources within the container spec? This allows everything else that is defined in the doc, but also lets each node expose which CNI plugins are deployed onto it. He had also sent a mail to the sig-network list earlier, expanding on that a little bit. (Right.)
B: And so my semantic objection is that in Kubernetes today, the container resources are explicitly defined as compute resources and distinguished from API resources. And it notes that these compute resources are things that can be allocated and consumed — you know, they're things right there, the resources that the node has that could get used up by pods — and that's just not what we're talking about here. This is a different kind of resource.
A: Well, he says in the mail: if a pod needs access to a network, one of its containers should specify the vendor-slash-name of the network as a required resource. On the other hand, nodes that have this network connected would have a vendor-specific device plugin advertising that they have access to the vendor-slash-network-name. When a pod is scheduled and, you know, the device plugin is expected to allocate the requested device, for network vendors this allocation would take place via the invocation of CNI.
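To make the shape of that proposal concrete — a hedged sketch of a container requesting a network as an extended resource; the vendor/name is a placeholder, and this reflects Dan's mail as relayed here, not anything the group adopted:

```go
package main

import "fmt"

func main() {
	// Extended-resource requests as they might appear in a container
	// spec's resources.limits; the network resource name is hypothetical.
	limits := map[string]int64{
		"cpu":                         2,
		"vendor.example.com/fast-net": 1, // "one attachment to fast-net"
	}
	// A node running the matching device plugin would advertise
	// vendor.example.com/fast-net; the scheduler counts it like any
	// other resource, and allocation would then trigger CNI.
	for name, qty := range limits {
		fmt.Printf("%s: %d\n", name, qty)
	}
}
```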
E: For scheduling purposes, network was one of four or five different constraints that we were going to try and match against. This is verging on the resource class discussion that's going on, if you've seen that. We basically have — you know, for any reasonable application, the network is one piece, and there are, like, arbitrarily complex selectors that we need to be able to put in there.
E: So the way we are going to build these things is we're likely going to do admission controllers, which automatically taint certain pods — I'm sorry, add tolerations to pods — and we will taint nodes, probably with a daemon set that uses something like node feature discovery to see what's plugged in and what the node's capabilities are. I believe this is the way, at least in the resource management space, of thinking about the full kind of user experience. So — Dan...
B: To put this in perspective, right: I completely agree that network attachments may have implications in terms of consuming compute resources, but they are not prima facie compute resources. (Yeah.) So you would use perhaps an admission controller to automatically add the scheduling constraints to deal with the compute resources that are involved in implementing network attachments — but a network attachment itself is a different kind of thing. (Yeah.)
A: I would agree with that. You know, I can see cases where, if you need a connection to a particular network and the node does not actually have that, or the node is not configured for a specific VLAN, or the node doesn't have some kind of really fast 40-gig card or something like that that you need — those kinds of constraints are things that might map well to resources. But at the same time, if you're thinking about more of a software-defined network, that is kind of meaningless. (Well...)
B: Those are more like taints and tolerations. For resources, you might think about something like virtual functions in a NIC which only has 32 virtual functions — they can get used up by network attachments. (Yeah.) Those are compute resources that the scheduler needs to count. But again, like the non-counted ones, these are consequences of the attachment — they're part of the implementation of the attachment; they're not the...
A: Yeah, I would say, for this, you know, maybe we should keep thinking about it, ask Dan a little bit more what he means, and maybe keep watching it. Again, I don't think it's as relevant for this discussion — or not discussion, this standard, de-facto standard — at this point, given that it would probably take changes to Kubernetes itself, but it's possibly something to keep in mind. The only reason I thought it was interesting was that we could potentially piggyback on other infrastructure.
A: Something to keep in mind, perhaps. Does that sound okay, Jeremy? If you want to kind of keep us updated on where things go with resources, and if those continue to get a little bit more generalized — and maybe, if you see further points for networking to use some of those things — feel free to jump in and speak up in the future. (Yeah.)
E: One of the key pieces that's missing from that solution is quota, and I think Derek's opinion was that we needed to quota these things before we can just turn people loose — and so we may have to do both of those, sort of in parallel. I'll let you guys know, but I think one thing we need to do is sync with Don. I saw —
E: Hey, my question for you was: you know, have you had any issues standing up the CRDs? And, for your use cases, would you find the resource classes useful? Essentially, if anyone's not familiar: the resource classes allow selecting based on metadata that something supplies — like what the device plugin supplies — such as the capabilities of the card and firmware levels, weird things like that. So I —
G: Maybe it was just before the last working group meeting, but I did do a stare-and-compare of the spec with the fork branch that Kural created of the Multus CRD that matched the spec, and I was able to get that up and running with just a little bit of work that later went into that branch too — it worked generally as a baseline. I need to circle back around to that, look at this new map structure that we've added there, and put my hands on it and see.
A: Right, okay — the next meeting will be February 15th. And I know we've talked about — some people have brought up different times; I'm not really sure if there's a great time that works for everybody, but let's take that on-list. I've sort of ignored those mails for the moment, but if anybody — if we can arrive at a better time that works for more people, that would be awesome. I'm not hopeful, but we could try. Maybe.