From YouTube: Multi-Network community sync for 20230531
A
Oh, there it is. Okay, I'm back. I want to continue and finalize the discussion about DRA usage and then, if time permits, continue the KEP discussion. I'm not sure we have anyone... oh yeah, Patrick is here. I had some thoughts about DRA, so let's discuss; I want to know what you think of what I'm going to say.
C
There is currently a mechanism for enabling Container Device Interface descriptions, for injecting those into the container runtime, and the Container Device Interface itself is a fairly flexible extension mechanism for container runtimes that supports almost anything you could imagine: adding device nodes, mounting volumes, setting environment variables, and so on.
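As a rough illustration of what C describes, here is a minimal Go sketch of the shape of a CDI spec. The real spec is a JSON document consumed by the runtime; these local types and the vendor kind "vendor.example.com/net" are illustrative assumptions, not the actual CDI Go package.

```go
// Sketch of the kinds of edits a CDI spec can ask a container runtime to
// apply: device nodes, mounts, environment variables. Simplified local types
// that mirror the shape of the CDI JSON document; names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type Mount struct {
	HostPath      string `json:"hostPath"`
	ContainerPath string `json:"containerPath"`
}

type ContainerEdits struct {
	Env         []string `json:"env,omitempty"`
	DeviceNodes []string `json:"deviceNodes,omitempty"`
	Mounts      []Mount  `json:"mounts,omitempty"`
}

type Device struct {
	Name           string         `json:"name"`
	ContainerEdits ContainerEdits `json:"containerEdits"`
}

type Spec struct {
	Kind    string   `json:"kind"` // hypothetical vendor kind
	Devices []Device `json:"devices"`
}

func main() {
	spec := Spec{
		Kind: "vendor.example.com/net",
		Devices: []Device{{
			Name: "vf0",
			ContainerEdits: ContainerEdits{
				Env:         []string{"NET_DEVICE=eth1"},
				DeviceNodes: []string{"/dev/vfio/42"},
				Mounts:      []Mount{{HostPath: "/run/net/vf0", ContainerPath: "/etc/net/vf0"}},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```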
A
So, to the second part: I would look at it as one of the implementation ways for multi-networking. I'm not sure, Patrick, whether you had a chance to read through the KEP that we are currently writing, or the requirements, but we are trying to allow multiple ways in which you can implement multi-networking. We have existing Multus, we have other Google builds, we have our own stuff that we are building for multi-networking support. If we were to use DRA and integrate fully with it, like what you're thinking, that would hinder that, because now we are enforcing: you have to implement through DRA, and nothing else.
B
I don't think that's the right way to think about this. In my mind we have a multi-network proposal and we have DRA, and they're independent things, but we ideally want them to be compatible, for it to be possible to use the two together. Now, for the most part they don't need to know about each other, in that you could be using DRA to get SR-IOV devices or something like that, and it isn't necessary for the stuff that we're talking about in these meetings to be engaged with that at all.
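A minimal sketch of the separation B describes: a pod could request an SR-IOV VF through an ordinary DRA resource claim, with no multi-network machinery involved. The types below loosely mirror the shape of the DRA alpha API of that period; the class and parameter names are made up.

```go
// A DRA claim for an SR-IOV VF, independent of any multi-network API.
// Local simplified types, not the exact upstream resource.k8s.io structs.
package main

import "fmt"

type ParametersRef struct {
	APIGroup, Kind, Name string
}

type ResourceClaimSpec struct {
	ResourceClassName string         // selects the DRA driver, e.g. "sriov.example.com"
	ParametersRef     *ParametersRef // vendor-specific claim parameters
}

type ResourceClaim struct {
	Name string
	Spec ResourceClaimSpec
}

func main() {
	claim := ResourceClaim{
		Name: "vf-for-my-pod",
		Spec: ResourceClaimSpec{
			ResourceClassName: "sriov.example.com",
			ParametersRef: &ParametersRef{
				APIGroup: "sriov.example.com", Kind: "VFParameters", Name: "fast-vf",
			},
		},
	}
	fmt.Printf("%+v\n", claim)
}
```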
A
No, that's probably true, because I don't want to... maybe my first take was wrongly toned about this thing. But that's not it; I'm trying to address the other part of DRA, the kubelet side. I will step back to the scheduler side first, though.
A
First, then: what I'm saying is that it's an enhancement of how scheduling can be done, which I do like. And basically, as you, Pete, you know, Patrick, said last week: you would have to change anyway how, for example, you do stuff for the pods, because our stuff has to be at the pod level, not at the container level. Considering you already have to change that part and then adjust the application of this...
A
Why even bother with that? Why not just use what we initially created with our APIs? What I'm saying here is about the scheduler part. I do agree: let's reuse the scheduler part for our multi-networking implementation as well. What I'm trying to get at is the scheduling part, at least that part; I understand there is the kubelet side, which we can talk about in a second, but I mean the scheduling part of the whole thing.
A
This is something that we definitely want to touch in multi-networking, but on our requirements list it is in the second phase. If you look at our requirements list, there is a second phase in which we have a requirement which says... let me maybe... no, that's not it... which says, where is that, I had it somewhere here: "pod network can be selectively available per node in the cluster." So basically, then we need to tweak the scheduler, and so on. So what?
A
If, at that phase, when we're going to try to address this requirement, we look at DRA, then instead of us implementing the whole thing and adding new parts to the scheduler, we just reuse DRA; but DRA can work off of the APIs that we come up with, all right, at least for our scheduler side.
A
For the pod side, then, there is the under-the-hood kubelet side: how we are going to deal with the additional piece. And even with what you're saying, that it can live kind of beside it, that's something we need to think about, right? Because can that really be true today?
A
Let's call out, for example, how SR-IOV is done today with Multus and with the device resources. The way it works today is that there is the SR-IOV operator, which is completely independent and will prepare your resources: it will create the number of VFs you have available on each of the nodes, right? And then there is Multus, which is completely independent and will just point to the specific device resource. So basically you have two separate components: one prepares the resources and the other just leverages them.
A
Whether they are there or not. Here, DRA is all integrated, right? Because with the scheduler, okay, you can list the resources, because I can query the controller and it can tell me who has what. But then, Patrick, you're saying there's a kubelet side which can then do additional stuff, and I assume those are not disconnected today the way Multus and the device resources are, right? They come in a pair, because there might be some additional configuration required for that special device.
A
That's what I was saying initially: that we are stuck with it, and that's the only way you can implement these things, which is something I definitely would not want to go with. Unless I'm missing something, and please correct me if I'm missing something, or maybe there are some other changes we can do here to address this. Alexander?
D
I just wanted to say that there is some, I don't know, misconception or misunderstanding of how DRA is working. So yes, the kubelet and controller parts of DRA are connected.
D
The kubelet part is also connected to low-level things, like to the runtime, to pass down information.
D
So the whole idea behind DRA is to avoid all kinds of out-of-band or backdoor communication about resources. So, practically, rethink what we are currently doing in Multus: analyzing the annotations, and from those annotations connecting to the API server and then fetching the rest of the configuration, and so on and so forth. To me, that sounds like not really a good architectural decision: a low-level component is calling into operators for information.
D
So the main point of the kubelet part of a DRA driver is to actually prepare all the needed configuration from high-level objects, like CRs or whatever else, store it locally on the host, and then pass it down to the runtime, where these configuration parameters can be used.
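A hedged sketch of the kubelet-side flow D outlines: resolve the claim's configuration once at prepare time, persist it on the node, and hand the runtime only an opaque reference. prepareResource is a hypothetical stand-in for a DRA driver's node prepare hook, not the real plugin API.

```go
// Sketch: a DRA driver prepares a claim on the node, stores the resolved
// config locally, and returns an opaque ID (in real DRA, a CDI device name)
// for the runtime. All names and paths here are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// prepareResource persists the configuration fetched from cluster-level
// objects at prepare time, so nothing below the kubelet ever calls back
// to the API server.
func prepareResource(claimUID string, config []byte) (string, error) {
	dir := filepath.Join(os.TempDir(), "example-dra")
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return "", err
	}
	if err := os.WriteFile(filepath.Join(dir, claimUID+".json"), config, 0o600); err != nil {
		return "", err
	}
	// The opaque handle the runtime passes down the stack.
	return "example.com/net=" + claimUID, nil
}

func main() {
	id, err := prepareResource("claim-1234", []byte(`{"vlan": 42}`))
	if err != nil {
		panic(err)
	}
	fmt.Println("pass down to runtime:", id)
}
```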
A
Alexander, understand this: if you think about Multus, that was true two years ago and one year ago. Today Multus is what they call a thick CNI, and this is something that most of the implementations do. What that means is this: the CNI piece is just a shim.
A
It's not true anymore, because it's just that small thing that's being called, and then it calls a big agent that has the full knowledge about everything and can do all the proper controller-like operations, because it has to, and because that's the implementation, Alexander. And I'm not saying...
A
And basically, what I'm getting at is: that's one of the implementations. And there can be the case that you're describing, where yes, I don't want to have an implementation where that component just gets passed-down parameters and does its thing as a binary, without any API calls, without any other networking calls coming from that binary. It will just do its work and be contained within the node, and that's it. And that's another implementation that's valid. What I'm saying is:
A
We cannot lock ourselves into one and force everyone onto one approach. That's what I'm trying to push back on with DRA. And basically, now that you were saying that DRA passes parameters, that might be useful.
A
Maybe that's something we can leverage as well. Maybe, instead of us doing some additional passing and creating another path for passing whatever pod networks we selected into the CRI, we can leverage DRA. I'm not saying that cannot happen: in the pod we will specify our stuff the way we want to, and then through DRA we can pass it to the kubelet and then to the CNI, right, and the CRI, and then along the path, along the road.
A
If someone has a need to implement more and leverage more of DRA, where maybe some of the QoS stuff can come into play, that's fine; then they can reuse and leverage that. But otherwise, maybe they just care about passing the parameters into the CNI, and then on the other side an agent does the whole thing.
A
Like a straightforward API. This is something I said last week, right: we do have to have the API definition. I don't want to... because I see DRA as a specific solution for resource handling and then passing stuff to the container. That's perfectly fine, but it doesn't fulfill all the requirements that we have for multi-networking.
A
It doesn't fully fit all the requirements that we are trying to discuss here now. And I agree to leverage things, right: when we get to the scheduler, and when we talk about passing things through to the CNI, I agree we can just leverage the DRA paths so that we don't have to create our own.
C
Because he is showing example YAML files where there is a custom API, the thing that you seem to want, and in addition, for some things, it's using DRA. To me that looked like a good compromise, where you do have custom network APIs that do something that goes beyond what DRA handles, but it also leverages DRA. That particular proposal made a lot of sense to me.
B
So I put a link to it in the Slack channel, so yeah.
B
Sure, let me give you a kind of 30-second summary of it. DRA, in my mind, is a bit of a sledgehammer to crack a nut sometimes, and sometimes it's exactly what you want. It's got a lot of capabilities, both in scheduling and in terms of device management, and they're very tightly tied together in ways that close a lot of the awkward cases, such as: you schedule a pod and, oh look, the resource has been stolen by something else before you got there. That kind of stuff.
B
So it's very useful for certain use cases, but it is also probably too much for many of the use cases we have, the simpler use cases we have. So what I was thinking about was: what could we use DRA for, and what would that actually involve? And almost all it would involve is some kind of link from the pod network attachment to say "and this is my DRA device", and to let the implementation worry about setting everything up. And I...
B
...don't know if that's the right approach, but it felt like something along those lines might make sense. The thing that, in my mind, is missing so far is that we haven't talked about: if I have a pod network attachment, how do I get parameters? And that might reference DRA; it might reference something completely different. What could that possibly look like in the attachment, to say this thing here has got to have this QoS, or this DRA device, or whatever it might be?
A
So this is something that, in our discussion for the KEP itself, we kind of touched on, but I think we didn't expand on it, and this is what's kind of missing when you say resources. So, on the last part you mentioned: where would that be? What would fill that role of identifying what sort of parameters I want for the pods, for a specific attachment?
B
The way that I suggest, and I don't think this is necessarily the right way, but if you keep going down to where it says "pods", you'll see the link there. So you have a pod; DRA tells it what devices it'll be attached to, and the example I made up is an FPGA and some network device; and then you have something in the pod networks that says: okay, for this one...
B
...for this pod network, there would need to be some kind of linked claim that says: this is the DRA that this is using. I don't think that's quite right; I think probably we need something more generic than DRA, but we'd have the option for a pod network to say: and this is the thing that the implementation needs to look up to find out more about how this is going to be attached to the network. And I...
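A sketch of the linkage B is proposing, under the assumption that the attachment carries an optional, generic reference rather than a DRA-specific one. All type and field names are hypothetical.

```go
// Hypothetical pod network attachment that can optionally point at "the
// thing the implementation must look up": a DRA claim today, possibly some
// more generic kind later. None of these names come from a real API.
package main

import "fmt"

type ResourceRef struct {
	Kind string // e.g. "ResourceClaim", or a future non-DRA kind
	Name string
}

type PodNetworkAttachment struct {
	Network     string       // which PodNetwork this attachment joins
	ResourceRef *ResourceRef // optional: device backing this attachment
}

func main() {
	att := PodNetworkAttachment{
		Network:     "datapath-a",
		ResourceRef: &ResourceRef{Kind: "ResourceClaim", Name: "vf-for-my-pod"},
	}
	fmt.Printf("%+v\n", att)
}
```

Leaving Kind generic rather than hard-wiring DRA is what makes this "something more generic than DRA" in B's phrasing.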
D
What I want to repeat is: DRA is a big hammer, and we probably don't want to use it for all the scenarios. DRA is good if you want to attach multiple network interfaces, and with different parameters.
D
But if you have, in part of your KEP, where you are talking about, let's say, which one is the default network, a selection out of several variants, we can do something simpler: we can leverage a concept like QoS classes to select, say, the blue network versus the red network as the default network, and that will be easier for the user experience, easier for the scheduler, easy for everyone.
D
No, no, not exactly. Let me try to open up the overall picture a bit. Our team is working on several things, and DRA and QoS are part of the full list.
D
We also have a thing called NRI, the node resource interface, a plugin on the runtime side, so we can affect how the containers are created and how our pods are created; and the networking part, like CNI processing, is also something we are looking at. But all of those features were targeting one big...
D
...well, we can call it an epic story or something like that. Under one umbrella, what I'm trying to do with my team is to improve the whole communication between the kubelet, the runtime, and all kinds of plugins: remove all this legacy that Kubernetes accumulated because of Docker as a runtime, the dockershim legacies, like cgroups management, a competing interface alongside CRI, using backdoors like the pod resources API, which Cilium is using to fetch some information from the kubelet.
D
So our overall goal is to make sure that our CRI APIs, and the communication between components, are good enough, so that we don't need to create new workarounds to pass information.
D
So it's not really bending DRA to the network. It's more that we are trying to define what pod-level resources mean.
D
Is it something that just needs to be injected into a container, or does it need to be injected into a namespace of the pod itself, like network or PID or mount, or whatever applies to the information passed between the components? So when we are talking to the runtime, saying let's create a pod: who is responsible for doing things like cgroups management or namespaces, and so on? Right now there is this split brain between the kubelet and the runtimes.
D
And that creates a bunch of problems: huge-page handling, in-place vertical pod autoscaling, all the implicit communication that happens over CRI, image pulling for confidential VMs, and so on. So there are a bunch of problems in those protocols, and we are trying, piece by piece, to handle that and to shape how the CRI interfaces will look in the next few years.
A
So that kind of fits what I was saying. Can we not just leverage DRA to pass all the parameters? Passing them through DRA should be fine, or won't it fit? It won't quite work... if I were to have this piece define "I want this", that works, right? The next element would be to leverage DRA to identify which nodes are available for those networks.
A
That's one part, to answer the question on the scheduler: where a specific network is available. And then in the kubelet we leverage DRA to pass any and all information about the specific networks, whatever we need to pass, based on DRA. And then, when I think of the controller for the scheduling, this is something we would have to create, and I'm telling you that will be part of the core, where we would create our own.
A
The DRA code within the scheduler would understand this piece, because it will look either at the resource claims or at the pod networks for the specific pods. So it will understand those, and then for the other ones it will query; it will know: okay, for this pod network I query these built-in resources, or this endpoint, which we can then make pluggable. If someone wants to create their own, they have a template for how to do it.
A
They can create their own, with more sophisticated stuff if they want to, so that capability is there later on. And then, lastly, what I'm missing is the CRI part: how would that fit in? This is where I'm kind of missing that part.
D
I can try to explain how we would do it. Imagine you are trying to request a network interface, like a VF, which is attached to a particular VLAN, right? And imagine that this VLAN port is not yet configured, so it's not enough for you to just create the VF here: you need to also do something to configure the switch, so that this VLAN will be available on this physical interface and then, obviously, on the VF.
D
So the whole idea of the DRA node part, which just talks with the kubelet, is actually to do all these preparation steps on the node, whatever is needed. So when the pod arrives... yes, you have the scheduling piece, but let's forget about it. What arrives at the node...
D
...is saying: okay, I got a pod; this pod is going to use this particular claim; do whatever is needed to prepare everything. And at that time your DRA driver part is responsible for doing all the needed preparation. So, for example, a setup call to the network infrastructure saying: okay, on this switch port, on this physical port, provision me, give me access to VLAN XYZ.
D
I understand, but let me finish. At this point, as soon as the resource is prepared, the driver part needs to return something; and in the case of devices, the way we are using it, we're just returning a unique ID for how to reference the local information about this device.
D
So this ID is passed down to the runtime, where it is expanded into what is injected at container time. The same thing applies to your part, like the example you gave previously with Multus: Multus will get a parameter from the runtime, this string ID, saying this pod is requesting this particular ID, and Multus can reach out to the local daemon.
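Continuing the earlier prepare sketch, this is roughly what the node-local consumer side could look like: the runtime forwards the opaque ID, and a Multus-like component resolves it against the local store instead of calling the API server. The ID format and paths are assumptions carried over from the prepare sketch.

```go
// Sketch of the node-local handoff D describes: resolve an opaque ID,
// produced at prepare time, to the configuration the driver stored on the
// node. Entirely hypothetical plumbing, mirroring the earlier sketch.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// resolve maps "example.com/net=<claimUID>" to the locally stored config.
func resolve(id string) ([]byte, error) {
	claimUID := id[strings.Index(id, "=")+1:]
	return os.ReadFile(filepath.Join(os.TempDir(), "example-dra", claimUID+".json"))
}

func main() {
	cfg, err := resolve("example.com/net=claim-1234")
	if err != nil {
		fmt.Println("no local config:", err) // nothing prepared on this node
		return
	}
	fmt.Println("CNI-side view of the prepared config:", string(cfg))
}
```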
A
Okay, so correct me if I'm wrong here. If I understand right: the scheduler runs, and the pod arrives at the node; let's start from that point. What I'm hearing is that there is an additional pluggability capability, which I can introduce per network if I want to, before the request arrives at the CNI.
D
It's like with storage: you have a CSI plugin which prepares mounts, or a node device mounted to a particular directory, and the container that is going to use it gets that directory, a volume which is already mounted there. The DRA concept is really similar: the kubelet says, I got a pod; this pod is going to use this resource; I don't know anything about the resource; hey, vendor code...
D
...please do whatever is needed to prepare this resource. The vendor plugin just replies: okay, I prepared it, or: no, I am not able to prepare it for some reason. And it gives back a ticket, and this ticket is passed down the stack. So this ticket can be used to fetch additional parameters.
A
Right, so basically... no, I think that then fits, right? If we look at it that way, it can be an optional enhancement where, before you even run into your CNI, your plugin can basically... if I want to, and maybe not, depending on how far it's integrated at this level, because I now have two places where I can do something on my own: I have the DRA plugin and then I have the CNI itself, right?
A
Yeah, I hear you; that's fair. You're right, you're right. So basically, okay, I hear you, but basically that gives us, that expands, our multi-network capability with an additional prepare step, if you want, before the pod is even started. I see that as a win. But a question to you, Alexander and Patrick: are you okay with what I said, that we will still leverage our APIs, and then DRA would adjust to and use those, like this piece here?
A
I like this piece here, what Pete kind of proposed. I don't like a user having to specify both of those things; I would say the user specifies pod networks and this section, and DRA understands that and then leverages it later on. In some ways this part is strictly just for DRA: if you want to do with DRA something completely unrelated to multi-networking, yes, go ahead and schedule those for your own sake.
D
Sorry, I might not be understanding it properly. Let me see it; I haven't read that document. I saw it, but I haven't read it, so I don't know about the examples. But the main thing about DRA, as an overall concept, was the scheduling part, and the pod spec modifications are really minimal, to give maximum flexibility to the vendors.
D
What I'm seeing on my screen, under this pod-networks part of the object: I don't know what kind of controller would be consuming it. So, correct me if I'm wrong, it's probably possible to write a controller which will be converting it to some predefined template of a claim, but how user-friendly that will be, I don't know. We need to see it; we need to play with these examples.
A
What would the pass-through look like... maybe let me share this. What I don't like with, I think, Pete's example is that its intent is not the way I want it, because my core element in this whole thing is that the pod network attachment is at the pod level, not the container level. That's the thing Patrick caught last week, and I caught up on it last week as well: you would have to change your APIs, your APIs' understanding, to have those at a pod level and not a container level, because we...
A
So this is what I'm getting at: if we are to change that, why can we not just directly use this type of API? That's where I'm trying to say that your DRA would understand the pod network attachment; that's what I'm getting at. And is that acceptable for you, that under the hood, somewhere there... unless you want to tell me that that was your plan anyway, to have pod-level resources?
C
So, to phrase that differently: DRA will go GA without implementing anything related specifically to pod network attachments. That would be an add-on that could leverage DRA, but it would be a separate KEP describing how that works and how the new API gets mapped to existing mechanisms in DRA.
D
So we can make what we claim in the pod spec semantics usable for saying that it's a pod-level object; we can modify the kubelet DRA APIs and what they can return, to say that this is a pod-level attachment; and we can say: yes, this is something related to a network, and we can do the modification to pass it down, so that it finally reaches the CNI protocol, right.
C
So if you have a controller that sees a pod network attachment with certain fields and then creates a resource claim, that may be okay. We would also need to teach certain components, like the scheduler and the kubelet, which currently iterate over the pod spec's resourceClaims slice. Well, containers are the other part; there are two parts, really. But don't mistake where we are: we are already at the pod level. Currently, DRA is a pod-level resource.
C
It just happens to only be used for containers. So you define, at the pod level, a resource reference, and then the containers reference that reference; it's a two-step referencing thing, so you can have some containers using the resource and others not. That's why we have this two-step reference level. I think that would be possible. Basically, you would have to iterate over two things now: the scheduler would need to know that a pod has a reference to a resource claim.
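A simplified sketch of the two-step referencing C describes, loosely mirroring the DRA alpha pod spec of that era: claims are declared once at pod level and individual containers opt in by name. These are local stand-in types, not the real corev1 structs.

```go
// Two-step referencing: pod-level claim declarations, container-level opt-in.
// Simplified mirror of the DRA alpha pod spec shape; not the upstream types.
package main

import "fmt"

type PodResourceClaim struct {
	Name              string // name containers use to reference it
	ResourceClaimName string // the ResourceClaim object in the namespace
}

type Container struct {
	Name   string
	Claims []string // which pod-level claims this container uses
}

type PodSpec struct {
	ResourceClaims []PodResourceClaim
	Containers     []Container
}

func main() {
	spec := PodSpec{
		ResourceClaims: []PodResourceClaim{
			{Name: "net-device", ResourceClaimName: "vf-for-my-pod"},
		},
		Containers: []Container{
			{Name: "app", Claims: []string{"net-device"}}, // uses the claim
			{Name: "sidecar"},                             // does not
		},
	}
	fmt.Printf("%+v\n", spec)
}
```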
A
I don't want to do that, right? So basically, I still want to have the PodNetwork API as a front-and-center representation of the network object, of the networking inside the Kubernetes cluster, and I would prefer to have this field, from the usability point of view, rather than having the DRA APIs here, using them, and hammering our stuff into the DRA APIs.
A
Let me think. So basically, this points to this object, and let me show you an example of the object it will reference, right? So here I have a network, and it can additionally have some custom parameters, right? It informs additional stuff, like IPAM and other things that we need to know. This one can point to who's doing and handling the specific pod network, and then DRA... yeah, that will be tricky, because you have a claim.
D
It looks like you're replicating what we have with the driver and the class and the claim part, but you just call it PodNetwork and PodNetworkAttachment.
B
The analogy I've got in my head is that with DRA at the moment you have the resource claims, which are kind of pod-level: they're an array at the pod level that says these are the resources this pod is going to have. And then in each container there's a field which says these are the resources that are actually attached to this particular container.
B
And that would be an optional field, so you don't have to have DRA if you're using something more simple; but it does mean that any given pod can specify some parameters for its network, or parameters for that specific pod, and it's all down to the implementation to deal with all of that.
A
This is something that Alexander mentioned with the kubelet, which I did like: I want to reference my network here, and then I have an option for my network to have this additional plugin do something for that network before it's passed to the CNI, right? I would like to have that capability, because that enhances what we are doing here; it gives us that, if DRA understands this piece, right, as is.
D
Right. What about... well, "compromise" might be the wrong word, but a middle ground from my side: you can have, at the pod spec level, your own fields about the network or whatever you call it, it doesn't really matter, but you can use a resource reference and point to a claim, a generic object. So the vendor-specific parameters will be part of a claim object, and everything...
D
You jumped a bit ahead of me. What I'm saying is: see what we have with the resources.claims fields, where you have a pointer, name equal to the resource, on the left side. So what I'm saying is: instead of "resources" you would have your pod networks; you would have all the other parameters; and afterwards you would have "claims", and you would point to the vendor-specific parameters, right.
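A sketch of the middle ground D proposes: a first-class podNetworks field for the always-present bits, with everything vendor-specific behind a claim reference, mirroring how containers' resources.claims point back at the pod-level declarations. Field names are hypothetical.

```go
// Hypothetical pod spec shape: generic network fields stay first-class,
// vendor parameters hide behind a claim reference, in the style of DRA's
// resources.claims indirection. Not a real API.
package main

import "fmt"

type PodNetworkEntry struct {
	Name      string // the PodNetwork to attach; always present
	Interface string // always-present generic field, e.g. desired ifname
	ClaimName string // optional: pod-level claim holding vendor parameters
}

type PodSpec struct {
	PodNetworks    []PodNetworkEntry
	ResourceClaims []string // pod-level claim declarations, as in DRA
}

func main() {
	spec := PodSpec{
		PodNetworks: []PodNetworkEntry{
			{Name: "blue-net", Interface: "net1", ClaimName: "blue-params"},
		},
		ResourceClaims: []string{"blue-params"},
	}
	fmt.Printf("%+v\n", spec)
}
```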
A
So basically, this is the same kind of story as with mounts in the pod, right? You have volumes; you define volumes.
C
It would be somewhat implicit, because for that particular type of allocation, the kubelet and the kubelet plugin, or something in the controller status, would make it so that this is pod-level and not something that gets passed into the container runtimes; unless perhaps the specific network implementation needs it, in which case it might be useful to have something that mounts a device into the container runtime or makes the network device available.
C
That depends. But this is where we basically need to have a discussion: if you agree on this rough idea for doing it, then we can talk about specific changes to DRA to make this possible, but...
C
...have you talked with people about whether there is a path from that idea to a specific implementation at all? Because we had big trouble getting any changes into the pod spec, and the resolution was to have very minimal changes in core v1 and do everything else in our resource API, where we have more flexibility to change things as we go along. But you seem to start from the other way around: you want to specify a full API first and then think about how to implement it.
A
So let me answer that. Because we are focusing on the implementation side of the network itself, which... I mean, either way. Sorry, let me say it differently: that part, the implementation of the API, is done by the vendor, so we don't have to solve it; we don't care about that. We really want to represent a network, because we cannot implement all the networks in Kubernetes. So that's why the API...
A
...you saw. And basically my API, as you both said above, Alexander and Patrick, our stuff is similar: we don't have the class; we have a provider, which is optional, and we have a parameters reference for the customer to provide their own stuff. Exactly the same thing.
A
We wanted to provide them with the means to expose the availability of a specific network, and now I don't have to figure that out, because you already provide that through DRA. I can just leverage you to create my own... sorry, I can create a core-provided controller that will provide the capability to specify, and give some APIs for the implementers, to indirectly say: okay, here is where the specific network is available, which can then be fed through DRA into the scheduler.
A
So you covered that for me; I'm already covered on that one. I was thinking of some completely different, elaborate things, where a node would have some fields to report whether the network is up or down. Now I don't have to do that: I have a controller, through DRA, and that's already covered. So you covered that part for me. My idea was slightly different, and right now my thinking, and probably that of most folks here, is to leverage you.
A
The only thing here is the APIs, how to deal with that one. And then the kubelet part is even better, because now you're enhancing what we are proposing: I can have the additional DRA plugin ability to do some additional preparatory stuff before I even do something on my network. So that's even better.
A
Our goal was to create and specify all the generic stuff that is not implementation-specific: the scheduler, the definition of the network in the pod. And going back to my KEP, let me share this stuff: if you look here right now, we have the resource definition itself and then the default pod network, because we have to have backward compatibility, attaching to the pod network. I think that would be most of what we want to cover in this phase.
A
So that would mean that in this phase we didn't want to touch, as I mentioned at the very beginning, we will not get into, DRA. Because keep in mind, as you yourself said, this is a big engagement, and we know that, and I'm aware of that. You see, just this discussion took us another hour; it has already taken us a month to discuss just this piece.
A
So this is where, for DRA, we need to discuss how we do that and how we marry those two things, right? I want to make sure that PodNetwork is still front and center. I don't want to hide it behind something, because we have to have PodNetwork, as I said, referenceable from all the other objects, so that part has to happen.
D
So in this particular example we have many of the things implicit: the resource claims are referenced in the pod, and then the driver, the vendor part, says that this claim is actually something network-related. My understanding of what you're describing is that you don't want implicit things: you want an explicit field in the pod spec which says "this is the network", and you have a few fields which must always be present, regardless of implementation. Sure.
D
Well, that's fine. It means you need to have an explicit declaration of the usage of things. But what I was trying to say is that for the rest we can do the same thing as what Patrick showed in this example: all the vendor-specific parameters can be hidden in a claim.
D
So your object can instead have a reference which says: the rest of the parameters are in this claim. And the vendor-specific part I don't really care about; it's the vendor part which will figure out what to do with that one.
C
Yeah, it depends: if you can take that information and map it directly to a resource claim. The provider field, for example, could become your resource class name; the parameters reference that you have could directly become the resource claim's parametersRef, yeah.
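That convention is mechanical enough to sketch: a controller maps the multi-network API onto DRA one-to-one, provider becoming the resource class name, and the network's parameters reference passing through as the claim's parametersRef. Both types below are simplified stand-ins, not either real API.

```go
// Convention C suggests: PodNetwork fields map one-to-one onto a DRA claim.
// Simplified stand-in types for both sides of the mapping.
package main

import "fmt"

type ObjectRef struct{ APIGroup, Kind, Name string }

type PodNetwork struct {
	Name          string
	Provider      string     // who implements this network
	ParametersRef *ObjectRef // vendor-specific network parameters
}

type ResourceClaimSpec struct {
	ResourceClassName string
	ParametersRef     *ObjectRef
}

// toClaimSpec is the whole controller logic under this convention.
func toClaimSpec(pn PodNetwork) ResourceClaimSpec {
	return ResourceClaimSpec{
		ResourceClassName: pn.Provider,      // convention: provider == class
		ParametersRef:     pn.ParametersRef, // passed through untouched
	}
}

func main() {
	pn := PodNetwork{
		Name:          "blue-net",
		Provider:      "dra.example.com",
		ParametersRef: &ObjectRef{APIGroup: "example.com", Kind: "BlueNetParams", Name: "defaults"},
	}
	fmt.Printf("%+v\n", toClaimSpec(pn))
}
```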
C
We can just make it a convention, yeah. It could be just the provider, or it could be the name of a class: basically, the provider defines the names. And a provider that uses DRA... well, if we use the DRA implementation of this API, they kind of would have to have a DRA driver, because that's the expectation.
A
So yeah, the driver can either be in the parameters, so we have to do custom stuff anyway. So basically... no, we don't want to pull the custom stuff into core code, so that cannot happen. So basically, a driver name, a provider, is one thing, and it's slightly different from what the driver name would be; but we could have a driver name that points to something, if you want to use it, because I assume the driver name is the plugin name that we want to use.
C
Both. Okay, so for the scheduler... well, actually, it's the other way around: the scheduler doesn't care. It just creates the resource claim, and then the drivers need to look at the resource class and the resource claim to figure out whether that is a resource claim they are responsible for.
C
My thing is, it depends. So in this example here, the resource claim name is one resource claim; all pods referencing it get the same instance, the same hardware instance.
B
It would be a different configuration in each of those resource claims for each pod, though. The particular example I can think of is when you're on a network and you have different parameters you want to provide to the network for different pods; so in that case you have a different resource claim: you have a big, fat network-interface claim and a small, low-capacity network-interface claim, potentially for the same network.
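A sketch of the distinction B draws: one shared named claim gives every pod the same instance, while a per-pod template stamps out a fresh claim, here with different bandwidth parameters, for each pod on the same network. The template helper and the parameter field are assumptions.

```go
// Shared claim vs. per-pod claims from a template, in the spirit of DRA's
// ResourceClaimTemplate. Hypothetical simplified types.
package main

import "fmt"

type Claim struct {
	Name      string
	Network   string
	Bandwidth string // stand-in for vendor parameters that differ per pod
}

// claimFromTemplate stamps out a fresh per-pod claim, the way a
// ResourceClaimTemplate does in DRA.
func claimFromTemplate(pod, network, bandwidth string) Claim {
	return Claim{Name: pod + "-" + network, Network: network, Bandwidth: bandwidth}
}

func main() {
	// Same network, different per-pod interface claims.
	fast := claimFromTemplate("pod-a", "blue-net", "25G") // big fat interface
	slow := claimFromTemplate("pod-b", "blue-net", "1G")  // low-capacity one
	fmt.Println(fast, slow)
}
```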
A
So I think this picture is... so basically the analogy here, Patrick, is slightly different: our pod network is the template.
A
The rest is something that I wanted to introduce, because that part is the same as what I was thinking. Maybe, because I didn't get to that, right now I wanted to keep it simple and only use the template, right? You see, my pod attachment only points to a network, which is the template, and then I don't have any per-attachment configuration specified here.
A
The only thing that I would think, at least for networking, is that a resource claim, this per-attachment thing, cannot be shared. I cannot attach the same network to the same interface, with the same parameters, to two pods; that should not be possible, let's say, because in our case it can contain things like, let's say, a MAC address or an IP address, if that's the case.
D
Yeah, we designed DRA to be flexible for the vendor, to expose whatever you want. So if you think about putting a MAC address in there: yeah, it might not work, or you might want to have a chain of parameters when you're requesting something in particular. But the idea was that you can create...
A
Yeah, all right. Hopefully we can come out of this; this is good, because it fits into the area that we are currently at, how we attach. Because I think what it boils down to right now is how we design the API so that it fits with DRA at the pod level, right? Because this is exactly where we are, and we need to discuss it and walk through it. Okay, so let me think about it.
A
Let me capture all the discussion from the recording, and then let's try to finalize this next week, hopefully; I said that last week too, but we'll see. All right, thanks everyone; hear from you next week.