From YouTube: Multi-Network Community Sync for 20230726
A: That's behind us, so we can probably just talk. After this I want to touch on something that we crossed out. I've given it a bit of thought, and I want to come back to something that I'm calling primary indication in Pod Network — I can bring it up. I think the last time we talked about it, it was Michael Cambria; I think he's not around, but I can bring up the topic and then we can have more discussion on it.
A: If you have any other topics, please add yourselves. One of the items I missed is the description in the KEP — I will definitely do that.
A: I haven't had time to really look into this. I assume it will come together with a refactoring of the whole KEP as we have it today — it's mostly just notes right now — but I need to start looking into it and make it more concrete, in terms of a real full-size KEP, so that we can post it as a PR.
A: From last week, I think we covered most of the APIs, and we started talking about what sort of changes we need to make below the kubelet, and whether there is anything we have to address for Pod Network. Right now we've discussed the APIs only at a high level — whatever is seen from kubectl, so basically what the API will look like. But is there anything we have to do on the connection between the kubelet and the CRI? Michael, go ahead.
B: There are just two items that caught my attention from the CRI API changes, where we'll probably hit some small bumps. The most obvious one is if you're going to be making changes to the pod's network namespace: the current approach to get your network namespace path is to have the CNI plugin write it to disk, or to interrogate the CNI cache.
B: That's one area that I've talked to the containerd and CRI maintainers about potentially adding in the past, because without it you have to take this roundabout approach. The second one, more related to the multiple IP addresses, is that a lot of people assume that RunPodSandbox is actually returning them.
B: So the issue — and I think we'll have to have this conversation, because I'm kind of unfamiliar with how we're going to go about it — is that when the pod status CRI method is called, it goes to containerd, says "hey, give me all of the IP addresses that you know about," and updates the Pod resource. My concern is — and I can update the appropriate notes here too —
B
So
we
have
a
record
is
what
prevents
you
know,
multi-network
from
adding
IP
addresses
to
this
slice
and
then
having
the
Pod
status.
Your
I
call
come
in
and
you
know
potentially
remove
those
and
then
the
obvious.
The
other
part
of
that
is
that
in
the
Pod
resource
status
or
pod
Network
status
or
whichever
resource
has
you
have
IP
address,
and
then
the
network
name
obvious.
That's
an
obvious
one
that
needs
to
be
addressed
is
the
Pod.
The
network
name
is
missing
from
the
Pod
IP
type
or,
however,
we
would
want
to
do
it.
B
So
there's
actually
three
I
said
two:
those
are
the
the
three
areas
that
would
probably
need
to
be
addressed.
B: PodIP is the first element out of that slice, and then all the other ones are indexed — I think it's everything from zero through the end.
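The convention described here — `status.podIP` is simply the first entry of `status.podIPs` — can be sketched with simplified stand-in types. These are not the actual k8s.io/api definitions, just an illustration of the shape under discussion:

```go
package main

import "fmt"

// PodIP mirrors the shape of the upstream status type: today it carries only
// an IP string, which is the gap being discussed — nothing ties the IP back
// to a pod network name.
type PodIP struct {
	IP string
}

// primaryIP returns what would be published as status.podIP: by convention,
// simply the first entry of status.podIPs.
func primaryIP(podIPs []PodIP) string {
	if len(podIPs) == 0 {
		return ""
	}
	return podIPs[0].IP
}

func main() {
	// A dual-stack pod: the secondary-family IP stays in the slice.
	ips := []PodIP{{IP: "10.0.0.5"}, {IP: "fd00::5"}}
	fmt.Println(primaryIP(ips))
}
```

Nothing in this shape prevents extra entries from being appended, which is exactly the ownership concern raised later in the call.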
A
So
so,
but
then
on
on
the
connection
between
cubelet
and
the
CRI,
is
there
any
additional
x
call
for
that
status
or
that's
all
comes
with
a
single
okay
and
CRI,
give
me
the
spot
and
then
it
Returns
the
status?
It's.
B: It's pulled as part of the run — there's a run function in the kubelet that will interrogate the pod, and I've got to get the timing on that; I have the line numbers buried away. I just need to look at that and then run my CNI tests to verify all of this. But there's nothing in the code — if you trace PodIP, the specific IP field on the PodIP type — there's nothing that's going to say: don't use these.
B: And you definitely can, even within the same IP family. The rule that a pod has one IP address per family is a Kubernetes construct, and there's nothing to actually prevent more from going into that slice.
B: Yeah, we would need to — well, can we go to the Pod resource document really quick?
B: The one that actually has — scroll down, right here. So podIP, and podIPs: obviously podIP is the first element in the slice, so you can infer that. And obviously the pod network here is the default. However, if you have data-plane-1 and data-plane-2, the approach falls apart, and that is an area that would need to be addressed.
B: The slice would have to have, say, a flag or something to say: hey, don't touch these — these aren't managed by containerd or the CNI, they are outside the scope of those two areas; they are managed by something else in Kubernetes, so don't touch them. That's one way. Or we have to actually start figuring out a way to add that pod network name somehow.
A: Question to you — is that possible? Because the question here would be: do I have to pass the network name to the CRI eventually? I assume I would, right? So this is a pod spec which has an interface name and then two pod networks, right? But then this is already a slice, right, so order is preserved.
A: We can match them one-to-one to whatever is here, right? The caveat being that, of course, right now the order is different. Let me just fix the order to match what we have done here: default, then there is a data plane, and then lastly the other one. So basically: do we need to pass the name of the network to the CRI?
A: There are multiple CRI implementations, and those will decide how they do it. Here we just want to define what the CRI APIs would be — maybe we would expand them with the additional fields I mentioned — and then what is expected from this area: how it's implemented, how it's leveraged. Maybe initially they will just ignore it, and that may have to be the case so that we aren't blocked by those CRIs. Or maybe not.
A
Maybe
we
have
to
enforce
something,
but
basically
how
they're
going
to
use
that's
up
to
the
Cris
and
then
the
and
down
below
to
the
cnis
right,
because
this
is
one
of
the
implementations
keep
in
mind.
Cni
is
one
of
the
implementations
that
we
would
support
right
and
it's
not
for
us
to
decide.
I
assume
you.
You
will
be
one
of
them
persons
that
will
eventually
decide
how
the
cni
is
going
to
leverage
the
Pod
Network.
That's
how
I'm
seeing
it
yeah.
B: — that we talked about. Right now, what I'm basing it on — with pretty much 95% confidence — is my read-through of all the code between the kubelet, the CRI API, and the containerd runtime.
B
So
now,
I
want
to
verify
that
will
this
happen,
because
if
it
does
happen,
that's
the
the
big
problem.
However,
like
there
are
the
other
data,
the
other
bullet
points
that
I
mentioned
that
are
more
tangible
and
that
we
can
easily
talk
about
now.
B
How
do
you,
how
would
you
go
about
in
the
current
state
to
change
the
IP
addresses
in
a
network
name
space?
Where
is
your
source
of
Truth
for
the
network,
namespace
change.
A: Exactly — that's the current requirement. Let's not go into the hot-plug ability of IPs; that's —
A
Down
the
roadmap
so
right
now
we
just
want
to
kind
of
get
the
okay
I
I
we
specified.
Let's
say:
let's
assume
we,
we
managed
to
get
the
Pod
spec
to
be
changed
and
having
that
right
and
what's
what's
of
it
right
now,
it
doesn't
doesn't
gives
us.
It
doesn't,
gives
us
so.
A
Exactly
nothing,
it
doesn't
doesn't
give
me
nothing
right,
because
I
think
we
were
here,
it
doesn't
give
us
as
much
because
what
of
it
right?
Oh
you
pass
those
okay,
some
implementation
will
be
able
to
do
something
about
it,
but
then
we
don't
have
any
any
feedback
about
it
right
from
from
anything.
So
the
implementation
will
be
very
kind
of
Hollow,
because
there
is
nothing
there
is
not
much
in
it
right.
So
basically,
Motors
will
have
to
in
case
of,
for
example,
multis.
A
It
will
have
to
read
those
and
then
would
have
to
automatically
it
by
itself
populate
this.
This
is
the
the
easiest
way
out
that
we
could
go,
for
example,
initially
at
least
for
this
phase.
We
say:
okay,
this
guy
doesn't
do
much,
and
then
we,
as
we
have
this
API
out
now,
let's
talk
with
the
CRI
and
discuss
on
how
how
the
apis
can
be
filled
by
the
CRI.
A
Is
that
one
of
the
approaches
we
want
to
take
here
or
do
we
want
to
push
push
that
right
away
and
and
change
their
cubelets
apis?
Because
yeah,
that's
that's
kind
of
I'm
I
I,
never
work
around!
That
particular
area
so
I
wonder
how.
A: So, Doug, I think this can be our initial approach, but long term — today this first IP is populated from the CRI, retrieved from the CRI. So basically, what would be expected eventually — however it's going to be implemented — is that we should be able to allow the CRI to return a full list of these, right?
A
So,
instead
of
you
doing
this
update
of
the
Pod,
like
you
mean
multis
or
any
of
the
other
cni
implementations,
instead
of
doing
them
doing
this,
CRI
should
be
able
automatically
to
do
that.
For
you
right,
you
return
all
that
stuff
through
the
cni.
The
way
it
is
done
today
for
just
the
first
interface
and
then
it's
piped
through
up
to
the
up
to
the
here
to
to
between
the
cubelet
and
CRI
I,
would
see
that
or
do
you
see
that
part.
A: Right. What are other folks' feelings about that? Maybe the first approach — that's something I think we discussed last time; I'm not sure I mentioned it, but last week when we talked about this, I think most of us felt strongly that the kubelet CRI API should be modified right away.
A
But
what
do
you
feel
today?
Maybe?
Is
there
some
some
change,
or
is
that
something
that
we
could
do,
maybe
in
a
phased
approach
where
we
would
expand
the
API
only
for
the
status,
so
we
will
give
the
capability
of
populating
them
next
step
will
be
to
ensure
the
Cris
can
now
leverage
that
eventually
and
populate
that
by
themselves.
A
Right
as
pushing
my
concern
here
is
that
the
equivalent
CRI
API
it's
it's
discussion
with
directly
with
some
of
the
other
implementations
and
it
might
be
difficult
or
I'm,
not
sure
how
that
kind
of
front
looks
like
I.
Never
did
that
Michael,
you
maybe
have
some
experience.
Is
it
hard
to
add
an
API
in
that
kind
of
connection
in
this
area,
for.
A
For
the
yeah
cry
between
between
cubelet
and
cry
yeah,
it.
B
So
if
it's
a
new
like
an
entire
new
message,
like
type
you
know,
new
service,
that's
probably
be
a
bigger
discussion
versus
like
appending
on
to
something
already
existing.
Assuming
it's
not
gonna,
be
a
breaking
change
because
between
like
V1
and
V2
cry,
they
were
just
like
obvious
splits
in
the
like
the
runtime
code,
also
in
kublet
to
handle
you
know
different
versions.
So
it's
a
very
depends
answer.
A: Okay. Any other thoughts on this?
C
We
just
sorry
this
may
be
just
showing
that
I'm
misunderstanding
some
of
what's
being
said,
it
sounds
like
there's
very
limited
changes.
We
need
to
see
our
interface,
which
is,
would
be
good
good.
The
one
thing
we
that
I'm
not
sure
if
we
said
what
the
plan
was
is
don't
we
need
to
extend
the
CRI
interface.
To
just
add
the
list
of
pod
networks,
Etc,
so.
A
This
is
what
changes
should
be
in
your
list.
It's
so
my
question
here.
Pete
is
maybe
first
of
something
what
doc
mentioned
as
well
right.
Do
we
want
to
initially
just
expand
the
apis
and
I
mean
by
kubernetes
apis
and
the
next
phase?
We
would
work
with
the
Cris
on
how
to
kind
of
Leverage
those
apis
and
how
how
Cris
can
fill
in
those
apis.
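To make the "expand the APIs first" option concrete: a per-network IP report could, hypothetically, look like the sketch below. Every name here (`NetworkIP`, its fields, `ipsForNetwork`) is an illustrative assumption only, not an agreed design or an existing Kubernetes type:

```go
package main

import "fmt"

// NetworkIP is a hypothetical status entry pairing an IP with the pod
// network and interface it came from — the fields noted earlier as missing
// from the PodIP type.
type NetworkIP struct {
	IP            string
	NetworkName   string
	InterfaceName string
}

// ipsForNetwork filters a reported slice down to one pod network.
func ipsForNetwork(all []NetworkIP, network string) []string {
	var out []string
	for _, e := range all {
		if e.NetworkName == network {
			out = append(out, e.IP)
		}
	}
	return out
}

func main() {
	reported := []NetworkIP{
		{IP: "10.0.0.5", NetworkName: "default", InterfaceName: "eth0"},
		{IP: "192.168.1.7", NetworkName: "data-plane-1", InterfaceName: "net1"},
	}
	fmt.Println(ipsForNetwork(reported, "data-plane-1"))
}
```

With a shape like this, the data-plane-1 versus data-plane-2 ambiguity discussed earlier disappears, because the network name travels with each IP.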
C
So
so,
if
I
wanted
to
set
up
a
demo
say
that
actually
used
multiple
pod
networks,
I'd
have
to
make
sure
that
my
runtime,
whatever
whatever
runtime
being
used
and
whatever
I,
don't
know,
cni
sanitation
or
network
implementation
was
going
to
call
into
the
kubernetes
API
server
to
get
the
right
information.
So
it
could
use
those
pod,
Network
Fields.
A
So
right
now
yeah,
and
not
only
so,
that's
that
would
have
that
will
have
to
happen
regardless,
because
we
are
not
going
to
go
to
the
Cris
right
away
and
tell
them
and
and
Implement
that
for
them
anyway.
So
that
would
have
to
happen
anyway.
So
what
we
are
discussing
here,
only
the
ability
for
the
Cris
to
retrieve
that
information
and
then
pass
it
back
when
and
so.
Basically,
first,
we
would
have
to
be
able
to
change
the
CRI
apis
to
they
will
get
this
information.
A
What
pod
networks
are
requested
and
then
they
will
be
having
ability
to
return
this
type
of
slice
right.
What
I
P2
belongs
to
which
I
through
which
network
right-
and
this
would
this
is
if
in
in
terms
of
the
multi-network
scope
as
as
your
community.
This
is
where
we
would
I
would
say
where
the
back
stops
because
later
on
is
Cris
and
their
implementations
and
how
they
do
that
right
and
then
how
they
pass
that
cni,
that's
all
up
to
them.
A
So
this
is
where
where
I
would
say,
this
is
where
the
back
stops,
because
later
on
it
come
come.
The
Cris
which
have
to
implement
those
and
then
it's
a
matter
of
us
whether
we
want
to
go
to
those
communities
to
specific,
like
container
the
cryo
or
or
I'm,
not
sure
what
other
they
are
and
let
drive
them
to
implement
those
apis.
A: I am trying to weigh the scoping here as well — if I can cut something, it's easier, because I'm finding this to be a large KEP anyway. That's why, if we can cut something, can we agree that it's okay to cut it? That's what I'm trying to do. I think Tomo, you were next.
F: Yep, this is just a comment. I agree that, as the ideal design approach, the CRI should be the target — it should support multiple networks in an appropriate way — and at that time the SR-IOV-related stuff should also be addressed. I mean that currently the CRI does not add the device-related information, but maybe it should, as the ideal design approach, I suppose.
A: Okay, so that's a good question — good point, Tomo — and we can get back to it in a second. What you're bringing up is: should we change anything? I would want to ask it slightly differently from what I think you said, Tomo.
A
Do
we
want
to
change
how
the
for
example,
SRO
vbfs
are
passed
into
the
pot
I
think
that
is
the
question
here,
because
that's
what
I
think
what
you're
expecting
should
we
then
pass
because
right
now
we
are
passing
just
the
name
of
the
of
the
of
the
Pod
Network
right.
What
if
the
Pod
Network
comes
with
a
hardware
resource
that
we
have
to
pass
as
a
resource
device
right?
What's
then,
how.
A: — do we change it? Today we all leverage the per-container resource requests to force the device plugin to pass all of this into our pod, right? So should we change that, or should we do something about it? Is that right, Tomo — is that what you wanted to bring up?
F: Then another — yeah, your question is also about that. I mean that the current Multi-Network KEP does not mention how this will be related — how the Pod Network relates to the SR-IOV devices — so maybe that should be addressed, because, as far as I remember, the original design requirements contain the SR-IOV-related part. Or how do we update it? I think so.
A
Maybe
this
should
be
yeah,
that's
very
important.
Let
me
we
can
get
back
Michael.
B: Thanks. The more I was thinking about this — from the kubelet, the CRI API, containerd or whichever container runtime and its current implementation of the CNI, down to the CNI specification itself, going all the way to the ground level — if we're proposing this change, and it's obviously a vertical approach for Pod Network, this may require changes to the CNI specification to support the runtime, and, more obviously, there are a lot of fields that would be cheap to add.
B: We would need to figure out how this works. This pod network doesn't exist in the current state of the container runtime, and it obviously doesn't exist in the CRI, so I just want to raise that we would need to take a bigger look. Returning multiple IP addresses is obviously supported; the interface name and the pod network name aren't currently returned. And that's where I was asking earlier: who is going to be responsible — who's going to be the source of truth — for these fields?
B: Is it the container runtime, or is it the kubelet? Because if we don't decide that, then we're effectively going to have a split-brain situation, where one side says "I own the IP addresses" and the other side says "no, I own the IP addresses, I own the networks." So that's something I just want to raise — and I touch both containerd and the CNI.
B
So
I
certainly
can
fight
said
battles
here,
but
I
just
want
to
raise
that
we
probably
need
to
have
some
conversations
at
the
lower
levels
as
well,
because
things
probably
aren't
going
to
get
you
know.
Pushed
from
the
kubernetes
on
down
is
probably
needs
to
be
an
effort
of
collaboration
across
all
these
vertical
slice
components,
and
that's
where
my
concern
is
kind
of
raising
and
I
just
want
to
make
you
aware.
A
No,
that's
and
that's
fair,
and
when
you
mean
the
split
split,
brain
I
would
can
I
say
it's
between
the
CRI
versus
the
kind
of
implementer
of
the
Pod
Network.
Can
we
is
that?
That's
what
you
mean
well.
B: So right now, obviously, the flow is: the kubelet uses the CRI API to communicate with the runtime — "give me your IP addresses" — that's sent back over the CRI API and ends up in etcd. And that's where I'm getting confused about the overall flow. We're stating that with RunPodSandbox we are going to be pushing what down? That "what" is very important, because obviously in the current state it's going to be returning IP addresses.
B: So are we proposing that during RunPodSandbox — which is responsible for creating the network namespace and the pause container, executing the CNI, and returning the IP address and the pod sandbox ID — and then the pod status call comes in and returns: hey, these are the IP addresses that I know about.
A: — that it knows about for this pod. Which completely works — and then we're missing the pod interface name, like eth0 or net1, whatever you want to call it, and then the network name. And now it's like —
B: Currently you could probably get a little squirrely with the pod network and the CNI type — that sounds really dirty — or there's another field that could be added and parsed. That's where I'm — and this is where I want to have the visuals, so I appreciate you building them right now.
A: I wanted to do this so that we know. So right now — let me just draw it quickly. Right now, what we don't have: the CRI doesn't get any pod specs; that doesn't exist. Let me draw this as non-existent — that dashed line is non-existent. Then we have this, which does exist: CRI to CNI, that's existing. And we know that right now we have something very limited. Let's say I will just call it like —
A: — this is just a slice of IPs, and what I mean by that: no pod network alignment. Right now we have just this, and what we want to achieve is the slice of IPs with pod network alignment, right? That is where we want to get to. Yes.
A: Eventually — let me just make sure this is where we want to get to. This is the future we want to achieve. So the first piece is: we should be able to pass the pod spec — basically, whatever list of pod networks we desire — to the CRI. That's the future, what we want to achieve, right? Then the CRI does whatever it has to do with that, and then it should return us this.
A
This
is
the
future
right,
so
basically
to
return
as
a
slice
of
ips
for
pod
network
alignment,
which
then
we
assigned
to
pod
Network
status.
So
that's
I
think
this
is
where-
and
this
is
where
what
I
think
we
talk
with
kind
of
as
a
faced
approach.
What
we
would
want
to
do-
maybe
is
this
instead
of
CRI
doing
this.
A
What
if
cni,
does
that
right?
Because
cni
has
the
power
right
and
then
basically
this
one
is.
This
is
a
very
far
away
future
right,
but
this
is
the
way
the
Dual
do.
A: — the split duality is, right? Because it's either done by the CRI returning it to the kubelet, or it's done by the CNI writing directly to the pod status. Is that the kind of split you have in mind?
B: I'm trying to replay what you said, because — obviously it's the RunPodSandbox function, as part of the CRI, that calls the CNI. Were you proposing adding the specific fields to the CNI specification for that return?
A
Sorry,
yeah,
eventually,
no,
no,
eventually,
of
course
that
part,
but
this
is
where
it's
a
connection
between
the
CRI
and
so
so.
What
you're
referring
to
is
this
I'm
going
to
show
you
some
of
my
arrow?
So
so
this
is
gonna
the
disconnection
and
this
is
out
of
ours
kind
of
control,
because
it's
on
a
connection
between
the
C
currently
as
it
stands
today,
it's
a
connection
between
the
CRI
and
cni
right
and.
A
And
the
question
here
is
when
we
introduce
in
in
this
in
this
connection
right,
if
we
in
this
connection,
we
introduce
ability
for
CRI
to
return
for
the
CRI
to
return,
multiple
IPS
and
then
cubelet
will
save
that
here
right
going
here,
that's
what
we
eventually
want
to
do
today.
What
I'm
proposing
is
the
red
arrow?
Let's
say
this:
this
guy
is
gone
and
basically
a
pod
spec
is
not
passed
to.
It
is
I.
Think
you,
let's
seize
that,
but
basically
Cube
this
is
as
well.
Let
me
just
do
that.
A
So
red
is
very
far
away
future.
That's
what
we
want
to
choose,
so
this
is
Parts
back.
What
I
want
to
do?
No,
not
this,
but
basically
what
today
we
are
I'm
thinking
is
going
to
happen.
Is
this
support?
Spec
is
pulled
by
the
cni
right,
so
what's
going
to
happen
is
today
is
if
we
were
to
not
do
anything
around
this
guy
is
cni
pool.
A: — the pod spec, gets all the pod networks, and knows whatever it has to connect. That's done today by Multus as well: we just pull the pod and grab the annotations from the pod spec for what you want to connect. So the CNI pulls that, does its own stuff, and then Multus today also renders the slice of IPs with their networks — but it's saved in the form of an annotation back on the pod, in the metadata. So basically, Multus already does this.
A
This
is
the
matter
of
the.
The
only
thing
here
is
right
now
is:
do
we
want
to
expank
Port
status
and
do
it
directly
Super
Starter,
rather
than
doing
through
annotations,
because
right
now,
this
whole
this
this
this
this
cycle?
This
path
is
done
through
annotations,
so
they
only
and
we
want
to
change
and
move
away
from
annotations
so
that
we
can
grab
the
Pod
spec
and
then
return
to
Port
status,
right
and
move
away
from
annotations.
A
And
the
question
here
is
then,
whether
I
use
CRI
or
the
C9
to
read
the
Pod
spec.
It's
up
to
the
implementation.
The
thing
here
is
I'm
thinking.
Do
you
want
to
enable
CRI
right
away
for
that
at
least
aspect
right?
That's
one
part,
and
then
the
other
part
is
The
Return,
part
right,
which
is
more
difficult,
because
now
we
do
need
a
CRI
to
deliver
right
and
I.
A
Don't
think
we
can
get
that
for
November
release,
I,
don't
think
we
can
achieve
that
to
let
a
CRI
return,
the
list
of
ips
and
then
through
Cube.
Let's
save
that
save
that
list.
So
basically
this
part
right.
So
basically
this
thing
let
CRI
return
the
list
of
ips
with
us,
with
alignment
to
the
Pod
Network
return
to
cubelet
and
then
Cube.
Let's
save
test
spot
status
right
I,
don't
think
we
can
achieve
that
in
the
timeline
that
we
want
to
do
this.
A
So
that's
why
I
would
want
to
maybe
just
keep
it
for
now
for
the
cni-
and
this
is
where
and
correctly
Michael
if
I'm
wrong.
This
is
where
the
Dual
Sprout,
the
split
Duality
kind
of
the
true
sources
of
Truth,
can
come
in
because,
if
we
currently
say
okay,
cni
is
will
be
responsible
for
updating
the
Pod
status
right.
What?
If
we
eventually
get
to
this
stage
right
where
CRI
returns?
This
is
that
what
your
concern
is,
or
you
see,
because
I'm
not
sure,
I,
I
I
grasped,
which
two
components
can
save
the
port
status.
B
So
the
Pod
status
and
we'll
just
talk
about
how
that's
called
pod
status
of
hard
Network
status
is
called
via
the
kublet
that
traverses
the
cry
API
that
goes
to
the
container
runtime.
At
that
point,
the
cni
is
no
longer
involved.
It's
the
runtime
has
its
own
kind
of
data
store,
and
it's
utilizing
that
and
that
data
store
with
all
the
IP
addresses
is
populate
it
by
the
execution
of
the
cni
in
the
Run
pod
status,
request,
method.
A: I'll assume the same for CRI-O. Okay — so that's what you were referring to. I wasn't aware of that one, so that's where my gap was; I completely wasn't aware of that. So basically — and that's fine, right: if the CRI holds all that info, it just has to update the status of the pod with that information, and the source for that information is the CRI, right?
A
Eventually,
if
it's
going
to
support
it,
that's
that's
the
the
kind
of
the
the
the
gist
that
we
eventually
want
to
get
to
that
the
CRI
can
finally
update
the
Pod
status.
The
question
here
is:
can
we
do
that
in
case?
A
If
we
were
to
say
initially,
let's
see
and
I,
do
that
how
we
gonna
transition
from
cni
doing
this
update
to
the
stage
where
CRI
returns
it
and
says
by
cubelet
I
assume
this
can
be
done
easily
by
cni
checking
is
the
Pod
pod
IPS
list
configured?
Is
there
all
the
IPR,
all
the
Pod
networks
there
or
not
right,
or
maybe
it's
not
empty
right
or
that
might
be
tricky
so
yeah?
That
might
be
something
because
then
we
want
to
make
sure
they
see
right.
A
So
your
concern
is
the
Etsy
versus,
what's
in
the
port
status,
I
assume
that's
what
you're
referring
to
right.
What's
in
the
data
store
of
the
CRI
versus,
what's
in
the
Pod
status,
I
think
that
has
to
be
in
two
places
and
we
just
need
to
make
sure
CRI
updates.
Whatever
is
in
data,
Store
updates
the
port
status
correctly,
yeah.
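The transition check sketched in this exchange — "is every requested pod network already reported in the status?" — could look roughly like this. Purely illustrative; neither the function nor the data shapes exist in any current API:

```go
package main

import "fmt"

// statusComplete reports whether every requested pod network already has at
// least one IP recorded — e.g. so a CNI-side updater could skip pods whose
// status was already filled in by the CRI.
func statusComplete(requested []string, reported map[string][]string) bool {
	for _, network := range requested {
		if len(reported[network]) == 0 {
			return false
		}
	}
	return true
}

func main() {
	reported := map[string][]string{"default": {"10.0.0.5"}}
	// data-plane-1 has no IP yet, so the status is incomplete.
	fmt.Println(statusComplete([]string{"default", "data-plane-1"}, reported))
}
```

As noted in the discussion, an emptiness check alone would not be enough: the two writers still need an agreed rule for who fills in what.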
B: Right — we need to say: these IP addresses will be sourced from the CNI, for the scope of this discussion. We'd effectively be passing in: "hey, with this pod network, you need to come back with an interface name and an IP address." That gets stored in the runtime's database, and pod status would then, going through a new CRI call with the appropriate fields, say —
B
Give
me
what
you
know
about
this
pod
Network
or
what
just
give
me
your
data
with
like
right,
a
type
that
has
IP
address,
pod
Network
and
its
interface
name
right.
So
those
yeah
and
the
changes
there.
You
know
the
cry
changes
to
the
smallest
ones,
and
then
you
know
updating
the
spec
to
kind
of
make
the
cni
aware
of
the
network
name
here.
Is
you
know
it
all
is
doable?
It
just
requires
someone
to
we.
We
need
to
reach
consensus
across
the
runtime
and
the
cni
and
I.
B
Don't
think
that's
a
huge
hurdle.
It's
just
in
just
something
to
be
aware
of
and
I'm
trying
to
like
piece
this
together
in
my
head
and
I
I.
Don't
want
to
bring
up
other
conversations
that
are
happening
in
parallel
to
potentially
distract
this
one.
A
No,
this
is
fine,
so
that's
there
is
another
aspect
that
Michael
I
think
Michael
Cambria
is
not
here.
It
stuck
with
me
and
I
like
that
idea,
where
one
of
the
simplest
implementation
of
this
whole
thing
could
be
based
on
pod,
Network
name.
Only
without
so
I
don't
have
any
additional
parameters.
I
don't
have
to
have
anything.
I
have
I,
have
my
notes
deployed
with
multiple
conflicts
named
ABCD
and
then
I
just
create
pod
networks
named
ABCD,
yup,
very
straightforward,
the
simplest.
A
What
you
can
have
right
and
then
basically,
whichever
I
pod
Network
I
am
I'm
having
in
my
pod,
spec
I'm
gonna
pick
that
conflicts
and
and
use
that
right.
That's
the
most
simple
that
where
this
doesn't
exist,
where
this
connection
between
a
cni
doesn't
even
care
about
sportsback,
it
doesn't
care
about
its
own
stuff.
So
basically,
this
is
the
the
bare
minimum
where
pod
spec
cubelet
CRI
receives
the
list
of
the
Pod
networks,
and
she
write
translates
that
to
the
names
of
the
conflict.
A
It
is
going
to
use
by
the
cni
it
doing
the
stuff
and
just
calling
that
multiple
time,
depending
on
how
many
pod
networks
do
I
have
right.
That's
that's
the
more
bare
bone,
very
basic
kind
of
implementation.
We
could
have
here
right
without
it
with
this
with
this
approach.
Right
and
the
question
is:
is
that
the
one
we
want
to
have?
And
that's
not
the
question
to
me.
That's
the
question
to
goes
a
team
responsible
for
this
connection
right
between
CRI,
CRI
and
cni.
It's
that's
acceptable.
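The "name-only" minimal model — pod network names matching on-disk CNI conflist names one-to-one — amounts to little more than a lookup. A hypothetical sketch; none of these function or parameter names come from any real component:

```go
package main

import "fmt"

// selectConflists maps requested pod network names straight onto the CNI
// conflist names available on the node; in the bare-bones model the runtime
// would then invoke the CNI once per returned name.
func selectConflists(requested []string, onNode map[string]bool) ([]string, error) {
	var out []string
	for _, name := range requested {
		if !onNode[name] {
			return nil, fmt.Errorf("no CNI conflist named %q on this node", name)
		}
		out = append(out, name)
	}
	return out, nil
}

func main() {
	available := map[string]bool{"default": true, "data-plane-1": true}
	names, err := selectConflists([]string{"default", "data-plane-1"}, available)
	fmt.Println(names, err)
}
```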
A: Just that simple, right? And I assume it might be. Then it's a matter of the kubelet exposing this whole thing: the kubelet has to pass the names of the pod networks from the spec, and then has to accept the pod status back — as you said — when I grab stuff from it. So the CRI has to track which pod network a specific connection belongs to, and that's it. Basically, that's what I'm thinking here with this one.
A
The
question
here
is
the:
how
successful
we
could
be
with
with
pushing
that
kind
of
new
apis
in
the
cubelet,
so
that
Cris
can
adapt
it
I
assume
we
I
assume
what
I
assume
what
we
could
achieve
is
if
we
could
achieve
an
expansion
of
the
cubelet
API
without
backboard
compatibility,
breakages
right.
If
we
were
to
be
able
to
do
that,
just
expand
that
and
then
having
that
API
is
there,
at
least
on
the
this
phase
right,
even
even
I.
A
Think
with
this
we
could
achieve
that
in
this
space,
where
we
would
say,
Okay
cubelet
can
pass
you
those
spot
specs
and
can
receive
those
additional
pod,
Network
aligned,
IPS,
but
then,
whether
it's
here
I
can
do
that.
That's
separate
right,
that's
that's
something
that
can
be
independently
done
by
the
CRI
implementers
right.
When
this
whole
thing
kind
of
catches
up.
B: Yeah — you should be able to; go to participants and right-click —
B
Let
me
share
so
I
hate
how
it
does
this,
so
what
Michael
Cambria
was
stating-
and
this
is
kind
of
you
know
what
he
was
saying
about-
iPod
man
is
that
in
the
current
specification
we
do
have
like
the
name
not
to
be
confused
with
type
type
will
be
the
executable,
but
name
could
be
your
pod
Network.
You
know
name
here
and
that
can
return
IP
addresses,
obviously
and
more,
and
that's
where
we
got
to
iron
out
the
limitations
and
the
current
state
of
container
D.
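For reference, the distinction being drawn is between the conflist-level `name` (which could double as a pod network name) and each plugin's `type` (the executable to run). A minimal parse of a made-up conflist, keeping only those two fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// conflist models just the two fields under discussion from a CNI network
// configuration list: the list's name, and each plugin's type (executable).
type conflist struct {
	Name    string `json:"name"`
	Plugins []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func parseConflist(data []byte) (conflist, error) {
	var c conflist
	err := json.Unmarshal(data, &c)
	return c, err
}

func main() {
	// Hypothetical on-disk configuration; only "name" and "type" matter here.
	raw := []byte(`{"cniVersion": "1.0.0", "name": "data-plane-1",
		"plugins": [{"type": "bridge"}, {"type": "portmap"}]}`)
	c, err := parseConflist(raw)
	fmt.Println(c.Name, len(c.Plugins), err)
}
```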
B: I mean, containerd can actually execute multiple CNI conflists — and that doesn't strictly follow the specification; you're supposed to have one conflist. However, in the realm of Linux, it's actually already executing two of them with every single use of the CRI API — it's using two conflists under the covers: it executes loopback as its own independent CNI network configuration, and then whatever you specify — it picks the lowest one lexically, whatever.
B: It's actually set up to do this, and then there are code changes, say in RunPodSandbox, where it's explicitly calling out: give me the IP addresses for eth0, make that the pod IP, and then return everything else as the additional pod IPs.
B
So
it
is
very
much
possible
to
do
this
from
the
container
D
perspective,
probably
I,
think
I've
read
through
ocic
and
I.
It
is
also
possible
in
cryo
and
then
we're
just
adding
and
I,
don't
want
to
say
just
adding.
We
are
adding
the
two
fields
to
the
cry:
API
and
slightly
modifying
the
cry,
because
at
this
point
we
have
the
network
name
and
then
we
also
have
the
IP
addresses
and
we
also
have
the
interface
names.
B: The network name — no, it is not saved; we don't have that concept. In containerd there's a library called go-cni, which wraps libcni, and that executes the CNI plugins itself. So we do have all of the information from go-cni on up; we just don't do anything with it other than filtering — we use eth0 as the primary.
A: Got it — so this is another thing, right. Okay, another thought came to my mind, but we can get to it in a second. From this discussion, what I'm thinking right now is that we should work on how we can modify the kubelet API. Let's try to define that kubelet API change so that it is backward compatible — and then we definitely, in this very space, will —
A
We
should
not
expect
this
being
ready,
so
basically
Peter
to
what
you're
saying
can
I
right
away?
Have
this
as
a
demo,
you
will
not,
probably
because
those
Cris
will
have
to
adapt
to
that
new
to
that
new
apis
right.
So
in
this
phase,
what
I
would
think
is
we
will
expose
those
cubelet
apis
so
that
Cris
can
leverage
them,
but
I
wouldn't
expect
the
Cris
catching
up
right
away
right
unless
they
could.
That
will
be
a
different
thing.
A: If not, then thanks everyone — hear from you next week. I will try myself to look into the kubelet APIs as well — what we can change there, to propose what sort of changes — and I recommend everyone, if you can, look at what sort of changes we could propose so that they are backward compatible. That's the main aspect of this. Okay, thanks everyone, see you next week.