From YouTube: Multi-Network Community Sync for 20230913
A: All right, welcome everyone to the Multi-Network Community Sync. Today is September 13th, and today I just want to go over the changes I made to the phase one KEP doc. I was thinking about the best way to do this, and I would like to go through the whole doc today. I was also thinking: if you have any questions about the KEP, just interrupt me.
A: I just want to go over what I did in the doc, but I would still appreciate it if everyone could read through it offline, word by word, and if any wording is wrong or anything, please comment and correct it. This is a joint effort; it's not just my doc, so please treat it as yours and own it on your side as well.
A: All right, let me just share the doc. The link to the doc, if you want to follow along with me, is at the top of the community minutes; it's the first link at the beginning.
A: And if you want to move along with me, even without interrupting me, you can just add your own comments as I go; you can do it like that. All right.
So basically, I'm going to go through some of the changes. Terminology and goals: I think the goals should be unified with what we already have. I haven't done that yet, so that probably still has to be unified with what we have, because I'm going to add this doc to the existing requirements KEP.
A: So this has to be unified with that one. The terminology didn't change. In the background I added a kind of explanation about what a network is, and the purpose of it is that we don't want to define a network. We just want to ensure that a network has some representation in Kubernetes; what it actually means is up to the implementers, who will decide what defines a network. So we don't want to put any definition on it.
A: It is simply defined by whoever implements a pod network, and that's how they then treat a network. So that just gets that out of the way. The next part is the CNI usage. I mentioned the two models, standalone and agent-based; I'm not sure whether those are the correct names, so someone can maybe tell me, and perhaps there are already established definitions. But basically, the difference between the two is that in the agent-based model the CNI communicates with the API server.
A: So that's the main purpose of that part. Then, following the background section, there is the solution overview, where I'm trying to define in short what we're doing here: introducing the two new objects, adding the controller, adding validation for those, and some modifications for the default network.
A: So basically, things like: a pod can reference multiple pod networks and can reference multiple pod network attachments; a pod network attachment can point only to one pod network; and so forth. And you can have only one of either pod network or pod network attachment for a specific pod network. I tried to describe it like this over here, so please read through it offline and see whether I missed something, or whether something is not understandable or too complicated, and help me word it better or present it in a better way.
A: So there is that part, and then we get to the design. Let me show it, because I usually like going by the outline on the left. Going by the outline, first I'm introducing the new resources that we talked about.
A: I don't think there's much change here: the PodNetwork, as we defined it when we talked, has the IPAM objects, a parameters reference, and an optional provider.
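For orientation, the following is a minimal Go sketch of roughly what such a PodNetwork type could look like, reconstructed only from the fields mentioned in this meeting; every name and shape here is an assumption, not the final KEP API.

```go
// Hypothetical sketch of the PodNetwork object under discussion.
package sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// IPAM declares per-family IP assignment; a Mode of "Kubernetes" would mean
// the built-in node IPAM, which this phase does not implement.
type IPAM struct {
	Mode  string   `json:"mode,omitempty"`
	CIDRs []string `json:"cidrs,omitempty"`
}

// ParametersRef points at an implementation-specific (custom) object.
type ParametersRef struct {
	Group     string `json:"group"`
	Kind      string `json:"kind"`
	Name      string `json:"name"`
	Namespace string `json:"namespace,omitempty"`
}

type PodNetworkSpec struct {
	Provider      string         `json:"provider,omitempty"` // optional
	IPAM4         *IPAM          `json:"ipam4,omitempty"`    // also states the per-family IP requirement
	IPAM6         *IPAM          `json:"ipam6,omitempty"`
	ParametersRef *ParametersRef `json:"parametersRef,omitempty"`
}

type PodNetworkStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"` // Ready, ParamsReady, InUse
}

type PodNetwork struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              PodNetworkSpec   `json:"spec,omitempty"`
	Status            PodNetworkStatus `json:"status,omitempty"`
}
```

The later sketches in these notes build on this one and share its assumed package and imports.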
A: Then there are some examples of how those would look, with some conditions and some parameters. Then the conditions themselves, because we define conditions: the condition status has Ready, ParamsReady, and InUse, with explanations. That hasn't changed; that's exactly how I had initially written it, so that's the state.
Then the provider field: what is the use case of it? Basically, it's there for the implementers; it is defined and presented for the implementers to use.
A: The IPAM fields are explained, to identify what they are used for, and there is a mention of the Kubernetes type that will not be introduced in this phase. So we will introduce the APIs, but we will not implement a built-in node IPAM in this phase.
A: That will have to be a separate KEP, which has to come after the ClusterCIDR stuff is resolved. Then there's the main thing from our discussion about one IP: a pod network has to be represented by one IP per family in the pod, and this is to retain backward compatibility; we don't want to change that one. The next one is the in-use indicator, so how does that work?
A: This is going to be set as soon as there is at least one pod, or a pod network attachment, pointing to a pod network. So when that happens, we will mark the PodNetwork object as in use. And there is an additional consideration: I'm not sure who, but someone had a comment on how we will decide which pods we pick up for this, and basically we will filter out succeeded pods.
A: Succeeded pods are basically done: ones that run a job, finish, and are not running anymore, so the containers are deleted, along with the network namespaces. So I am thinking of filtering out pods in the Succeeded state, so they don't hold a network just because there is a job that just finished; we filter those out.
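As a rough, self-contained illustration of that filtering rule (stand-in types for brevity, not the real API objects):

```go
// podNetworkInUse mirrors the rule above: a PodNetwork counts as in use when
// at least one non-Succeeded pod, or any PodNetworkAttachment, references it.
type podRef struct {
	Phase    string   // "Succeeded" once a job pod has finished
	Networks []string // names of PodNetworks the pod references
}

func podNetworkInUse(pods []podRef, attachmentTargets []string, name string) bool {
	for _, p := range pods {
		if p.Phase == "Succeeded" {
			continue // finished jobs must not hold the network
		}
		for _, n := range p.Networks {
			if n == name {
				return true
			}
		}
	}
	for _, target := range attachmentTargets { // attachment -> PodNetwork names
		if target == name {
			return true
		}
	}
	return false
}
```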
A: Now, about mutability of the object: basically, the whole thing is immutable; you have to recreate it from scratch.
A: If anything, the only small caveat is around the IPAM fields: if they are not specified, you can update them, but once they are set, you cannot change them anymore. This is basically the same behavior as the podCIDR field on the Node object, which some of you might be familiar with; the same type of behavior.
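A minimal sketch of that set-once rule, assuming the IPAM type from the earlier sketch (errors and reflect imports elided):

```go
// validateIPAMUpdate enforces the set-once semantics described above: an
// unset IPAM field may be filled in later, but once set it is immutable,
// mirroring how Node.Spec.PodCIDR behaves.
func validateIPAMUpdate(oldIPAM, newIPAM *IPAM) error {
	if oldIPAM == nil {
		return nil // previously unset: this update may set it
	}
	if newIPAM == nil || !reflect.DeepEqual(oldIPAM, newIPAM) {
		return errors.New("ipam is immutable once set")
	}
	return nil
}
```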
A: And basically, of course, if there's any additional admission control for the object, implementations can add more, for example checking whether the referenced parameters object is correct or not, that kind of thing; but that's on the implementation side. Lifecycle: I'm not sure whether this is laid out correctly, but in the lifecycle section I try to describe how the object is going to behave. I'm not sure whether that's understandable, but please read through it. In short: you can create the object, it starts without any conditions, and basically nothing can attach to it.
A
The
object
is
not
ready
because
the
validation
failed
or
there
is
maybe
there
is
a
parameters
reference
this
set,
but
the
controller
on
the
other.
On
the
implementation
side
didn't
said,
the
params
params,
ready
condition,
and
basically
this
is
not
ready.
Pods
cannot
attach
in
this
in
this
state
right
if
it's
not
ready,
already
ready,
basically,
everything
all
validation,
passed
and
now
ports
can
attach
to
this
pod
Network,
and
then
there
are
some
pod
running.
A: Then, when it is being deleted, we will set the PodNetwork to NotReady, so no new pods will be able to attach to such a pod network, because it's in the deleting process; no new pods can be attached to it. And then lastly, it is actually deleted once the InUse condition is not present or false, and then basically we can just delete the object. That's more or less the lifecycle of the object, how I would envision it.
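To recap that lifecycle in code form, a small illustrative sketch (the condition spellings are assumptions):

```go
// Condition names from the lifecycle walked through above.
const (
	condReady       = "Ready"       // all validations passed; pods can attach
	condParamsReady = "ParamsReady" // set by the implementation's controller
	condInUse       = "InUse"       // some pod or attachment references it
)

// canFinalizeDeletion mirrors the final lifecycle step: the object is only
// actually removed once the InUse condition is absent or false.
func canFinalizeDeletion(conditions map[string]bool) bool {
	inUse, present := conditions[condInUse]
	return !present || !inUse
}
```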
A: Please read through it, and again, help me if any part of how that should work is not fully clear. Any questions? No? Okay.
A: Moving on: the PodNetworkAttachment. The definition here is as we said, but I am describing it as pod-level parameters for the pod, with basically just two fields, the pod network name and the parameters reference, plus the conditions. I am copy-pasting whatever I just described for PodNetwork over to PodNetworkAttachment; same story. So basically the statuses are the same, plus one more, and that one more is PodNetworkNotReady.
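Mirroring the earlier PodNetwork sketch, a hypothetical shape for this object (same assumed package and imports):

```go
// PodNetworkAttachment per the discussion: a target network plus pod-level
// parameters, with the same conditions as PodNetwork plus PodNetworkNotReady.
type PodNetworkAttachmentSpec struct {
	PodNetworkName string         `json:"podNetworkName"`          // the one PodNetwork it points to
	ParametersRef  *ParametersRef `json:"parametersRef,omitempty"` // pod-level parameters
}

type PodNetworkAttachmentStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

type PodNetworkAttachment struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              PodNetworkAttachmentSpec   `json:"spec,omitempty"`
	Status            PodNetworkAttachmentStatus `json:"status,omitempty"`
}
```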
A: So basically, if an attachment, a PodNetworkAttachment, is pointing to a PodNetwork that is not ready, we will indicate that: the pod network being not ready is a failure, so that's an additional reason for the overall NotReady condition. The rest is similar: there is an in-use indicator for when at least one pod is using this object, and of course, same thing, we will filter out pods in the Succeeded state. And the whole PodNetworkAttachment is immutable.
A
So
basically
you
set
it
once
and
then
you
have
to
recreate
it.
If
you
want
to
change
something,
life
cycle
is
similar
to
pod
network
created
I.
Think
the
only
change
is.
Is
there
any
change
here?
No
I,
don't
think
there's
any
change
against
against
a
pod
Network,
so
the
life
cycle
is
is
very
similar
to
to
the
other
guy.
A: So there is that, and that's all about the PodNetworkAttachment. Please read through it; I just want to make sure that we have the functionality of the object properly defined. Please help me review whether it reflects what it should, and whether it could be expressed better.
A: Moving on, now I want to introduce the default pod network, and this is basically what we discussed; I haven't changed much here. The one thing is the kubelet part: I think I mentioned this before, but I'm going to say it again. I want to change kubelet, and I have a section for the kubelet changes, so that it reacts, discovers, and checks whether that default pod network exists.
A: So basically, one aspect of this is deletion prevention. Since default is a network that always has to be there, similar to the default namespace, I discussed with some folks how to achieve this, and I think the way to do it is the following. We are currently going to use a finalizer to prevent any direct deletion of the specific object.
A: The way we're going to handle the default network, on the other hand, is that as soon as someone tries to delete that object and the deletionTimestamp field is set, we will just recreate the object immediately. The way it works: we remove the finalizer, let the object be deleted, and recreate it immediately. I'm planning to do this in the API server, where hopefully this will be caught and handled.
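Purely as an illustration of that recreate-on-delete flow (whether it lives in the API server or elsewhere is still open, as noted):

```go
// handleDefaultNetworkDeletion sketches the flow above: when the "default"
// PodNetwork gets a deletionTimestamp, strip the finalizer so the pending
// delete completes, then immediately recreate a fresh default object.
func handleDefaultNetworkDeletion(pn *PodNetwork) *PodNetwork {
	if pn.Name != "default" || pn.DeletionTimestamp == nil {
		return pn // not the default network, or no delete requested
	}
	pn.Finalizers = nil // let the pending delete complete
	// ...persist the change and wait for the object to disappear...
	fresh := &PodNetwork{}
	fresh.Name = "default" // then recreate it immediately
	return fresh
}
```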
A: So that's basically how I'm trying to prevent the deletion of the default network, and prevent the default network from ever being in the deleting-in-progress state, because we cannot have that. I see there's some chat, let me check. So: we cannot have the default network in a deleting state, because it always has to be available. That's the way I want to achieve it, similar to the default namespace.
A: Yeah, so what if someone deletes the parameters? Say my default network has parameters and they get deleted or something. For that case, I've just described that infrastructure providers will have to ensure the custom object is always there for the default network to be present. I will just call it out, for the case where someone has a parameters reference inside the default network and someone can delete that object.
A: Moving on: oh, this is the automatic creation, so basically how we would migrate an existing cluster to pod networks and ensure that the default network is there. There is a whole description: we will leverage the existing arguments of KCM and how those would translate to a default network.
A: These are, I think, all the possible combinations. For some folks that may not be aware: those are the flags that you can set for KCM, and based on them we expect certain things. The node IPAM controller takes part here; it's basically part of KCM, and it does the basic node configuration of the CIDRs. That's the podCIDR field on the Node that I mentioned before.
A: So basically, this is the Kubernetes mode that says: yes, I want to use the node IPAM, which we initially will not support in general, but which for the default network is basically there and supported. That's why, if you specify a cluster CIDR and allocate-node-cidrs is true, we set the IPAMs for both families in this case, because there are two, v4 and v6, so we set those two as the Kubernetes type. This is the default network.
A: And this is the case where only one is set, so basically in such a cluster you will have only the Kubernetes type for v4, but nothing for v6.
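To make the flag translation concrete, a hedged sketch of the mapping being described; the function name, the "Kubernetes" mode marker, and the simplified flag handling are my reading of the discussion (strings import assumed):

```go
// defaultNetworkFromKCMFlags sketches the KCM-flag translation above: when
// node IPAM is enabled, each family present in --cluster-cidr yields a
// "Kubernetes"-mode IPAM entry on the default PodNetwork.
func defaultNetworkFromKCMFlags(clusterCIDRs []string, allocateNodeCIDRs bool) PodNetworkSpec {
	spec := PodNetworkSpec{}
	if !allocateNodeCIDRs {
		return spec // built-in allocation disabled: leave IPAM unset
	}
	for _, cidr := range clusterCIDRs {
		if strings.Contains(cidr, ":") { // IPv6 range
			spec.IPAM6 = &IPAM{Mode: "Kubernetes", CIDRs: []string{cidr}}
		} else { // IPv4 range
			spec.IPAM4 = &IPAM{Mode: "Kubernetes", CIDRs: []string{cidr}}
		}
	}
	return spec
}
```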
A: One aspect of those combinations I think I skipped, so let me call it out: I extended the description in the spec of this PodNetwork object so that IPAM also indicates the requirement for an IP.
A: So if I have specified IPAM for v4 and v6, like above where it's the Kubernetes type, I expect the pod to receive a respective IPv4 address for that pod network. That's something that initially will not be enforced, but eventually, when we have CRI catching up and being able to return us the name of the pod network a specific IP belongs to, that is where we will enforce this. So maybe let me...
A: ...add that aspect into the doc; I missed that. So that's another detail. Then there is a case, I think I mentioned it before, where, for example, someone doesn't use this at all. In that case there probably won't even be any spec; we will just create it empty, or maybe it will be like this.
A: So basically, in this case we will create it just empty, and then I would imagine the installer of the cluster, the kind of provider of the cluster, if they do support pod networks and they want to do some additional stuff here, they have the means to update this post-creation, or update it as needed; so they can.
A: They can update this field. And there's another flag to all this, where maybe the providers will just use manual creation of the object. So I'm thinking of introducing a flag into KCM that will prevent the creation of the default network completely, and then it will be up to the installer of the cluster to create it by themselves. Basically, they define some YAML and apply it initially, when the cluster is being created.
A: Node readiness: we want to add this additional condition for that aspect. Then, network migration: this is for the case of, oh, how do I change my default network? As I said, pod networks are immutable, so how do I migrate from one type to the other if something has to change? There is some idea for this; the implementation of it will not be part of this phase, but there is an idea of how to do that in the future.
A: We do this because it is in line with one of the other requirements we have in a later phase, which basically covers a similar use case where we want to override the default pod network for a namespace. This would follow a similar idea, but overriding the default network on a node level, and this is how we could leverage it and introduce the migration story here.
A: All right, moving on. So now, as I mentioned, we want to have, for this whole thing, a pod network controller.
A: This will be an additional controller in the KCM component of upstream Kubernetes, and it will have these few functions: basically handling the conditions for our two new objects, and handling the finalizer. On conditions, maybe I will just say all conditions, because that also includes the InUse and Ready conditions it is going to handle. And lastly, it will handle the whole automatic creation of the default network.
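A purely structural skeleton of those controller duties (all names invented for illustration):

```go
// podNetworkController sketches the new KCM controller described above.
type podNetworkController struct{ /* clients, listers, event recorder... */ }

func (c *podNetworkController) reconcile(pn *PodNetwork) {
	c.ensureDefaultNetworkExists() // automatic creation of "default"
	c.ensureFinalizer(pn)          // deletion protection while in use
	c.updateConditions(pn)         // Ready, ParamsReady, InUse
}

func (c *podNetworkController) ensureDefaultNetworkExists()     {}
func (c *podNetworkController) ensureFinalizer(pn *PodNetwork)  {}
func (c *podNetworkController) updateConditions(pn *PodNetwork) {}
```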
A: So this controller is going to handle all of that. And then I do mention that we're going to introduce a new feature gate for this feature; it will be called MultiNetwork, and everything will be behind this gate.
A: Then there is the aspect of other API changes: attaching a pod network to a pod. This is what we discussed and how we envision it. I changed the name: before, this was named podNetworkAttachments, and I just switched it to networks. I don't know, we'll...
A: ...see what the rest of the community is going to tell us, but I think someone was mentioning just "networks", and I went for it. So in the pod it will be networks, we'll see; and basically a single entry is called Network, so we will have a structure with that name introduced by this. We'll see how that goes, and this is just a field in the pod spec.
A: So let's see how that will work out. Inside, we have the standard things. The one thing that did change is the primary field name: now it's called isDefaultGateway. I think this was proposed when we discussed it; yes, basically isDefaultGateway was the proposal.
A: We added these, and this is where we ended up now. If you have other proposals, let us know and we can talk about this, but that's the current proposal. To describe those fields: they are mutually exclusive, either per network or attachment; you specify one of those two for each Network structure. Then the interface name: this is basically what is going to be...
A: ...the interface name in the pod. For the additional fields I am adding a sentence saying that the field's functionality is dependent on the CNI, basically on its support for it; it's not something that we can enforce. So whether those last three fields work is up to the implementation and how they are going to be used.
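Piecing together the fields mentioned so far, a hypothetical sketch of one pod.spec.networks entry; the per-family split of the default-gateway marker is inferred from the later remark that the v4 and v6 default gateways may be disjoint, so treat it as a guess:

```go
// Network is one entry of the proposed pod.spec.networks field (names assumed).
type Network struct {
	// Exactly one of these two references is set; they are mutually exclusive.
	PodNetworkName           string `json:"podNetworkName,omitempty"`
	PodNetworkAttachmentName string `json:"podNetworkAttachmentName,omitempty"`

	// Default-route markers; per the discussion, v4 and v6 may point at
	// different networks, and each may be true on at most one entry.
	IsDefaultGateway4 bool `json:"isDefaultGateway4,omitempty"`
	IsDefaultGateway6 bool `json:"isDefaultGateway6,omitempty"`

	// InterfaceName requests the in-pod interface name; honoring it, like
	// the other optional knobs, depends on the CNI implementation.
	InterfaceName string `json:"interfaceName,omitempty"`
}
```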
C: [inaudible]
A: Yeah, I'm going to get there, so okay, let me finish. Yeah. Okay, I need to fix this.
A: So then, those are, I think, all the fields. And this is an example of how that would look.
A: Then I am mentioning two kinds of validations for this. The first I'm calling static, and the other one is active. The static validation is basically: I have the pod spec, so what can I check? This will be done in the API server.
A: From what I heard, that is where this kind of webhook logic is generally done for the core objects. So what we are going to do is ensure that a single pod references a given pod network only one time. I want to introduce that rule, and I have an explanation for it down below, on why we are adding the rule to be more restrictive now; if anything, we can loosen it in the future, but that's easy to adjust. So it will check that either one of those two reference fields is unique...
A: ...and that each of the two default-gateway fields carries the true value on at most one entry across all the networks objects. Same for the interface name: it has to be unique across all the networks, if it's specified, and the interface name has to have the applicable constraints for Linux and Windows applied. And, oh yeah, this one: the networks objects should not be specified when hostNetwork is set. That's a small tidbit that I think we never discussed, but it feels straightforward, right? If hostNetwork is defined and networks are defined, we should fail; we should have only one of those, they are mutually exclusive.
right
then
going
to
active
validations.
So
those
are
validations.
So
so
those
characterize
themselves
that
I
try
to
apply
a
pod
with
something
wrong.
I
will
get
a
instant
error
right.
That's
what
static
validations
are
active
validations.
You
will
not
have
that.
You
can
apply
the
the
the
incorrect
spec
and
over
time
you
will
see.
Oh,
my
pod
doesn't
start
up
what's
wrong,
so
basically
the
the
error
will
be
presented
as
a
pod
event
and
basically
those
active
validation
require
additional
objects.
A: And lastly, oh, there's still one more, because there is an aspect of that rule, the one ensuring a pod references a pod network only once, that I cannot check directly from the pod network attachments. The only way I can do it is to pull the network attachment, see which pod network it references, and then check across all entries whether that's correct or not. So I'm pushing that part into the active validation.
A: So if someone uses a pod network attachment whose pod network is referenced again, directly or via some other networks entry, that will be caught only over here: it will be an active validation and presented as pod events.
A: So read through that and think about it, but I think this is what I'm proposing in terms of the additional validation. Let me know if I missed something, in terms of whether there is any other case for the spec that we need to test, but I think I covered most of it. And now, moving on, there is the auto-population. This is, I think, Kevin, what you were referring to: when this field is not set and hostNetwork is not set, those two conditions, we will just populate it.
A: We will set the pod networks, and this will be done similarly to how the scheduler sets nodeName. I think there's a nodeName field or something like that, which basically indicates which node the specific pod is going to run on; you can manually specify it, but if you don't, the scheduler will pick one for you and set it. Same with this: if you don't specify networks, I will populate it with the pod network name "default", and basically this will just be a small mutating webhook or something like that.
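A tiny sketch of that defaulting step, analogous to how nodeName is filled in when absent (hypothetical helper; whether it is a mutating webhook or built-in defaulting is open per the discussion):

```go
// defaultNetworks populates pod.spec.networks when the user gave neither
// networks nor hostNetwork, attaching the pod to the "default" network.
func defaultNetworks(hostNetwork bool, networks []Network) []Network {
	if hostNetwork || len(networks) > 0 {
		return networks // the user was explicit; leave it alone
	}
	return []Network{{PodNetworkName: "default"}}
}
```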
A: I think, Kevin, that's what you were asking, right? Yeah, okay. Status: so basically, the pod status changes. I'm calling out that we want to expand the structure called PodIP with two additional elements.
A: Currently we have only the IP; I want to add a pod network name, and then it will have an interface name as well. It's better to look at it from the exact spec here, because there are two fields. There is podIP, which is the old, very first field that was created; we cannot do anything about it, it's just a string.
A: The behavior for that one doesn't change, or maybe only slightly, because of what it will now hold; this is a bit tricky, so we will see whether that's okay. Basically, what I'm calling out is that podIPs is where the change is: podIPs is a list of structs, which I'm changing, and then there is the podIP singleton, the string, which will hold the IP from the primary pod network of the pod.
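What the expanded pod.status.podIPs entry could look like, per the two new elements just named (json names assumed):

```go
// PodIP sketches the expanded status entry: the existing IP plus which pod
// network it belongs to and on which interface it is configured.
type PodIP struct {
	IP             string `json:"ip"`                       // existing field
	PodNetworkName string `json:"podNetworkName,omitempty"` // new
	InterfaceName  string `json:"interfaceName,omitempty"`  // new
}
```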
A: The primary one is basically the network that set the default gateway, for the preferred IP family. There is, if I'm not mistaken, a preference in the cluster that says which IP family is primary... or am I mistaken? I think I'm mistaken on this one: that's definitely a thing for services, but it's not a thing for pods.
A: Now that I think about it, this might be the tricky one right now, because we allow the default gateway to be disjoint for v4 and v6. So this might be tricky, unless we just always put v4 in this IP field; maybe that's the case, we always set v4. So maybe then it's easy: we always set only v4 for that one if it's dual stack, and if it's a single-stack v6 cluster, then it's just the single stack.
A: So that's one aspect of this. Now, going to the podIPs themselves: basically you have the entries for those referenced pod networks, and the podIP singleton, as I mentioned, is the default one. Do we even have "default" defined there? We did not, but let's say this is the default, because the default network is here. As for the order, I don't think we're going to do anything about it; that will probably be up to the CRI and how they return this to us.
D: This? Yes, yeah. Awesome, okay, very good, awesome, because...
A: I bolded out the new things, because this is a copy-paste from the code. Yes.
D: I see, now I follow. Yeah, awesome. Thanks for addressing that.
A: The IP already exists, so those are the two new things we would add, plus some descriptions. I probably need to add the interface name description; that one is missing. All right, so there is that. Not much left, we are almost at the end. Any other questions on the status?
I think this is all covered. All right, so there's that, and basically I'm describing that this is supposed to be populated by kubelet.
A: But this is dependent on the CRI catching up. I think initially this will all be up to the implementations to populate these fields; so initially, yeah, we will be dependent on the CRIs to provide that.
A: So until that's done, we cannot have anything here; the fields will be there, but they will not be populated until CRI catches up. I have mentioned DRA here, to note that we did look into that kind of model as part of the pod spec changes, but because of user experience, having the explicit pod network model proposed here fits the needs better. This is around the networks field, yeah; so you can imagine what we had...
A: ...we considered DRA to be reused here. And now, just a short list, a summary of the changes being introduced.
So we will have the API server changes, which will do the webhook-style validation for PodNetwork and PodNetworkAttachment (this is what I call the static validations, described above), and which will also handle the deletion of the default pod network.
A: So basically, when someone accidentally deletes the default network, we will just recreate it. And then it will also have the changes for the pod spec validation.
A: Yeah, I will add that, because I think that part is completely missing; there are no details about what sort of validation we plan to do, it's just called out here, so let me add details on that. I think I mentioned the default network deletion, and I did mention the extended pod spec validation; that is the static validation for the pod spec. Right, then the scheduler changes.
A: So this is where... do I have this? Right, yeah. The one thing that the scheduler will do is the active validation of the pod spec. I was considering where to place it, and after some discussion with teammates, I think we're going to propose doing it in the pod scheduler. For example, today, when a pod requests some resources and those don't exist, the scheduler will just put the pod in the Pending state; I just want to follow the same pattern with this.
A: So basically, the scheduler will do the whole active validation, I think.
A: Oh, I think it was this. Basically, here I'm describing what's going to happen, and this is the example I provided with the resources; it's exactly what is being done there. If something fails in terms of those validations, we will set the PodScheduled condition (there is a condition with that name on the pod) to false, with a message that something is wrong with the pod spec, and of course there will be an event on the pod stating, I don't know, "pod network data plane doesn't exist", something like that. So we will leverage the same means to block and put the pod in the Pending state when, for example, a specific pod network doesn't exist; we want to leverage the scheduler for that work.
A: I'm squinting a bit here; we'll see how it goes upstream. We need to talk with the scheduler folks and the API server folks to see whether that's okay for us, or whether there should be some other means; we will hear their recommendations on where the best place to do this would be. So there is that.
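If the active validation does land in the scheduler, the check could look roughly like this (function and lookup signatures are hypothetical; fmt import assumed):

```go
// checkNetworksActive sketches the scheduler-side active validation: every
// referenced network must exist and be Ready; otherwise the pod stays
// Pending and the reason surfaces as a pod event.
func checkNetworksActive(networks []Network, lookup func(podNetworkName string) (ready, found bool)) error {
	for _, n := range networks {
		// Attachment references resolve indirectly (attachment first, then
		// its PodNetwork); that hop is collapsed into PodNetworkName here.
		ready, found := lookup(n.PodNetworkName)
		if !found {
			return fmt.Errorf("pod network %q does not exist", n.PodNetworkName)
		}
		if !ready {
			return fmt.Errorf("pod network %q is not ready", n.PodNetworkName)
		}
	}
	return nil
}
```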
A: Kubelet changes: this is all about the default network. I am describing how kubelet today checks for the CNI config presence; if I'm not mistaken, this is done through a CRI call. On top of that check, we will let kubelet check whether a default pod network is found. If it's not, then we will just say "default pod network not found" in the API and make the node not ready; the Ready condition will be touched and set to false for as long as the default pod network is not found. Oh, I just thought of a case.
A: Should we handle a case where someone deleted the node... the pod network, I wonder?
A: Since we have the means in the API server to always restore it, maybe not, but it's something to consider. So there is this, and then the interface. I have a separate section, the kubelet CRI interface, where I'm describing that the API for that interface from the Kubernetes side is the v1 Pod, which we change; so we already provide all the required data to the CRIs. What needs to change are the CRI APIs, and I'm calling out here...
A: ...what sort of changes need to be done, at least what I'm expecting has to be done, and that this is going to be a separate KEP; this KEP will not cover the CRI changes for it. Below, I have some proposals on what could or should change to get this done, for those three items that I listed, but that is again just a proposal and not part of this KEP, where we will not touch any of that. And that's it, folks.
A: Basically, there is an appendix of what the DRA integration could look like, and we had some other discussion on SR-IOV, how that could look with a configuration for Multus, but those are just additional notes. In terms of the open items that we have here: RBAC, something that we discussed, is going to be pushed to the next phase, so that's something we will tackle in the next discussions, along with how to support multiple IPs per family.
A: That's another thing. I think it is not in our requirements, so it's something I will probably officially push back on, because it is basically a new feature request for Kubernetes in general, to support multiple IPs per family per interface. We don't do this today for the default network either; that's why we don't have it even in the requirements. And I think that's it, yeah. Did we manage 50 minutes? Okay, it took me the whole hour.
A: Please, folks, read through it. I went across this only very briefly; I would like you all to read it word by word and see whether it is all correct. Please help me refactor it so that the wider group understands it better; if you don't understand something, there's probably going to be another person in the same bucket as you, so call it out here in the doc. Please comment so we can rephrase, or flag anything you don't agree with.
A: Then let's change things where needed, but please review. I'd like to set an end date for these reviews so that I can post this as a PR. Actually, the PR can be posted, but before we ask the wider group of folks for regular reviews, I think we need to do the road trip with this presentation.
A: So my next goal is to kick off those slides with what we are proposing, with all the changes, so that we can then easily point everyone to this KEP for review. I think those will be the next steps. Any other comments?
D: So yeah, thanks for taking the 50 minutes to go over the whole thing, so yeah.
A: All right, folks, if we don't have any other topics, we can probably end it at this. Anything to add here? I saw, Pete, that you're doing some hackathon to try to implement this now; I think you have some draft you can work off of, I've got some things.
B: Yeah, sure, that's right. There are a few of us who are just going to try to do a very basic hackathon project. What we're going to do is add the new objects to Kubernetes and then play around, probably with changing Multus so it can use them, and just see how far we get. A lot of people on it, frankly, including myself, have not built Kubernetes before, and I've not got that much experience doing some of these...
B: ...low-level things, so we're learning a lot. I don't know how far we'll get; we'll have fun, and whatever we end up with, we'll make public afterwards. I suspect what we'll end up with is having created probably all of the new resources, but not having written any of the controllers. I think that's a reasonable thing to expect to have.
A: Definitely. I think if you were to integrate with the new model of Multus, which I think connects to the API and then pulls the pod as well, then you would have all the data right from the pod, and instead of looking at the annotation, you just look at the... yeah, exactly.
B: A bit of indirection, but it's not a huge amount, yeah. It feels like it's an achievable hackathon. I'm about halfway through the week, doing it part-time in my spare time, as is everyone else, so I'm not sure; it's feeling less achievable as the week goes on, but you never know, yeah.
D: I am only interjecting because I'm excited about this, but yeah, needless to say, I'm pretty excited about this, and I'm really interested in all aspects of it, including the workflow: making the modification, building Kubernetes, and then layering the modified Multus over it. It all sounds awesome.
D: Ping me as well, and Tomo, and yeah, we can try to add some insight on the modifying-Multus side at least. So yeah, super cool; I'm pretty excited about it. So thanks, Pete.
B: Yeah, thanks. It's certainly been fascinating so far; we've learned a lot of stuff. Yeah, it'll...