From YouTube: Multi-Network community sync for 20230503
A: All right, welcome everyone to the multi-networking community sync meeting. Today is May 3rd, and yeah — let's continue our discussion on the multi-networking support in Kubernetes. Shane, to the repo idea?
A: Oh — seems like... So what happened is: I did this on my private fork of k/k; I created a branch.
A: I mean, we don't have a repository per se right now, but I'm thinking that a branch will be the best, because then we can work directly on the k/k repo and do stuff, rather than just going with the CRD. So there is an initial version of the PodNetwork there; it's not updated at the moment. So probably my next update will be on this — or, if you wish to just create a PR or push a change against that branch of mine, then feel free to do that.
A: But basically, yeah, just look at it. I think someone mentioned that it compiled — I didn't compile it; I just managed to add the code itself, because there is not much to it, right? I just added a definition of the CRD and that's it. So my next step, once we finalize the discussion on how the pods should be linked or referenced — how we should do that reference — will be to add more code to do that.
A: ...that part, right — to now have kind of proper functionality there, so that we can at least reference the object. That will be my next step. But by the way, feel free as well to create PRs against that branch — I think that's possible. I am a novice in terms of GitHub, but I was told that it is possible to create a PR even against a private repository, so feel free to do that if you need to make some changes. All right — I think, Shane...
A: Are there any other questions on that repo stuff?
A: Okay, all right. So, any other topics?
A: I think I have an idea and I will get to it, but before that, what I would like to introduce to the object is an ability to indicate that it is used by any of the pods — basically, to indicate that something is using the PodNetwork. That will give us a signal saying: okay, this PodNetwork is in use, and, for example, you cannot just easily delete the object, because it's being used by some pod. I would not want a case where I can create a PodNetwork, create a pod referencing or attaching to it, and then easily delete the PodNetwork. That, I think, should not be allowed — we should prevent that.
A: So my idea — and please bear with me — is to do it via a condition. I'm not sure this is the best way to do it.
A: I don't think there is any example or template in the Kubernetes API that does something similar, unless I'm mistaken. So if anyone has a better idea, based on experience with other projects, please let me know. I'm just trying to do it through conditions: basically, I'm thinking of introducing a new condition which would say that the PodNetwork is referenced by at least one pod, and then it cannot be deleted.
A: So this in-use condition will be handled by the KCM controller that we will have to have anyway for the PodNetwork. Basically, it will watch for any pod that tries to reference the PodNetwork and succeeds in doing so.
A: Then we will set this condition to true. If the last pod removes the reference (attachment), then it will switch back to false. Any comments on this? Any other ideas on how we could indicate this?
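As a sketch, such a condition on the PodNetwork object's status might look like this — the `InUse` condition type and the reason/message strings here are illustrative assumptions, not a settled API:

```yaml
# Hypothetical PodNetwork status, sketched after the discussion above.
# The "InUse" type and reason strings are assumptions, not agreed API.
apiVersion: networking.k8s.io/v1alpha1
kind: PodNetwork
metadata:
  name: blue-net
status:
  conditions:
    - type: InUse
      status: "True"            # at least one pod references this network
      reason: ReferencedByPods
      message: PodNetwork is referenced by at least one pod
      lastTransitionTime: "2023-05-03T17:00:00Z"
```

A deletion request would then be blocked (for example via a finalizer or admission check) while `InUse` is `True`, and the controller would flip it back to `False` once the last referencing pod goes away.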
A: I take the silence as agreement that it's a good idea. Is that okay? So...
D: Yeah, so this means the KCM is just doing the accounting of how many pods are using this network internally, and then changing the condition accordingly?
A: Yeah, definitely — KCM. This is core, the kube-controller-manager, so that will be in core; that's what I mean. Yes, because this object is core. If I mention that the implementation handles something, then that's what it is; if I mention a specific controller, that means it's part of core Kubernetes. So yes, that is where it will be handled.
D: So maybe I'm not understanding why we need this in-use field. I mean, maybe the KCM could just keep this accounting internally, and then, if someone is using the network, the KCM just rejects the deletion?
A: Every time you would try to — let's take that use case, Tomo — every time I try to delete an object, KCM would have to go and do a list of all the pods and calculate whether any pod references this object, which is costly, and that happens on every delete. So basically I could overload my control plane nodes just by spamming deletes.
D: Sometimes the administrator wants to know how many pods are using this network, right?
D: Let's imagine that — yes, exactly — let's imagine that the network administrator tries to remove it, and the network is in use; then the administrator wants to know how many pods are using it, to aid their understanding.
A: Right, so let me tell you — the observability piece, metrics, is a separate story, all right. In observability, when we tackle that part, we will say: I don't want to know just that pods are in use, I want to know all the data, right? For each PodNetwork I want to know how many pods are using that specific PodNetwork.
A: So we will get to that story, and that's completely separate. I'm not saying in-use is going to replace it — we'd just have the two. Basically, what you're referring to is Prometheus...
A: ...or other means to expose metrics. Prometheus is one of the implementations, but what you're referring to are metrics, which KCM... I'm not so familiar with KCM — I hope it has some basic metrics exposed already — and we would just add another one reporting the number of pods per PodNetwork in use.
D: But that means that, using the metrics, we do not need the in-use condition, no?
A: Metrics — and, Tomo, think about controllers. Let's say I want to have a controller that indicates, for my own use, for my implementation of this multi-networking capability, whether this network is in use. I don't want my controller to have to rely on metrics, scraping them to figure out what's in use.
E: I think what Tomo is saying is that if you're going to track it as an enumerated value or a Boolean, then it's no more expensive to track it as an integer value — that's what I'm hearing. But the thing I am confused about — and back this up for me for just a second, guys — is that there's a mention of calculating this on delete being too expensive, but I am curious...
A: No, those are not states. Okay, so conditions — how conditions work: this is a list, all right. Considering this is a list, I have all those conditions at once; at the same time, I have all three of them. It's not like they transition between one another — each of the three is independent. Though Ready would depend on ParamsReady: basically, Ready will be set only when ParamsReady is set, if you define it. And then InUse is completely independent, because... but it's in the...
A
It
will
be
always
there
I,
don't
think
it
should
be
any
at
any
point.
It
should
not
be
there
so.
A: Oh yeah, this is independent. The other two report whether the object itself is ready. Basically, Ready means that the whole description here is valid — Readiness means all the fields that I have deemed valid are valid. And let's say there is a deletion in progress — someone is trying to delete the object, but it's in use, right?
A: So basically I will set Ready to not-true and then give a specific reason. Or, for example, I created the object but I'm referencing another object; then I expect the controller of that object to set my ParamsReady flag. If I don't have that set, then my Ready is not set — and that has nothing to do with InUse.
A: The InUse condition, then, is just to indicate that this object is referenced by something else, and then I can take other actions based on that. Did I answer your question, Daniel?
A: There is that as well, yes. So keep in mind — if I'm not mistaken about how conditions are handled on, let's say, a pod — as soon as you create the pod, the conditions are added by the KCM or whatever controllers there are. So basically, on creation of the object, the first thing the controller handling this object is going to do is set the default values for the two conditions that it handles.
A: So it will right away create those two conditions, Ready and InUse. Let's say Ready will be set to false by default, and InUse will be set to false as well at the very beginning, right? It could probably even happen in the mutating...
A
It's
as
if
it
was
a
mutating
webhook
of
during
creation
right.
How?
What
is
the
pattern
exactly
here?
I'm
not
so
familiar
with
the
KCM
to
kind
of
say
that
we
I
would
have
to
look
at
the
nodes
and
pods
how
they
are
handled,
how
the
conditions
they
are
handled,
but
basically,
what
you're
saying
is.
If
the
condition
is
not,
there
means
that
the
controller
didn't
looked
at
this
object
yet
and
basically
in
indicate
the
divorce
case
scenario.
A
So
basically,
or
in
this
case
in
use,
is
it
is
not
used
because
there
is
no
other
referencing,
but
basically
the
object
is
not
ready,
so
you
should
not
use
it
at
all.
So
there
is
that
as
well.
So,
of
course,
unless
you're
and
your
your
implementation
is
going
to
use
it
right
away
even
with
regardless
of
whether
it's
ready
I
would
consider
that
in
correct
implementation.
But
that's
up
to
the
implementation
itself.
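Putting the above together, a freshly created PodNetwork might carry defaulted conditions like these — a sketch only; the types `Ready`, `ParamsReady`, and `InUse` follow the discussion, but none of this is a finalized API:

```yaml
# Hypothetical defaults set by the controller right after creation.
status:
  conditions:
    - type: Ready        # object-level validity; depends on ParamsReady when params are defined
      status: "False"
      reason: Pending
    - type: ParamsReady  # set by the controller of the referenced params object, if any
      status: "False"
      reason: AwaitingParamsController
    - type: InUse        # independent of the other two; flips to True on first pod attachment
      status: "False"
      reason: NoPodReferences
```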
E: Okay, that's really helpful — thanks, Maciej. That helps me understand the conditions here better, which I clearly didn't understand well, so I appreciate it.
A: No, that's fine, that's fine. And I'm bringing in the standard here — it's not something new that I'm doing. This is the standard default, at least for condition state; it's the default pattern that I'm copy-pasting from all the other objects that use it, like a node.
A: Yeah, so basically there's nothing new here — this is standard. The only thing is, I want to have an indication on the object that says it's in use. The number of pods using it is a metric, and it will be there as well; if your implementation would rather rely on that metric, that's really up to you. The thing with the metrics is...
A
They
might
not
be
reflected,
I'm,
not
sure
how
the
implementation
is
usually
done
for
such
things
inside
KCM,
so
in
terms
of
metrics,
they
might
not
be
event
driven
the
way
you
would
think
where,
as
soon
as
something
changes,
it
will
just
right
away
change
the
metric
because
it
might
be
too
costly.
So
maybe
that
can
be
delayed
by
a
few
seconds
or
even
few
minutes.
So
that's
why
metrics
are
not
fully
reliable.
Here
might
not
be
fully
reliable
and
that's
my
speculation
here.
A
Maybe
the
implementation
can
be
done
or
KCM
implements
in
such
a
way
that
it's
event
driven
and
it's
right
away,
updated,
but
I,
don't
know
that
one
so
I
prefer
to
have
definitely
want
to
have
a
a
flag
saying
it's
in
use
or
not
like
a
global
statement
that
there
are
some
pods
using
the
exact
number
will
be
metrics,
for
that
tomorrow.
Did
I
did
that's
kind
of
makes
you
kind
of.
Is
that
understandable
is
that
okay.
A
All
right,
let's,
if
you
have
other
other
top
questions
to
that
Tom,
let
me
know
moving
on
now
to
the
problem
of
the
default
Network
and
how
we
will
transition.
So
what
I'm
thinking
to
do
is
automatic
creation
manual,
Network
migration,
basically,.
A: I am thinking of this problem as follows. In one of our requirements we have a need for the ability to override the namespace-based default network — so, basically, in a specific namespace I want to say: okay, the default network will not be the default one but something else, my-network-blah. And that kind of gave me the idea...
A
Why
can't
we
not
do
this
on
a
per
node
basis
and
if
we
do
it
on
the
per
node
basis
that
basically
solves
the
whole
thing,
then
your
installer
I
think
someone
was
kind
of
mentioning
that
last
week
that
the
installer
will
do
something
on
the
note
on
a
per
node
pool
basis.
But
with
this
we
can
do
it
on
a
per
node
basis
and
what
the
installer
would
do
is
during
installation
during
the
upgrade
process
of
your
installation
of
the
platform
installer
it
would.
A: ...the node being updated would be put in a maintenance mode, and then, before unlocking that node, we would put a value in this field of the node. That's my idea right now. The name would be overrideDefaultPodNetwork; it would be in the spec of the node. When you set a value there, kubelet will default to that value — not the default network — when assigning a pod network to a pod that lands on that specific node and doesn't specify any networks.
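As a sketch, the proposed node field might look like this — the field name follows the discussion, but its exact shape and location are not settled:

```yaml
# Hypothetical per-node override, as discussed. Nothing here is final API.
apiVersion: v1
kind: Node
metadata:
  name: worker-1
spec:
  overrideDefaultPodNetwork: blue-net   # kubelet would use this instead of the
                                        # cluster-wide "default" PodNetwork for
                                        # pods that do not request any network
```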
A: So basically, that's how we'd do it. The thing I'm a bit unsure about — and I'm looking for ideas or concerns here — is the permissions around changing this. Because you should be able to easily set this field or unset it, without any restrictions.
G: Yeah, okay, so there are two parts. One is: we have many nodes, right, so some of them can be cordoned and emptied, then you change the field and move things back — one after another; I think that's what you described. And then you have the other case, where you have a one-node cluster. If we say that's not possible, you'd have to take all the load off and change it.
A: Yeah, something like that. Basically I'm giving you the tool for how you can handle it. The single node is tricky, because — I would say you delete all the pods anyway to upgrade your kubelet and do all the operations, so you have to recreate the pods anyway. So during that process you can delete the default network. That's another change; that's what I'm thinking for this.
G
I
mean
everyone,
I
mean
this
is
from
writing
startup,
before
we
should
just
write
it
Loosely
enough,
so
that
it
should
be
okay
to
basically
kill
all
the
pods
change
network,
start
or
lift
pods
again
or
if
you
have
some
idea,
how
to
do
it.
When
you
have
a
case
that
actually
the
two
Network
can
work
together
right
that
you
can
do
a
transition.
I
think
you
asked
me
to
written
that
both
are
okay
from
an
implementation.
Then,
of
course,
each
implementation
needs
to
decide
what
the
hell
to
do,
but,
let's
not
just
right.
A: Yes. One thing I am removing from the list of rules for the default network is "the network can never be deleted" — that bullet is gone. What it means is that the standard characteristic applies: the PodNetwork cannot be removed while at least one pod is referencing it.
A
So,
basically,
this
applies
to
the
default
same
way
and
basically
based
on
that,
if
I'm
doing
the
transition,
if
no
no
pod
is
using
my
default
Network,
then
now
that
now
I
can
delete
it
and
now
I
can
create
a
new
default
network
with
my
new
values.
A
This,
of
course,
can
only
happen
when
either
all
my
pods
are
host
networked
and
nothing
is
using
my
my
my
default
Network
or
I,
set
on
every
of
my
nodes
in
my
cluster
I
set
this
override
default
pod
Network
to
some
other
value,
and
basically
all
the
pods
now
are
running
on
on
this
pod
network,
not
the
default
one
and
then
I
can
delete
my
current
default
and
create
a
new
one
with
the
new
values,
probably
copy
the
the
one
that
I'm
using
in
this
field,
so
that
yeah
that
that
that
that's,
that
I
have
I
can
transitions.
A: Basically, the two will be exactly the same for some period of time, while the whole lifecycle plays out, and then at some point, when I have my new default set, I can clear this field. One condition for clearing this field — I'm mentioning it here — is that it can only be cleaned up when the default network exists. That's something we definitely want to check; we don't want to... maybe not, actually — did I put that here for the admission process?
A
No
I
think
that
was
one
of
my
ideas,
but
then
I
I
can
I
I
reverted
on
that
sorry,
but
basically
you
should
be
able
to
mutate
this
field
at
will
and
then
the
one
other
thing
is
cubelet
when
I
mentioned
here
as
well
that
cubelet
so
in
that
was
in
the
manual
case,
where.
A: ...if the default network doesn't exist, kubelet's behavior will take this field into account. Basically, this field is an override, so it has higher priority: if the field is set and that network doesn't exist, kubelet will report "default network not found" for that node, right.
A
If
it's
not
set,
then
it
will
look
for
the
default
named
network
and
if
that's
not
exist,
then
it
will
again
say
this
or
or
not,
basically,
and
and
if
you
delete
that
field
that
will
trigger
your
local
cubelet
to
say:
okay
is
there
default
network?
If
it's
not
there,
I
will
start
saying
that
node
is
not
ready
because
I
cannot
find
default
Network.
So
that's
what
I'm
thinking
about
like
how
this?
What
is
the
dynamic
of
this
field
against
the
Readiness
of
the
node
and
presence
of
the
specific
networks
in
the
cluster.
A: Tomo, you have questions — go ahead.
D: So, yeah, I understand that the overrideDefaultPodNetwork field is useful in migration, but this field is not guarded — I mean, someone with the admin role can add the field to a node to override the default network. Is that correct?
A: That is correct, and that's my kind of concern: is it a good idea to have this field and do it that way? Probably when we run this by SIG Network we'll get more feedback, but that's one of my concerns with this idea — someone with admin power can just break you by overriding this, right.
D: Yeah, and then also — how do I say it — before configuring the override of the pod network and after modifying the override of the default network, the old pods and the new pods may connect to different networks.
A: It's not like the field will be there and not be set.
A: It will always be set — by, I think, KCM, our core controller, the pod network controller. What will be set is: even if you don't specify anything, it will set the default network — the value that is the default network at that point. What I mean by that is: if no network was selected — you didn't select any networks...
A: ...the pod doesn't specify any field there — we will auto-populate the pod spec with the current default network for the selected node, or even namespace, because there is a potential future where we are thinking about namespaces as well. So that's how you differentiate: okay, this pod is using the default network because it was created beforehand, and this new pod is using the override network, because that's the current override. That's how you see it.
F: Yeah, so let's assume we have a default network for the cluster — I'm not speaking about the nodes. We wouldn't even need to define that specific field, because we would assume every pod at least has that field; even if it's not set, we would assume the default for it — you know, other code would follow up the same way. But that may also be a problem, because we do have an override for a node.
F: So basically what I'm saying is: let's not even put it there. Maybe it doesn't really hurt, but I think it's a little cleaner without, you know, putting everything explicitly there. Let's leave it to the controller: the KCM would just fetch the default network if nothing is set.
A: True, yeah — that's what it would do; I think we are saying the same thing. So anyway — we have a default network right now, and basically whenever I create a... and we are about to finalize this topic, so maybe we can then move to the attachment stuff, Tomo — but when I apply today a pod that doesn't have any pod networking, and nobody cares about it...
A: For example, I have a cluster and I'm deploying a web service, so I don't even care about multi-networking, and that's perfectly fine — someone applies a pod without any spec for that. What we will do in KCM is update such default pods with an annotation saying...
A: ...it's like nodeName in a pod spec. nodeName is a field that you can set yourself, but if it's not set, that triggers the scheduler on the control plane side, which calculates which node a specific pod should go to and sets that field in the pod spec even though you didn't set it. Then, after the pod is running, you can look at the pod spec...
A
There
is
a
field
called
node
name,
and
it
is
being
set
to
that
note
on
which
it
finally
landed
same
concept
here
with
this,
where,
if
there
is
no
pods
defined
means
I
want
to
attach
to
a
default
pod
Network,
then,
basically,
when
the
sasport
comes
and
I
will
just
say,
Okay,
this
port
attaches
to
default.
Network.
A: Then the logic will be: okay, I will set that network name — the override one rather than the default, sorry. So basically that's where it will be done. And that got me thinking as we are discussing this — and this is kind of to deal with the security of this field — should we maybe allow setting it... I'm kind of throwing out an idea here.
A: Should we maybe allow setting the value — going from empty to some value — only when the node is in, let's say, maintenance mode? I'm not sure; I think there are standardized ways — basically when the node is put in a cordoned state or a draining state or something like that; I would have to look up whether there is a standard way to put a node into an unschedulable state. Maybe then this would be the only way, the only state...
A: ...so you can only set a value when the node is in this kind of maintenance mode. And then the only other thing would be that I can unset the value at any point — that is allowed. So going from any value to empty can happen at any point, but going from empty to a specific value can only happen when the node is, let's say, in a maintenance mode.
A
Then
we
will
avoid
this
even
this
case
tomorrow,
where
I
have
two
pods
using
by
default
using
different
Networks.
That
will
that's.
Why
that's
why
the
kind
of
that
idea
came
to
because
then
we
we
drain
the
node.
There
is
no
pods
on
the
note,
and
this
is
when
okay
I
can
I
can,
except
for
demon
sets.
Maybe
and
now
I
can
change
my
overwrite
field
right
because
there
is
no
other
pod,
so
anything
coming
up
will
always
have
this
default.
Pod
Network.
D: I'm just thinking about network migration: it happens one node at a time, adding the overrideDefaultPodNetwork — okay, let's call it the new temporary default — and then doing the migration. During the migration, some pods have the non-default network attachment — I mean, the temporary default network is used — and after that, the network administrator wants to remove the temporary default network.
D: But this cannot be done, because the temporary default network is in use — the pods that were created during the migration are still running, right? So, after the network migration, the network administrator wants to clean up any temporary resources — I mean, these should be cleaned up before the network migration is considered finished.
A: My question, for the sake of limiting the scope, is: should we even do this overrideDefaultPodNetwork in this phase? We have some ideas on how to do it, but my question is: would it be okay for us to punt it to a next phase and just focus on the API and attachments only? Thoughts on that?
D: Included in the timing we're introducing the PodNetwork, I think... well.
A: That's what I'm thinking, because right now you don't have to support it, right — you don't have to enable it. This will be behind a feature flag, and right now we have ways to create it automatically, or you can set it manually, which is easy. Then later down the road we would introduce this migration capability, for when you enable this feature in your production clusters. Right — I wouldn't imagine someone, right after this KEP is released and implemented...
A: ...I wouldn't imagine anyone just running their clusters with this right away; it will, of course, take time to adopt. So that's why I'm saying: maybe, for the sake of limiting the scope, we can punt it to later phases, though we have the idea of how it can be handled. If anyone during the KEP review comes and asks us how that's going to happen...
A: ...we have an answer, but we will not implement it at that point, because it's something that we can do later on. Thoughts? Or is that something that...
D: I'm not clear on what is meant... I mean, that's...
D: Once we provide some design, at that time we should conclude it — I mean, if we are not finishing this design and not adding it to the KEP, then maybe we just say "network migration is in progress; we will provide it in the future". But currently the description says...
A: Yeah, exactly — some parts... don't look at the current text in the description; I'm trying to get the idea across. The text then has to be changed to say what I'm saying: that this is the initial idea, but it will not be part of this KEP. That's what it will say. But my question to the group is: is it okay to punt the migration story to later phases?
F: I don't think that... sorry — there's something lacking in the migration story, at least in the doc, which is — and please shout me down if that's outside the scope of this — but I was wondering: what about migrating also to a different CNI plugin? Say that up to now we are using the bridge plugin, and we want to migrate to whatever other plugin.
A: Default... multiple... So we say there is only one. Currently, if the override capability for migration is not there, you cannot have multiple default networks; you can only have one, right? That's what we will say, and my question is: is that acceptable for the first...
A: Sure. So, maybe a step back: you're talking about how multi-homing would be resolved — that's my guess here — and that's not it. We would never have a case where a pod is connected to both the default network and the override network. That will never happen — that should never happen. It's one or the other only, so my pod always lands with only one network.
A
It's
not
like
I'm
going
to
have
a
default
and
then
overwrite
inside
single
thought
that
will
now
that's
not
the
point
of
this.
This
migration
story
talks
about
a
single,
always
interface
per
pod.
It's
just
which
one
all
right,
because
I
want
to
migrate.
My
default
from
A
to
B.
That's
the
reason
so
I'm,
not
sure
why
routing
is
mentioned
here,
I,
don't
think
it
should
be
a
concerned
about
that.
A: If we want that, though, I think it's up to the implementation — but let's get to that later on. Right now we're in the context of the default pod network, which will always be one per node, for when I don't care about multi-networking at all. That's what we're talking about here: I want to migrate between CNIs because I am upgrading or something. That's the use case right now, at least for the default network.
A: Let's just focus on that for now and then move on to what you're saying. I never said that we force a single network — that was never my intention. This is for the default pod network only; even the override field's name says "default pod network" — it's only for that. I see... I saw one plus-one and no one against, so let's try it. I'm not sure how complicated this might be; my concern is there might be a lot behind this field.
A: There might be a lot behind it that I haven't even covered — some security concerns, some other problems. That's why I might want to punt it to a next KEP, so that we can focus on it there rather than here, because there is a lot in this one already. So that's the current thinking. Let me add another comment here... and I don't see... are there any objections to that?
A: Is it just going to be relegated...? No, no — it will be mentioned. I will keep the text here to convey the idea of what we are thinking, because this group is aware of all this, but there will be reviewers for this KEP from SIG Network, and they will say: okay, you're adding this — now, how are you going to support this feature? We already talked about it, and we will say, yeah...
A: Oh yeah — that probably will not happen. All right, I think we have the default pod network covered for most of the cases — even, with the override, it will allow us to configure a different CNI per node for the default one, if you want. So I think we're killing all the birds with this one stone.
A: So hopefully the pod network is covered. To summarize: the default network can be deleted if it's not used by any pod. If you delete it, then your kubelet reports network-not-ready, node-not-ready, with "default network not present".
A: Of course, the other rules apply for the default network: it has to be available on all the nodes, and it will be named "default". Basically, this is what all the other pods connect to if nothing is referenced in the pod spec. And there is a way to automatically create the object through KCM, based on KCM's arguments.
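A minimal sketch of that auto-created object, under the assumptions above — the group/version and fields are illustrative:

```yaml
# Hypothetical cluster-wide default PodNetwork, auto-created by KCM
# from its arguments. Must be available on all nodes and named "default".
apiVersion: networking.k8s.io/v1alpha1
kind: PodNetwork
metadata:
  name: default   # the well-known name kubelet looks up when no node override is set
spec: {}          # parameters would come from the KCM arguments
```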
A: That is the summary of the default pod network. All right, I think we have five minutes, so let me just introduce attaching pods to networks, and next week you can probably continue discussing it on your own.
A: What I'm thinking is to follow a bit the concept of the volumes that we have today in the pod. The thing with volumes is that there are two parts: there is a definition of the volume itself, and there is something like volumeMounts. The reason for that is that volumes is a pod-level definition of whatever pieces I want to have available for my pod in general, but volumeMounts is the final indication of: okay...
A
this container wants to attach to this specific volume. That's what volume mounts do. In our networking case, that's not the case: we have a single network namespace, and there is no per-container capability, so the configuration is not done on that level. It's done on the level of the whole pod, not per container.
A
A
Like this, and at the level of containers there is another field called pod network attachments, which will be optional to specify, and it will hold a list of network attachments. Now, what can we specify? We can specify the name of the network I want to attach to; optionally, the interface name I desire the pod to have; and whether the specific pod network is primary. Primary means: this is the default gateway interface inside the pod.
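The attachment fields just described (network name, optional interface name, primary flag) can be sketched as plain Go structs. This is a sketch of the discussion, not the real Kubernetes API; every type and field name here is an assumption.

```go
package main

import "fmt"

// PodNetworkAttachment sketches the per-pod entry the speaker
// describes; the field names are assumptions, not a final API.
type PodNetworkAttachment struct {
	PodNetwork    string // name of the PodNetwork object to attach to
	InterfaceName string // optional: desired interface name inside the pod
	IsPrimary     bool   // optional: this network holds the default gateway
}

// PodSpec is trimmed down to the parts relevant here: the new
// podNetworks field sits at the same level as containers.
type PodSpec struct {
	Containers  []string
	PodNetworks []PodNetworkAttachment
}

func main() {
	// A pod explicitly attaching to two networks. Note that "default"
	// must be listed explicitly; nothing is implied by the list.
	spec := PodSpec{
		Containers: []string{"app"},
		PodNetworks: []PodNetworkAttachment{
			{PodNetwork: "default", IsPrimary: true},
			{PodNetwork: "dataplane", InterfaceName: "net1"},
		},
	}
	for _, att := range spec.PodNetworks {
		fmt.Println(att.PodNetwork, att.InterfaceName, att.IsPrimary)
	}
}
```

As discussed next, `InterfaceName` and `IsPrimary` would only take effect if the network implementation honors them; core Kubernetes cannot enforce them.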
A
The thing is, with those two other fields, those are fields that would have to be understood by the CNI. What this means is they are not enforced by anyone. This cannot be enforced by core Kubernetes, by kubelet, or by anything else. So basically the question would be: are those fields valid here? Because those will be, what I want to say, dependent on whether the implementation supports them. So I can have those fields, but whether they really work, I don't know.
A
I have to look at my implementation to see whether it supports it or not. That's what it will mean. So that's something that I think folks from the SIGs might be reluctant for us to have, since there is no way to enforce them. Keep that in mind when we're discussing this part. How would this look? Here is an example: containers, and then we have pod networks, and then there's basically a list of pod network entries with the name of the pod network.
A
It can optionally have a name, and let's say this is the pod network "default"; that's how it will look. The thing here is, if I specify any network, and this is something that we mentioned before, this list is explicit. What that means is nothing is assumed. There is no assumption that the default network is listed. No: if I only list the dataplane network, I am going to attach only to the dataplane network. Okay, very explicit.
A
So basically, if I want to attach to the default and the dataplane, I have to explicitly list the default. And the other thing: if the pod network field is not defined, the KCM steps in. This is what I was talking about with the default pod network, and this is the field I was referring to.
A
So, basically, when multi-networking is enabled in the cluster, the KCM would automatically populate this field, probably without the interface name, but with primary set to true and the pod network value set to "default", or to the override from the node, depending on what is set. So this field would always be set by the time the pod is finally in the cluster; at least one entry will be set here. Questions? Comments?
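The defaulting just described, an empty list gets the cluster default (or the node's override) injected as primary, while an explicit list is left untouched, can be sketched as follows. The types and the function name are illustrative assumptions, not a real API.

```go
package main

import "fmt"

// Trimmed-down sketches of the types discussed; field names are
// assumptions, not the final API.
type PodNetworkAttachment struct {
	PodNetwork string
	IsPrimary  bool
}

type PodSpec struct {
	PodNetworks []PodNetworkAttachment
}

// defaultPodNetworks mimics the KCM defaulting the speaker describes:
// when multi-networking is enabled and a pod lists no networks, fill
// in the cluster default (or the node's override) as primary.
func defaultPodNetworks(spec *PodSpec, nodeOverride string) {
	if len(spec.PodNetworks) > 0 {
		return // the list is explicit: nothing is assumed or added
	}
	name := "default"
	if nodeOverride != "" {
		name = nodeOverride // per-node override of the default network
	}
	spec.PodNetworks = []PodNetworkAttachment{{PodNetwork: name, IsPrimary: true}}
}

func main() {
	var spec PodSpec
	defaultPodNetworks(&spec, "")
	fmt.Println(spec.PodNetworks[0].PodNetwork, spec.PodNetworks[0].IsPrimary)

	explicit := PodSpec{PodNetworks: []PodNetworkAttachment{{PodNetwork: "dataplane"}}}
	defaultPodNetworks(&explicit, "")
	fmt.Println(len(explicit.PodNetworks)) // still 1: default was NOT added
}
```

The second case shows the explicit-list rule from the discussion: listing only "dataplane" means the pod attaches only to dataplane, and defaulting does not sneak "default" back in.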
D
The default, the attachment, the manipulation regarding the default, I...
A
H
Yeah, you know I'm new, so maybe my question is a bit dumb, but can we just put two words on how you would track the references to a specific pod network? So you say that if there are no references left, it's going to be put in a state that says nothing is using it. But how are you going to track what is referencing it?
A
That will be indicated by... so, basically, the in-use... are you referring back to the in-use flag?
H
Yeah, yeah, exactly. I didn't want to interrupt, so I was wondering how we're going to keep... you know, is every object going to somehow subscribe and unsubscribe from that? Saying it in simpler words: you said that you track references to that, and when there are none, it's just not going to be in use anymore; it's going to be marked in some way. But what's going to keep track of this?
A
Okay, what I'm thinking is, and it's up to the implementation how that can be done, but one idea would be: we will have a controller for the pod network in general, and that controller will have to watch pods. Basically, as soon as a pod comes up, I look at its pod networks and see which ones are being referenced, and then the controller will go to each of those pod networks and indicate in-use.
A
So that's what I'm thinking: on every pod event, the controller will trigger reconciliation for all the networks that that specific pod is using.
A
So basically, let's assume I'm applying this pod into the cluster, and my controller is watching. Now I'm looking at the names referenced in this list, and I'm going to trigger my reconciliation for those two networks, which will then list all the pods and see: okay, which pods are using me?
A
If there is at least one, okay, I go and mark it in use and carry on, and I do the same for all the networks, all the reconciliations. I'm not sure how familiar you are with Kubernetes controllers.
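The reconciliation the speaker describes, list all pods and mark a network in-use iff at least one pod references it, can be sketched without any real controller machinery. The function name and the map-based stand-in for "list pods" are assumptions for illustration; a real implementation would use informers and update the PodNetwork status instead.

```go
package main

import "fmt"

// reconcileInUse sketches one reconciliation pass for a single
// PodNetwork: scan every pod's list of attached network names and
// report whether at least one pod still references this network.
// pods maps a pod name to the network names in its podNetworks list.
func reconcileInUse(network string, pods map[string][]string) bool {
	for _, attached := range pods {
		for _, name := range attached {
			if name == network {
				return true // at least one reference: keep it marked in use
			}
		}
	}
	return false // no references left: the network could now be deleted
}

func main() {
	pods := map[string][]string{
		"web-0": {"default", "dataplane"},
		"db-0":  {"default"},
	}
	fmt.Println(reconcileInUse("dataplane", pods)) // true
	fmt.Println(reconcileInUse("storage", pods))   // false
}
```

Driving this from pod events, as described above, means a pod add or delete re-triggers this pass only for the networks that pod listed, rather than for every network in the cluster.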
H
H
H
B
F
Totally. So if we can quickly go to the definition of the pod network, just a little bit higher, yeah, the pod network attachment. That's cool, but, and maybe again I'm losing context, should we also be tracking which CNI we're going to be using with that specific network attachment?
A
So, no, that's done in the pod network. Yeah, you're missing that. So here's an example: a pod network has that reference here. It's not about... Daniel, is that about the CNI? It's about your implementation and how it's going to be done, right? So it's whether the pod network references a specific CNI or...
A
A
F
Well, yes and no. That's cool, but I mean, I was also thinking that it may be a little bit confusing, because you may, you know, basically mistake that for the CNI name. But if you also get that in a wrapper, then that's cool.
A
Yeah, you have the capability to kind of go all the way. All right, folks, I think we are at time; let's start finalizing. I am going to be off for the next two weeks, so I will update the text about the pod network. If you're going to have more ideas on what to do, please note them in this doc. I think most of you have edit rights.
A
If you don't... I think most of you have edit rights here, so you should be able to just add text here and leave comments. At no point does this doc belong to just me; this is all of our, this whole group's, effort. So feel free to just add here or leave comments on whatever you're going to discuss over the next two weeks. Okay, yeah.