From YouTube: Multi-Network community sync for 20230920
A
All right, welcome everyone to the multi-network community sync. Today is September 20th. We don't have much of an agenda, but I think Victor joined today and has a Disaster Recovery topic to talk about, so maybe that will take some more time. If you have any other topics, please add them to the list. By the way, the meeting doc is in the invite for this meeting, and everyone has edit rights, so you can add yourself and add an item to the agenda.
A
Okay, let's go to the first one. This one was brought up by Pete in the doc. He reviewed the doc and pointed out one aspect, maybe somewhat related to disaster recovery, maybe not; the use case is in the design doc. What we currently have is that if someone deletes a network, the network is put into a deletion-in-progress state. What do you do in the case where you have an existing deployment running? Pete, correct me if I'm wrong, or maybe you can just give the scenario that you have in mind and then we can discuss.
B
Yeah, so I'm just thinking of what would happen if, let's say, we've got a network that's been created, and I've got some deployments that are using that network. Everything's running happily, and then I either decide I want to get rid of the network, or I accidentally delete it. Nothing immediately goes wrong; the effect is just that the pod network is now in a bad way, which means I can no longer schedule any new pods.
B
Because the pod network is in a pending-delete state, and that's bad, because it means that all my deployments are now in a state where they no longer get any redundancy. If one of their pods fails or gets restarted or whatever, then it's not going to come back up, because you can't create new pods on that pod network, and there's no simple way of getting yourself out of this state. You could obviously create a new pod network with the same parameters as the old one and then start changing your deployments to point to that.
B
But it feels like maybe we could make that more robust. I mean, having some kind of undelete would be possible, but I don't think that fits in very well with the current API model. I know that I was the one who suggested that we should just put it in a deletion-in-progress state recently, or at least I was one of the people in those conversations, but I'm now looking at it and thinking, well, hold on a minute.
B
Actually, this has some nasty edge cases. Maybe the right thing is to say that you can only delete it when it's not attached, but maybe we have a way to mark it as "you can't attach to me" in the meantime, so you can drain all the pods out of the pod network. Or maybe we just go back to the simplest thing of saying you can only delete it when there are no pods attached. Yes, there's a small window there, but it's a very small window.
A
So let's explore that use case. Let's get rid of the deletion-in-progress state. An accidental or on-purpose deletion of an object sets a deletion timestamp; this is a metadata field on Kubernetes objects, and it is how all the controllers recognize that an object is meant to be deleted.
A
That's consistent across all objects in Kubernetes: there is a deletion timestamp. I'm not sure how many of you are aware of that, so just so everyone is on the same page: this metadata field indicates explicitly and very clearly that someone ran a delete on this object and wants to delete it. Whoever writes a controller will usually just check: is that field non-empty? If that's the case, someone wants to delete this object. So we have a clear indication from the API of when this happens.
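[Editor's note: a minimal sketch of the deletionTimestamp check described above, written as a controller-style reconcile function. The PodNetwork type is hypothetical, since the multi-network API was still a draft at the time of this meeting.]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodNetwork is a stand-in for the multi-network object under discussion.
type PodNetwork struct {
	metav1.ObjectMeta
}

// reconcile branches the way most controllers do: a non-nil
// metadata.deletionTimestamp means someone ran a delete on the object.
func reconcile(ctx context.Context, pn *PodNetwork) {
	if pn.DeletionTimestamp != nil {
		// Delete was requested: run teardown here (e.g. wait for
		// attached pods to go away, then drop finalizers).
		fmt.Printf("PodNetwork %q is being deleted\n", pn.Name)
		return
	}
	// Normal reconciliation path.
	fmt.Printf("PodNetwork %q is active\n", pn.Name)
}

func main() {
	now := metav1.Now()
	pn := &PodNetwork{ObjectMeta: metav1.ObjectMeta{Name: "blue", DeletionTimestamp: &now}}
	reconcile(context.Background(), pn)
}
```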
A
I was planning on using this field to say: someone is trying to delete it, so I will set Ready to false, with an error and a message. But let's explore the case where we don't do that, where Ready stays true, because we didn't change any other parameters. Like, if I reference another object...
A
If I were to reference something and that got deleted, then yes, clearly I would set Ready to false, because the params-ready condition would trigger, since I cannot find the referenced object. Or maybe, no, that would not happen from the API point of view. Let me correct my statement: from the API point of view, I would never check the parameters. Sorry, I went in the wrong direction there.
A
Your implementation would have to mark the params-ready condition false, and that would make the whole network not ready, if you implement it that way, and I would consider that the correct way. So when you delete that referenced object, the network goes to Ready false and no new pods can be created. That's basically obvious, and we shouldn't touch that.
A
But now, if I accidentally delete the network, it's marked as deletion in progress, but it's in use, so it's still there. And deletion doesn't mean it's not ready, because Ready basically means: the object I'm referencing is there, everything is ready, I pass all the validations. That's still the case even after you delete it. So from that point of view, we could ask: why do we set Ready to false at all?
A
So let's keep it as is. The problem with that would be: how do I mark something that I really want to delete? So maybe what Pete said is really what we need: drain. Maybe we should have a drain-equals-true field in here to state that, yes, I want to start draining this network, don't create anything new. I like that idea.
A
Maybe we should do something like that: instead of just relying on deletion automatically meaning "make Ready false", let's put a flag called drain or something like that. Let's follow the path that Node does, and then we can clearly indicate: yes, I am deleting this explicitly, and yes, I don't want any more pods on it. Maybe that should be the case, and that would solve everyone's problem. Accidental deletion is then not a problem.
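[Editor's note: a sketch of what the drain field proposed here could look like. Nothing was finalized in the meeting; the type and field names are illustrative only.]

```go
package main

import "fmt"

// PodNetworkSpec sketches the proposal: a spec field that can be flipped
// back and forth, analogous to cordoning a node.
type PodNetworkSpec struct {
	// Drain, when true, tells implementations to stop admitting new pods
	// onto this network while existing pods keep running.
	Drain bool `json:"drain,omitempty"`
}

// schedulable mirrors what a scheduler-side check could look like.
func schedulable(spec PodNetworkSpec) bool {
	return !spec.Drain
}

func main() {
	spec := PodNetworkSpec{Drain: true}
	fmt.Println("can place new pods:", schedulable(spec)) // prints: false
}
```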
A
If you delete it, be mindful that it will get removed once all the workloads go away. So Pete, in your case, let's say I'm upgrading a deployment: you would probably be safe, because with a maxSurge of two on a deployment with 10 replicas, you would just update two pods at a time.
A
And I can just put the drain flag on it, and that way I can indicate that this time I do want to delete it, I don't want any new workloads on it, and eventually I want it gone.
B
And the use case that I'm nervous about is the one that I inevitably get dragged into, which is: some customer has accidentally deleted something and a bad thing happened. If they accidentally deleted it and nothing was using it, well, that's harmless, they can recreate it. And if they've accidentally set the drain flag, well, they just switch it back off again.
A
Yeah, thank you, Kevin. As I mentioned, it's the same model as what Node has.
A
I think that's right, isn't it? You put a cordon on a node before draining it; that's what we just described, so call it cordon.
A
Yeah, exactly, yep. Okay, and as for how that's going to be realized: there are existing mechanisms on Node, so I will have to look at those. I don't think we want to add the same machinery to a network, so there will just have to be some sort of field, maybe a cordon field in the spec, that can be flipped back and forth.
A
Basically, that means: okay, the pod network is not ready. I would just control the readiness, and in the reason I would say "Cordoned", no new pods. That's how I would control it, so that there is only one point that all the implementations have to look at.
No implementation should look at this new field directly; they should keep watching only the readiness of the object. That way, whatever happens, whether the parameters object goes away or this flag gets set, I can easily determine that this network is not ready and I cannot add any new pods to it. So you have one place to look at rather than multiple, and I would still just use readiness as the external control for everything else.
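[Editor's note: a sketch of the "cordon drives readiness" idea just described, using the standard metav1 condition helpers. The condition type and reason strings are assumptions, not a settled API.]

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var conditions []metav1.Condition

	// Controller side: the cordon/drain field was set, so publish
	// Ready=False with a reason consumers can surface.
	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:    "Ready",
		Status:  metav1.ConditionFalse,
		Reason:  "Cordoned",
		Message: "network is draining; no new pods may attach",
	})

	// Implementation side: one single check, regardless of *why* the
	// network is not ready (cordoned, params deleted, validation failure).
	if !meta.IsStatusConditionTrue(conditions, "Ready") {
		fmt.Println("refusing to attach new pods")
	}
}
```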
D
Maybe a question to check: does that mean the delete can only happen when everything is drained, or is there a check?
A
So yeah, in-use. Basically, every time you delete a pod from a pod network, there will be a check against the in-use condition, and the controller is watching the pods. So every time you delete a pod, I will check: is that the last one? If it is, then I remove the in-use condition.
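[Editor's note: a sketch of the in-use bookkeeping described above: when the last pod detaches, the controller clears the in-use condition so deletion can proceed. The condition name is illustrative.]

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// onPodDeleted would be called from the controller's pod watch.
func onPodDeleted(conditions *[]metav1.Condition, remainingPods int) {
	if remainingPods > 0 {
		return // still in use; nothing to do
	}
	// Last pod is gone: drop InUse so the deletion flow can finish.
	meta.RemoveStatusCondition(conditions, "InUse")
	fmt.Println("network no longer in use; deletion may proceed")
}

func main() {
	conditions := []metav1.Condition{{
		Type: "InUse", Status: metav1.ConditionTrue, Reason: "PodsAttached",
	}}
	onPodDeleted(&conditions, 0)
}
```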
A
I talked with Antonio about this one, and we would not want to do any custom behavior for specific objects, which is probably what you're asking for. We still have to work within the bounds of the current API machinery, and that's what I'm trying to do. Basically, if you look at my doc, for example, at how I'm handling the default network, I mentioned it last week: if someone accidentally deletes it, there will just be a small controller that removes the finalizer, lets the object go away, and recreates it.
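[Editor's note: a sketch of the small "recreate the default network" controller described above. The client interface and finalizer name are hypothetical; a real controller would be built on client-go or controller-runtime.]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodNetwork stands in for the object under discussion.
type PodNetwork struct {
	metav1.ObjectMeta
}

// Client abstracts the few API calls the keeper needs.
type Client interface {
	Update(ctx context.Context, pn *PodNetwork) error
	Create(ctx context.Context, pn *PodNetwork) error
}

const keeperFinalizer = "multinetwork.example/keeper" // hypothetical name

// reconcileDefault implements the "small controller" idea: if the default
// network gets a deletion timestamp, unblock the delete and recreate it.
func reconcileDefault(ctx context.Context, c Client, pn *PodNetwork) error {
	if pn.DeletionTimestamp == nil {
		return nil // default network is healthy; nothing to do
	}
	// Drop our finalizer so the pending delete can finish...
	kept := pn.Finalizers[:0:0]
	for _, f := range pn.Finalizers {
		if f != keeperFinalizer {
			kept = append(kept, f)
		}
	}
	pn.Finalizers = kept
	if err := c.Update(ctx, pn); err != nil {
		return err
	}
	// ...and immediately recreate the object under the same name.
	return c.Create(ctx, &PodNetwork{ObjectMeta: metav1.ObjectMeta{
		Name:       pn.Name,
		Finalizers: []string{keeperFinalizer},
	}})
}

func main() { fmt.Println("sketch only; wire into a real controller loop") }
```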
A
That's the only way to do it, and you can see how it's a bit hacky, but there is no other way. It's all code, so you probably could program it, but they wouldn't want to create new API machinery, new code paths for specific objects. So instead of doing that, let's reuse whatever is there, in ways that use the existing components. So yes, you can still accidentally delete a network object.
A
If we had this new drain field, you could say that only once that object is drained, with the flag flipped to true, can you set the deletion in progress. That could be an option, but it would require additional handling in the API machinery to ensure that you never set the deletion timestamp for those objects until then. Those are additional code paths, and I was recommended not to do any custom API machinery.
A
Validation webhooks, though, definitely could be done externally, by the implementation directly. Those can do this: basically, a validation webhook can check "oh, it's a deletion event and the object is in use" and reject such a call. You could do that.
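[Editor's note: a sketch of the validating-webhook option mentioned here, rejecting DELETE of a network that is still in use. It uses the standard admission/v1 types; the in-use lookup is a stand-in for whatever state the implementation tracks.]

```go
package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// networkInUse is a placeholder for the implementation's own bookkeeping.
func networkInUse(name string) bool { return name == "blue" }

// validateDelete rejects deletion while the network still has pods attached.
func validateDelete(req *admissionv1.AdmissionRequest) *admissionv1.AdmissionResponse {
	if req.Operation == admissionv1.Delete && networkInUse(req.Name) {
		return &admissionv1.AdmissionResponse{
			UID:     req.UID,
			Allowed: false,
			Result:  &metav1.Status{Message: "network is in use; drain it first"},
		}
	}
	return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: true}
}

func main() {
	resp := validateDelete(&admissionv1.AdmissionRequest{
		Operation: admissionv1.Delete,
		Name:      "blue",
	})
	fmt.Println("allowed:", resp.Allowed)
}
```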
A
I think there are implications on that one: since this is a core object, any webhooks would be part of the core API, so it's not like you can disable that webhook. But on the other hand, what if you really want to delete it? Let's say you want to get rid of it.
A
That's exactly it: how would you force deletion? If I create a custom webhook, I can disable that webhook. But if I add it here for the core object, there is no way you will be able to delete this object unless you delete all the workloads, and imagine you have 1000 pods on it.
A
Yeah, it's a good idea, and it will definitely prevent the deletion timestamp from being set. But let's look at the consequences, because then you really cannot delete it. The webhook will prohibit it: if it's in use, you can do nothing, and you don't have access to those controllers and webhooks, because those will be baked into your API server and KCM. So basically you will not be able to do much here in terms of explicitly disabling it so that you can work around it.
A
So that's the trade-off. With what we have right now, without the webhook, yes, it will get the deletion timestamp set, so eventually, with time, it can go away: if by any chance it is at any point not in use, it might get removed.
A
So maybe, if you accidentally deleted it, then maybe what you should do is...
A
So you can still have it. The problem here would be: what if I have production running and someone accidentally deleted it, so the deletion timestamp got set to some value. And then by some chance, I don't know, a node goes down, one that has all the pods running on this network, and because the pods were removed...
A
...the API machinery says: okay, no more pods on this thing, I will just delete it, because there are no more workloads. So I remove the finalizer and it goes away, and your running production suddenly just crashes, because all the new pods that are supposed to use this network have nothing to attach to. That can happen. That can be a case where someone accidentally deletes this, and then after a delay, let's say in a week or in a month, you come in and see it.
A
True, that is a corner case. So this is where it might be tricky: what would be the best middle ground for this whole thing? Or maybe we shouldn't prevent deletion at all and just delete it right away, but then what about the existing pods?
A
That's a path I don't think we can take. I'm trying to explore the various cases of how we could handle this, because with a pod, I can just add it or remove it, it doesn't matter, and deal with that; but here this object has dependents.
A
If you delete the pod network, I am going to delete all the pods that are attached to it and then delete the pod network. Maybe that's what we should do, because when you think about it, a namespace is similarly critical: you could accidentally delete the namespace, and that will kill all the workloads in there.
A
So basically, we could make any pod attaching to a pod network become a child of that pod network, and then whenever you delete it, I'm just going to delete all your workloads and then delete the pod network, and not deal with all this deletion-in-progress, finalizers, and all that. We could do that, yeah.
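[Editor's note: a sketch of the cascade idea floated here, expressed with the existing ownerReference/garbage-collector machinery: each attached pod lists its PodNetwork as an owner, so deleting the network would garbage-collect the pods. This is a thought experiment from the discussion, not an agreed design; the API group shown is hypothetical.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "web-0",
			Namespace: "default",
			OwnerReferences: []metav1.OwnerReference{{
				// Hypothetical group/version for the proposed PodNetwork API.
				APIVersion: "networking.example/v1alpha1",
				Kind:       "PodNetwork",
				Name:       "blue",
				UID:        "1234", // would be the real object UID
				Controller: &controller,
			}},
		},
	}
	// With this owner reference set, deleting the "blue" PodNetwork would
	// let the garbage collector take the pod out as a dependent.
	fmt.Printf("pod %s owned by PodNetwork %s\n", pod.Name, pod.OwnerReferences[0].Name)
}
```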
A
I think with namespaces it's more of a structural thing: you have folders, and then you have the children of that folder, and that's why, if you delete the folder, you delete all the children. It's like folders and files in Linux or something, so you can look at it from that perspective. I assume that was the reasoning behind it.
A
Here you attach, so it's not really children, because you can have pods attached across multiple namespaces. So that's slightly different.
A
So that's why we would want this different mechanic, and this would be new machinery, or maybe a new pattern for how objects behave and interact with each other in core, because that's what it is. But some folks might say it's too complex and then we'll get pushback, so we'll see when we talk with the wider group. That might well be the case; they might ask why this is so complicated.
B
And see what they find convenient. The other thing that occurs to me is: if you have a model where you delete the pod network and that takes out all the pods, that actually doesn't sound so unreasonable now. But what about in the future, when it is possible to remove a pod network from a pod?
D
The ultimate thing: you could just remove that network from the pod. So if we have the ability to detach dynamically, then you would do that, right? Yeah.
A
Then you've mutated a pod, and that's probably a big discussion in itself, because you mutate a pod: before, it had the network; now it doesn't. The pod spec says "I want to connect to this pod network", and then externally you change the spec. So that's kind of tricky. It would be a different story if you explicitly change the spec yourself, stating: okay, I don't want to connect to this network. That's cool, okay.
A
That probably is fine. But: I deleted the pod network, and you went and modified the spec of my pods? Oh, that's a big deal. So, if you think about it that way... All right, I think I will try to look into the drain field; that will, I think, somehow cover at least this use case. There are some other use cases, as I mentioned, like the delayed deletion, but I don't think we can handle that one. Let's propose the delayed deletion with the drain capability and see what the rest of the community thinks about this.
C
I mean, it's a tough decision. I see it from a number of perspectives: if somebody deletes their default network, do they then delete all of the pods in their cluster? There's that, and I was figuring whatever implementation would probably have something for story number eight, like a controller for these, etc. But it does sound complicated to mutate the pods based on the deletion of the networks. So yeah, this is a great discussion item; good that you brought it up.
A
So, back to what you said, one more thing. As I mentioned, you brought up the default network being deleted. We would behave the same way as with the default namespace. I have to test that; it's an easy test: create a pod in default, delete the default namespace, and see what happens. Do we delete the pods? We could probably do some special handling for this case: if you try to delete the default network, since we're just going to recreate it anyway, we could maybe skip the deletion of all the pods. Something like that would make...
B
That makes sense. The bit that I think might be problematic technically is if you ever have a pod that references a pod network that doesn't exist, and then you try to delete that pod. One of the things CNIs usually do is: they need the config for that network in order to correctly tidy up the networking left over from the pod.
C
I love that you brought that up, because certainly, thinking about it from a CNI perspective, the way I was thinking about implementing it CNI-wise was to keep the cache of what you did with CNI no matter what, so you could get that proper delete. And I mean, there are improvements coming to CNI for garbage collection, but you've got to do your best to clean up the garbage there.
B
If you have a pod that's attached to a pod network and has had its networking set up, and you're using CNI somewhere in the implementation stack, then when you delete the pod, the CNI has to go and delete the resources: get rid of the network interface, one end of a veth pair potentially, whatever routes, all of that stuff. And all of that requires you to know what configuration it had.
A
The finalizer will still be there, Pete. What I want to say is: until I delete all the pods, I will keep the finalizer on, so your CNIs can still reach out and get the pod network. That's guarded, basically; you have the guard around it with the finalizer to prevent exactly that. But the finalizer is not permanent: it doesn't just sit waiting for some in-use condition to be removed; there is ongoing action towards removing the finalizer.
A
So that's the difference compared to, say, if I had set up a timer for it, where you have an estimated time for when it's going to finish; here you don't know when. And the object will still be there, so you don't have to cache, and the CNI should be able to get it all; the CRIs will see everything.
A
That's why the finalizer is there: so that we never have a case where this object goes away while there are still things depending on it. And in that case as well, the implementations can always add their own finalizers on the object; just keep that in mind. Even if you remove all the pods, an implementation might say: I need to, let's say, disconnect a switch or something, I need to do some other work.
A
My implementation does some magic and has to do additional things, and you cannot delete until I've done those things, so I put my own finalizer on top of it yet again, so that the object doesn't go away. So there are means to handle that. I think the decision is to introduce the drain flag; Ready will be kept true when someone deletes the object, and only when I explicitly say "okay, don't add new pods to this pod network" do I set Ready to false. Any other comments on this?
A
All right, Pete, I think that's you.
B
We got it, yes, cool. So this is just a very quick summary of what we did in our hackathon. A little bit of background on this: Microsoft has an annual hackathon, so we decided we were going to see if we could implement the whole of multi-network in a week, with a group of people who had largely never touched Kubernetes code before, and some of whom had never used Kubernetes before. So that was quite fun.
B
So let me give a little bit of a summary of what we actually did. We implemented the core API: all of the resources we've got here, PodNetwork, PodNetworkAttachment, and the pod spec and pod status changes. We implemented all those; I say we implemented all those, he implemented all those while I watched him. Every project needs to have a really good developer on it.
B
He was that for us. And there's a bunch of things we didn't do; we didn't do all the hard stuff: no controllers, no validation, and we weren't doing anything with the status fields. So we just hacked our way through that, and we came up with an implementation where we just used Multus. Now Multus, as you know, takes annotations on the pod to decide what networks to attach it to, so we changed Multus so that it would look at a pod...
B
...and if there are no annotations in it, it would then go and read the pod's network references, follow them, collect the resources, and then chain down those to get the custom resources linked from the PodNetwork, so that it would do all the same plumbing. There are lots of limitations on that as well: there's no error handling, we haven't got any default network handling (we just always put the default in regardless), and we didn't do anything with pod status.
B
It just writes the annotation the way it always did, and there's no pod-specific configuration. So we didn't get all the way through to doing everything you could think of doing, but we got quite a lot of stuff, and we ended up demoing it. We had kubectl set up the new resources and set the new fields in the pods, and then we demonstrated that you can create pods and Multus would set up the additional networks in them.
B
So the reason I think this is interesting is the next steps. We implemented the API, so we have some API implementation code. So whoever is going to implement the API code for this KEP: we've done some part of it, and you should probably look at what we did as a start. Actually, I know you'd done some initial work and had an initial branch; we cherry-picked your commits to start, and then added some more stuff...
B
...on top of it. So I think that link is worth having. And the other thing is, we obviously did a Multus implementation. It might be worth pursuing that, it might not; it's much less sophisticated, it's fairly hacky code. But the bit I came away from it with was a bit more confidence that, hey, this stuff is real. You can actually make all of this happen.
B
You can create pod networks and actually have pods with extra network interfaces being added to them, and we didn't really come across anything that didn't work or seemed wrong, as far as we got. So it's quite positive. A few links here: the Kubernetes code and the Multus code. We do have a bunch of scripts and demo stuff, but unfortunately that's all Microsoft-internal. I could sanitize them, there's nothing exciting in there; it's just that we probably don't want you to know the names of our internal VPNs and stuff.
B
But as far as I could see, it was all very straightforward so far. I mean, in terms of having a full implementation you could put into production, we're maybe two percent of the way there or something; in terms of something we can demo that does all the features, maybe ten percent of the way. But I was left with quite a good, confident feeling about it, and it's quite nice to have something we put together.
B
If anybody wants to do what we did or take it further, then I'm more than happy to help. I mean, the code is largely there, it's on GitHub, but I'm also more than happy to help people set it up or take it further if they want to.
B
Yeah, so these slides, I will just pop them into the chat.
A
All right, this is awesome. Thanks, Pete. It proves it's possible. Should I just try this? Yeah, let me just try this.
C
Yeah, I just want to say thank you for spending the time on this. I'm extremely interested in it, and pass along my thanks to your team. I definitely want to give it a try.
C
I think that Tomo might have a recipe for a custom Kubernetes build, but I have not done one in a few years, so I'm going to give that a try as soon as I can. I just appreciate all the work that went into it. And I think from a Multus perspective, we're still trying to figure out what the best next steps are. I mean, there's part of me that thinks, you know...
C
...Multus could be adapted to this reasonably, which I think you've kind of proven is possible. But I also want to socialize it with the CNI community some more and see: does this fit into a CNI 2.0, CNI.next paradigm?
A
We had Michael here before. Michael Zappa was always here, and he was, I think, favorable towards what we're doing here, so I'm hoping that will be a positive take on this as well. Absolutely.
A
But I agree with you. We should have the CRIs and CNIs aligned as well, to pursue this so that they eventually catch up on providing the CRI API. Then, instead of us going directly to the pod, the CRI can give all of this to me, at least the names. I don't have to get the whole pod with all the data; they can just give me the names, and I can work based on that.
A
All right, we have 15 minutes. Victor, I hope that's enough to at least kick off the Disaster Recovery discussion. Can we define what that means?
A
Maybe that's where I'm missing a bit of context. Maybe this is a common thing, but can we define what a disaster is and what Disaster Recovery is? Maybe that's what I'm missing.
E
I think it's probably best to start from where I started. I'm a traditional database guy, a database consultant working on traditional databases. So for me, Disaster Recovery really means having a database backup, and the backup is immutable, it cannot be deleted. And then, if the database is running in, say, a New York data center, and that data center got flooded, or there was a fire, or even the whole region got flooded, then that database, together with all the middleware and applications, will run in, say, Chicago. So basically, Disaster Recovery is for when one whole region no longer exists.
E
As for how this relates to Kubernetes: for me it started with, okay, how do I run a database on Kubernetes? Actually, the first discussion you've been having, the deletion-in-progress for deployments, is maybe less relevant here, because in Kubernetes, at least for databases, and I guess for most workloads, it's very common to automate the application management using the Kubernetes Operator pattern.
E
And in addition to that, there are just so many different ways to provision and manage the resources...
E
...a manual process versus, you know, Cluster API, or a cloud provider; there are all kinds of different things involved. So to simplify it, let's say it's on-premises, just virtual machines, because virtual machines and bare metal are a little different; let's start with virtual machines. If I have virtual machines in New York and in Chicago, a bunch of virtual machines, and running on top of them we have a Kubernetes cluster.
E
Well, so the whole site goes down, that's possible, but let's say someone is deleting the network and deleting the pod at the same time. So, other than storage, which needs to be replicated over there...
E
...the operator needs to know the application-specific characteristics on the network side: what kind of networking needs to be put in place to make sure that, first of all, the pipe is available, so that before the site goes down all the data has been replicated over; and then DNS needs to be transferred, pointed to the new site, once the primary site goes down; and then, for the users who log in remotely to the primary site, how do they get transferred?
E
How does their connectivity transfer to the new site? And of course this includes all the load balancing, the API gateway, whatever networking components are needed as part of that Disaster Recovery. Okay, so...
A
I think you're going very deep into the core components of Kubernetes, and this would probably be a topic for SIG Network, or maybe even API Machinery, to go and talk about how to recover from a control plane failure. I would not want to go there in this meeting. If we could, let's focus on the multi-networking aspect of this, which is basically limited to CNIs.
A
So, what would happen if I have a cluster with multi-networking enabled on only some of the nodes? That's something we don't support today, by the way; at least in this first phase of multi-networking, we are not looking at what I'm calling selective availability of a pod network. Today we assume every network is available on every node; this is just an alpha, and that's why it is like this.
A
We have a phase two where we want to introduce that selectiveness, and in that case, what do we do then? Because then, as you said, I have Chicago and New York, and let's say New York only has access to this additional network. What do I do then? I definitely have my API centralized. In that case, and that's why I don't want to go down that path, let's assume the control plane is resilient.
A
There is no disaster there: it has its own three nodes across zones or something, and if some zone goes away, I have two other ones that are up and keeping the API available.
A
So let's assume that part is stable in this disaster case. But what happens if the specific network goes away, or is instead only available on one of the other sites where my VMs are? In that case it depends on how the networking is configured. Because if, let's say, New York, where the disaster happened, had the connection to, I don't know, my backend network local to New York, how do I then restart that thing in Chicago?
A
Is that even possible? That's a question for the infrastructure at that point: do you make sure those connections are redundant across the board? So you can have, let's say, backups in New York and maybe some in Chicago, or both backups connecting to something in, maybe, Boston, so that it's even further redundant. And right now the connection is the VPN; let's say it's connected only to New York.
A
How do I ensure that this VPN switches over to the Chicago site? Do I do that? Do I worry about it? That's something that again falls under infrastructure, so I don't think we...
E
I think it's more at the Kubernetes level, the overlay network. So first of all, for multi-network: what kind of problem does this solve? My understanding is that there are different kinds of, say, service meshes that don't interoperate, and multi-network sort of resolves that problem. And then, if we implement multi-network, how do we make sure that, when a disaster happens, and assuming the underlay network is already taken care of, what does the overlay network need to do?
A
Okay, thank you. In this case, again, it's up to the implementation; that's my go-to answer for everything, because PodNetwork is an API that just defines how things are configured by the CNI. That's how we position these objects for multi-networking, and basically even for the default network: those are just parameters that the CNI, the implementation, takes and implements.
A
So it's up to the implementation how they recover in such a situation: how the implementation is going to behave when it spans those two VMs, across those two physical places, and whether there is anything that needs to be synchronized between the two places, so that when one goes down, the other one stays alive. It's again implementation-specific: Cilium copies most of its eBPF maps across the board; some of that is stored in the Kubernetes API.
A
The rest is in memory, and whatever is in memory is per node, only for that node. This is just an example I'm giving, not anything specific, but it's what I know of, because at Google we use a lot of eBPF and Cilium, so I'm bringing that up as an example. There are other ones that do their own stuff. So, Victor, I think this boils down to the implementation; I don't think the API matters in this case, unless it's, of course, the failure of the control plane itself.
A
And that is something for the platform provider, the cloud provider or whoever the platform provider is, to ensure, because at that point I don't think Kubernetes can be resilient or do anything if your control plane goes away. It's really dependent on the platform to ensure that you can replicate things. At least that's my opinion. Does anyone have other takes on this?
E
It would be helpful, even if, as you said, it's up to the actual implementation. My understanding is that multi-networking is really a kind of API spec for how to do this, and in the actual implementation there are a lot of details each implementation can decide. But it would still be helpful to spell that out, at least for me, so that if I look at multi-net, okay...
E
...let's say, if I build a platform using Kubernetes and I choose the multi-network model, then what kind of options do I have? What should I consider? Like I said, the control plane needs to be available, but what if the control plane itself becomes part of that disaster? What can be done?
A
I assume, and I'm really going off our regular topic here, but I would just assume that's up to the platform provider: how they are able to back up the etcd state, to ensure that the API pieces can be recovered, and then how you could recover and re-establish the state of the control plane nodes. And this is completely outside the scope of Kubernetes itself, because it's impossible for it to heal itself in that matter, at least for the control plane. At least, that's my opinion on this.
A
There would have to be some sort of external piece that would monitor your control plane and then be able to restore it.
A
Sure. I think we are out of time. All right, thanks folks. I'm going to try to add that drain field to the object, and I'll tag you, or I will ping you on the Slack channel, to let you know that I've added it. All right, thanks everyone.