From YouTube: Velero Community Meeting - June 13, 2023
A: Right, hello everyone. Today is June the 20th and this is the official community meeting for project Velero — our US/Europe friendly time zone meeting. I can see some folks are already adding themselves to the meeting notes; I'm going to share the same meeting notes into our chat. So please add your topics there. Okay, I can see some stuff. All right — I see you've added some; do you want to go first?

B: Yeah, sure — can I maybe share my screen? I need to walk through it.

B: Okay, so the first discussion topic is just to get a review for this PR: the support for multiple volume snapshot classes in the CSI plugin. I brought this up in the last meeting — then it was a draft, but now it's in good shape to review. I think a few folks started reviewing it, but this is a request for folks to review it further so that we can check it in.

B: The second is a PR I was driving around JSON substitutions in the restore workflow. We basically started the implementation for it, but we reached some realizations in the actual implementation as to what library to use and what the best way would be. We evaluated various approaches.

B: So there are certain technical challenges in terms of library support, and I wanted to discuss in this forum, with the maintainers and the community, what they think. I'll basically walk through what we went through and what we are planning to do. I know the PR itself, the proposal PR, or the issue would have been a better place to do that, but since this is part of 1.12, I think this will help accelerate the discussion.

B: If we discuss this as part of the meeting — and in case you want any other design-specific meetings, I can also discuss it there — I hope this is the right forum to dive a little bit deeper into what we are facing right now.

B: Let me know if there are any concerns; otherwise I'll start. The initial JSON substitutions proposal we had was planning to leverage the JSONPath library, which is part of the core Kubernetes code. It is used by kubectl, and it supports certain operators: you can do operations such as `parameters.something`, you can use `..` for recursion — recursing through the whole JSON — and you can find parameters using it.
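The dotted-path lookup being described can be sketched in a few lines. This is a stdlib-only Python illustration of the idea, not the actual Kubernetes JSONPath package (which is Go and supports a much richer expression syntax):

```python
# Minimal sketch of JSONPath-style dotted-path navigation, as in an
# expression like `{.spec.containers[0].image}`. Integer segments
# index into lists; everything else is a map key.
def lookup(doc, path):
    current = doc
    for segment in path.split("."):
        if isinstance(current, list):
            current = current[int(segment)]
        else:
            current = current[segment]
    return current

pod = {"spec": {"containers": [{"name": "app", "image": "nginx:1.24"}]}}
print(lookup(pod, "spec.containers.0.image"))  # -> nginx:1.24
```

The recursive-descent (`..`) and wildcard (`*`) operators mentioned next are what the real library adds on top of this basic walk.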
B: There are a bunch of operators it supports — like I said, `..`, `*` for wildcards, indexing into an array — those kinds of operators.

B: That's the first approach we took, but what we are realizing is that it might not be the right approach, because it will lead to tech debt for the community and the repo over the long term. What we are currently planning to pivot to: the path I explained is an `a.b.c` kind of thing, but there is also a method of referencing into JSON that uses slashes.

B: For example, you say `/spec/containers/0/image`. This is part of the JSON Patch RFC — JSON Patch is a well-known thing in the industry, and there is an RFC (I don't recall the exact number) — and kubectl, for patching various resources, provides this capability: you can say "do a replace operation on this particular path into the JSON" and give the new value you want to put there. So that is another capability.
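A JSON Patch document (RFC 6902) expresses that "replace the value at this slash-separated path" operation declaratively. As a stdlib-only sketch of just the `replace` operation (real implementations, such as the Go libraries kubectl builds on, also handle `add`, `remove`, `copy`, `move`, and `test`):

```python
# Sketch of applying {"op": "replace", "path": ..., "value": ...}
# where `path` is a JSON Pointer such as /spec/containers/0/image.
def apply_replace(doc, pointer, value):
    parts = pointer.lstrip("/").split("/")
    parent = doc
    for p in parts[:-1]:  # walk down to the parent container
        parent = parent[int(p)] if isinstance(parent, list) else parent[p]
    last = parts[-1]
    if isinstance(parent, list):
        parent[int(last)] = value
    else:
        parent[last] = value
    return doc

pod = {"spec": {"containers": [{"image": "nginx:1.24"}]}}
apply_replace(pod, "/spec/containers/0/image", "nginx:1.25")
print(pod["spec"]["containers"][0]["image"])  # -> nginx:1.25
```

(A full RFC 6902 implementation also decodes the `~0`/`~1` escapes in pointer segments, which this sketch skips.)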
B: The other route we started exploring: kubectl itself uses a library for JSON Patch, which is as follows. So the current plan is — JSONPath, at least, is not seeming to be fruitful; JSON Patch gives us these capabilities out of the box, which suits us. But a couple of things to call out: by changing approaches, we lose a couple of things and we gain a couple of things.

B: What we gain is that earlier, with JSONPath, we could not support operations such as these — you have `remove`, you have `copy`, you have `add`, and I think a couple more — and operations such as `add` were not trivial to, let's say, implement on top of JSONPath's code.

B: So we get some benefits out of it, and we are still following what customers are already aware of — using the patch route is very common in the industry and in the Kubernetes world — but we lose out on wildcards. In addition, another thing we had proposed was providing a current-value regex. The plan was that when we go to a path, we say: if the value at this path is X, only then change it to Y.
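That "only change it if the current value matches" guard can be sketched as follows. The function and parameter names here are illustrative, not from Velero or any existing library:

```python
import re

# Sketch of a current-value regex guard: apply the substitution only
# when the value already at the path matches the given pattern.
def patch_if_matches(doc, keys, pattern, new_value):
    parent = doc
    for k in keys[:-1]:
        parent = parent[k]
    current = parent[keys[-1]]
    if isinstance(current, str) and re.fullmatch(pattern, current):
        parent[keys[-1]] = new_value  # value was X, change it to Y
        return True
    return False  # value didn't match, leave it alone

pvc = {"spec": {"storageClassName": "gp2"}}
changed = patch_if_matches(pvc, ["spec", "storageClassName"], r"gp[0-9]", "gp3")
print(changed, pvc["spec"]["storageClassName"])  # -> True gp3
```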
B: The issue is that even in this library, to achieve that kind of regex thing, we would end up needing to make some changes in the library itself. So that is a gap in both approaches — regex support, in some sense, which we would have to add. So what I'm looking for from the community is:

B: Firstly, are you folks okay with moving from JSONPath to JSON Patch? We personally felt it solves pretty much the same purpose, and it's not a very negative thing. Second is prioritization — the need for wildcards in paths. We feel that most customers can still stitch up the path themselves; wildcards are a delight, but probably not a blocker. Third is regex — regex for the current value.

B: That is something we think we should probably support, but to support it we would have to fork the JSON Patch library to some extent. So, yeah — any initial thoughts or questions? Maybe some folks don't have the context; I can walk back through it a little bit. But if there are maintainers on the call who are familiar with this, your suggestions would be very helpful here.
C: I just checked the PR, because I didn't have the context — while you were explaining, I quickly went through it. Yeah, it looks quite powerful, actually. I think it will subsume a lot of transformations that exist independently today, like storage class mapping and things like that.

D: We should take those case by case, because some of those already have a well-known history of being used — like storage class mapping — and may not be the best fit for this, because that, you know, is easily configurable. But we can look at those case by case. I'd say first and foremost this solves a need for those that don't currently have internal actions — a lot of times you see an issue on GitHub that says "hey, I need to do this," and the usual response from maintainers is "you need to write a plugin for that."

D: Where this really shines is where a specific transformation is needed for a specific user workload that's not generically applicable to all users. The more generic things, like storage class mapping and some of those other ones, might make sense to leave as-is, because it's a generic need that's easy to document.

D: This approach is a lot more flexible than that, but it's also more of an advanced use case — I think users need to know what they're doing to use it. So for some of these general uses, where lots of users may need to do something very similar, I think those special-case actions still make sense; but this is great for the one-off cases and for advanced users that want to do something that's not out of the box.
B: Let me just point out: if you really say we need wildcard support, it's just a matter of modifying the JSON Patch library to some extent. It might not be trivial, but I think it is achievable. It's more of a scoping call — we could go out without wildcards and ship it after that. The only part is that we would pretty much end up forking this library into the Velero code base.

B: That's if we want anything additional — and by additional I mean regex and wildcards. So that is the evaluation I want from the community: it's not a very big library, like four, five, six files, but it will be a significant code base — would you be comfortable if, rather than just taking this as a dependency, we take a fork and have the freedom to add more features to it, like the ones I called out here? Any thoughts on that, Dan?
D: ...implement them. It's something to consider at least — obviously forking is sometimes something you have to do, but if you can get the project you're using to take your patches, then you won't need a fork.

B: It's a good idea, but actually the issue is that that library claims to be based on the JSON Patch RFC, so I really doubt there will be interest in deviating from that.
F: So I haven't reviewed the design yet, but it looks good to me — the use cases and stuff like that. One question, though: what if the user gives us a path and that's an immutable thing to change? How would you tackle that?

B: So this happens before the create call in the restore. If it's a new resource creation, it won't really land in an immutable state. The immutability issue comes when you go for a patch — you have the skip and patch existing-resource policies in Velero, right? With skip it's not really an issue, but with patch, I think, that's user-enforced.
D: ...the input is still, you know, the resource, and then the output is the modified resource.

D: That's right. So this is not a plugin — this is actually in the restore logic itself. Is that right? Correct.

D: ...the unstructured object that we're modifying here, and then we do the create call. So the effect is identical to doing it in a plugin, but we're not actually doing it in the plugin.

D: So I would expect this might be able to replace some plugins, but certainly not all plugins.

F: Yeah, I get that, but if restore item actions are able to solve your use cases, then I would ask: do we really need this?
B: I think there is mostly consensus around this here. I'm basically trying to defer discussing whether we want it or not — I think it's already part of the milestone; we debated this earlier as well. Customers have their own CRDs, and you can't expect every customer to go and write their own plugin. This solves it in a generic way for everyone, I think.

D: It may be that I should remember this — I think another way of asking a question similar to what you're asking, which again we discussed previously but I've forgotten the details of, is that this entire thing could be implemented as a plugin rather than directly in the restore logic.
B: Yeah, it was around the number of gRPC calls we might end up making to the plugin. That was the concern — there could potentially be hundreds of resources, and you don't want to make a gRPC call per resource to the plugin. That is why Daniel proposed that instead of going the plugin route, we make it core functionality.

D: Well, and again, this is kind of basic Velero stuff, so maybe someone knows the answer here: for the internal restore item actions — the ones that aren't built into separate images — do we make gRPC calls for those, or do those end up being just internal regular calls?

D: ...the ones that are implemented using the interface but aren't in a separate image like the storage plugins — they're in the Velero image itself. I didn't think those made gRPC calls, but I could be wrong, I guess. The point is, if the only reason to avoid it is the gRPC call: is it possible to write this as an internal restore item action that doesn't make gRPC calls? I guess we need to confirm that.
D: First, you know, whether our understanding is correct — I'm just thinking, if it uses the restore item action workflow, because it implements the interface but it's still an internal call when the Velero server registers all those internal restore item actions at server startup — I guess the question is: if those don't go through gRPC, then we could write this as a restore item action that takes...

D: ...the same inputs, just like it is now, so that the restore code flow and the workflow are unchanged, but we still avoid the gRPC calls. I know making it an external plugin in a separate image would involve that gRPC...

F: ...overhead, exactly. And regarding restore item actions for CRDs — that's the whole point of the plugin framework — yeah, definitely use RIAs and BIAs for CRD changes. Yeah.

D: ...some bit of configuration here that some internal restore item action takes care of for me, so that I don't have to write Golang code as an end user of Velero.
D: I think part of the issue here is that the overhead of writing a plugin just to make a simple change across all your resources, for an end user who's not developing a product based on Velero, is a bit high. Now, for complicated use cases that still makes sense — it's a very powerful thing to be able to write a plugin to do whatever you want there — but at the same time, this is about handling some of these more generic cases: those that are generic but simple.

D: So when a user has something that only matters to their cluster, but it's a simple transformation, this implementation will let them do that without having to write their own plugin.

D: Ideally it would be implemented as a restore item action, I would think — and I know we may be going back on something we decided to walk back — but if we can avoid the gRPC call, I think it makes sense. If performance concerns require us to do it differently, that's one thing; but if an internal restore item action also avoids the gRPC call, then these transformations happen through the same course as the regular plugin operations, even though we're not making gRPC calls.

F: Yeah, and I definitely agree about the ease and the performance — if those issues exist, then yeah, this feature totally makes sense. I just wanted to double-check with you guys.
D: ...I may be missing something — maybe Daniel had some other reasons for suggesting this; I just don't remember. I remember the conversation, I just don't remember all the details.

B: The reason I brought it up today was around this library support: are you okay with putting a little bit more code into Velero so that we are able to have that extensibility? Are you comfortable with that? That's what I wanted to touch base on — any thoughts on that point?

D: I guess one thing I would say is that if we're talking about doing things that are outside of the JSON Patch RFC, I wonder if, instead of just forking and hacking that library, it would make sense to create our own API that's built on top of JSON Patch. In other words, we do JSON Patch plus some things.

D: It doesn't bother me so much to fork a library to enhance it if we need to, in one sense; but if we're forking the library to make it no longer implement what the library claims to implement — that...
B: It would still implement whatever it implements so far; whatever additional functionality we need is just a cherry on top. For example: I just want to see what the value at this place is — I just want a function for that. The current library won't have a public function to do that specific thing. I just want to see what the value here is, do a regex match, and then call JSON Patch — and that function will be private, or not public, today. Those kinds of things.

D: I think at the same time it might be worth at least putting an issue in upstream to see whether these are the kinds of changes that would be accepted as patches — if they enhance functionality in ways that don't contradict the RFC they're trying to implement.

B: No, nothing — I was just summarizing this part. I'll revisit it, and for this I think I still need more input; I'll maybe start with the fork — or maybe start with the library — and we can take it from there.
B: I just wanted to feel the pulse of the community and let you folks know what I'm planning to do, so that it doesn't come as a surprise. Yeah, go ahead.

C: Okay, so as part of this, I'm guessing the Velero command will be enhanced to pass in this information?

B: Yeah, we already decided that — similar to how resource policies came in as a ConfigMap reference in the Velero CR, we got approval for passing this as a ConfigMap reference here as well. I have a reference here, basically.
C: Wonderful, okay. And one final thing: in the previous discussion I think we were all talking about this only being done before create — before the actual resource is created. But if the resource replacement policy is update, doesn't Velero actually patch the existing resource? So I'm guessing...

D: True, but to me that's kind of separate — kind of out of scope for this. This is just another way to modify a resource on restore, like any other plugin doing the same thing. So the concern about Velero patching on restore: anytime you restore something that's already there and we patch it, if you try to modify an immutable object or an immutable field, it will fail.

D: ...the code kicks in at the same place that a plugin does. So these are changes we make, basically, on the restore: when we restore, the first thing we do is load the JSON from the backup tarball; we then call the restore item actions, which make modifications to that content. So essentially, if you have a restore item action — even outside of this — that, say, removes...
D: ...a PVC, the end result is that we create that resource as if the backup tarball had the modified resource in it. So we read from a file, we make modifications to the JSON in memory, and then we create.

D: After you read the file, you make a series of modifications to that JSON in memory — this is one step in that chain — and then we do a create. If the thing already exists and the existing resource policy is set to patch — or update, sorry —

D: — then we patch it. If that patch fails because it tries to modify something that's immutable, then we get a failure or warning — I forget which one it is right now. And I think we're in the process of adding another option to that resource policy: recreate, so that if the patch fails — this new resource policy, which again Shubham was targeting for 1.12...

D: Okay, so if recreate is set, then if the patch fails because it tries to modify an immutable field, we delete the resource in the cluster and create it based on the restore. But again, all of that is kind of separate. The way I look at this — and this is true whether or not we actually implement it as an item action or in the restore workflow — is that this functions as yet another thing that modifies the content before we restore it.
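The restore flow just described — read the backed-up JSON, run it through a chain of in-memory modifiers, then create — can be sketched like this. Names here are illustrative, not Velero's actual code:

```python
import json

# Sketch of the restore pipeline: each restore item action (and, with
# this feature, each JSON substitution) edits the object in memory;
# only the final result is sent to the cluster API.
def restore_item(raw_json, modifiers, create):
    obj = json.loads(raw_json)   # load the resource from the backup tarball
    for modify in modifiers:     # chain of in-memory modifications
        obj = modify(obj)
    return create(obj)           # create (or patch) the modified resource

backed_up = '{"kind": "ConfigMap", "data": {"env": "prod"}}'
to_staging = lambda o: {**o, "data": {**o["data"], "env": "staging"}}
result = restore_item(backed_up, [to_staging], create=lambda o: o)
print(result["data"]["env"])  # -> staging
```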
B: Okay — any questions for me? I mean, I'm content to answer any other questions you had.

C: Oh no, no, I'm good. This looks pretty good and pretty powerful — especially for people who are building products on top of Velero. For the end user it may be a little bit tricky to provide all this information, but if the other products — like, for example, CloudCasa — make it easy for people to provide that information and create this config map behind the scenes, I think this will be really powerful. Yeah, thanks for doing it.

B: Yeah, so just to close: maybe Shubham, Scott, a few folks can help — I have gone through the plugin invocation code, but I am also not very sure whether it's gRPC or internal for internal plugins. If someone could help with that, that would be great. That's fine.
B: For this one I don't see a hard no against it — maybe a soft go-ahead. I'll perhaps start with it and see what the bare minimal changes are, or whether I can mix and match multiple packages — say, one package for just fetching details and one for editing. I'll play around with that. But for now I don't see strong opposition; it's a small package, so we can perhaps fork it as well. Yeah.
F: I've been reviewing a few more PRs for 1.12, and I had put up some bug fixes. I'd also ask the community folks to review the updated deprecation policy PR that we have in place, so that we can go ahead with the proposal workflow.

D: In terms of the deprecation policy, I guess our primary concern is: once that gets approved, are we moving ahead with the process of deprecating restic? Or were there other things as well that you were thinking of? Is restic the main thing we're talking about deprecating at this point?
A: Thank you. I have a few things. Daniel is on sick leave, so... we have until the end of the week to submit for the Open Source Summit. We started a document with Scott and Daniel to figure out the whole CFP and abstract and so on. Is anyone going to apply? Because the KubeCon China and KubeCon North America CFPs are ending on the 18th of June — that's this Sunday, I think. So, is anyone going to submit for North America, or...?

D: I'm pretty sure, on the Red Hat side, that I'm not available for that — I was just talking to Wes about it earlier today. We're trying to get approval for Shanghai, and part of that is, you know, deciding "hey, we're only asking for one," so that we can maximize the chance of getting the one. If we ask for two, it's slightly riskier. So that's my understanding of where we are with that.
D: And whatever slides we come up with for Daniel and me for Shanghai — assuming that gets approved — we can use the same ones there. And because they're both being submitted at the same time, that whole question of "if this talk has been done before, how is it different?" doesn't apply here, because we're submitting...

D: ...we can work it out between you and me and Daniel over the rest of this week, to get the content together for the abstract. Then we can submit both — for Shanghai, and for you, for Chicago. Yeah.
A: Yeah, okay. So in that case I think the panel discussion for NA will be dropped this time, then, as you're not sure you're going to join, and...

C: On the CloudCasa side, definitely — we are going to both; we'll be there at Shanghai as well as in North America. The exact people haven't been decided yet, for Shanghai at the very least, but it's guaranteed that some people will be there from CloudCasa.

D: Are you submitting a panel discussion for Shanghai, or is that not happening?

A: Sorry — we can submit for Shanghai as well, the same thing. If you want, we can have one submission with you and Daniel, and then a parallel one. Yeah.
D: Right, yeah. As I understand the rules for multiple submissions, each person can be submitted as a speaker for one of each type — in other words, Daniel or myself could be in a panel discussion and a regular talk, but not in two panel discussions and not in two talks.

D: And it's up to five people in a panel discussion, so we can figure out which five people we want to submit from those that are planning to be there. On the Red Hat side, Tiger or Wes may be available as well — I don't know, unless there's another panel discussion that Wes is submitting for; I'm not sure.
A: Yeah, and also I don't think the regular diversity rules apply for China, so the rules are a bit looser there.

D: Well, I do know the rules about the number of submissions you can have do apply there, because it was listed in the CFP.

D: I think it's three to five people, so we can submit up to five people as the speakers.
A: Yeah, we have to — yeah, okay. So I'm going to send links to the previous document we used to combine the ideas for the panel discussion.

A: Let's try to figure out over the next few days who wants to join that panel discussion, so we can fill out the form by whatever the 18th is — I think it's Sunday.
C: The CFPs, right — we are also considering something like namespace-level RBAC for Velero plus CloudCasa, because we have something like that; you know, that's possible today. But we haven't submitted yet, and frankly our record is kind of poor — we've submitted multiple proposals to almost every KubeCon. So you get a bit disheartened after a few years, yeah.

C: ...that it's not going to be accepted, right — so there's always some resistance, some inertia in that. But I guess this is a good topic and we have seen people asking for it: namespace-level RBAC for Velero — and you can do that with CloudCasa.

C: If anyone in this group wants to collaborate with us — at least be in the presentation, in this talk — let me know. But we will most probably submit that CFP for North America, I think. Yeah.
D: And another thing to keep in mind is that since this is KubeCon and Open Source Summit combined, there's also a cloud topic track in Open Source Summit that might be worth submitting to. That's actually what we were planning on doing with Daniel, because we have a similar history of Velero talks not being accepted to KubeCon — but we submitted to Open Source Summit North America last year and got it approved. I don't know if it's just that there are fewer cloud-related things submitted there, or what.

D: But since this has both Open Source Summit and KubeCon co-located, it might be worth considering submitting your talk to the cloud topic under Open Source Summit for Shanghai. Yeah.
A: The thing is, the number of submissions is so big that they — I mean CNCF and the people who are actually filtering out talks — have to prioritize talks which are directly related either to projects that are under the CNCF or to emerging projects and such. And Velero is super well established and still not in the CNCF, so it's not the most appropriate candidate from their side.

A: Open Source Summit is the right place, in my opinion, for this kind of talk. So yeah — it's still not clear how to put it, though.
A: Yeah, that would actually help a lot of parties — and my pocket in particular — but that's not what I mean. So, for the panel discussion: I'm going to send you that link — actually, I'm going to send it into velero-dev. So if someone else wants to join, we can put that in for both; as Scott mentioned, we're stretching the rules a little bit on whether it was presented before or not, so we can apply to both with this one. All right, so that kind of answers...

A: ...my question. So who's joining? Who is going to join Red Hat for Shanghai? Anyone from Dell who's going to join Shanghai or Chicago?
B: I was saying — not for Shanghai, definitely, but for North America we're trying; maybe I'll let you know in a few weeks, if I have more clarity.

B: Let me check with my team; if I have any update, I'll let you know. But for Shanghai we are definitely not there. Okay, yeah, thanks.
A: Okay, so in that case, for Shanghai, the panel discussion would be something between VMware and CloudCasa — maybe Scott or Tiger or Daniel from VMware and someone from CloudCasa. Okay. We have issues with ordering some swag for China, as the local people that are doing the swag are a little bit complicated to communicate with, but I'll keep you posted on that one. The same thing will be arranged for Chicago whenever we have the budgets open.

A: So that's from my side. Is anyone else planning to print and bring some swag for Chicago and Shanghai, and do you need anything from my side for this — I mean Velero-related artwork, plus some company logos around the place? By the way, I still have that blue one from CloudCasa, from Amsterdam.
C: Yeah, I mean, some of it we actually discussed. The reason I'm bringing this up is that I see some confusion — at least among internal testing people at CloudCasa — about the exact meaning of this resource replacement during restore, the update strategy. It's very clear when we're talking about resources such as config maps and secrets, for example: you just want to replace — update — the data.

C: But when you're talking about things like PVCs and pods, it's not as clear. For example, I have a PVC currently attached to a pod that is running: what does it mean to restore the PVC with the replacement strategy set to update? In my mind it shouldn't apply, at least — because you don't want... users may expect that files on the PVC may be replaced, which is not correct.
D: I mean, when we say resource replacement, we're talking about the step in the restore where we actually create the resource — so we're talking about Kubernetes metadata pretty much exclusively here. Remind me — I know the original proposal had not just a top-level policy but also a kind of resource-type-specific override, and I know...

D: ...we didn't implement that in the first version, for simplicity; and then when we talked about adding replace, we wanted to bring that back, because replace is something we may only want to do for certain resource types.
D: ...to have the notion of a replace policy across the board, because I think for some resource types that could be dangerous — if you delete something, you're not guaranteed you can recreate it. Yeah, I mean, I don't know that the timing works for this, but I'd be more comfortable having that policy available while also allowing resource-type overrides — so you could say, for this type, just leave it alone; and for this type, replace it. That would...

F: We can use this feature in conjunction with the resource policy feature, so it would come in handy.
D: ...set it to, you know, none — so that for certain types we don't modify at all, and then, say, for config maps and secrets or whatever, we set it to replace.

D: So that doesn't need to be part of this PR, I think.
D: You know, like I said, where we can include and exclude — and then we had the hybrid, where we took the default as what applied to anything that didn't match below, and then we allowed the overrides. What I remember we originally decided is: "hey, we're just going to do the easy, top-level default" — that gets 80% of what we need easily — but with the idea that we could then add the ability to override it for specific resources.

D: On top of that, you could say the default is going to be replace, or update, and then modify it per type. It just seems to me that doing a restore where you say replace — you know, delete and replace anything — gets an error on...
D: ...Kubernetes metadata in Kubernetes resources — not file systems or any of that.

C: ...from that. And the third question — probably the last question — is about the implementation itself. The doc says you compare with the existing — this is again for the existing implementation, not for the new one. So you compare the resource with what is already present and then you patch. When you say "compare": what exactly are the things being compared?
D
Then we compare it, and if it's the same as what's already there, we leave it alone. If it's different from what's already there, then the policy matters. If the policy is none, we just log a warning saying, hey, it's different, but I'm not touching it. If it's update, then we do a patch. If the patch fails, then we warn that the patch failed.
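The decision flow just described can be sketched as follows. This is a simplified illustration of the behavior as explained in the meeting, not Velero's actual code; the function and parameter names are invented:

```python
def restore_existing_resource(backup_item, cluster_item, policy, log, patch):
    """Sketch of the existing-resource handling described in the meeting.

    policy is "none" or "update"; patch(backup_item, cluster_item) returns
    True on success. All names here are illustrative assumptions.
    """
    if backup_item == cluster_item:
        return "skipped"          # same as what's already there: leave it alone
    if policy == "none":
        log("warning: item differs from backup, not touching it")
        return "warned"
    if policy == "update":
        if patch(backup_item, cluster_item):
            return "patched"
        log("warning: patch failed (e.g. immutable field)")
        return "patch-failed"     # the proposed recreate policy would
                                  # delete and recreate the item here
    raise ValueError(f"unknown policy: {policy}")
```

The proposed recreate/replace policy would slot in at the `"patch-failed"` branch: instead of stopping at a warning, it would delete the in-cluster item and recreate it from the backup.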
D
We warn that it was different, and...
D
Saying you tried to modify an immutable field and it failed. So we don't update the item, and that's where the recreate policy comes into effect: if the patch fails because of immutable fields or whatever, the next thing Velero is going to do is delete the item in the cluster and recreate it.
C
Right, but a general point, even for the new 1.12 replace feature: it's impossible to get delete-plus-create to work in all cases, and I think if somebody's restoring, it's almost guaranteed that they should expect, you know, in some cases they'll end up with an inconsistent system. It's simply not possible to guarantee 100%, yeah.
D
To do it with includes/excludes, you could have a restore with, you know, although there are dependency issues there too, but those are edge cases that we can deal with.
C
You might end up doing one restore after another, right? One where you do very selective replacement, right?
D
Right, right. Basically, one thing you could do is just do a replace restore, and then, if you get any warnings about the ones set to replace or update, you look at those warnings to see what failed. Then you do another restore with just those resource types replaced. So you set your include-resources to only include the things that failed.
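The retry workflow just described could be sketched as a small helper that builds a follow-up `velero restore create` command from the resource types that produced warnings. This is only an illustration of the idea; the helper is invented, and you should check the flags of the Velero CLI version you're running:

```python
def build_retry_restore(restore_name, failed_resource_types):
    """Build a follow-up restore command that only includes the resource
    types that failed in the first pass. Illustrative sketch only; verify
    the velero CLI flags against your installed version's docs."""
    include = ",".join(sorted(set(failed_resource_types)))
    return (
        f"velero restore create {restore_name}-retry "
        f"--include-resources {include} "
        f"--existing-resource-policy update"
    )

# From warnings on two resource types, the second restore would target
# only those types:
cmd = build_retry_restore("r1", ["services", "deployments", "services"])
```

The point of the pattern is that the first, broad restore surfaces which resource types conflict, and the second, narrow restore touches only those, limiting the blast radius of any delete-and-recreate.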
A
Thank you. And in parallel, we started the discussion about the roadmap and utilizing the wiki.
E
The section is just so you can see some of the things that we'd like, and who knows what release they go into initially, right, but yeah, I think that was created recently. We just haven't started using it, so I think it would be really helpful if we did, yeah.
D
We talked recently about scoping, you know, what's in the next release, but unfortunately we still have very little insight as to, for example, what's beyond 1.12 right now. You know, what else are we considering for 1.13, 1.14? Having these kinds of longer-term discussions, and in documents, would be good for planning purposes for anyone that's, you know, using this.
D
I mean, even, for example, specific issues that are not in 1.12 we don't have listed here. Like, you know, for example, we've been talking for a long time about allowing multiple backups to happen at the same time, working in parallel. That's a feature that's, in theory, on the roadmap, but you know it's not in 1.12, so we don't have it tagged with a milestone, and we don't know where it's going.
D
Or what the priority is. Having some way of kind of thinking about, okay, here are the things that we'd like to have in 1.13 and 1.14, and which of those is a higher priority. You know, even if we don't get around to the explicit scoping like we do with the current release, just some idea of where we're going next would be helpful.
A
Yeah, but in that case, I think we should do these kinds of sessions to discuss where we're going next. I'm not sure if we've done that before, like brainstorming ideas; right now we discuss pretty much what's currently on the table.
D
Obviously we don't spend a lot of time on that, because our focus is getting the 1.12 stuff together, but kind of knowing what's next is helpful from a planning point of view, even if we don't get around to making commitments, okay, this is really going to be 1.13, because that actual scoping and commitment part obviously won't happen yet.
D
That's the part that's always further out, but having some idea of what we think we'd like in 1.13 and 1.14, you know, would be helpful, even if it's not something we spend a whole lot of time on, yeah. And that also becomes relevant because, if we were, say, right now going through the 1.12 roadmap and doing candidate features, and we're saying, you know, we have this list of 10 features and we identify five of these as ones we know we want in 1.12...
D
So there might be five more features where we're saying: okay, we like these, but we don't have time to get them into 1.12; let's put them in as kind of potential for 1.13 or whatever. So, you know, even the scoping for the current release might end up filling in some of those next-release things, tentatively saying: we know we want this, but we know we can't do it right now, so let's move it to the next release as a kind of backlog roadmap.
A
I'll try to catch up with him tomorrow and get his ideas. Maybe he has some good ideas on how we can organize that thing, because the whole effort is clear, but I'm just not sure how we should organize it in the most appropriate way.