From YouTube: Velero Community Meeting - September 14, 2022
B
Everyone, I think we can start this meeting. Wait a minute.
B
We have reached the first point of the roadmap; that is, the feature release is on track as of today. For the individual features: for the Kopia integration, we have made it through the workflows for both the restic and Kopia paths from end to end, so the problem fixing and the smoke testing have been started, and the pod volume backup refactor has also been started.
B
That is the overall status. For the individual update: I myself am working on Kopia integration items, and I have also started the pod volume backup refactor. These are some of the items of the second phase of the Kopia integration. Besides that, I've worked on some issue investigation and fixes.
B
Okay, so for this one it seems we will have some kind of design and we'll have some reviews, right?
D
Yeah, I already provided some feedback on that initially. I think we still need to go through some of the edge cases there. It has the potential to be very confusing, because we also have the Boolean for whether we include cluster resources at all or exclude all of them, and then there's that kind of middle ground where we only include relevant ones. So we've got to make sure that those two interact in a way that's not confusing and doesn't lose any functionality.
C
I think I can add it back and we can discuss it later. Okay.
B
Okay, thanks.
A
Yeah, I'm preparing the 1.9.2 release, and we plan to release it at the end of this month. Second, I'm working on the Kopia uploader integration, and third, the code behind the original design is almost finished and under review, and I am doing the Velero end-to-end performance test. That is my update.
A
Okay. First, I added EKS to the nightly pipeline, and due to a temporary credential issue, I only picked some basic tests for this pipeline. And I have another item: I had an E2E test for the opt-in and opt-out pod volume backup, and there is some issue with this script.
A
The opt-in test does not behave as described in the velero.io documentation, so I think there may be some design issue. I will look into it and discuss it within the team. That's all from my side.
D
Yep. So, first off, on the plugin versioning: this is a reduced scope from the original plan, which was to get the V2 versions of these things in place as well. That hasn't happened yet, because those designs haven't been approved yet, so this is all the refactoring needed for the V1 so that we're ready to implement the V2.
D
Without this, the existing plugins are all broken, so this one needs to be merged pretty much as soon as we can get a second review on it. The following PRs that I have open are just additional refactoring. Basically, this puts everything in place so that, once we define what the V2 is, all the code we need from V1 is there, and we just need to add the V2 alongside it. That's it for that part.
D
This is the rest of the BackupItemAction refactoring, because about half of that refactoring was done with Fong's original PR. That first PR fixes the backwards compatibility, and then the one you're just looking at, 5271, does the remaining changes needed for BackupItemAction. Then I have one PR for RestoreItemAction and one for VolumeSnapshotter.
D
Those are the three plugin types that we know we need a V2 for in 1.10 and 1.11, because those are the ones where the item action plugins are going to change, so those are kind of the most important ones to get the V1 refactoring in place for. For the other plugin types, ObjectStore and everything else, that refactoring will have to wait until the 1.11 time frame, because it hasn't been started yet. But again, these are all changes that will have no user-visible modifications; there are no existing plugins that are broken.
D
No one that's using Velero needs to worry about these changes. If you're pulling the new code into your plugin, there may be cases where you have to use a different package name, but that's only relevant if you're actually upgrading to 1.10 and building your plugins on top of that; existing plugins still work, and I've tested that. I have another PR that I didn't link here, which is listed as a draft not to be merged. It's a proof of concept for what a V2 BackupItemAction would look like. I just created a new method that doesn't really do anything, just to prove out that if you build a V2 plugin and update the backup to use V2, then you can register the V2 plugin and it works.
D
Existing V1 plugins continue to work through the adapter. That's just proving out the concept of plugin versioning: when you add a V2, you don't break existing V1 plugins. That PR is not to be merged, as it's a proof of concept; it doesn't actually build out what we want for V2, it's just a test. So that's it for the plugin versioning. On the volume snapshot location credentials: that PR was merged.
D
So if you're using AWS for volume snapshots, that just works; I've tested it. We still need to add support for GCP and Azure for credentials per volume snapshot location. It's probably not a very large PR needed there, because the change when the support was added to AWS was relatively small, but that's not in place yet.
D
So the actual Velero plugin for GCP and the Velero plugin for Microsoft Azure, that's where that support would need to be built in. There's no additional change needed in the Velero code base other than what's already merged.
E
I have a couple of questions. First, regarding the plugin versioning: I thought we discussed that we want to introduce the RestoreItemAction and BackupItemAction V2 in 1.10.
E
Okay, I see. Because that progress monitoring was not part of 1.10, I was a little confused. So then later we decided to combine those efforts, so that will impact both the BackupItemAction and the RestoreItemAction, right? I was thinking it would only impact the BackupItemAction.
D
Well, it's actually all three of those, because of the async actions: the ability to run these uploads or other long-running plugin actions. The data mover design that Shubham has been working on needs those both for BackupItemAction and RestoreItemAction, which is why that design PR includes BackupItemAction, RestoreItemAction, and VolumeSnapshotter. All three of those will need a V2 plugin API that adds a cancel method and a status method.
E
I see. So then, after this meeting, I will double-check the issues in the 1.10 milestone, and if there are any issues regarding the V2 BIA/RIA plugins, I will add a comment and move them out of 1.10.
D
Yeah, I mean, I'd love to get them in, but again, the problem is that those are kind of tied up in the design not getting approved, and since we're already at feature freeze, I don't know how that's going to fit together at this point. If we get an approved design in the next few days, then maybe we can start doing that and throw that in there, but I just don't know where we are with the other design reviews at this point. For sure, one thing I know we need to add to the V2 for those plugins is the item action progress proposal.
D
I think we had another feature relating to, I believe it was just restore. It escapes me at the moment; it's a long-standing issue from a year or so ago that I think already had an approved design.
D
That also will need to be added to the API, and that factors into the design of the V2 API, because there we say: okay, here's what we're adding, here's what we're changing. Basically, the expectation at this point is that we're adding new methods and possibly adding fields to the arguments to the functions. And then, because of the adapter design, these are optional things that not every plugin needs: the adapter will just implement a no-op version of that function that just returns without doing anything for the V1 plugins.
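The adapter idea described here can be sketched in a few lines of Go. This is a toy illustration, not Velero's actual plugin API: the interface names, method signatures, and the `Progress` method are all invented for the example; the real V2 design was still under review at the time of this meeting.

```go
package main

import "fmt"

// Hypothetical V1 interface: existing plugins implement only Execute.
type BackupItemActionV1 interface {
	Execute(item string) (string, error)
}

// Hypothetical V2 interface adds an optional Progress method.
type BackupItemActionV2 interface {
	Execute(item string) (string, error)
	Progress(operationID string) (string, error)
}

// v1Adapter wraps a V1 plugin so it satisfies the V2 interface.
type v1Adapter struct {
	v1 BackupItemActionV1
}

// Execute delegates to the wrapped V1 plugin unchanged.
func (a v1Adapter) Execute(item string) (string, error) {
	return a.v1.Execute(item)
}

// Progress is the no-op version mentioned in the meeting: it returns
// immediately without doing anything for V1 plugins.
func (a v1Adapter) Progress(operationID string) (string, error) {
	return "", nil
}

// AdaptV1ToV2 lets the caller treat every plugin as V2.
func AdaptV1ToV2(p BackupItemActionV1) BackupItemActionV2 {
	return v1Adapter{v1: p}
}

type legacyPlugin struct{}

func (legacyPlugin) Execute(item string) (string, error) {
	return "backed up " + item, nil
}

func main() {
	v2 := AdaptV1ToV2(legacyPlugin{})
	out, _ := v2.Execute("pod/nginx")
	fmt.Println(out) // prints: backed up pod/nginx
}
```

The key property, as discussed, is that adding a V2 interface never breaks a V1 plugin: the adapter supplies the new, optional method, and the old behavior passes through untouched.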
E
I mean, for the adapters, we can introduce them when we implement the V2, yeah?
D
They're actually tied in there, and if you look at my proof-of-concept draft PR for the BackupItemAction V2, you see an example of that. Those adapters are actually built in the same packages as the restartable BackupItemAction and RestoreItemAction, so basically, because they're in the same package, the plugin author doesn't worry about it. You create a V1 plugin; you create a V2 plugin.
D
You can actually register both V1 and V2 plugins in the same plugin image, because there are separate functions for that. And when Velero, for example the backup processor, says "give me all the backup item actions," it's getting a list of V2 plugins, but behind the scenes Velero is taking the list of all the plugins and using the adapters to take the V1 plugins and adapt them to V2. So for a given plugin name, if there's a V2 plugin, you get it directly.
D
That's why I created that draft proof-of-concept PR: so you can see how it will be done, and also so I could test to make sure that, once we define that V2 plugin API, all the refactoring that I've done for BackupItemAction is the right refactoring and the two work side by side. So that was the reason for creating that draft PR, so I could test it out and make sure it actually works.
E
Then, to follow that up, I see an issue with that: when we were planning to add in the V2 plugins, we didn't consider this progress monitoring.
D
Well, we didn't plan that, and in fact the first version of the data mover async plugin design was actually a completely new plugin type. And then we realized, from some feedback on that proposal, that this was actually similar to Dave's original upload progress monitoring, and we realized that it made a lot more sense not to create totally brand-new plugin types, but instead to create a V2 of existing plugin types. That's what happened with the V2 plan, so we ended up adding to what we needed for V2.
D
Then we can add that; that's fine. I don't know if we're at a point where we have that defined, other than the item action progress that we need. I know Fong had one at one point relating to timeouts; I'm not sure what the status is of that or where it's needed, and whether that's something for this point, late in the process. I know on the OADP Red Hat side, we had two things in mind for the plugins.
D
This item action progress is the main thing, and then we had another thing that's really lower priority for us, so we're fine if we can't get the item action stuff in in time. The other stuff is also something that we're fine to wait on, because it's not a critical thing at this point, whereas the item action stuff is more critical because it's kind of on the critical path for getting a scalable data mover in place.
E
Okay, yeah, thank you. The other thing is regarding the VSL credentials. After this meeting, I'll create a couple of issues for the GCP and Azure support for the VSL credentials. Hopefully we can have them done in 1.10, but no promises. And if a plugin does not support it, it doesn't break anything, so it's okay, right? Yeah.
D
Exactly. Adding the credentials doesn't make GCP and Azure any worse, although if you try to use the feature there, it's not going to work for you. So we could always release with that being one of the known issues, and all it takes is a plugin release; we don't even have to have a 1.10.1 to release it.
D
For example, if 1.6 is the plugin release version, we can release a 1.6.1 plugin release that includes that, and it doesn't require a whole new Velero release. So, because it's in the plugin, we could possibly get that released.
D
I'm just saying, if we don't make the deadline for whatever version of the plugin we're releasing with 1.10, since it's only a plugin release, we might be able to get it out pretty quickly after that if we miss it. And if we get it in soon enough, then obviously we're good. But we don't need any additional code for VSL credentials in Velero itself; it's just in the plugins at this point.
D
Thanks. And again, AWS is there already, so if you're using AWS, you're good. It's only GCP and Azure that currently don't support that.
B
Okay, thanks Scott and Daniel. Next, please.
A
Yeah, so I have one issue regarding the timeout we have for ensuring the restic repository. I think initially it was just one minute. We had some scale testing done for restic in our environment, and for 200 namespaces using restic, the backup was failing partially with the error that the ensure-repo timeout was being exceeded.
A
So I just wanted to bring this issue to attention, and I even posted a PR to increase the timeout to five minutes. Five minutes worked for us, and we want to make this configurable in the next versions of Velero and increase the timeout, that is, give the user the choice to increase the timeout. So what do you guys think about this?
D
I mean, one minute should be plenty of time, but the problem is when you scale this up. If you're doing a backup with a bunch of namespaces, it might take a minute and a half, two minutes, three minutes instead of less than one minute, and then your backups are just failing on you because of that. So in the short term, just bumping it up to five minutes seems to make sense as a default.
D
But if we eventually make it configurable, then someone who doesn't want five minutes can go back to one, or they can make it ten if they need to, because they have a thousand namespaces or whatever. I think changing the default to something larger than a minute is the quick fix that makes it work for now, and then longer term we can add the configurability with another parameter.
B
So that means, okay, when we have many namespaces, we will connect, or run a restic command to connect, to the repo many times, and this is where we need a lot of time, right? Yeah.
D
Yeah, and really this shouldn't change the time for the backup, because this is just how long we wait before we give up and error out. So I guess there are a couple of cases. In the case where it's already working, increasing the timeout won't affect you, because the restic repository is already found before the timeout hits.
D
So you don't even notice the difference. If it's taking longer than a minute but less than five minutes, then increasing the timeout is the difference between a failed backup and a successful backup, which is what we're accomplishing here. If it's taking more than five minutes, in the case where the repository really is not ready because there's a problem, the five-minute timeout means it'll take you four minutes longer to time out.
D
If there is a problem. But again, in terms of a percentage of how long the overall backup takes, I think that's a good trade-off versus failing backups when everything's working fine. But we do eventually want to make this configurable, because eventually someone's going to increase the number of namespaces, or they're working in a cluster that's overloaded and slow, and maybe it takes six minutes instead of five and it fails again. So if it's configurable, if they know they have a slow cluster or they know they're making a huge backup, they can just change the timeout setting. But I think as a default, especially a non-configurable default, one minute is just too small: it's not going to scale, it doesn't scale. In our case, it was working fine for 100 namespaces and failing for 200.
D
And those were test namespaces with, I think, just one volume per namespace. So if you had a larger backup with more going on per namespace, you might hit this even earlier. I'm not sure what the cutoff is or what the cause is. I just know that when we create 200 at once and then wait for them to be ready, a minute may not be enough time. In fact, it probably won't be enough time.
B
I still didn't get the problem here. We actually create one repo and wait for that one repo, right? So why does it have something to do with the number of repos we are going to create? Because we create one repo here and then we wait for that repo, and it should not have anything to do with the count of the repos or be related to the other repos.
D
I mean, I'm not sure. When this is run, I know we're creating a channel and waiting, so that's all done in the background, right? I think these are being done in parallel. Is that the case?
D
I had the impression that these were running in parallel, that each of these threads was running in parallel. So if you're creating a bunch of repositories, because we're creating a channel and then waiting for this to happen in the background, I'm not sure; it may be that we're doing one at a time here. I just know that in the test environment, the time increased as the number of namespaces increased. It may be that it's just slower because of the queries: when you're iterating over 200 restic repositories versus only 100, it's taking longer because of that, in other words, the cluster itself.
E
Yeah, I think that's a valid doubt, but it seems there is a lock added on the file system side when you're ensuring the repo, right? I'm not quite familiar with that piece of code, but I have that impression.
D
I mean, we're definitely getting the "timed out waiting for restic repository to become ready" log message. That was how we knew this code was causing the problem, because we provided a modified image to the scale testing team that times out in five minutes instead of one minute, and the errors went away. This is, again, why I think ultimately we want this to be configurable, so that if it's failing in particular environments...
D
Then we can have those users increase this number for just them. But without it being configurable, and maybe five minutes is not the right answer, but something larger than one minute, because one minute is kind of an arbitrary thing of "oh, this should be fast, so let's call it a minute." The problem is that that number was not developed as a result of testing real-world environments. It was just: "it looks pretty fast in my environment, it's taking a few seconds, let's wait a minute." It could have been two minutes, it could have been three minutes, but it was one minute, and it just doesn't seem to work in environments with more namespaces.
B
Yeah, I think I also agree that we need to further check the repo-ready code. And Daniel, about what you suggested, there may be a reason for this: for the repo creation, we actually call the restic list snapshots command, and if I remember correctly, it acquires an exclusive lock that will make all the repo connections sequential, right?
B
That may be the reason, but I'm not 100% sure right now. And another thing is, if we enlarge the timeout, one impact is that if the repo creation, for example, has some problem, then in the worst case we will wait for that long time, right? I think that will be the impact.
D
In the case where it's not working at all, then yes, a larger timeout means it will take you longer to get the error added to the backup.
E
Yeah, so Shubham, you need to reach the scale goal in 1.10, right? You need to make sure that's successful for 200 namespaces.
E
Yeah, I think eventually we will probably fix it. I mean, for this one time, we may probably merge this PR to increase the timeout, but I would suggest we dig a little deeper to understand why.
D
I mean, if you find the root cause, where things are being slower with more namespaces than they should be, because there's some inefficiency or someplace where we're getting locked when we don't need to be, that would be a better solution. And then, if we found that, we might be able to reduce that timeout again, if we found a definite "oh, this is why it's taking longer for 200 namespaces, and it shouldn't be."
D
A code change somewhere else might make that difference, whether it has to do with locks or with trying to do things in parallel; I'm not sure. But if we can identify a bottleneck somewhere that actually makes things go faster, making it go faster is better than increasing a timeout. At the same time, this is just like the restic timeout, where that's configurable: if there are cases where a restic backup is taking longer, then we need to increase that timeout, because what we don't want is to report to the user "hey, your backup failed because it took too long," even though it would have finished if we'd given it a little more time.
B
Okay, another thing: if it's really related to the restic lock problem, actually for Kopia we don't have that lock. So maybe we can try the Kopia path sometime and make a comparison, to see whether this is related to restic or to the API server, right? That may be some way to troubleshoot. And another thing is, we refactored the repo ensurer several days ago, so when we want to dig in, we need to refer to the new code, yeah.
D
Makes sense. That's kind of showing what customers are seeing right now, and then, when we hit issues, if we identify "oh, this is a problem that's been fixed in main," we can try to go down that path. We didn't see any obvious way that main has fixed it, but since it's been refactored, it may well be possible that the refactoring changed the dynamics there. So it may be worth repeating the test with the version on main.
E
So I would suggest we spend a little more time to find the root cause before merging the PR. You guys are okay with that, right? Or do you want to merge the PR first and then find the root cause?
A
Can we get this PR in for 1.9.2?
A
And yeah, because after the Kopia integration the code is changing, so it would be okay, right? Our 1.1.1 will be based off 1.9.2.
D
People are going to be using 1.9.2 before they're using 1.10, so getting it into 1.9.2 means we then also need to figure out, before we release 1.10, whether we also need it in 1.10, or whether it's not needed anymore because of the refactoring. But this is essentially a bug fix, not a new feature.
D
This is something that can go in after feature freeze, and then it's fine.
D
At the right time, when we're not refactoring the Kopia stuff anymore, whether that's this week or whenever, at that point I'm going to get a build for the people who are doing the scale work, and let's use that instead. That would essentially be doing the scale work based on what's going to be in the OADP 1.2 release rather than the 1.1.
D
Yeah, that will help us know whether we need this for 1.10 as well. If we can see something in that refactored code, where in a different place we can fix some bottleneck that's causing things to take longer, that would also solve the problem in a different way. Yeah.
E
Yeah, I think the PR itself admits it's a quick fix, and it works. So it seems a reasonable thing for 1.9.2, because even if we found the root cause and a better change to the lock mechanism, we are probably not going to make that change in 1.9, right? Exactly, yeah. So would you mind creating another PR?
B
Thank you. Now we come to the discussion topics, and the first one from me is some rename work. Actually, we have already discussed this a little during the design PR, right? So here, specifically, we want to rename the restic daemonset to the Velero node agent daemonset. That is just a proposal, so if anyone has some concerns, we can just bring them out; it's still at an early stage.
D
I'd say I like the new name; I think that works well. I think "default to pod volumes" is confusing, because a user isn't thinking in terms of pod volumes. Restic is a file system copy versus a snapshot copy, so I wonder if we want to say "default to file system copy" or something like that, because pod volume is just an internal thing we use in Velero to use restic or Kopia. It's not something a user thinks about, because a pod volume is just a pod that has a volume, and you have those two things even if you're doing snapshots.
D
So instead of "default to pod volumes": "default to file system backup" or something like that.
D
Right, you're right about "file system backup"; that's not a perfect word either.
D
Yeah, because basically there's one switch, which is: do we use something in Velero, using the node agent, to copy the content of the file system directly? That's what we're calling "default volumes to restic," for example, in the current 1.9. And if you don't do that, then what you're doing is either CSI snapshots or a native volume snapshotter.
E
Yeah, I think it's okay to rename it to "default volumes to file system backup," if that sounds better, because we are not using this term in the CSI snapshot data mover scenario, right? In that scenario, we will add some attribute to the data mover CR or somewhere to define that, so that's a different option.
D
What data mover does... I mean, this is more about what Velero does in the backup and restore process, and at that level Velero doesn't care about the data mover. It's: we're using CSI, or we're using volume snapshots, or using some plugin to do this, versus we use this file system backup, which means using the Velero node agent to perform the copy.
B
Yeah, because at present, "default volumes to restic" means default to pod volume backup. So if we intend "default to file system backup," there is not any indication in the name that says it is pod volume backup, even though we know that for the CSI data mover we don't use that; but that's from the name, or from the user's impression.
D
Since we're renaming restic to the node agent, we could also say "default volumes to node agent," saying we're going to use the node agent to make the copies.
D
"Node agent" in the name, that would also work.
D
When using the data mover, you're using the CSI plugin, so we're not doing this, and we're still making that decision in Velero.
D
Are we going to use the node agent to copy the file system directly? Or are we going to not do that, and instead let the CSI plugin do what it needs to do? Everything in terms of the data mover, using the volume snapshots and volume snapshot contents from the CSI plugin, Velero itself doesn't worry about any of those details. It just invokes a plugin; it's a backup item action. Again, it's an item action plugin.
B
Yeah, I agree that in our code path it's totally different, but from the user's impression we can also say that the pod volume backup is something like a data mover, right? It is backing up data using the file system way, and the CSI data mover is also using the file system way to back up the data. So, trying to answer Daniel's question: for me, "file system backup" cannot represent only the pod volume backup. That is it.
E
It does not. I mean, if we change it to "default volumes to file system backup," it does not represent only the pod volume backup. We can set this default-volumes-to-file-system-backup to false and still use CSI and the data mover.
D
Right, I mean, this isn't really representing either one directly. This is basically just how you tell Velero what to do normally, if there's not an annotation; it's just a flag. So I'm just trying to think in terms of what a user is going to think about: if a user says "hey, I want to use CSI," or "I want to use volume snapshots," that's the default.
D
If you don't set this, what is the user going to think? "Hey, I want to use Kopia; I want to use restic." When we only had restic, it was easy: "I want to use restic," that's what the user would think, and so they would put that in there. So "node agent" or "file system," either one of these. And a lot of this is just what we document. As long as we document that if you use this flag, this is what happens, we just need to make sure that whatever term we use here is integrated into the documents when we talk about the opt-in versus opt-out way of making these decisions.
E
So I think either of the two, "default to file system backup" or "default to node agent," is better than "default to pod volume," right?
D
And the only reason I'm thinking that is, again, I'm just thinking as a user: pod volume backup is kind of an internal Velero CR that we use to hold information internally, that the users aren't really working with directly. The user is thinking "hey, I want to use Kopia," "hey, I want to use restic."
D
My thinking is that I'm trying to figure out what would be clearest to the user. Pod volume backup is not something you specify anywhere in the backup CR, and the annotations you're talking about... well, I guess there's another question, and I just don't remember: right now in 1.9, when the user is using restic, they have the annotation that says restic volume. So what are we calling that annotation in 1.10?
B
Okay, okay, let me just add one more thing for this one. Why do we change the daemonset name to node agent? We want to make the daemonset generic, for general purposes, not only for pod volume backup in the future. So if we call it default-volumes-to-node-agent, it means whatever functionality the node agent provides, we will use that for the...
D
Then, where we are now and in the future, it's only going to be used for things that have to be on a specific node, because that's why we use it for Restic and for Kopia: we need to be on the same node as the pod that mounts the volume, and so the node matters. Any generic functionality where the node is not relevant would be something we'd probably put in the regular Velero pod.
D
Okay, the things you're going to put in the node agent pod are things that need to run on a specific node based on where a pod is running or where a volume is mounted, which right now is just Restic and Kopia. But if there is future functionality that is also node-dependent, that could go there as well.
B
Yeah, but we already have a clear view of this; we will discuss it offline. And the final one is: maybe we can name it after file system backup. That would be clear to me, but I'm not sure about the others. Let's discuss this offline, because we have run out of time. And this one is something we have found during the refactor. So it's like...
B
We have two ways to get the snapshot ID for the restore. The first one, which is the current way, is that we get it from the PVB's status, right here. And actually, after that, if we don't find it, we fall back to this function, and look here: we find the snapshot ID from the pod annotation. This is a duplicated way, and we actually added this deprecation message in a very old release.
B
So since we are doing the refactor here right now, we want to just remove the old way. We want to see if there is any concern from anyone; if not, we will remove it. But maybe we cannot make a conclusion here right now. If so, we can leave some comments here. I'm not sure if any... yeah.
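The duplicated lookup being discussed can be sketched like this. It is an illustration of the fallback order only: status first, legacy pod annotation second. The annotation key format shown is hypothetical, not necessarily Velero's exact historical key.

```python
# Sketch of the duplicated snapshot-ID lookup described above: the
# current path reads the ID from the PodVolumeBackup status, and only
# falls back to the legacy pod annotation written by very old releases.
# The annotation key below is illustrative, not Velero's exact one.

LEGACY_ANNOTATION_PREFIX = "snapshot.velero.io/"  # hypothetical key format

def find_snapshot_id(volume, pvb_statuses, pod_annotations):
    # Preferred: the PodVolumeBackup CR recorded the snapshot ID in status.
    for pvb in pvb_statuses:
        if pvb.get("volume") == volume and pvb.get("snapshotID"):
            return pvb["snapshotID"]
    # Deprecated fallback: 1.0-era backups stored the ID in a pod annotation.
    return pod_annotations.get(LEGACY_ANNOTATION_PREFIX + volume)
```

Removing the deprecated path would delete the second branch, so only backups old enough to rely solely on the annotation would be affected.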
D
Good idea. As long as everything works, I mean; obviously we need testing to make sure that removing this doesn't break anything. But this seems like exactly the thing we should have probably removed several releases ago, and like you said, since we're totally refactoring this, removing something long deprecated, this is the perfect time to do it. Otherwise you're going to have to build snapshot ID annotations into new functionality for Kopia, which doesn't seem to make a lot of sense.
E
Yeah, it seems that in history there was a 1.0 backup format, and then later, in Velero 1.0, they started using backup format 1.1. So if we remove this deprecated method, that means Velero will no longer work with the 1.0 format. But I think that's okay, because that has to be very old.
D
Well, and I may be misunderstanding this, but it sounds like if we're already using the status field for this, then even for an old Velero backup, if we're setting both of them, we're fine. It's only a backup old enough to only use the annotations, because I guess at one point we used annotations, and then we added the status field but kept the annotation support.
D
You know, the point is that I don't know what version of Velero was the latest one that didn't have the status field. I'm guessing it's Velero 0.9 or something before 1.0.
D
I don't think we still support Velero 0.9 backups in 1.10, so I don't think removing the deprecated code is a problem. I mean, that's the reason we deprecate: when you deprecate something, you say, okay, in some future release, at least one release from now or two releases from now, we're going to remove this. So we don't have to rely on it forever.
D
And the annotation was the older one; it's actually the annotation that's the deprecated one. Okay, so I think what this means is that at some point (we can probably go back and look at that PR), in a very old version of Velero, two or three years ago, the first version of this used annotations. Then someone decided we want this in status instead of relying on annotations, so we added the status field and deprecated the annotation version of it. Yeah.
D
So in Velero 1.0 we had the deprecated annotations and we were using status, with the expectation that, say, maybe Velero 1.1 would remove it completely, but then we never got around to removing it.
D
A 1.9 backup with Restic will restore on 1.10 without this annotation code. I think that's probably good enough to say we're good, because what I suspect is the case is that the only backups you're breaking by removing this are from Velero 0.9, or whatever release was before we added the status.
B
Oh, the code for adding that annotation was removed a long time ago. So currently we only have the getting side.
D
And by the way, I don't know if we have a published list somewhere; we probably should, if we don't, of what versions of Velero backups are supported. In other words, if I back up with Velero 1.9, I expect that with 1.10 I will be able to restore that backup. I probably expect a 1.7 backup will work with 1.10. I probably don't expect a 1.0 backup will work with 1.10.
E
We do not have an official message saying that the latest Velero will only support, like, n-minus-however-many versions of backups. But I think for this particular case we are relatively safe to remove that. We just mentioned that the code that was adding the annotation during backup was removed in a very early version, right?
D
And that's what I'm saying: we know that a user should expect that a 1.7 or 1.8 or 1.9 backup will still work with Velero 1.10. We don't expect that 1.0 backups still work. I don't know where that cutoff should be, but I think the version where this was added was old enough that at this point, especially since we deprecated it several versions ago, this should be safe to remove.
D
I don't know what the reasonable expectation is. We haven't documented what that expectation is; what we haven't told users is what to expect. So every user has their own expectation in their head right now of what they think is reasonable. We probably should start publishing that at some point, but that requires testing to see what actually works.
D
But that cross-version thing is especially relevant when we release a new version of Velero that removes support for older Kubernetes versions. Because if you have a case where I have a backup from a cluster that can't use the latest Velero, and I want to restore that in a newer cluster, that's where cross-version backup and restore is actually most relevant, and that's where having some kind of documentation around expectations would be helpful.
D
But yeah, in this particular case we're talking about Velero 1.0 versus 1.10. I mean, that's probably a three-year time scale, and once you deprecate something you're saying we're probably going to remove this in a couple of releases, and it's been many more than that. So I think at this point we're well beyond the point where we need to keep this.
B
Yeah, it looks like it is a generic question for Velero, how to remove deprecations, right? So maybe we add it in the release notes or something like that beforehand. Oh.
D
Yeah, we definitely need to. If you're removing something that was deprecated, you definitely need the release notes to mention that clearly, not buried in the list of changes; somewhere fairly prominent. And we should be clear before we do that. Let's look it up to see what Velero version added the status, so we can say for sure this is only relevant for backups older than that version, just to be clear.
B
Yeah, maybe we can continue the discussion in the Slack channel and we will make a decision. We'll add that to the release notes: either we remove it in 1.10, or we add something to the release notes saying that it is deprecated and not yet removed, something like that.
C
Okay, okay. So can I share the screen?
C
This separates resources into the cluster scope and the namespace scope. Especially for the include-resources parameter, it means we're only including the specified resource types, so it would be hard to implement including specified resources in one group, for example the cluster scope, while including all resources in another group, for example the namespace scope.
C
So that's why I propose this design. I think there are some comments from Scott, and I think the main idea of Scott's comments is that it's hard to replace the include-cluster-resources parameter, because it has three values and they have different meanings.
D
Right, yeah. Basically, what I guess I'm saying is that we're adding new fields, and these new fields are listing specific resource types to include or exclude. We still need that Boolean pointer field, because one of the ways we use the existing Boolean field is: right now, if we set it to true, that includes everything, and I think that "everything" could then be filtered and modified by these new fields.
D
Likewise, if you set it to false, again you're saying you don't want anything. The new flags, if we added them, could cover include-everything versus exclude-everything. But what they don't cover is that default case, the auto case, where it's set to auto, and that allows us to do things like only include PVs...
D
...when the PVC that it's linked to is backed up, or only include CRDs when we have a CR of that type that's backed up. And so we don't want to lose that functionality.
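The three-valued behavior just described can be sketched as follows. This is a simplified illustration of the true/false/nil (auto) semantics of the existing Boolean pointer, not Velero's actual code; the `is_related` parameter stands in for Velero's internal "is this PV/CRD related to the backup" checks.

```python
# Sketch of the three-valued includeClusterResources pointer discussed
# above: True means include all cluster-scoped resources (still subject
# to the other include/exclude filters), False means include none, and
# None (nil/auto) means include only those related to the backup, e.g.
# PVs whose PVCs are backed up, or CRDs with a backed-up CR.

def include_cluster_resource(include_cluster_resources, is_related):
    """include_cluster_resources is True, False, or None (auto)."""
    if include_cluster_resources is True:
        return True       # everything, subject to the other filters
    if include_cluster_resources is False:
        return False      # nothing cluster-scoped at all
    return is_related     # auto: only resources tied to this backup
```

The point made above is that the auto branch cannot be expressed as a static list of resource types, which is why the Boolean pointer has to survive alongside any new list fields.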
D
With this change, so... I think it's fine. I just think that the existing Boolean pointer field needs to exist simultaneously with these new fields, and we need to be clear in the way we name them as to what their function is. In other words, include-cluster-scope set to true kind of enables the new parameters to say: okay, we include everything, and these includes and excludes then let us filter down which ones we actually mean.
D
We don't really want every single thing; we just want to enable including everything subject to those filters. Whereas if you set it to false, you're saying ignore cluster scope entirely: I don't want anything cluster-scoped. And then again, that auto case is where you only want to include certain things, but you can still use the filters to refine those certain things. So, in other words, that Boolean kind of gives overall behavior that I think we want to preserve, but these fields here are really just replacing, because right now we have...
D
...the single lists, include-resources and exclude-resources. And I think the intent here, as I understand it, is to still do the same thing, but in two categories instead of one.
D
So I think, as long as we preserve the behavior of the existing Boolean field, we're good there. The other question or comment I had was, again: right now, with include-resources, where everything's together, if you don't specify it, then the default is to include everything.
D
It might be better to do the same thing with the include-cluster one; I'm not sure. Or maybe, because it's a new parameter, we can change that and say you have to specify star if you want everything included.
D
But that's just one thing we should figure out. The other thing I wonder about is what happens if a user puts a cluster-scoped resource in the namespaced include-resources list, or a namespaced resource in the cluster one? Would that just be a validation error? That might make sense, but...
D
I wonder, because if you have an old backup, its format mixes those into one field.
E
Yeah. I think when we discussed this earlier, we thought we should also introduce another set of parameters to include and exclude the namespace-scoped resources. So we use this include-cluster-scoped-resources only in combination with the include-namespace-scoped-resources, and keep...
D
...the existing one working the way it does. And this is kind of analogous: remember when we added the OR label selector for 1.9? We have validation to say either use the new way or the old way. The old way works the same way it did before, the new way has different rules, and we enforce using one or the other. So we could do the same here. We have include-resources and exclude-resources.
D
The old parameters use the old rules, and then we have a validation rule that says you either use the old ones or the new ones, not both. And then you have include-cluster-scoped-resources and include-namespace-scoped-resources, exclude-cluster-scoped-resources and exclude-namespace-scoped-resources. That makes it clear that we don't reuse the same include-resources field and change its meaning, right? Yeah, that would actually probably be better.
D
No, for include-namespace-scoped-resources and include-cluster-scoped-resources, you have to put the wildcard in there if you mean include everything; otherwise it includes nothing. Otherwise, it makes it harder to do one of those rules. Like, one of the use cases here is: I want to include all cluster-scoped resources and no namespaced resources.
D
If the rules for these new fields are that to include all you need star, and empty means empty, then those two are consistent. Otherwise it's confusing if the namespace and the cluster-scope fields behave differently.
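The wildcard rule for the new fields can be sketched as below. Note how it deliberately differs from the legacy include-resources field, where an empty list meant "everything"; here empty means "nothing" and `*` means "everything". This is an illustration of the proposed semantics only.

```python
# Sketch of the proposed wildcard semantics for the NEW scoped fields:
# an empty include list includes nothing, and "*" includes everything
# (unlike legacy includedResources, where empty meant include all).
# Excludes always win over includes.

def included_in_scope(included, excluded, resource):
    if resource in excluded:
        return False
    return "*" in included or resource in included
```

With this rule, "all cluster-scoped, no namespaced resources" is simply `["*"]` for the cluster list and `[]` for the namespace list.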
C
Right, but...
D
Yeah, I think Daniel's suggestion there, of making two sets of new fields and not reusing the old include-resources and exclude-resources, is going to be clearer to users, I think. And then we can have a validation rule that says either use the new fields or the old fields; you can't use both sets of fields.
D
That way, if you're creating your backup from a YAML template that worked before, you can use it again; it's not suddenly changing on you, and behavior around cluster scope doesn't mess you up. Whatever worked before will work now exactly the same way. But if you choose to make use of the new functionality, then you would remove those include-resources and exclude-resources fields from your backup template and instead add the new ones.
C
Okay, I have another thing to talk about. It is about the values of include-cluster-resources. I'm okay with your proposal for the current release. But how about moving the same logic, so the three values, into include-cluster-scoped-resources in future releases, and deprecating the include-cluster-resources parameter?
D
...you know, all persistent volumes in the cluster, versus I want to include just those that have PVCs bound. Right now we use that nil value, kind of the auto value, to make that distinction. So that's why I'm still thinking this makes sense as a separate Boolean: because this is not providing a list to the user or to Velero; this is kind of toggling behavior.
D
It's saying: I want to include cluster-scoped resources by default, and then the list will tell you which ones. Or I want auto, which means I only want to include certain ones that are relevant, but you can still use the list to further refine that. And then the final category is set to false, which means I don't care what the lists say: no cluster-scoped resources are included. So that's kind of a top-level configuration that I...
D
...don't really think you can fold into a list, especially around that kind of auto logic of including certain things, because you're including some PVs but not others, and some CRDs but not others, and there's really no way of doing that in just a list of resources. So I think it'll be a lot more confusing if we try to move it out that way. I think we just leave that logic...
D
...the way it is now, because it really doesn't interfere with the new stuff. If you look at the Velero code and where it's used: right now, if you set include-cluster-resources to true, for example, it doesn't really include every cluster-scoped resource in the cluster; it includes those subject to the other selection rules. So, for example, if you say include-cluster-resources is true, but you put PVs in exclude-resources, they're excluded. So we...
D
...we still use the filtering logic to determine which resources are included and which ones are excluded, but then we use that Boolean pointer as a further toggle to say: if it's true, then basically everything's included, subject to the other filters. For example, if you set include-cluster-resources to true but you set a label selector, that means only cluster-scoped resources with that label selector are used. So again, this is a separate way of... and I think it's...
D
The primary use of this is to let users set it to nil, to say: I only want to bring in those cluster-scoped resources that relate to my backup. Because for a namespace backup, for example, that says hey, I just want to back up this one namespace: if I have volume data, I need those PVs; if I have CRDs, I need those. And so there are certain cluster things...
D
...cluster resources that plugins can pull in, or Velero can pull in, up at the PV, and we use that field being set to auto, or nil, to tell Velero, hey, we want to do this. So I think we still need that.
D
I think everything else... I think with the change to add four new fields instead of two, it works, because you're basically providing a drop-in replacement. What include-resources and exclude-resources do is provide, essentially... the Velero include/exclude logic takes those two lists and generates, essentially, a list of all the resource types in the cluster filtered by that.
D
With the legacy functionality it just looks at those two full flat lists. If instead two separate lists are provided, namespaced versus cluster-scoped, then basically that function would say: if this is a namespaced resource, look at the namespace list to determine whether it's included; if it's cluster-scoped, look in the cluster list.
D
So that isolates the change to just that kind of should-include utility function. Everything else about the backup and restore logic, around pulling in PVs versus not, include-cluster-resources true versus false versus auto, that logic is unchanged. The implementation PR for this won't need to change that.
D
It just needs to add the validation for the new fields, define the new fields, and update the includes-excludes package, where we have the logic that says: you pass in a resource and ask, hey, should I include this?
D
It looks at the resource type, the resource itself, and compares those lists, new or old, depending on which ones you specify. It looks at label selectors if it needs to; it looks at namespaces if it needs to. And all that's isolated to that one area, so we don't really need to worry about this outside of that limited area of specifying it in the CR and in that logic. Okay, got it.
A
Yeah, so this is something which we had from one of our customers. Have you guys ever tested the CSI snapshot workflow with label selectors?
E
Because... no, because there is an assumption that, for example, if I have a label XYZ that matches one pod, and the pod has three PVCs and only one of them has the XYZ label, Velero will still back up all three PVCs. This design is based on the assumption that, for the pod to start, I need all the PVs.
D
Has that been tested? And, I guess, what I'm wondering about is: the case we ran into wasn't so much whether the customer did label the PVCs. The issue we ran into is that the CSI plugin creates resources during the backup: it creates VolumeSnapshots and VolumeSnapshotContents, but it doesn't put labels on those.
D
So if I'm doing a backup and I'm saying the label selector is, you know, my-label equals XYZ, it's only going to back up things that match that. But then we create these VolumeSnapshot and VolumeSnapshotContent objects, and we don't create a label on those that matches the label selector. Which means when we return those as additional items from the plugin, the layer then calls backup-item on those items, and the first thing Velero does in backup-item is check...
D
...should this be included in the backup? We call that includes-excludes... shouldInclude, I think that's the name of the function, which goes and checks: is this namespace included? Is this resource excluded or included?
D
...which is affected by the previous topic. But also: is there a label selector here that would exclude it?
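The failure mode being described can be illustrated with a small sketch. This is not Velero's code; it simply shows that a match-labels check passes for the labeled PVC but fails for the unlabeled snapshot objects the plugin creates, so they get filtered out of the backup.

```python
# Sketch of the bug described above: the CSI plugin returns
# VolumeSnapshot/VolumeSnapshotContent objects as additional items,
# but since they carry no labels, a backup-level label selector
# filters them out again. Pure illustration of the filtering step.

def matches_selector(item_labels, selector):
    # Simplified matchLabels-style check: every selector key must match.
    return all(item_labels.get(k) == v for k, v in selector.items())

pvc = {"kind": "PersistentVolumeClaim", "labels": {"app": "xyz"}}
vs = {"kind": "VolumeSnapshot", "labels": {}}  # created by the plugin, unlabeled

selector = {"app": "xyz"}
pvc_included = matches_selector(pvc["labels"], selector)   # True: backed up
vs_included = matches_selector(vs["labels"], selector)     # False: skipped
```

Real label selectors also support set-based expressions (`In`, `NotIn`, etc.), which only makes the mismatch more likely.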
E
I see. I don't think we explicitly tested this in the earlier 1.9 development cycle. Would you mind writing it up on GitHub for us to follow up?
D
Yeah, right, that makes sense. So I mean, I haven't verified this myself, but from seeing what Shubham saw talking to a customer, and also from looking at the code, I'm pretty sure the only way we can fix this is to have the CSI plugin, when it creates VolumeSnapshots and VolumeSnapshotContents, look at the label selector for the backup, if there is one, and if there is, add a label to the items it creates that matches the label selector for the backup.
D
Actually, it might be easier than I was thinking. Originally I was saying, hey, we need to set the labels based on the backup's labels; we may be able to just copy the labels that are on the PV.
D
Although, to Daniel's point, those may not be enforced; we can check that. But the point is there might be a relatively easy way to fix this: to pick a label that we know will match, if we can look at the pod label or the PV label or something. I just don't know for sure. I think the PV for CSI does need the label, because that doesn't depend on a pod. But in any case, once we create the issue, we can look into figuring out those edge cases.
D
...labels from the PVC, which might be the easiest way to fix it, or we need to look at the backup CR, and that might be more complicated. But if we can grab the labels from the PVC, that...
D
The problem with that is that those labels aren't guaranteed to be there when a user creates a Velero backup.
D
You know, Velero is very strict about this. For example, if there's a resource that we add to exclude-resources, saying hey, I don't want to back up this particular resource, then Velero will never back that up, because every time we get to backup-item it checks: is this included? We log a message saying this is excluded, and we move on. And if plugins can get around that by adding additional items that are excluded, I think that kind of breaks the contract. We're telling Velero...
D
...this is the selection criteria. But I think it makes sense, especially with the CSI plugin, because we're creating VolumeSnapshots and VolumeSnapshotContents based on a PV. I think it's reasonable for that plugin to add labels to the things it creates that match the labels on the PV it's creating them from. If I have a label on a PV that says app name is velero...
E
...too. Yeah, that's a valid fix, yeah. But we were talking about the problem from a different perspective, right? So let's write out an issue and we can continue the discussion there. Yeah.
D
Yeah, yeah. Because I think if you set a label based on the PV, that'll solve everything without having to do anything complicated in Velero, like I was originally thinking we'd have to do. Because then the fix is only in the plugin, and there are no Velero changes needed for that. I assume we should test it to see, but I think that'll actually be an easier fix than I originally thought.
D
Right, the backup CR could have multiple labels, because we have that OR label selectors thing, or it could be a label selector that says this label's value is in this list. So label selectors have a fairly complicated set of ways you can create them. I think if we just clone the labels from the PVC... because this is the PVC item action, right, where we create the... it's either PVC or PV, I forget which one, but basically there's an item...
D
We already have the PVC right there. Clearly, whatever labels it has are sufficient for it to be included in the backup, or we wouldn't be running this code. Let's just use those labels, because that also makes sense.
D
You know, when you're creating VolumeSnapshots, it makes sense for the VolumeSnapshotContents that are based on specific PVCs to have those same labels. Because if the user is saying, I'm labeling everything for this particular application, including the PVCs, to include in the backup, then when we create the VolumeSnapshotContents and the VolumeSnapshots, we put those same labels on there.
D
That tells Velero to include them too, and that should solve the problem for us. It also makes sense because these PVCs already have labels anyway, or they wouldn't be included, and you also avoid having to worry about label selectors in the backup in cases where the PVCs don't have labels and you don't need them.
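The proposed fix, cloning the PVC's labels onto the snapshot objects the plugin creates, can be sketched as below. This is an illustration of the idea under discussion, not the CSI plugin's actual code; the object shapes are simplified Kubernetes-style dicts.

```python
# Sketch of the proposed fix: when the CSI plugin creates a
# VolumeSnapshot for a PVC, clone the PVC's labels onto it, so any
# label selector that matched the PVC also matches the snapshot
# object (and, by the same logic, the VolumeSnapshotContent).

def new_volume_snapshot(pvc):
    return {
        "kind": "VolumeSnapshot",
        "metadata": {
            # Cloned from the PVC: if the PVC passed the selector,
            # the snapshot will too.
            "labels": dict(pvc["metadata"].get("labels", {})),
        },
        "spec": {
            "source": {"persistentVolumeClaimName": pvc["metadata"]["name"]},
        },
    }
```

Because the fix lives entirely in the plugin's create path, no change to Velero's core filtering would be needed, which matches the conclusion reached above.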
A
And so this actually breaks the backup workflow. During the backup workflow we also have a cleanup step, right, like deleting snapshots: if your plugin doesn't execute creation of the VolumeSnapshotContents and VolumeSnapshots, that also hits a nil pointer exception. I think we even saw that, so I can put that up as well.
D
The backup's own name labels don't help us here, because you're not going to create a backup with a label selector saying I only want to back up things that already have this backup's label on them; they're not necessarily going to have that on them.
D
That changes the contract, I think. I think we would have to talk about that in a second issue, because I think users are expecting a certain thing right now, and that changes functionality: because now you're saying that if I put something in exclude-resources, I can't guarantee it's excluded, for example, the way this is written.
D
Yeah, and again, running multiple backups concurrently is something that I think is not reasonable to expect in 1.10. So yeah, I agree there; we're not working on it yet. So, okay.
B
Okay, thanks. So, any other things? If not, we can stop here today, and we have our...