From YouTube: Velero Community Meeting - Feb 15, 2023
A
Okay, hello everyone. This is the Velero community meeting on February 15th. First, let me highlight the overall status. For 1.9, we are going to create another patch release, 1.9.6, around the beginning of March; the candidate fixes are still being discussed. For 1.10, we have released 1.10.1 on February 6th. And for 1.11—
B
Oh, you mean the delay? Yeah — I discussed this with Scott. Because of some issues Scott is working on, and also some that we are working on, I think we need some more time. So I propose we delay the FC by two weeks, and the RC and GA dates will also be delayed accordingly. You can check the wiki page for the new dates.
B
Yeah, so if we delay two weeks, the new FC date will be March 7th, RC1 will be March 21st, and the GA will be around the end of March. Those are the new dates, from my discussion with Scott, so I think that's probably the new plan. If there's no concern, I will update the schedule after this meeting.
A
Yeah, that's it for the overall status. Now for individual updates — Daniel, please.
B
I'm working on the issues targeting 1.11 and doing some PR reviews. I also spent some time on the designs, like the resource filter and the cluster resource handling, and related work.
F
I was working on the cluster and namespace resource filters; the functionality is basically done and the PR is ready for review. Second, I'm still working on the refactor of the restore controller. That's all.
G
I'm working on adding a detailed resource list to the velero restore describe command. The second thing is doing some investigation into supporting workload identity for the Azure plugin. That's all.
J
Yeah, so the main thing for me is the asynchronous plugin work. I now have a PR out — it was a draft for a while, but I've done some testing and updated the docs, so the PR is ready for review. I've noticed some comments already that I just want to quickly address, because the basic concern I saw so far was about—
J
—you know, uploading this item operations file to the object store every time it changed. That was actually a concern raised during the design as well, and we had agreed that we're basically going to minimize that. Actually, I wasn't sure whether it was in the design or the PR, but in any case we had a prior discussion and agreed that we're going to store that in an in-memory map in the controller and then upload it only when we need to. As I was going through the implementation, I realized—
J
—there are basically three places where we need to upload it. We're going to be checking operations every two minutes by default — that's configurable — and we don't want to upload the file for every in-progress backup every two minutes, because that would be a lot of activity on the bucket. So basically we're caching it in memory, and the things that trigger an upload are, most importantly, when we finish the backup.
J
At that point we delete the in-memory map, upload it to the object store, and we're done with it — that controller never looks at it again, because the backup is now in a terminal state. We also upload it when an error is triggered, because not only do we need to update the status, but the error is going to change the state of the backup. So it's going to go from—
J
—you know, WaitingForPluginOperations to WaitingForPluginOperationsPartiallyFailed. And there's an existing PR, still being reviewed, where we're trying to upload detailed error and warning information on the backup side, similar to what we do for restores. Once that's done, then, when there's an error triggered from an operation, we also need to update that list. That's not in this PR yet, because that functionality doesn't exist on backups.
J
When I get the restore PR out there, that will be included. The final area where we need to upload to object storage is when a user specifically does a velero backup describe --details. We don't need it if they don't ask for details, because then we just list the number of operations completed and in progress.
J
That count is on the CR, but if they do ask for details, we need to pull that list from object storage, so that'll trigger an upload so the user can download it — because the client doesn't run in the Pod; it runs on the user's local machine, connecting to the cluster. The way describe works, for all the places where we pull from the object store — not just this, but also the detailed resource list—
J
—and the logs — is that the client creates a DownloadRequest resource. The download request controller reconciles that, provides a download URL, and marks the request as processed. Then the client, when that gets updated, reads the URL and requests the download directly from the object store.
J
So we do need to upload it to the object store in response to a user request. What we don't want to do — and what my PR handles — is always uploading on any change, every two minutes, because that would be way too much traffic.
J
I think one of the comments was on the function where we upload updates to the CR and the object store. In that function, where we check those statuses, the code basically says: here are the conditions under which we update the object store — and it's basically when we have new errors or we've completed.
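The upload condition J describes can be sketched roughly like this (all names here are illustrative, not the actual Velero code): the controller flushes the cached operations list to the object store only when the backup has finished, or when errors have appeared that the last upload didn't include.

```go
package main

import "fmt"

// operationsBatch loosely mirrors the cached per-backup operations state
// (hypothetical struct; the real PR's types differ).
type operationsBatch struct {
	backupFinished bool // backup reached a terminal state
	errorsSeen     int  // errors observed so far
	errorsUploaded int  // errors included in the last upload
}

// shouldUpload reports whether the cached operations list must be flushed
// to the object store: only on completion or when new errors appeared.
func shouldUpload(b operationsBatch) bool {
	return b.backupFinished || b.errorsSeen > b.errorsUploaded
}

func main() {
	fmt.Println(shouldUpload(operationsBatch{backupFinished: true}))             // true: backup done
	fmt.Println(shouldUpload(operationsBatch{errorsSeen: 2, errorsUploaded: 2})) // false: nothing new
	fmt.Println(shouldUpload(operationsBatch{errorsSeen: 3, errorsUploaded: 2})) // true: new error
}
```

Keeping the predicate this narrow is what avoids an object-store write on every two-minute reconcile.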
B
But you're saying you're making a change so that the data will also be uploaded when there's a download request — is that the plan?
J
Yeah — well, that's one of the cases too. So basically there are two situations where the controller itself — the async backup operations controller — will upload: when the backup completes, which is the normal case, or when one of those operations errors out.
J
Those are the two situations in which the controller itself uploads it. But the issue I realized while writing the describe work is that if we just did that, then if a user ran describe --details on an in-progress backup, they wouldn't get progress details until the backup was done, because we're not uploading to the object store in the meantime. So what I did was—
J
—basically, for the in-memory map that stores the current operations for all of the backups being processed, I have an API on that controller that exposes it, using a lock so it's thread safe. When we create the async backup operations controller, a link to that struct gets returned, and then, when we create the download request controller, we pass in a link to that struct. So the download request controller—
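A minimal sketch of the shared, lock-protected map described here (illustrative names, not the actual Velero types): the async controller owns the structure, and the download request controller gets a handle to it so it can ask for a flush when a user requests the operations file.

```go
package main

import (
	"fmt"
	"sync"
)

// operationsMap caches not-yet-uploaded item operations per backup name,
// guarded by a mutex so both controllers can touch it safely.
type operationsMap struct {
	mu      sync.Mutex
	pending map[string][]string // backup name -> operation IDs changed since last upload
}

func newOperationsMap() *operationsMap {
	return &operationsMap{pending: map[string][]string{}}
}

// Put records operations that changed since the last upload.
func (m *operationsMap) Put(backup string, ops []string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.pending[backup] = ops
}

// FlushIfPending is what the download request controller would call: if the
// backup has un-uploaded changes, upload them (simulated via callback) and
// clear the pending entry; otherwise do nothing.
func (m *operationsMap) FlushIfPending(backup string, upload func([]string)) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	ops, ok := m.pending[backup]
	if !ok || len(ops) == 0 {
		return false // nothing pending; the object store copy is current
	}
	upload(ops)
	delete(m.pending, backup)
	return true
}

func main() {
	m := newOperationsMap()
	m.Put("backup-1", []string{"op-1", "op-2"})
	flushed := m.FlushIfPending("backup-1", func(ops []string) {
		fmt.Println("uploading", len(ops), "operations")
	})
	fmt.Println(flushed)                                         // true: there were pending updates
	fmt.Println(m.FlushIfPending("backup-1", func([]string) {})) // false: already flushed
}
```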
J
—if it specifically gets a request for the item operations, calls into this protected resource to tell it: I have a request for this. If there are no pending updates — nothing has changed — it just returns. If there are pending updates, things that haven't been uploaded yet, it submits them to the object store and then returns. The reason for that is that the client doesn't actually request anything directly—
J
—the client doesn't open a connection to the Pod. The client just creates a DownloadRequest and then reads its status to get a URL, which points to the object store — S3 or wherever we're storing it — and then the client makes a direct S3 request. So the client has no way to get this data directly from the Velero server; it has to get it from the object store.
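The DownloadRequest flow described above can be sketched as a tiny simulation (hypothetical names; the real CR lives in Velero's API package and is reconciled through Kubernetes, not plain function calls): the client creates a request, the server-side controller fills in a signed URL via the object-store plugin, and the client then downloads straight from the object store.

```go
package main

import "fmt"

// downloadRequest loosely mirrors the DownloadRequest CR: the client creates
// it, the controller fills in the download URL, and the client then pulls the
// file straight from the object store using that URL.
type downloadRequest struct {
	target string // e.g. "backup-1/item-operations"
	url    string // filled in by the controller
}

// reconcile stands in for the download request controller: it asks the
// object-store plugin (here, a callback) for a signed URL.
func reconcile(r *downloadRequest, signURL func(string) string) {
	r.url = signURL(r.target)
}

func main() {
	// Hypothetical signed-URL generator standing in for the plugin API.
	sign := func(target string) string {
		return "https://object-store.example/" + target + "?sig=abc"
	}
	req := &downloadRequest{target: "backup-1/item-operations"}
	reconcile(req, sign) // server side, inside the Velero pod
	// Client side: read the URL from status and fetch directly from the
	// object store; the Velero pod never streams the data itself.
	fmt.Println(req.url)
}
```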
B
Yeah, I think I understand that. That's helpful for the experience of backup describe --details, but my concern is that, if you think of it at the API level, it makes the semantics quite confusing, right? Users just want to download something. If—
J
—from the outside, a user knows nothing about the download request internals. When you create a download request, you don't know that this one is different. The describer — the client code that actually makes the download request — for example, just says—
J
—it's something that can be modified over time too, if we decide we want to. Basically, we were trying to avoid uploading everything on every update, because that's a lot of unnecessary traffic to S3. But at the same time, for a user who is directly interacting with the client and says, "I want to see the progress on this thing — I've got 50 volumes uploading and I want to see"—
J
—this was the cleanest way I came up with that allowed us to do this without having to invent some completely brand-new API and a new server. We don't want to write custom code on the client that makes some direct connection to the Pod. This just lets us use the existing download request API, but behind the scenes, at the controller level, it does that coordination.
A
Yes. For the first two cases — when the backup has reached a terminal state, whether it has completed or errored out — I understand and totally agree that we do the update in those scenarios. But for the third case I have some concerns. First of all, when we do this, we couple the describe command and the data-update process for the backup.
J
We're not coupling the threads there. Basically, there's a shared structure that both controllers use — the data update already goes through the download request controller. So—
A
Yeah, but something like — what about when we use the describe --details command? Why would it need to trigger an update? Logically that couples the—
J
Right, right. I guess what I mean is that it's not coupling two processes, because it's the download request controller itself that, when it processes this, performs the upload and then sets the URL for the user. So it's not really coupling two processes; what it's doing is that, under certain circumstances, when there's unflushed cached information, the download request controller will do a little more work before responding to the user. Because right now, what happens—
J
—if you ignore this PR and just think about what's in 1.10, for example: describe might make several download requests — one for logs, one for errors, possibly one for the actual list of resources. For each one of those, we create a DownloadRequest CR; the download request controller in the Velero pod reconciles that CR and looks up the actual URL using the object store plugin API.
J
So the change here for this controller is: if it's an async operations request, it—
J
—queries that shared map and tells it to upload if it needs to. This is kind of like a file system: when you're writing a file system, you keep recent updates to files in memory, and you don't write to disk every time you save a file, because as an optimization you store it in memory.
J
But if you go to read that file or copy that file, some of those operations will trigger a flush of the cache — write it to disk so it's there. All of that optimization is behind the scenes, not part of the API. The idea is to make writes more efficient, because you don't make as many writes. One of the downsides of caching in memory and not writing as often is when you do a read—
A
Yes, but in the file system example, the read operation never triggers any other operation on the write side, right? Even though there is a cache — I mean, the file system will cache — so when will—
J
—you can read it directly if you have access to the cache, but if you're on a remote system — I mean, this is the challenge here: Velero doesn't have an API for the client to communicate with the server to get this directly. Even now, if I get the list of resources, the Velero pod is not streaming that to the user; that's the job of the S3 store.
J
The Velero pod is just giving the user a URL, and the user retrieves that URL — with SSL and the whole security setup, the certs and everything — to pull from AWS. To pull it directly from the Pod instead, we'd have to build our own infrastructure around that, and that would be a lot of API changes. We'd essentially have to have an object-store downloader server running in the Velero pod, and that would be a huge API change for Velero. It would also be another point of failure — it could be unstable.
A
Yeah, personally I think this problem exists because we have a pre-existing assumption: that the backup data in the object store is the only thing that is available all the time, in the current cluster and across all other clusters. So when we want to update something, we have to write the JSON data, and when we want to read something, we have to read it from the backup storage. But actually, if we think about the data mover case, I don't think this—
A
—is necessary all the time. In fact, in most cases we don't need to read the data from the backup store. For example, in the current design and discussion of the data movement, we have the SnapshotBackup CR and some other resource CRs, and those CRs will be part of Velero proper in the future. And another fact: in the near future we cannot support a restart — I mean restart or resume — and we don't want to use the CRs from another cluster while a CR is not complete or not in a terminal state. So it means the CRs are always available when users run the describe --details command. But—
J
Velero doesn't know about those CRs — that's the issue here. This is generic functionality; the SnapshotBackup CRs don't exist from core Velero's point of view. This isn't something the controllers know about, because any plugin can be written. In OADP we might have a custom plugin that uses this, which the Velero upstream core will never know about, so it can't reference it explicitly. It has to—
A
Yeah, I'll agree on that. Well, so—
J
That's why I'm saying that even though the data is in the cluster, it's only the plugin that can really read it, because only the plugin knows about the CR for this.
J
The plugin doesn't expose it out of the cluster; there's no API for that. The way the client works is that it uses download requests. Anything else would be a lot of changes that were not part of the design, and we wouldn't meet the deadline for 1.11 if we took that approach.
B
So you need to flush the data to the backup store, and download it via the Velero CLI in backup describe --details, to read things like the completion status, right?
J
To use the currently defined API, that's the way you would need to do it.
B
Yeah, so that means we do not reflect that data in the backup CR, and the backup CR will only show—
A
Yeah — I'm thinking about another possibility. Since the SnapshotBackup and related results will also be persisted — they will definitely be persisted — can we add some loading of them from the client side?
B
But I personally think, from a user-experience perspective, it's probably okay that the user waits for the backup to reach the final state, so that the data has been written to the backup store, and—
J
If you do that, then your progress reporting is kind of useless. There's no point knowing we're 50% done if we're never going to show it to the user until we're 100% done — that's not useful. And I think the fact that this is hidden behind --details means none of this actually happens — we don't make any of these extra writes—
J
—if the user doesn't run --details. If we document that --details can in fact affect performance, that may solve this problem, because if a user doesn't run velero describe --details, then we don't make any of these additional writes. The only time we would make them is if the user is specifically saying: look, this thing might take an hour, I want to know how far along it is right now, and I know my plugin can give me that information.
A
Yeah, but if we do it this way for --details, I'm afraid that — because we cannot control how users use the command — they will run some periodic queries. So that will eventually call the—
J
That's what I mean: if a user does that, even without this PR, they're making several download requests from S3 every second. That's already a performance problem, and we should document that users not do that.
B
So what I'm thinking is: given the current API contract, when we run backup describe --details, yes, we need to flush the data in the map to somewhere. But even if we do not flush it to the backup store, due to the performance concerns, what's the alternative? Could we flush the data to the backup CR, in the status field?
A
We have the data — as I just mentioned with the SnapshotBackup CR, right?
A
What I'm saying is: if we don't think about this conceptually and just look at the reality — we have the data in the cluster. The problem is that we cannot build the logic to read that data. That is the current problem, right? We—
J
—only the plugin has the data. Based on the way the API is defined, the plugin is responsible for determining progress, and if we don't get that data by calling the plugin's Progress method, we're not guaranteed that it's going to be in the same format as what the plugin would—
A
—return. Yeah, I agree on this, but let's take pod volume backup as an example. When the user runs describe --details—
A
—besides describing the things in the backup CR, we read the PodVolumeBackup CRs and gather the progress from there. Right now, for our current problem, we have the backup CR and we also have the SnapshotBackup CR, but we cannot directly visit that SnapshotBackup CR. If we had a way in the logic to let us visit the SnapshotBackup CR — in a way that will not conflict with the current BackupItemAction concept or—
A
—that would address the problem. But I'm not sure whether we can find a way in this direction — I will think about this offline.
J
I think there are a couple of other things we could do, and again this goes back to — if the concern is how often we're writing to the object store, and we don't want to write to it on every reconcile, one other approach here is to actually define two intervals. One is—
J
—how often we reconcile, and the second is how often we flush the cache. Say: okay, we don't want to write to it every minute, but is it okay to write to it every five minutes? Then we define that and document that when you do a describe, you might be—
J
—up to five minutes out of sync. If we're okay with uploading these every five minutes for backups that are currently not complete, then we wouldn't have to do anything special on the download request side, and we would just have some notion that describe won't be a hundred percent—
J
—accurate, but it will be relatively recent. What I'm trying to avoid here — and I think we do need to avoid it — is, especially if you have a backup that's going to take three hours because you have huge volumes, you really can't have the describe output be two hours out of date.
J
So if the concern is that users requesting --details all the time will result in writes that are too frequent, but we're okay with making writes slightly more frequently than we do now by default, then what we can do is define a flush-cache interval that says how often we take what's in this in-memory map and upload it to S3. If it's separately configurable, users who say "hey, I'm trying to minimize my costs, I don't want to write to S3 more than I have to"—
J
—can set it to a really big number, and we document that if you do that, your describe output will be out of date. Users who say "I don't care what it costs, I want to be up to date" can set that interval to be the same as the reconcile interval.
J
So that's another way of doing it: we control on the controller side how often we flush that cache and write it to the backup storage location, and then the client-side describe does the same thing for this as it does for everything else — it never triggers anything custom.
B
Yeah — you mentioned that the default frequency is two minutes, right? That we check every two minutes.
J
So, to be clear, that frequency is the controller frequency. In other words, when we process the backup, the first time we check on these operations is at the end of the backup itself. For example, if you're processing a backup and all of your uploads — data mover or whatever — are very quick, and when the backup completes everything is already done, then we go straight into Completed and this new controller never touches it.
J
But say you're uploading with the data mover, two of them are done already, but eight of them are not. Now this backup goes into the WaitingForPluginOperations state, and this async backup operations controller — every two minutes by default; that's configurable — basically reconciles through the backup list for anything that's in the WaitingForPluginOperations state.
J
It grabs that list from memory if it has it; if it doesn't — the first time it runs, of course, it won't have it in memory — it has to pull it from the object store and store it in memory. Then it iterates over the list and, for everything that isn't already done as of the last update—
J
—it calls Progress, gets the information, and updates that internal map. With the current code it doesn't upload anything if the backup isn't done yet and there are no new errors. Then two minutes later it runs again and reconciles this backup.
J
It grabs the in-memory map of the item operations — because it stored it there — iterates over what wasn't completed yet, and gets the completion information again. If everything is complete at this point, it uploads the list and removes it from the map. If it's not complete—
J
—two minutes later, it checks again. So basically there are two extremes here. On one side is what's in the PR now, which says we never upload until it's done or there's an error — but then we want describe to always be correct, so the describe triggers a flush.
J
The simplest fix — which I'm not recommending, because of the performance concerns — is to just always upload to the object store every two minutes if anything changed. If you did that, you wouldn't have any problems with the describer, because the data is already there. I'm thinking we could take a hybrid approach where we have two intervals specified in configuration: maybe one of them is two minutes by default, the other five or ten minutes, where the two minutes is how often we actually make the check.
J
If an operation is done, we mark it as done and move on. But for that larger interval — whether it's five or ten minutes — we say: okay, if we haven't uploaded this in the last ten minutes, for example, if that's your setting, then we'll upload it even if it's not done. This gives you a way of configuring it, and if we did that, we could rip all that code out of the describer and not do any kind of one-off uploads.
J
So a user who's concerned about traffic, upload cost and all that can make it a bigger interval. But then, a user who is concerned about traffic is also probably going to want to minimize their use of describe, because describe, even now, even without this code, makes downloads — at least it's not making uploads, but it is making downloads.
L
Just a quick question — I'm wondering if we're focused on configuring the wrong thing. I like everything you're saying, but I wonder: by default, when you run describe --details and it uploads at that moment — could that be configurable by an admin? Because it could be that admins don't want that to happen.
J
That's another point I hadn't thought about: keep this PR exactly as is, and you have two modes of operating. One is to have velero describe trigger an upload if necessary, if the data is out of date; the other is to never do that, and just document that your describe output is not necessarily going to be accurate in the middle of a run. That's a question of data-accuracy optimization versus performance, traffic, and cost optimization. And that would be—
J
—easy enough to do. Basically, the code would stay exactly as is, but I would add another server configuration — just a boolean flag, set to something by default, maybe called something like disable-describe-operations-sync. Then it would be one place in the code to check for the flag.
J
So that's another possibility, and it would be less complicated than the dual-timing approach of "I want to update my CR every two minutes, but I only want to upload every ten minutes." That would work too, but it might be more confusing for users, with the multiple intervals and all that, versus just having a flag that says "I want to disable describe sync."
J
Well, so — every two minutes we reconcile on all of the backups. The overall reconcile design is similar to the garbage collection controller: that one runs on a different schedule, but when the interval is hit for the garbage collection controller, we look at all the backups in the system, find the ones that are expired, and then do something for those.
J
Yeah — every two minutes we look at the backups that are waiting for plugin operations, which is only going to be a subset. We ignore backups that are in progress or new, and we ignore backups that are in terminal states. We only look at backups that are in this WaitingForPluginOperations state.
J
We grab the list from memory if it's there, or from the object store if this is the first time we've done this. For example, if Velero gets bounced and there are three backups in this state, it'll pull those three down the first time it reconciles. Or for a new backup: the first time you complete the backup and go from InProgress to WaitingForPluginOperations—
J
—this controller hasn't seen that backup yet, so it doesn't have it in memory. At the end of the initial backup completion, that's when we upload the backup log, the list of resources, everything else — and that also includes the initial version of this plugin operations list, where most of the operations are probably not complete yet. So when this async operations controller, running every two minutes, sees a backup in that state for the first time—
J
—it downloads the list from the object store and puts it in the in-memory map, so the second time it runs it doesn't have to download it, because it already has it. Once it's in memory, it updates progress for each operation — it iterates over the list. Say the first time it runs you've got 10 operations, all in progress.
J
Five of those 10 are complete: we mark those five as complete, five are still in progress, and then it gets requeued for two minutes later. Two minutes later it runs, and it skips the first five that have already been completed — it doesn't call the plugin again; it doesn't need to, because we already have a completion.
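One reconcile pass over the operations list, as J walks through it, can be sketched like this (illustrative names, not the actual Velero controller code): completed operations are skipped so the plugin is never re-queried for them, and only the still-running ones get a Progress call.

```go
package main

import "fmt"

// itemOperation stands in for one asynchronous plugin operation tracked
// for a backup (hypothetical type).
type itemOperation struct {
	id   string
	done bool
}

// pollOperations performs one pass: completed operations are skipped; for
// the rest we call the plugin's progress check (here a callback) and record
// any that have since finished. It returns how many plugin calls were made
// and how many operations are still running.
func pollOperations(ops []itemOperation, progress func(id string) bool) (calls, remaining int) {
	for i := range ops {
		if ops[i].done {
			continue // already complete; no plugin call needed
		}
		calls++
		if progress(ops[i].id) {
			ops[i].done = true
		} else {
			remaining++
		}
	}
	return calls, remaining
}

func main() {
	ops := []itemOperation{{id: "a"}, {id: "b"}, {id: "c", done: true}}
	// Hypothetical plugin: only "a" has finished since the last pass.
	calls, remaining := pollOperations(ops, func(id string) bool { return id == "a" })
	fmt.Println(calls, remaining) // 2 plugin calls, 1 operation still running
	// Next pass two minutes later: "a" and "c" are skipped entirely.
	calls, remaining = pollOperations(ops, func(string) bool { return true })
	fmt.Println(calls, remaining) // 1 plugin call, 0 remaining
}
```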
B
Yeah, I'm still struggling to understand — are you thinking the data is really too large to put in the backup status?
B
I think it's probably acceptable that the user can only see the exact errors via describe --details once the backup has errored. But when the async operations are still running and the backup is not in the final state, if the user runs describe --details, they should know how many operations are done or not.
B
Yeah, but for a case like that, for example, it's not a huge amount of data — that should also be okay to put in the status, right?
J
But how many? I guess the question is — it's like the errors. You can put errors in the status, because you might have three errors and that's fine, but if you have 300 errors, that might not be fine. If you're doing a backup that has 300 asynchronous operations, we may not want that in the status. But I think this is a bigger, separate discussion—
J
—you know, four weeks before feature complete.
J
I don't know — I think we can work within what we've kind of agreed on in the design here and figure out what makes the most sense, but that's more of a future optimization, maybe if we start running into problems. Because my concern, too, is that if you have so much operation data that it's killing your object store traffic, that data might also be too big to put in the status.
B
Yeah, my concern is not about the traffic. My concern is about the logic in the download request controller. I don't like the additional complexity, because normally, when you download something, you just download it — it's not like the server flushes something to disk and serves it when it receives the download request.
J
B
Yeah, so I'm thinking about which one is easier for maintenance or easier to understand.
J
It has to have a link to this shared resource, to basically make that API call, an internal call, to tell the controller to flush it if it needs to. And so that complexity goes away, and instead of that, this async controller has the complexity of having this secondary... You know, because right now the frequency field is just used to configure the reconcile, but one...
A
J
...thing we can do is, if we have this additional flush-cache frequency, whatever you want to call it, where you can set it to the same two minutes, or you can set it to 10 minutes, or you can set it to an hour, and basically have the controller track the last time it uploaded for a given backup.
A
I think, first of all, the logic, of course, adds complexity.
A
Secondly, I think we cannot use the backup storage, for example S3, as a cache, because when we consider future features like immutability or some other backup repository features, this is not friendly to those features. And finally, the cost, I'm not sure what it means...
A
What
does
it
mean
to
the
users
but
anyway
we
add
analysis,
our
necessary
cost
to
to
users
and
need
to
push
them
for
decision
and
just
because
of
our
our
design
or
architecture
or
workflow
right.
That's.
J
So I have a question then about that, with the approach that I was proposing, where we have a separate configuration parameter to tell the Velero controller how often to flush this and upload. That way, if the users are concerned about cost and performance, they can set it to be longer, and if they're concerned more about accuracy and being up to date, then they can set it to be shorter. That would allow the user to make that decision for themselves rather than Velero automatically doing all this stuff.
K
B
J
Okay, as I said, I think I'm fine with the way it is now too. But if we want to change it and instead make it work based on having this secondary "how often do we update it" parameter, then if we do that, we need to be clear in the documentation that describe --details won't necessarily be the latest data, whereas with the current approach it's guaranteed to be as recent as Velero has. Also keep in mind that everything we're doing here, we're going to need to do the equivalent when I do the restore PR.
J
So whatever we agree to here, we're going to be doing it twice. For example, that means if we're taking the approach where the download request controller is going to trigger "hey, flush this thing now, just for this backup, not for all the backups", it'll need to track the same thing for restore. So it'll be the same logic, but it'll be in two places for two different sets of data.
J
If we were to take the extra-frequency approach, where we define how often we update the data, they can set it to 10 minutes or one hour or two minutes, and we don't have the download request controller do anything special.
J
B
Yeah, if you still have concerns, we can discuss offline, maybe later.
C
Today, yeah, okay, let me...
C
D
J
With the two-week delay, we've only got four weeks left of development. I need to get this PR changed if we decide to make a change, get that reviewed, and then write the restore workflow following it. So I think we need to make a decision on this. If we can make a decision now, that's great, but obviously we're running out of time; hopefully we can decide in the next day, because if I need to update the PR with different logic, I need to do that starting tomorrow.
J
There's also going to be a slight performance hit when you do run velero describe and there has to be an upload; that's going to take longer. It might take a few seconds longer for the user to see results.
J
I don't think it's a huge deal, but just keep that in mind. If we do the other approach, where the user has a separate configuration field for how often the controller flushes its in-memory map, then describe works just like it does today, it's just newer data, and so that's easier.
J
The downside there is that, depending on how big you make that interval, like you said, if you set it to an hour, for example, that means that for any backup that takes less than an hour to complete all operations, describe is not going to give them any useful information about specifics like "this volume is three out of 30 gigabytes done."
J
But again, given that we have to document these parameters anyway, maybe it's enough as long as we tell the user: if you set this to a very large number, then your describe output might be out of date, and if you set it to a small number, then again there's the performance cost. So from a complexity-of-the-PR point of view, they're probably equivalent; the amount of actual code is going to be about the same.
J
Equal on both sides, and if we don't think it matters either way, the way it is now is best for the deadlines in the schedule. But if we actually decide that the other approach is better, that's fine, I can make the change. But I think we need to make that decision soon, so that I can make the change and get the PR updated as soon as possible.
A
But
by
the
way,
it's
about
wait
we
here
in
the
current
code
that
we
don't
have
the
logic
to
trigger
the
to
upload,
just
because
we
have
the
upload
right
now
we
have
it
upload.
J
Oh, that logic isn't in describe; it has to be in the download request controller. Because again, describe is not running in the server, it's running on the client. It's the download request controller, when it reconciles. If you look in the download request controller here, it should be there.
J
The update-for-backup, it's not calling the other controller; what it is, is that there's a shared struct that's passed back. So this is the code that would go away if we took the approach of having a secondary interval defined in the other controller. What happens is the client code is the same, and that won't be affected, but the download request controller calls this update-for-backup, and if it doesn't need to do anything...
J
So if there are no changes since the last update and no errors since the last update, it does nothing; otherwise it uploads the progress. That's the part where the download request controller says "hey, do we need to update this?" We update it, and then once that comes back and the upload has happened, the download request controller basically finishes processing, and describe can then grab the URL and output it.
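The "upload only if something changed" check described above could look roughly like this minimal sketch; the struct and function names are assumptions for illustration, not the actual Velero types:

```go
package main

import "fmt"

// operationsProgress is a minimal stand-in for the per-backup async
// operation progress that the download request controller would upload.
type operationsProgress struct {
	Completed int
	Errors    int
}

// needsUpload reports whether the progress has changed since the copy
// that was last uploaded, mirroring the "if there are no changes since the
// last update, do nothing; otherwise upload" check described above.
func needsUpload(current, lastUploaded operationsProgress) bool {
	return current != lastUploaded
}

func main() {
	last := operationsProgress{Completed: 5, Errors: 0}
	fmt.Println(needsUpload(operationsProgress{Completed: 5, Errors: 0}, last)) // false: nothing changed
	fmt.Println(needsUpload(operationsProgress{Completed: 7, Errors: 1}, last)) // true: progress advanced
}
```

In the flow described, the controller would perform the object-store upload only when this returns true, then record the uploaded copy as the new baseline.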
J
If
you
can
just
talk
about
this
today
and
hopefully
just
respond
to
the
pr-
and
you
know,
let
me
know
hey
we're
good
with
this
approach
or
we
want
to
do
the
other
approach
and
then
I
will.
If
we're
going
to
take
the
other
approach,
then
I
can
rework
it.
And
hopefully
you
know
by
you
know
later
this
week
to
get
that
updated.
A
Any other things, Scott, from you?
J
No, that's pretty much it. As I said, once we get this agreed to and I get that updated, the other major thing for me is to do the restore side.
J
Oh, and in this PR you'll see, too, there are some minor changes along the way around the API stuff for BIA v2, some bugs that I ran into, that kind of stuff. Everything that's in there is necessary for the controller code to work, so just keep that in mind as you're looking through it.
C
Okay, thanks. And what about you?
D
Oh yeah, I've been working on some design PR reviews, specifically regarding CSI, like choosing the volume snapshot class for CSI volumes.
A
Do we need another central discussion? Or, alternatively, if we are okay to discuss this offline piece by piece, I think we will not need another central discussion.
B
I don't think we need to decide now that we do not need discussion. We just push everyone to reach agreement. If we need better progress, I think the central discussion is better. What do you think? For example, we have a design for the CR in the Google Doc and nobody speaks up, or we don't know if everyone is happy with it; it would be okay to hold the discussion and make sure everyone agrees. But if everyone says they agree, probably we do not need this discussion. So we don't need to decide yet whether we need a discussion or not, right?
A
Okay, then I think we can check that.
H
B
Yeah, I think there is still a gap in terms of how we can plug in the different controllers. Last week I was thinking we just let Velero handle the default controller and we put these other controllers outside the lifecycle of Velero. That may be a problem, because we don't have a good way to handle the conflict. I think that's something we need to think about.
B
B
B
A
B
C
B
C
Yeah, yeah.
A
Okay, I got it. So let's continue to review the CRs offline, and for the few of them that may need discussion, we can pin them in the channel. Okay, yeah.
M
A
Okay, so to me, I think your part is about the introduction of the design of the resource filter, right?
B
Right. I think for today's meeting, maybe we need another 20 minutes, is that okay, everyone? If not, we can do this review offline. But I noticed that Yvonne, you have a PR that probably has some conflict or overlap with the current design. So we need to make sure we do not introduce too many filters at the same time, and this design was decided earlier as something we need to handle in version 1.11.
B
So
with
that
said,
probably
this
design
your
design
will
be
delayed
in
review
or
decision
making.
Are
you
okay
with
that.
M
B
Yeah, that was planned for 1.11 already. Let me quickly explain. Let me present this design, and Yvonne, you can later comment on this design or in Slack, and let us know if there's overlap and what you think.
M
Yeah, I can take a look at it offline. I want to be aware of folks' time as well. If it serves the same purpose of being able to filter resources by fields, like names and resource names and stuff like that, then...
B
M
Okay, I have to drop soon. I don't have another one to go to, but I can take a look at this.
M
Yeah. So, sorry, maybe a 30-second summary: what does this do? Is it similar to... yeah.
B
The
goal
is
to
solve
the
problem,
this
design,
trying
to
solve
with
that.
We
want
to
provide
more
filters
regarding
volumes
because,
currently,
when
users
want
to
skip,
you
know,
snapshotting
a
volume
or
choose
to
skip
back
half
dollar
volume
via
rapid.
Normally
he
need
to
add
labels
to
the
volume
or
change
somehow
change,
users,
resources-
and
that's
you
know,
some
of
our
customers
are
not
quite
quite
happy
with
that.
B
So
we
are
trying
to
provide
a
data
structure
which
is
referenced
by
the
backup
CR
so
that
you
know
in
this
data
structure,
we
Define
a
more
complicated
filter
to
filter
the
volumes,
maybe
maybe
in
version
1.11.
We
only
support
a
few
conditions
to
filter
the
volumes,
but
in
future
that
data
structure
can
be
extended
to
support
more
conditions
or
other
resources.
So
what.
M
Because,
in
our
case,
like
specifically,
like
custom,
resource
definition
have
caused
like
users,
all
users,
all
of
pain,.
B
So
I
feel
quite
you
want
a
filter
based
on
the
name
of
the
object,
not
only
the
name
of
the
resource
right.
You
want
a
specific
custom
resource
now,
the
the
type
of
all
custom
resources.
M
Custom
resource
definitions,
not
custom
resources,
so
just
because
right
now
is
like
you
know,
right
now,
Bolero
will
only
like
back
up
like
custom
resource
definition.
M
If
there's
custom
resources
defined
or
instantiated
namespace
right,
but
you
know,
operated
namespace,
you
can
imagine
sometimes
the
operators
that
don't
need
the
content
resources
inside
exam,
so
it's
they
will
have
custom
resources
everywhere
else
like
service
mesh
is
a
good
example
right,
so
I
think
if
it's
a
way
to
so
looking
at
this
PR,
it's
a
way
to
make
it
more
generic,
but
just
not
just
volume,
I
think
that
would
be
helpful.
M
G
B
In 1.11 it will be volume specific, but that data structure is definitely extensible. We talked about it; it's just that we are too close to FC, so we cannot implement everything. We can make additional changes to the design. If you take a look at the design, I think you will figure it out.
M
Okay,
okay,
yeah!
Let
me
just
take
a
look
at
it:
okay,
thanks
thanks.
M
A
So we discuss it next time, right? Oh...
B
E
In each policy we have conditions and one action. Take this for example: when we do the file system backup, for a target volume, if the volume meets all the conditions, then we will do the backup. And take this policy for example: if the PV has these kinds of drivers, then we will skip backing it up. That is the basic shape of a policy, and the conditions vary from policy to policy.
E
So
we
divided
a
flexible
structure,
that
is
the
map
string
interface
and
that
is,
could
be
ex
in
extendable
and
flexible
and
for
the
later
days,
if
you
want
to
add
other
policies,
you
can
just
either
here
separately
and
currently
we
are
only
focused
on
the
volumes
yeah.
J
I
had
a
couple
questions
about
the
interface
there
yeah,
so
if
it
meets
more
than
one
of
those
conditions,
is
it
the
first
one
that
matches
is
the
one
that
that
follows.
E
Yeah
and
why
we,
why
we
didn't
why
we
designed
this,
because
if
users
have
a
lot
of
complex
the
policies-
and
we
will
there
will
be-
you
will
be
in
conflicted
when
we,
when
we
solve
the
conflict
problems.
J
It
one
of
the
questions
capacity
is
that
a
minimum
or
a
maximum,
or
do
you
specify
range
like
if
it's
instead
of
you,
know
100?
If
it's
you
know
100,
so
this
is
above.
B
...or below, how does that work? Yeah, in this example you can define a range, but if you just put one number and a comma in the string, you can define it as a minimum or a maximum.
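A minimal sketch of the policy matching being described, conditions plus one action, first match wins, and a "min,max" capacity string, assuming simplified integer Gi capacities and illustrative field names rather than the real design:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// volumePolicy pairs a set of conditions with one action, as described in
// the meeting: if a volume meets all conditions, the action applies.
// Field names are illustrative, not the final design.
type volumePolicy struct {
	CapacityGi   string   // "min,max" range in Gi; an empty half means unbounded
	StorageClass []string // match any of these storage classes; empty matches all
	Action       string
}

type volume struct {
	CapacityGi   int
	StorageClass string
}

// matchCapacity parses a "min,max" range such as "10,100", "100," or ",100"
// and reports whether capacity falls inside it.
func matchCapacity(rangeSpec string, capacityGi int) bool {
	if rangeSpec == "" {
		return true
	}
	parts := strings.SplitN(rangeSpec, ",", 2)
	if min := strings.TrimSpace(parts[0]); min != "" {
		if v, err := strconv.Atoi(min); err != nil || capacityGi < v {
			return false
		}
	}
	if len(parts) == 2 {
		if max := strings.TrimSpace(parts[1]); max != "" {
			if v, err := strconv.Atoi(max); err != nil || capacityGi > v {
				return false
			}
		}
	}
	return true
}

// firstMatch returns the action of the first policy whose conditions all
// match, mirroring the first-match-wins conflict resolution discussed above.
func firstMatch(policies []volumePolicy, v volume) string {
	for _, p := range policies {
		if !matchCapacity(p.CapacityGi, v.CapacityGi) {
			continue
		}
		if len(p.StorageClass) > 0 {
			ok := false
			for _, sc := range p.StorageClass {
				if sc == v.StorageClass {
					ok = true
				}
			}
			if !ok {
				continue
			}
		}
		return p.Action
	}
	return "" // no policy matched; default behavior applies
}

func main() {
	policies := []volumePolicy{
		{CapacityGi: "100,", Action: "skip"},                       // skip very large volumes
		{StorageClass: []string{"slow-disk"}, Action: "fs-backup"}, // file-system backup for slow disks
	}
	fmt.Printf("%q\n", firstMatch(policies, volume{CapacityGi: 200, StorageClass: "fast"}))      // "skip"
	fmt.Printf("%q\n", firstMatch(policies, volume{CapacityGi: 10, StorageClass: "slow-disk"})) // "fs-backup"
	fmt.Printf("%q\n", firstMatch(policies, volume{CapacityGi: 10, StorageClass: "fast"}))      // ""
}
```

First-match-wins keeps conflicting policies deterministic without requiring Velero to reconcile overlapping conditions.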
J
M
M
Yeah, and this API, it's not like a custom resource definition where it's just... yeah.
B
Currently not, yeah. We do have some idea that we want to introduce a custom resource, like a backup policy or resource filter, but currently I don't think it's mature enough. So we just put this in a config map, which allows us to make changes easily, and in future, if this gets good feedback...
B
M
M
I guess my first question is: instead of defining a separate set of APIs with a separate set of attributes, why not tap into Kubernetes, like how they have the volume API, right? So essentially we just need a way to define the different attributes of a volume and say include or exclude this volume based on those attributes. Maybe storage classes, yeah.
B
Yeah
we
yeah,
we
did
discuss
it.
Do
we
filter
it
just
based
on
the
based
on
the
volume
attributes,
but.
A
B
Yeah, but I think this one is more user friendly, because the attributes in the PV itself are sometimes confusing; there are quite different ways to define the same thing, as we found in our internal discussion. So instead of letting users read the PV to understand the attribute they want to match, it may be easier if we just provide more human-understandable attributes here in the condition, so that we can explain them and document them.
M
Concern
that
eventually,
this
design
gonna
end
up
like
trying
to
chase
like
him
after,
like
what
kubernetes
API
is
going
to
be
doing
so
today
we
only
do
capacity,
CSI
storage
class,
but
tomorrow
we'll
say:
oh
no.
Now
we
need
to
do
like
more
attributes
to
it,
because
kubernetes
volumes
API
support
all
these
different
attributes
necessarily.
J
...necessarily a new CRD for that. You could even add it to, for example, the backup CR spec might have a field that happens to be defined as one of the Kubernetes volume structures or something. That's what we do with some of the other ones, where we can define, for example, label selectors; we didn't redefine that, we used the Kubernetes one.
J
Some of the volume structs that already exist, we might be able to use some of those here.
B
The
reason
we
we
put
in
a
separated
config
map
is
that
we
find
out
that
that
if
we
we
don't
want
to
continually
adding
new
fields
to
the
CR,
and
that
will
make
the
CR
really
huge.
Because
in
this
there's
a
structure
you
can
essentially
Define.
If
then
a
semantic,
but
you,
if
you
don't
put
the
whole
child
info
back
up
there,
that
may
be
too
much
data.
M
Okay,
yeah
there's
definitely
there's
definitely
some
similarities
between
like
this
one
and
what
I
was
hoping
to
accomplish,
but
you
know
addressing
like
a
custom
resource
definition.
G
G
M
...pull request, and then maybe add some thoughts to it. Thanks for putting this together and bringing it to my attention, appreciate it. Thank you.
B
Right, this is one more additional filter, so all the existing filters still work as they do; we just add this one more filter at this moment. But yeah, we all realize that there are too many filters, and some of them conflict with each other in Velero. We're going to handle that in future, hopefully introducing some breaking changes and dropping some of them.
B
B
B
That's doable. That may not be an action; that might be a parameter in future, like type: volume snapshot, parameter: move the data, true or false, and that can be decided. That's doable, but in terms of which piece of information to put where, that's a tricky thing, and I think we will need a lot of back and forth.
J
That would be a use case, because, kind of like you have volume snapshot and file system backup, I imagine data mover would be another one, as another option.
B
J
J
B
E
E
The data part is the volume policies, and here the backup CR has the reference to the config map. We also introduce a versioning mechanism: we add a version field in the YAML data in case of a breaking change. Because if we support versioning, you would consider multiple-version support, and currently we decided to only support one version. Suppose that in 1.11 we have V1, we have the V1 YAML data, and in 1.13 we have a breaking change.
E
Then the YAML data will be upgraded to version V2, and if the user bumps their Velero version from 1.11 to 1.13, before they bump up they should first label the config map with this kind of label, and when Velero starts up, it will do the migration to change the format of the data, the YAML data, from V1 to V2.
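The version-field migration scheme described here could be sketched like this, assuming a hypothetical v1-to-v2 step; the real config map contents and migration logic will differ:

```go
package main

import "fmt"

// policyData is a stand-in for the versioned volume-policies data stored
// in the config map. The Policies field is a placeholder for the real
// policy structures.
type policyData struct {
	Version  string
	Policies []string
}

// migrations maps a source version to the step that upgrades it to the
// next version. Steps are applied in sequence until the target is reached,
// mirroring the "migrate V1 YAML data to V2 on startup" scheme described.
var migrations = map[string]func(policyData) policyData{
	"v1": func(d policyData) policyData {
		// Hypothetical breaking change: rename or reshape fields here.
		d.Version = "v2"
		return d
	},
}

func migrate(d policyData, target string) policyData {
	for d.Version != target {
		step, ok := migrations[d.Version]
		if !ok {
			break // no migration path; leave the data as-is
		}
		d = step(d)
	}
	return d
}

func main() {
	d := policyData{Version: "v1", Policies: []string{"skip-large-volumes"}}
	fmt.Println(migrate(d, "v2").Version) // v2
}
```

Note the point made below: the version only moves when a breaking change is introduced, so purely additive changes leave the data at v1 and this migration never fires.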
E
B
Yeah, one thing I want to highlight is that in terms of versioning, it's only for managing breaking changes, so it may stay V1 forever as long as we do not introduce any breaking change. For example, to satisfy Yvonne's requirement we may add a resource-policy chunk in this config map, but that's not a breaking change, that's an additive change, so in that case the version will remain V1.
E
B
Yeah, so that's something we want to achieve in 1.11 at least: to skip the volume based on storage class. I think that's the top-priority concrete use case.
B
This is mainly about this PR. There's a contributor, I think based in India, so he's not attending this community meeting. This PR has been around for a while. Currently there's some concern regarding the accuracy of the information in the result. Scott, I think you know more of the background.
J
Yeah, I can give some background here if you want. Basically, on the restore side we have a fairly elaborate setup for communicating errors and warnings, where most of the functions that we call return a result back.
H
H
Namespaced, and...
J
...then cluster-scoped, and then general. And we've known for a long time that we want to do the same thing on the backup side; it's just one of the things that hasn't been done. So this is our first attempt, or someone's first attempt, to actually implement it, rather than reworking all of those functions to pass back the results like restore does, which would probably be the best way of doing this, but would be a lot more work...
J
...the results. And I guess the concern here is that right now, the way this code works, namespaced resources are handled correctly, but we're not distinguishing between cluster-scoped resources and general errors that aren't resource specific. I think my last comment on there was: it might be worth someone looking into this PR and looking in the code to see whether there is some way, either with the current log messages or maybe with a quick modification to some of the log calls...
G
J
That would still be an improvement over now, where we have no structured errors. But I think we do want to eventually get to the point where we can distinguish them. I don't think we should say "oh well, we can't distinguish it, so we shouldn't do this."
J
I think this is an important thing to fix, and even not distinguishing there is better than nothing. But if we can figure out how to make that distinction, either by modifying the logging or with the current logging, and there may be some way of knowing already, I haven't looked at these in detail to see what the log methods actually look like. What we need to do is generate a backup that has a general error and a cluster-scoped error, not necessarily...
K
K
B
Yeah, so you're saying that even if we do not distinguish the cluster-level, namespace-level, and Velero-level errors accurately, it's still better than nothing.
J
I think we do want to use that same result struct for the backup and restore, because we do want to eventually get there, and also because this gets uploaded to the object store. We want to use the same struct so that we're parsing it the same way; that way we can reuse that code.
K
J
B
J
K
J
If there's a place to make a note in the docs that this is a limitation in the backup, that would be great. If not, at least make a GitHub issue so that we know to track it, and maybe in 1.12 or whenever we can fix it and kind of manage it later. Of course...
B
But just one more comment: because this result is tightly coupled with the log, we're going to have to be very careful with the log methods in future.
J
K
J
...actually do the reverse. Basically, again, a little more context: several releases ago I actually ran into this, and I realized that we were doing totally different things on the backup side. We just logged...
J
...which we're doing now, and we generated the error counts for the CR from the logs. On the restore side, we were using the results and we weren't logging them at all. So if you ran velero restore logs, you wouldn't see the errors. So I actually submitted a PR a few releases ago that added those errors and warnings from those results to the log, so at least if you do look at the logs, you'll get the errors also. And this PR kind of does the same on the other side, kind of makes them consistent.
B
J
J
K
B
J
Yeah, I think if we can fix that distinction between cluster-scoped and general errors without doing the refactor, that might be fine; there just may not be an easy way of doing that. We need to look into the relative effort there.
M
C
B
A
The last one, from Eva. I think we have mentioned this a little bit.
C
A
C
A
If there are more things, we can discuss them next time.
J
C
J
That's kind of a follow-on; there are some comments on that. There was the PR around splitting the cluster-scoped and namespaced resources, and we had agreed we want to make that a separate design proposal rather than trying to keep going with that one. But the idea here was to extend that further, to be able to use field selectors to do the includes and excludes in a more granular way.
J
So the easiest use case to think about here is: I want to include CRDs in my backup that aren't necessarily in use, because right now we include CRDs only if there's a CR. But maybe you have an operator that has CRDs that are only used for temporary resources. So when you do a backup, you don't have a resource there with that CRD, but you want to back up the CRD. Yet because it's a namespace-level backup, you don't want to include all cluster-scoped resources; you don't want to include every CRD in your cluster.
J
You just want the ones really related to your application. So that was the use case he was giving, as one reason why we would like to be able to do this, and I think we might be able to do it by basically extending it. I haven't actually read through this design proposal in detail yet, but that was the discussion that led to it being created.
B
J
The thing that is less clear to me is that the other thing we were talking about was mostly on the volumes and very specific to the way volumes work. But you could do the skip thing here with this, without that. I guess it depends: if you're talking about custom behaviors per volume, that's kind of different; if you're talking about a way of skipping, excluding volumes by name or including volumes by name, or all of that...
J
You
know,
then
this
proposal
would
allow
for
that.
I
I,
just
don't
I,
don't
know
otherwise
how
compatible
they
are,
because,
although
there's
some
overlap
in
what
their
use
cases
are,
there
are
some
very
different
things
about
them
as
well.
C
Okay, thanks. Now I think we have covered all the topics. Thanks, everyone, for this long meeting, and see you next time. Thanks.