From YouTube: Velero Community Meeting - October 26, 2022
A
Meeting on October 26th. First, let me give some status updates. For 1.9.3, the target date has been decided as October 28th, and the included fixes can be viewed in this link; if everything goes as planned, it will GA then. For 1.10, we have met the RC criteria, so we are now preparing the RC. Because we have been delayed for several days, the GA may also be delayed; the date is not decided yet. For the last two weeks, for the Kopia integration, we have done the performance test and also the upgrade and compatibility test, so we will prepare documents for both for the release update.
A
I myself am working on several things. The first is a new design for the velero upgrade command. It is meant to replace the current upgrade procedure, which is documented as a manual process; the velero upgrade command will replace that in the future. Because of the effort involved, we did not include the design and implementation in 1.10, but it is still a good idea and a good solution. So please continue to review it, and we may be able to implement it in 1.11.
A
Okay.
The
next
thing
is
it's
some
documentary
Factor,
because
we
have
refactor
the
part,
one
backup
workflows
and
we
also
added
in
two
passes
and
also
we
rename
some
something.
So
the
document
need
to
be
refactored
and
I
have
submit
a
p
submitted
PR.
Please
reveal
it
and
finally,
I
troubleshooted
some
pipeline
issues.
So
right
now
the
pipeline
is
basically
working
for
all
the
cases.
Yeah,
that's
so
much
like
and
Junction.
Please.
B
Okay, I'm working on developing the Velero 1.9.2 Carvel package and integrating it into TCE and TKG. So far the TCE part is completed; I'm still working on the TKG part. The second item is cherry-picking some PRs into the 1.9.3 release and doing some tests, and the last one is adding some new test cases for 1.10. That's all.
A
Okay, thanks.
E
Hello, yes. I'm running the nightly tests for 1.10, and we still have a few stability issues; we haven't had a full test pass in a single run yet. But we still feel confident about the RC, because every test has passed in at least one of several test runs. I will try to fix the stability issues in the coming days. We have prepared the test plan for 1.10, and we will hold a roundtable test tomorrow: all the members will test for one day. Okay, that's all from me.
F
One thing I'd forgotten to do, which Daniel reminded me of in some comments, was that the state diagram from Dave's original design that we copied over still had the old names for the states. Since we changed the names of the uploading states for the more generic functionality here, I needed to update that diagram. That has been done, and I made a few more updates: I corrected some mistakes, updated the design in a couple of areas in response to feedback, and answered some other questions as well. Hopefully, now that there's more attention on this again, we can go back and forth and finalize it in the next week or so. Obviously there's not going to be any code in 1.10 based on this, but the hope is that it will be ready by the time we've branched for 1.10 on the release branch.
F
I don't have any intention of getting the refactoring PR reviewed or merged until we've got all the designs out of the way, because I realized that any changes to the design at this point might require changes to that refactoring. I just wanted to get started on some of the early things, because I think the API designs are where there was less discussion, although one of the comments from Daniel did involve a possible field name, so even that could change. We'll adjust as needed if we do make changes to the design, but because there are a bunch of components to this, I wanted to front-load some of the early implementation work around the API changes, to get those going soon after we've made the 1.10 release branch, so that a lot of this stuff can be in place early in the 1.11 development cycle.
A
Okay, thanks Scott, and thanks a lot for monitoring the comments. I think we will focus on discussing this PR next. Yeah.
F
I do want to mention one thing. One of the comments was about concerns about the JSON file, because of the changes around the unified repository stuff, and I actually think there might have been some confusion about where that was coming from. The plug-in itself is not going to be writing to the object store.

This is more analogous to what we're doing, for example, in a backup or restore when we upload the logs: we create a gzipped log file that we put to the object store. This file that lists the operations that Velero needs to come back and look at would be uploaded in a similar place, in a similar way. So this would just be another file that goes through the existing ObjectStore interface that we have for AWS and everything else right now.
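The flow Scott describes above — the operations file is just another object persisted through the ObjectStore plugin interface, like the backup logs — can be sketched roughly as follows. This is a minimal sketch: the interface is trimmed down to one method, and the key layout and function names are my own illustration, not Velero's actual code.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// A trimmed-down stand-in for Velero's ObjectStore plugin interface
// (the real interface has more methods: GetObject, DeleteObject, etc.).
type ObjectStore interface {
	PutObject(bucket, key string, body io.Reader) error
}

// memStore is a toy in-memory ObjectStore used here only for illustration.
type memStore struct{ objects map[string][]byte }

func (m *memStore) PutObject(bucket, key string, body io.Reader) error {
	data, err := io.ReadAll(body)
	if err != nil {
		return err
	}
	m.objects[bucket+"/"+key] = data
	return nil
}

// persistOperationsList uploads the per-backup operations file the same way
// the backup logs are persisted: as just another object written through the
// ObjectStore interface. The key layout is hypothetical.
func persistOperationsList(store ObjectStore, bucket, backupName string, contents []byte) error {
	key := fmt.Sprintf("backups/%s/%s-itemoperations.json", backupName, backupName)
	return store.PutObject(bucket, key, bytes.NewReader(contents))
}

func main() {
	store := &memStore{objects: map[string][]byte{}}
	if err := persistOperationsList(store, "velero-bucket", "backup-1", []byte(`[]`)); err != nil {
		panic(err)
	}
	fmt.Println(len(store.objects)) // one object persisted
}
```

Because only the `ObjectStore` interface is involved, any provider plugin that implements it (AWS, or the hypothetical NFS-backed store mentioned below) would carry this file without changes.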
A
Okay,
okay,
so
that
will
be
good.
It's
like
just
like
the
existing
Json
persistent,
exactly
like
yeah
like
the
Vlog,
that's
right,
so
that
will
be
centrally
handled
by
at
the
end
of
the
backup.
For
example,.
F
So, for example, if we have an object store in the future that uses NFS local storage, and we use the ObjectStore plugins to implement that, this will still work, because we're just going through the plugins, just like everything else.
A
Okay, yeah. So I think it's very simple, right, because the different plugins implement a common interface, and different plugins may face different storages. So it's better not to make any assumptions inside the plugin.
F
Right, exactly. All the plug-in needs to do is have an ID for this operation. So, for example, a volume snapshotter plugin has a snapshot ID that we already have access to; we already know what it is, and the plugin just returns it. The same with the data mover plug-in, for example: it creates a data mover CR that the data mover controller is going to operate on, so that CR is going to have some ID.
A
Yeah, so the next question is something like this: since we don't write anything from the plugin, why should we define this data, and the way to write it, in this plugin interface design? Can we just mention that if the plugin wants to write something, it is the plugin's own work?
F
No, that's the thing: this isn't the plug-in's. This is Velero's interface, because when Velero calls plug-ins, Velero has to come back and check to see if these operations are done. So Velero needs a list of all of these; basically it's a list of plug-in ID / operation ID pairs. So Velero can say, for example: you have two volume snapshots that the volume snapshotter plug-in initiated an upload on, and you've got two more operations from a data mover.

So Velero has this list of four operations, and then, at the end of running the backup, it grabs the list and says: okay, it calls into the volume snapshotter plug-in with that snapshot ID and asks the plugin, hey, give me the progress for this. The plugin figures out whether it's done or not done and returns that value, and the same with a data mover plug-in or any other plugin. So this is just a list that's part of the Velero backup controller's processing; the plugin doesn't create it.
F
This
Json
doesn't
even
know
that
this
file
exists.
This
is
something
that,
for
example,
the
item
backuper
knows
about
or
the
backup
the
backup
controller
knows
about,
because
what
happens
is
that
when
you
finish
when
Valero
finish
is
running
a
backup,
if
there's
still
these
operations
that
have
it
completed,
because
you
know
the,
for
example,
the
data
mover
is
uploading,
a
big
a
back,
you
know,
snapshot
to
storage,
then
Valero
puts
that
back
up
in
the
and
then
kind
of
that
async
operation
running.
F
That is, the waiting-for-plugin-operations state, and it moves on to the next backup. Then, when we reconcile again, we go back and look at existing backups that are in that waiting-for-async-operations state. For each backup in turn, it grabs that JSON file, gets the list of operations that we need to check status on, and calls the plugins; if everything's completed, then we move the backup into the completed state, and now it's done.
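The list Scott keeps referring to — plug-in ID / operation ID pairs that the backup controller polls on reconcile — could be modeled along these lines. This is an illustrative guess at the shape, not the actual design: the field names, JSON layout, and helper are all assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OperationEntry is a hypothetical entry of the per-backup operations list:
// which plugin started the operation, and the opaque ID that only that
// plugin knows how to interpret (a snapshot ID, a data mover CR name, ...).
type OperationEntry struct {
	Plugin      string `json:"plugin"`
	OperationID string `json:"operationID"`
}

// allDone models the reconcile step: ask each plugin (stubbed here as a
// lookup function) for progress, and report whether every operation in the
// list has completed.
func allDone(ops []OperationEntry, progress func(plugin, id string) bool) bool {
	for _, op := range ops {
		if !progress(op.Plugin, op.OperationID) {
			return false
		}
	}
	return true
}

func main() {
	// Two snapshot uploads and two data mover operations, as in the example.
	ops := []OperationEntry{
		{Plugin: "volumesnapshotter", OperationID: "snap-1"},
		{Plugin: "volumesnapshotter", OperationID: "snap-2"},
		{Plugin: "datamover", OperationID: "dm-cr-1"},
		{Plugin: "datamover", OperationID: "dm-cr-2"},
	}
	data, _ := json.Marshal(ops) // this is what would be persisted as the JSON file
	fmt.Println(string(data))

	// One data mover operation is still running, so the backup stays waiting.
	done := map[string]bool{"snap-1": true, "snap-2": true, "dm-cr-1": true}
	fmt.Println(allDone(ops, func(_, id string) bool { return done[id] })) // false
}
```

The point of the sketch is the division of labor: Velero owns the list and the polling loop, while interpreting each ID stays inside the plugin.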
A
Okay, so currently we mention this JSON in this PR just to describe the behavior — the behavior of the controller, right, not of the plugin. Yeah.
F
We
modified
the
plugin
interface
to
initiate
this
operation
and
return
the
ID,
and
you
know
the
the
methods
to
call
progress
and
cancel,
but
the
design
also
includes
sections
for
what
we're
doing
to
the
workflow
of
the
actually
running
a
backup
or
restore
that
requires
changes
to
the
list
of
backup
and
restore
status,
phases
and
logic,
changes
around
the
backup
and
the
restore
workflow.
And
that's
where
this
item
the
Json
file
fits
into.
F
It
fits
into
the
section
of
the
design
around
the
workflow
chain
just
to
back
up
and
restore
not
to
the
plugin
changes,
because
those
are
two
different
components
of
this
overall
design,
because
we
have
to
have
all
of
these
things
working
together
for
it
to
be
functional.
Now,
they're
going
to
be
implemented
a
separate
PRS,
for
example.
The
first
thing
we're
going
to
do,
which
I've
already
started:
I
have
one
PR
open
on
the
draft
now
because
it's
not
an
approved
design.
F
That first PR changes the plugin interfaces. Once the plugin interfaces have been modified to have those V2 interfaces, the next step is going to be adding those additional workflow states to the CRDs, so that those are valid states for backups and restores. And then the next thing we do is modify the actual backup and restore workflow to check the status and update it.
F
And
you
know,
go
through
that
whole
process
and
that
that's
going
to
be
the
probably
the
most
complicated
part
of
this,
because
that's
actually
changing
the
workflow
of
the
way
Villard
runs
backups
and
restores
and.
F
All
that's
in
place.
The
last
thing
to
do
for
this
is
actually
update
the
existing
plugins
that
need
this
to
make
use
of
the
new
feature
so,
for
example,
the
volume
snapshotter
plug-in
and
within
use
this
or
the
you
know,
once
we
have
a
data
mover,
those
plugins
would
make
use
of
it
or
any
other
plugins
needed
it.
So
that's
kind
of
a
and
that's
all
outlined
at
the
end
of
the
document.
It's
kind
of
the
implementation
tasks
it
kind.
F
There's
the
API
specific
changes,
the
workflow,
specific
changes
and
so
there's
different
sections.
The
design
document
that
focus
on
those
different
parts.
A
Okay, so that's clear for me now. Maybe we can find a way to collaborate on the document, because the first time I read it, I believed that this JSON persistence was done either by the plugin itself or by Velero — the backup controller. And then, when I read it again, I tended to believe that it's done by the plugin, so I got somewhat confused. Yeah.
F
I'll,
go
back,
I'll,
go
back
and
look
at
the
documents
tomorrow
to
make
sure
if
there's
anything
I
can
do
to
kind
of
if
it
helps
to
move
things
around
to
make
it
clear,
but
basically
I
mean
there
are
separate
sections
of
the
document,
for
you
know,
plug-in
API
changes,
workflow
changes,
but
I'll
go
back
there
tomorrow,
just
to
double
check
to
make
sure
that
all
of
the
thing
relating
you
know
everything
relating
to
those
the
the
item
operations
CSM
files-
are,
you
know
clearly
in
that
workflow,
so
plug
backup
and
restore
workflow
section
of
the
document,
rather
than
you
know,
plug-in
because
yeah
that
that's
that's
like
I
guess.
F
The
key
here
is
that
that's
part
of
Valero's
processing
of
the
backup
and
handling
the
API
changes
and
then
the
API
changes
themselves
are
more
limited
because
within
the
API
we're
talking
about
a
single
item
for
a
single
plug-in
at
a
time
and
at
that
level
the
plug-in
changes
are
a
little
bit
more
isolated.
Basically,
the
execute
method
of
the
plug-in
returns.
This
extra
operation-
ID,
that's
optional.
F
So,
if
it's
blank,
if
it's
empty,
then
that
means
no
change
that
we
don't
have
an
operation
to
check
on
and
then
we
have
this
progress
function
and
the
that
we're
adding
that's
a
new
API
function
where
you
pass
that
operation
ID
back
in
and
the
plugin
says:
okay,
I
know
how
to
look
up
the
operation
ID
for
the
plugin.
Let's
check
to
see
if
it's
done,
and
so
maybe
it's
looking
at
the
data
mover
or
CR.
Maybe
it's
looking
at
the
snapshot
upload,
you
know
whatever
the
plugin
is.
F
The
plugin
has
to
know
how
to
interpret
that
ID
to
find
find
a
status.
Bolero
doesn't
know
that
doesn't
understand
that
that's
kind
of
the
split
but
on
the
other
hand,
when
you're
talking
about
managing
the
list
of
operation,
IDs
and
plugins,
that's
folaro,
workflow
job
on
the
backup
and
the
restore
and
the
plugins.
Don't
need
to
know
about
that
because
that's.
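The V2 shape being discussed — an execute call that may return an operation ID (empty meaning "nothing to poll"), plus progress and cancel calls that take the ID back — could look roughly like this. The interface and method signatures here are a hypothetical sketch, not Velero's final V2 API; the fake snapshotter exists only to show the polling contract.

```go
package main

import "fmt"

// ItemActionV2 is a hypothetical sketch of the V2 item action contract:
// Execute may return an operation ID (empty means "no async operation to
// check on later"), and Progress takes that ID back and reports completion.
type ItemActionV2 interface {
	Execute(item string) (operationID string, err error)
	Progress(operationID string) (done bool, err error)
	Cancel(operationID string) error
}

// fakeSnapshotter is a toy plugin: Execute "starts" a snapshot and returns
// its ID; Progress pretends the snapshot finishes on the second poll.
type fakeSnapshotter struct{ polls map[string]int }

func (f *fakeSnapshotter) Execute(item string) (string, error) {
	id := "snap-" + item
	f.polls[id] = 0
	return id, nil
}

func (f *fakeSnapshotter) Progress(id string) (bool, error) {
	f.polls[id]++
	return f.polls[id] > 1, nil // only this plugin knows what the ID means
}

func (f *fakeSnapshotter) Cancel(id string) error { return nil }

func main() {
	var action ItemActionV2 = &fakeSnapshotter{polls: map[string]int{}}
	id, _ := action.Execute("pvc-1")
	done1, _ := action.Progress(id)
	done2, _ := action.Progress(id)
	fmt.Println(id, done1, done2) // snap-pvc-1 false true
}
```

This mirrors the split described above: the controller only ever holds the opaque ID and polls; what the ID refers to (a CR, a snapshot upload) stays entirely inside the plugin.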
A
Okay, okay, thanks a lot. Sure, that's the interesting part; we will focus on discussing this in the PR. Thanks a lot. Anything else?
F
So there's the overall design, which includes the API changes built in, and then there are these smaller, more detailed API designs that actually list, for example, the proto code that we would be using for those API changes to actually build them out. That's kind of what the split is.
F
So
once
we
approve
and
merge
and
make
whatever
changes
we
need
to
to
the
async
design,
then
I
can
go
back
if
there's
any
changes
necessary
I
can
go
back
and
modify
those
three
kind
of
smaller
but
more
detailed,
plug-in
APA
designs,
and
then
we
can
get.
Those
will
probably
be
quicker
to
review
and
merge
because
they're
more
limited
in
their
scope.
It's.
F
Okay, for BackupItemAction, here are the changes we're making. And I will mention that RestoreItemAction is a little more complicated, because it combines two features: we have a previously approved design, referenced there, that also added a couple of functions to the API, and so we're basically putting both into the V2, because there's really no point in making a V2 and then an immediate V3, all in the same 1.11 time period.
F
The design itself links to the relevant documents that it's based on, but don't worry about those yet. Let's get the overall design finalized first. Once that's merged, then we need to focus on getting those smaller API design PRs merged.
A
Yeah
yeah
thanks
thanks
God
and
do
you
have
any
comments
on
this
PR
because
we
are
also
working
on
this.
B
No, so far I don't have more comments. I think Scott already clarified the JSON file and some details of the design.
A
Okay,
okay,
then,
we
can
discuss
mom
about
this
offline,
okay,
okay,
okay,
thanks
Scott
and.
D
Yeah
now
I
have
pump
up
some
the
reference
of
the
golden
Library
use
value
in
the
pr
and
the
requirement,
and
next
I'm
going
to
verify
to
some
location
for
the
area
plugin,
to
check
whether
to
to
the
CSI
package
to
check
whether
CSI
plugin
can
cover
the
use
case.
Set,
it's
a
parking
income
backup
either
digs
from
multiple
resource
groups.
This
your
case
is
not
supported
by
either
plugin,
but
we'll
see
whether
it
can
be
supported
by
CSF
problem
s.
A
Okay,
thanks
again
and
next
we
go
to
the
discussion
topics.
The
first
one
is
from
my
side.
Actually
one
of
the
we
found
one
problem
and
we
need
to
change
the
behavior
of
the
editing
Behavior,
a
little
speech
about
how
we
support
the
as
three
compatible
obvious
law,
but
not
known
by
Red
Arrow.
A
Currently
we
are
known
the
AWS
right
and
beside
that
Azure
and
the
gcp,
but
for
some
there
are
many
other
Cloud
providers
that
providing
the
R3
compatible
object
source
so
at
the
existing
rapid
backup,
I
actually
support
that,
so
the
user
need
to.
Let
me
open
this
issue.
A
You
don't
need
need
to
provide
something
like
of
a
parameter
called
rustic
repo
prefix.
What
is
the
rate
of
prefix?
It
is
only
on
the
stand
by
by
rustic.
It's
a
it's
in
this
format,
so
it's
like
when
we
when,
when
Valero
calls
I,
do
a
radical
backup
or
call
rather
button
command
for
AWS.
A
It's
also
composing
a
URL,
something
like
this,
so
for
others
that
without
doesn't
know,
without
requires
user,
to
provide
this
URL
into
the
rusty
Ripple
prefix
parameter,
but
that's
only
work
for
rustic,
because
this
part
is
unknown
only
known
by
ratio.
This
is
a
you
know,
protocol
a
decided
by
rustic.
So
the
first
question
is:
how
do
we
spot
copy?
A
Kopia only supports providing the separate parameters in a common way, like this: this is the region, and this is included in what we call the endpoint — this is the endpoint, this is the bucket name, and following that is a prefix. So Kopia only accepts this kind of parameter — endpoint, bucket, and prefix — and there is no way to provide a URL string to Kopia. That's the first problem.
A
The
second
problem
is
actually
we
have
a
map
in
this
this
place,
the
BSL
stack
dot
config.
This
is
the
map
and
that
map
we
already
have
these
separate
parameters.
I
mean
the
endpoint
bucket
name
and
end
point
in
relatively
call
it
S3
URL,
and
we
already
already
have
it.
So
they
are
redundant
information,
so
we
can
only
make
we.
We
only
need
to
make
once
more
change
to
make
the
secondary
work
and
as
a
generic
solution
for
both
radio
copia,
graphic
and
copia.
A
So
in
this
PR
first
thing,
I
I,
just
I,
just
make
that
second
solution
work
by
changing
the
code
a
little
bit.
Second,
things
users
has
already
using
this
I
use
this
graphically
perfect
in
a
previous
release
and
theoretically
we
can
remove
it
because
the
second
part
also
works
for
work
for
both
as
a
second
solution
work
for
both
residents
and
Cobia.
A
But
for
compatibility,
consideration
we'll
leave
it
there,
that
is
to
say,
that
is
to
say
the
common
way
or
the
recommended
way
for
Valero
to
to
handle
this
actually
compatible
and
not
known
providers
is
to
use
the
second
way
that
to
specify
the
separate
parameters
in
the
config,
but
there
are
also
supports
that,
for
traffic
only
to
support
to
provide
a
user
to
provide
this,
this
URL
in
the
radical
repo
prefix.
A
That
is
what
we
have
done
in
this
PR,
and
so
as
because
of
those
we
have
some
Behavior
change
in
this
PR
and
our
way
we
were
also
documented
the
new
solution
or
the
new
way
in
the
in
the
the
point
volume
backup
document.
A
So
please
help
to
review
this
PR
and
we
can
discuss
any
anything
through
comment
or
something
like
that.
Oh
do
we
have
any
common
right
now
for
this
change.
A
If,
if
not
now,
we
can
continue
to
review
this
PR
offline
and
I
believe
that
this
PR
needs
to
be.
We
need
to
have
a
solution
to
support
Cobia
before
the
RV,
so
we
may
have
one
week
something
like
that.
If
the
work
is
good,
we
can
merge
the
pr
if
this
doesn't
work,
I
think
we
won't
need
to
find
another
way.
That
is
the
current
situation,
that
is
from
my
site
and
any
questions
on
this
PR.
A
Ok
Okay,
if
not
I,
and
the
next
one
is
the
the
current
performance
testing
readout
and
the
upgrade
document
for
v1.10
that
is,
will
be
interested
by
zooming
and
I.
Think
that
will
take
some
time.
So,
let's
go
to
the
third
question.
A
Third,
the
discussion
from
shubham
first,
because
that
is
a
question
right,
so
are
we
planning
to
remove
radical
spot
if
you
have
seven
and
the
information
from
my
site
is
like
with
first
of
all,
first
of
all,
we
will
not
remove
rustic
in
1.10
and
when,
while
we
remove
it
that
will
based
on
first
of
all,
first
of
all
the
New
Path
The
copier
path.
A
It depends on how the Kopia path works and performs. If the Kopia path works very well and the performance is much better, we may consider removing the restic path in the near future. If not, we will keep both — for example, if for some cases Kopia works better but for other cases restic works better, in that case we will keep both paths for quite a long time, until we finally decide that with one path everything can be solved and everything works well. That's the information from my side.
F
I would say, too, since we're definitely not deprecating this in 1.10: generally, when you deprecate something — and this is where the policy comes in; we haven't decided — you have some period after you deprecate it before you remove it. When you deprecate, you don't remove it; you warn people that it is going to be removed in the future.
F
It's still there at that point. So once you decide to deprecate something, at a minimum that's one release, and we might decide it's two releases; that's why we need a policy around this. We need to decide, for example: if we deprecate something in 1.12, does that mean we're going to remove it in 1.13, or could it be 1.14? I would agree with more than one release, but I wonder if the release schedule for Velero is consistent enough for us to be purely release-based; we might need to include some amount of time in that equation as well.
H
Yes, yes. I think let's get started on working out a deprecation process for the project overall; I think that is the first step we need to take. So if anyone could submit a PR to propose a process, that would be good, and I will also sync up with our PM about this.
H
If
he
have
any
opinion
on
this
or
anything,
we
can
I
mean
to
proposal
or
for
the
replicating
process,
and
once
once
that
is
pi,
ready,
I
think
the
Nintendo
and
concluded
to
discuss
this
in
the
community
and
get
an
agreement
on
a
wild
process.
We
prefer
to
use
to
detect
a
future,
so
it
it
does
make
sense.
F
You,
but
my
only
question
relating
to
that
is
because
you
said
you
know
we
should
have
someone
should
submit
a
priority
to
you
know
proposed
policy
here
is:
where
exactly
would
that
policy
go?
Is
there
an
existing
document
somewhere
in
the
docs,
where
it
would
be
appropriate,
or
would
this
be
a
new
document
that
would
go
somewhere
I'm,
not
sure
what
what
exactly
the
pr
would
be
modifying,
I
guess.
F
So
so
it
may
just
be
that
we
need
a
new
file.
You
know
under
the
docs
section
of
the
repo
that
would
just
be
an
application
policy.
A
Yes, so anyway, we'll be careful with the deprecation things. We will have a standard process, and we will also track the status of the two paths — I mean Kopia and restic. Okay.
F
And
I'm
just
thinking
from
a
timeline
point
of
view,
I
just
since
we're
very
close
to
110
being
released,
and
since
we
don't
have
a
policy
in
place
certification,
yet
I'm,
assuming
that
means
110
we're
not
deprecating
anything
so
the
earliest
we
would
deprecate
the
arrested
that
it
could
be
would
be
in
111,
which
would
then
be
some
number
of
releases
after
that
to
remove
it.
But
we
also
might
decide
that
one
level
does
not
the
time
to
deprecate
it.
Based
on.
You
know
whether
we
have
issues
that
aren't
you.
A
Okay, thanks all for this discussion. The next topics are yours, so over to you.
A
Yeah, sorry. Yeah, that is the current Velero upgrade doc. It's quite simple: these are the manual steps, and users need to follow these steps and do the upgrade manually, right. Okay.
C
Also, for restic, we should delete the restic daemonset and create a new daemonset whose name is node-agent. That is just a batch script, and it's just for reference; we will develop a new velero command to do the upgrade. This is just a temporary approach for 1.10.
A
Yeah, let me add one more thing to elaborate on this doc and this solution — the relation of this solution to the new velero upgrade command. If we had the velero upgrade command, it would do the things included here in code and do them automatically. But for 1.10 we could not include that, because of the effort involved.
A
So we have this script and this doc to help users do these steps manually. That is the relationship between the two solutions. Yes, okay.
C
Here
is
developer,
110,
perform
test
and
I'm
and
we're
doing
do
the
test
in
them
in
the
environmental,
1,
21,
14,
kubernetes
version
and
we're
using
the
menu
of
the
repo
and
NFS
of
the.
Surely
the
file
system
and
the
menu
disk
had
the
300
megabits
per
second
right
speed
and
NFS
has
the
175
megabase
per
second
right
speed
below
we
have
nine
test
cases
and
for
every
test
case
we
divide
it
into
two
groups.
C
In case 4 we have 1 MB per file, in case 5, 10 MB per file, and in cases 6 and 7, 1 GB per file. The results show that both Kopia and restic, with a one-core CPU, almost use up all the CPU — Kopia uses about 94% and restic 100% of one core — so they are mostly the same.
C
We have analyzed this phenomenon, and we found that the limiting factor for Kopia is the speed of the disk. But for restic, if we give it more CPU resources, the time consumed is much shorter than before: here it is 16 minutes, and with four cores it goes down to just five minutes.
A
Yeah,
so
it's
it
means
that
it
implies
that
right
so
for
for
for
the
throughput,
we
have
some
factors
like
the
throughput
copyright
you
can
process
and
the
support
of
the
disk
that
maybe
the
two
primary
factors
so
in
this
case
implied
it
implies
that
under
the
one
CPU
two
a
2GB
memory,
Cobia
has
already
read
the
rule
for
the
the
the
the
the
capability
capability
of
the
disk.
A
So
we
can
see
that
we
will,
when
we
increase,
increase
the
CPU
and
memory
there
is,
there
is
not
so
much
so
much
increase,
but
for
rustic
under
the
one
CPU
and
2GB
it
has
not
I
the
the
bottleneck
is
on
itself.
So
when
I
increase
the
CPU
and
the
memory,
it's
a
it's
a
it's
a
I
mean
the
finishing
time.
It's
much
shorter
and
it
has
yeah
in
terms
of
finishing
mod
is
much
shorter
and
it
has
consumed
much
more
CPU.
A
That
means
the
processing
of
The
Rustic
requires
multiple
on
this
case.
On
this
kind
of
cases,.
C
For cases 8 and 9, the total data we want to back up is the same, 900 gigabytes, but in case 8 we have 900 files, each file of one gigabyte, while in case 9 we only have one file, and that single file is 900 gigabytes. The results for both are similar, and the phenomenon we mentioned previously is much more obvious: increasing the CPUs does not directly improve Kopia's throughput, and with a one-core CPU restic just ran out of time, while with four CPUs it runs in two hours. Here is the memory usage of Kopia and restic for every test.
C
We
I
found
that
the
copier
memory
usage
in
the
line
is
very
stable,
but
the
realistic
with
the
time
passed
by
the
the
memory
using
system
as
a
nano
growth,
so,
overall
all
the
tests
that
shows
the
item,
including
tiny
fires
or
directories,
or
large
fires
copy
and
use
copies,
much
faster
than
realistic
and
for
tiny
virus.
Our
zero
content.
C
We
think
that
ristic
have
some
problem
with
the
dealers
drawing
in
in
Ripple,
especially
for
this
case
and
for
tiny
files.
Copier
use
more
CPUs,
but
when
increasing
the
fire
fire
size,
copy,
I'll
use
less
less
of
you
than
than
rustic.
C
Yes,
there
is,
there
is
that
is
the
I
am
briefly
go
through
the
document,
and
you
will
you
have
any
any
in
comment
or
you
supplement
supplement.
A
Yeah,
so
for
the
offered
our
work
I
mean
for
the
conclusion
here,
yeah
here,
so
we
we
can
say
that
besides
the
the
one
thing
I
want
to
add
is
like
it
doesn't
mean
that
when
I
use
Cobia,
there
is
no,
oh
actually,
both
copy
and
The
Rustic
doesn't
make
any
control
on
the
on
the
CPU.
Sorry
on
the
memory
usage,
so
it's
like
on
the
memory
usage,
so
it's
like
under
the
we
call
it
massive
small
files.
For
example,
the
first
couple
cases.
A
So when we go to large file sizes, or when we have a large backup size, the memory usage of both Kopia and restic will increase. In that kind of case, you may need to change the default Velero resource configuration — I mean the resource limits — otherwise there will still be OOMs. And there is one more thing I want to emphasize.
A
This
is
the
natural
thing
about
the
backup
or
You
Are
all
especially
for
the
file
system,
backup,
because
for
the
file
system,
backup
when
you
first
of
all,
we
need
to
consume
CPU
to
to
travel
the
file
system
and
also
we
need
to
store
the
file
system
metadata
and
finally,
on
the
reports
on
the
repository
side,
we
need
the
the
the
reports
they
need
to
to
store
the
the
dupe
indexes.
Something
like
that.
A
So
so
many
places
require
memory
and
CPU,
so
it
doesn't
mean
that
women's
wage
to
copy
otherwise
no
om,
oh
I,
want
to
say
that
in
some
some
in
some
some
scenarios,
so
the
current
default
configuration
I
mean
the
result.
Configuration
of
Valero
is
not
enough.
Users
need
to
increase
the
resources,
and
we
can
read
this
overall
conclusion
and
it
includes
for
which
cases
we
need
to
change
the
configuration
yeah.
C
Are
we
just
the
link,
put
a
linked
list.
A
What
about
like
this?
We,
we
will
created
a
performance
or
performance
guidance,
dog
right
and
that
will
be
in
the
rip.
Our
our
GitHub
repository
I
mean
code
reposit.
So
we
will
include
the
the
important
things
that
mentioned
in
this
problem
test
and
into
that
doc
to
guide
the
user,
to
make
a
decision
on
on
the
path
or
make
the
configure
the
configuration
change
and
so
for
others,
because
there
are
so
many
details
for
others.
We
will
not
publish
that.
Does.
A
Yeah, that is what we will do in the performance guide.
A
Okay, thanks. I think we have covered all the topics for today's meeting. Do we have any other things?
A
Okay,
we
nearly
are
run
out
of
time,
so
if
then,
we
know
other
things,
we
can
finish
to
the
meeting
and
thanks
everyone
and
have
a
good
day
and
a
good
evening
thanks
bye.