From YouTube: Velero Community Meeting/Open Discussion - June 8, 2021
A: There we go. Hello, everyone, and welcome to the Velero community meeting / open discussion. Today is June 8th, 2021. If you haven't already, please add your name to the attendee list in the HackMD that we use for meeting notes; it's very useful for us to see who's attending, and we just want to make sure that we capture everyone and their thoughts and comments.
A: We do have some status updates here, and we'll dive into some discussion topics per usual, most likely. I have one big status update that I'm going to cover first, and then we'll dive into the other ones. So, first off, we have a few things that we want to share with the broader community. We have some Velero maintainer changes coming, which will most likely impact the project's speed a bit initially, but also give us the possibility of enhancing Velero's feature set.
A: So, first of all, I want to say a big, big thank you to Nolan, Carlisia, and Ashish, all three of whom are on the call here, for their fantastic contributions to Velero over the many, many years. As we move forward, people also move forward, and Carlisia, Nolan, and Ashish have been looking at other interesting opportunities within VMware for a while; as such, they will be moving off the Velero project over the next few weeks during a transition process.
B: Sure. Hi, guys, this is Daniel. I'm a maintainer of Harbor, and in addition to Harbor I've also contributed to open source projects like Clair, Notary, and Docker Distribution, mainly the projects we need to integrate with, and I'm super excited to be joining this community and looking forward to the collaboration. I'll also try to learn some valuable experience and bring it back to the Harbor community, and make it benefit from this. Thank you, guys.
C: Sounds awesome. Welcome, Daniel. Wenkai, do you want to introduce yourself too? Okay, hi, this is Wenkai. I'm based in China and have been working on another open source project, Harbor, for a couple of years, and I've also fixed some bugs for Docker and Docker Distribution. I'm very glad to have the chance to join Velero and contribute to this cool project. Thank you all. Awesome.
A: Thank you, thank you, yeah. So, as part of this move, this transition here: over the past couple of months, the joint maintainer team for Velero and the PM here, Eleanor, have been looking into advancing the feature set of Velero and its capabilities.
A
How
we
integrate
with
other
backup
solutions
and
disaster
recovery
solutions,
making
sure
that
we
open
it
up
for
the
community
even
more
so
to
handle
those
new
advancements
and
responsibilities.
Vmware
is
investing
in
the
core
contributor
team
and
we're
actually
hiring
more
engineers
here
as
well.
We're
looking
to
hire
five
more
people
to
contribute
to
valero
and
within
our
team
in
in
beijing.
A
So,
if
you're
interested
in
joining,
please
let
us
know
we
would
love
to
have
you
there
and
we'll
be
working
on
both
the
valero
project
and
project
astrolabe,
which
I'll
know
we'll
cover
in
a
future
community
meeting
as
well.
We'll
talk
more
about
astrolabe
and
kind
of
the
idea
what
we
want
to
do
moving
forward
with
the
valero
project.
A: So again, I want to give a super big shout-out to Carlisia, Nolan, and Ashish: thank you so much for being awesome maintainers of Velero for years. I also want to give a big shout-out to Dave, Bridget, Scott, and JenTing, who keep the lights on for Velero as maintainers going forward. And also, again, welcome, Daniel, and thank you. Any questions or comments?
D: Well, I also wanted to say thank you to Nolan, Carlisia, and Ashish. I think Velero has been, you know, kind of leading what data protection on Kubernetes is, and I've certainly learned a lot from the team as I've gone through my own journey. So thank you, and, you know, we'll be seeing you, I'm sure.
E: Yeah, definitely, to echo Dave's sentiments: I'm quite sad, actually. I've definitely enjoyed working with Carlisia and Ashish and Nolan, and they've been the best at being so supportive and helping me learn not just the surface of things but really deep into the design and the technicalities of it. So I'll be sad, but I'm also excited for the new people. Welcome, it's going to be fun.
F: I actually learned a lot from you guys when I experimented with Velero and then interacted with you during code reviews and design reviews, and I was amazed by how much you know about not only this product, Velero, but other aspects as well. So I learned a lot from you. Thank you very much.
G: Yeah, you've been awesome on the team and you'll be missed, but I'm excited for you and all you'll be working on next, and I'm also excited to welcome the new team members as well.
H: Plus one to everything that's been said. And a teaser: Carlisia and Scott submitted a talk to KubeCon about Velero, and Carlisia said she's still happy to give that talk if it gets accepted, so hopefully it'll be the first of many times we'll see her, and Nolan and Ashish, in the future, still with regards to Velero.
I: Yeah, thanks, everybody, for your time here. I appreciate everybody's contributions and putting trust in us with your data. It's not going away, it's not ending, and I'm sure I'll see people at KubeCon, whether it's this year or next year. But yeah, I'm not totally disappearing.
J: I just want to say thanks to all of you for helping me along as I'm coming up as a new maintainer, and even before that, when I was a contributor in this community. So thanks again.
K: It's been great working on Velero at VMware and outside of VMware; I've had the chance to work on Velero from both sides. So thanks for allowing me to be a part of the community.
L: I think it's a new phase for Velero, to become a grown-up backup product and an enterprise-level product, and I'm not going to be really far away from it at all. So this sounds like such a hard goodbye, but we're going to be around.
H: And just to follow up on what Jonah said: I feel like, Carlisia, you and the others have brought Velero to such a good place; you've really developed Velero, and it has such a strong community. We're very excited for this next phase with Astrolabe. There's that teaser; maybe even in the next community meeting we can talk more about that. It's nothing too different.
H: It's part of Velero still, just a new direction from a product perspective, and I'm very excited to work with the new team members. So, coming from an excellent era of Velero and going to a new era of Velero.
A: For sure, for sure. Yeah, again, thank you, everyone, for all your contributions so far, and I'm looking forward to more future contributions to the project and to the growth of both the community and the project, its features, and everything. It's going to be super interesting to see what we can do in the coming months.
A: All right, so with that big thing out of the way: as we are ramping up our efforts over in Beijing as well, we'll most likely switch around the community meetings a bit, and I've been talking to the maintainers about this as well. We'll most likely do a bi-weekly rotating schedule, but I'll set up a poll for everyone to join in and share some thoughts on what would be a good time for everyone.
D: Nothing too special, just going through things: Bridget, Carlisia, and I sat down and did a little review-a-thon last week, and we did make some good progress on getting some PRs merged. We'll continue to burn down the PR list, and yeah, I'm still working on the upload progress monitoring, among other things.
G: So, like Dave mentioned, we spent time last week catching up and getting through some of the review backlog. Last week I promised a design doc for plugin versioning, but a few other things came up in the meantime. I have it ready; it's about to be committed and pushed, so it will be there shortly, and apologies to those who are waiting for it.
F: Yeah, so I've been doing some scale tests on Velero backup and restore, and I've bumped into a few issues. Last week I talked about one issue, for which I created a bug on the OpenShift plugin, and we're working with Scott at Red Hat on that; but this week I'll present a few more.
F: All of them are related to namespaces that have a lot of resources. The use case is that our customers create their own CRDs, and to keep track of things they put a lot of CRs into the namespace. As a consequence, when we back up and restore the namespace, we hit these problems. I've already filed bugs on Velero. Can I share my screen?
F: Yeah, yeah. So I want to share my screen and talk about a few of them. The first problem is that when we back up a namespace with a lot of CRs, the Velero pod itself gets killed; it runs out of memory. My Velero pod is configured with 128 megabytes of memory, but even when I increase it to 512 megabytes it still fails.
F: What I want to show is this situation: if I create a namespace with, say, 10,000 CRs of any kind (in this sample test case I'm using Secrets and ServiceAccounts, but you can imagine it could be any CR), then run just one simple backup, and within a minute the Velero pod gets killed, as you can see here. This is the sample that I ran, and after the pod restarts...
F: ...if you look at the status, you're going to see that it got killed because it ran out of memory. I dug into the Velero code and found that at the beginning of the backup we call this "get all items", and then we go through a loop, backing things up one by one.
F: So this is the loop where we back things up. I haven't put in debugging to figure out exactly what happens yet, but from the code flow it looks like we collect all the items to back up here, so the more items we have, the bigger the list that sits in memory.
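For readers following along, here is a minimal Go sketch (not Velero's actual backup code) of the memory pattern Fong is describing, and of how API pagination could bound it: a single full List materializes every item in the namespace at once, while a Limit/Continue loop keeps only one page resident at a time. The namespace name and page size are illustrative.

```go
// Minimal sketch: paginate a large namespace instead of listing everything at once.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	secrets := schema.GroupVersionResource{Version: "v1", Resource: "secrets"}

	// Pull 500 items at a time; only one page is held in memory per iteration.
	opts := metav1.ListOptions{Limit: 500}
	total := 0
	for {
		page, err := client.Resource(secrets).Namespace("scale-test").List(context.TODO(), opts)
		if err != nil {
			panic(err)
		}
		total += len(page.Items) // back up this page, then let it be garbage-collected
		if page.GetContinue() == "" {
			break
		}
		opts.Continue = page.GetContinue()
	}
	fmt.Println("items seen:", total)
}
```

As the discussion below notes, Velero also needs a reasonably consistent view of the API server while it collects items, which is part of why a simple switch to pagination is not a drop-in fix.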
F: Eventually, 128 megabytes is not a lot when it comes to thousands of items in the namespace, and that is where the first bug comes in: it crashes the Velero pod. That causes another bug, which is that when the pod restarts, the Velero Backup resource is still there and its phase is still InProgress, but the Velero pod that comes back up after the restart doesn't do anything further with the Backup CR that's already there.
F: It just leaves it there. As a consequence, on the client side (I'm the client, so I'm tracking the backup), all I see is that there is no progress on it; after five, ten, twenty minutes I have to time out and kill the thing. And that is something I think is lacking as a feature: maybe after the Velero pod restarts and it sees that there is a backup in progress lying around, it should pick it up and set it to failed, or continue with the backup, or whatever we decide to do with it. I don't know exactly; everyone is invited to discuss that here, I'm just describing the situation that I see. So that is problem number two, and problem number three is not in the backup but rather in the restore.
F: Number three is that during backup I can see the status of the backup being updated: every once in a while we say, hey, the backup has done 10 out of 20, or 200 out of 2,000 items. But I don't see the same thing in the restore. In the restore I only see it change from InProgress until it finishes all the way at the end, when it's Completed.
F: However, in the case where the Velero pod restarts in the middle, I just see InProgress forever; it's stuck there forever, and at that point, for my restores, I have no clue whether the restore is still going on or the pod is already dead. So those are the three issues that I found when I tested a namespace with a lot of resources in it. And again, I don't think it's related to the type of resource; it can happen with any type of resource as long as there's a large number of them. So I wanted to share this with you guys to see what we should do about it. After that I have another topic to discuss, but let's talk about these three bugs, in terms of scale, first.
D: Yeah, so in terms of scale: there's a PR out right now for a multiple-namespace scale test, which takes us to about 2,500 namespaces and tries that. What we should do is take what you're doing and add it, make it a scale test, an E2E test that we can dial up and down, and then I think we can get some recommendations, for example: if your cluster is this big, allocate this much memory. So I think there's a question of whether the system goes exponential.
D: We could look at other ways, like streaming stuff through, but that's going to mean bigger changes to the code. And one of the things we can also look at, you know, is that to some extent it already dumps things to a file in between.
D: I think one of the issues is: how do we get a relatively consistent view of the API server? Because if we stream stuff but we don't process it quickly enough, then it's kind of spread out in time. So I think we should start with just adding some scale tests that replicate what you're seeing, dial it up and down, and see what the memory-versus-scale curve looks like. Does that seem reasonable?
D: It's going to be a balancing act, because we don't really have a snapshot mechanism for the API server. Right now I think the thought is that we grab all the resources as quickly as possible, so we get a somewhat static view of them.
D: If we let it spread out in time, things can be changing in the API server while we're fetching items. So I think there's definitely room to improve there, but I think we have to be a little bit careful about how we do it; we're going to have to.
F: Yeah, I agree with you on that one, because I remember when I had to change the order of backing up items for the app-consistent backup project last year, I had to get all of the items first and reorganize them into the specific sequence that I wanted. If I don't collect all of them, I'm not able to do that. So yeah, maybe.
I: I'll also toss out there that the problem of not being able to snapshot, or get a consistent view of, the Kubernetes API server is something other data protection projects have run into; this came up at the data protection working group. I don't think there's been any progress on it, but somebody threw out the idea of having something equivalent to an fsfreeze, or a lock on the whole API server, at least for reads. I don't know how far that's gone; I think it was mentioned once in a meeting and that's it, but that may be another angle to try to solve the problem. I think Velero needs to fix its own memory management, but in terms of getting a snapshot of the Kubernetes API server, if we could expose Astrolabe endpoints and then grab that data, that would be great.
I: I have to drop here in just a second, but I'll also mention that Dave, Eleanor, and I did some triage of old issues a while ago, and there were some pagination bugs that could potentially help here. I think that's related to what Dave is saying about spreading out over time: as you paginate, that slows you down a little bit in terms of getting items out of the server, but you wouldn't blow out the memory limit as quickly, so that's a possibility. But I also agree that upping the memory limit will help, and the thing that will help dial this in is having scale tests to run whatever solutions get implemented against.
D: Yeah, and we can set some numbers, for example recommended numbers: if you have this many resources, you need this much RAM. That at least is a guide for people to work with. So let's work on taking what you're testing right now and making some E2E tests out of it; I don't think it's going to be too hard. It should be pretty quick with the framework we have currently.
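As a rough illustration of the scale-test fixture Dave and Fong are describing, the sketch below seeds a namespace with N small Secrets so backup memory usage can be measured against item count by dialing N up and down. The namespace name, the count, and the use of Secrets are assumptions for illustration, not taken from an existing Velero E2E test.

```go
// Seed a namespace with many small objects for a backup scale test.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const ns, count = "scale-test", 10000 // dial count up and down per test run
	for i := 0; i < count; i++ {
		s := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("item-%05d", i), Namespace: ns},
			StringData: map[string]string{"payload": fmt.Sprintf("value-%d", i)},
		}
		if _, err := client.CoreV1().Secrets(ns).Create(context.TODO(), s, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Printf("created %d secrets in %s; now run `velero backup create scale-test --include-namespaces %s`\n", count, ns, ns)
}
```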
D: So why don't you ping me offline on that. The second point was handling server restarts, and that's definitely an issue; it's something I was planning to address in the upload progress work, because we have to deal with restarts there anyway. So I think that's something we will address in the 1.7 time frame, and yeah, what we're doing right now is definitely not good.
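One possible shape for the restart handling being discussed, hedged as a sketch rather than current Velero behavior (which, as noted just below, simply ignores pre-existing objects after a restart): on startup, sweep for Backup CRs stuck in phase InProgress and mark them Failed so clients are not left polling forever. The namespace and the choice to fail rather than resume are assumptions.

```go
// Hypothetical startup sweep for Backup CRs orphaned by a Velero pod restart.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	backups := schema.GroupVersionResource{Group: "velero.io", Version: "v1", Resource: "backups"}

	list, err := client.Resource(backups).Namespace("velero").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range list.Items {
		b := &list.Items[i]
		phase, _, _ := unstructured.NestedString(b.Object, "status", "phase")
		if phase != "InProgress" {
			continue
		}
		// The pod that owned this backup is gone; fail it instead of leaving it stuck.
		_ = unstructured.SetNestedField(b.Object, "Failed", "status", "phase")
		if _, err := client.Resource(backups).Namespace("velero").UpdateStatus(context.TODO(), b, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
```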
J: I think there are a couple of things. One of the things that may be going on here is, again, once you've restarted the Velero server, even though the status says InProgress for that backup or that restore, it's not actually being processed, so there would be no updates there. So, if that's the only thing not updating, I think the issue is that we're not actually processing it anymore, even though it says InProgress, because once you restart Velero, Velero ignores anything that's not new.
J: If there's an actual failure to update, there were, I guess, two things I noticed when I was dealing with the restore progress. One was that it was breaking CR restore, because we were processing it ahead of time, and I've got a PR out for that; once I add some end-to-end tests for it and respond to some other feedback, we should be able to fix another aspect here. That doesn't affect the CR updating itself in terms of the restore, but the backup progress was also logging.
J: But in terms of updating the restore CR itself, I don't know that I saw anything there that would fail to update if Velero was actually restoring items. So I think we want to see whether the fact that it stopped updating was because the restore itself stalled when we restarted Velero, or whether we were actually succeeding in restoring items that weren't getting reflected in the progress. That would be a more fundamental issue with the progress recording, one that I haven't seen, but I haven't necessarily looked for it.
F: I actually ran a test in which we restored about 1,800 items, and it successfully restored the entire namespace with that many items. However, during the restore I did not see any updates; I only saw InProgress and then, at the end, just Completed. That's it. So at least that's what I see in my test.
J: Yeah, that is what it should be doing. I'm not sure whether those updates get batched, or how often we update the restore. And again, one question is: are you seeing the same thing on backup, or is it different between backup and restore? Because, if at all possible, we should be doing the same thing in both, and if one of them is not updating as often as the other, that may be an issue to look into.
J: So that may be something to look into. It would again be worth seeing whether we're seeing this in all cases or just in certain cases, but if it's true across the board, that's definitely something we would need to fix, because if backup is doing it, restore needs to be doing it too; those two should be roughly equivalent, and if they're not, there could be something going on there.
D: Yeah, and one weird thing, well, I don't know about weird, but the restore progress patch had broken Frankie's tests initially; we did a lot of work with that and got Frankie's tests working again, and Frankie's tests use custom resource definitions and CRs.
J: And what was happening, and I'm not sure, maybe her tests do it a little differently, is basically this. Before this change, in the old logic, we would iterate over the list of resources in priority order, restore one resource type completely, then go to the next one, validate it, and then restore it, so by the time we got to a custom resource, its CRD had already been restored and the schema had been refreshed. With the restore progress change we pre-process all of that: you go through the resource list, validate, count the number of items, and add them to your total. And the problem there was when we got to those custom resources.
J: If you wait long enough, that was one thing they ran into: if you do this and then immediately restore, things haven't necessarily been flushed, so you still might have that resource available. But if you wait long enough and then you try to restore with the existing 1.6 code base, you fail on the validation side, so we actually see that error saying, hey, this custom resource, this type, is not known to the server, before we go and actually try to restore anything.
J: So what my PR does is basically run the CRD restore first, just that one resource type, and then do everything else. That way, by the time we get to validation, and who knows how many of the resources in your list are custom resources, if the CRDs are all restored first, then when you get to those they all exist and the validation succeeds.
J: We can count the number of items and go on. I'm not sure how those tests that were failing and are now working interact with any of this, whether it was a question of the CRDs already existing there in the test beforehand, because that was one thing where, if you create them in the cluster, back them up, delete the items but not the CRD definition, and then restore, nothing is going to fail, because the API server still knows about that resource.
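For context on the ordering fix Scott describes, Velero restores resource types according to a priority list, and putting CRDs at the front is what guarantees their definitions exist before any custom resources are validated. The list below is illustrative only; the actual default ordering lives in the Velero server code and may differ.

```go
// Illustrative restore priority list with CRDs first.
package main

import "fmt"

var restorePriorities = []string{
	"customresourcedefinitions", // restore CRDs before anything that may depend on them
	"namespaces",
	"persistentvolumes",
	"persistentvolumeclaims",
	"secrets",
	"configmaps",
	"serviceaccounts",
	// ...remaining resource types follow; anything not listed is restored afterwards
}

func main() {
	for i, r := range restorePriorities {
		fmt.Printf("%2d. %s\n", i+1, r)
	}
}
```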
G: Further to the point that Scott made, we're talking about cached information as well. I tried to reproduce this issue locally, and even though I had deleted the CRD and then performed the restore, there was still some cached information about that CRD in the cluster, so it was still able to perform the validation. Even though, if I made a query to the API server, it said that, well, not that the CRD didn't exist...
G: ...but there was some other error: it couldn't resolve the shorthand name I'd given it to the full name. So it still had a reference in there, even though it wasn't accessible and you couldn't create resources of that type. So my guess is that there hadn't been a refresh of the client that we're using to communicate with the API server between the deletion and the restore.
J: Yeah, and again, I know that I did reproduce this. When I first ran into it, I was actually backing up on one cluster and restoring to another, so of course it wasn't there at all; but then I ran through the test again after deleting it, and there would have been several minutes in between, so whatever caching was in place would have been flushed. I think you'd seen that as well: by restarting Velero you were also able to reproduce it.
E: Yeah, I know I had to; I actually incorporated restarting the Velero pods to account for that, to be able to account for the refresh, not during a restore but I think during backup. It's been so long since I worked closely with those tests, but yeah, I definitely had to do some acrobatics to get those to work.
D: ...that's doing this, but, you know, we could maybe do something with kind, and if there's anything else that anybody can think of that would be a good way to spin up new clusters automatically as part of the testing, I'd be very open to it.
A: All right, so do we have a clear path forward? More performance testing, setting up end-to-end testing as well, Dave, as you mentioned, and then formalizing best practices for when you have X number of CRDs and resources.
D: Yeah, so we'll do some more scale testing, and the other issues that Fong mentioned we've got on the radar; we will at least try to get them both fixed up in the 1.7 time frame. I think they should be; one's a bigger lift, and one is already underway.
F: Okay, if we're done with that, I have another topic that I want to talk about. I'm sorry I'm taking all the time, but...
F: Okay, so the idea is about replication of a namespace from one cluster to another. Right now, because I don't have another solution, what I do is back up the namespace in this cluster to my backup storage, and then I have to restore it to the other one, and that, you know, takes time; it takes an additional hop.
F: I always wonder, in the scenario where I want to replicate my namespace from one cluster to another, whether there's any way I can do that between two Velero pods: one side backing up and sending the data out directly to the other side, which restores it directly, without any hop, without having to store it down somewhere else. Right now I don't think Velero has that feature.
D: We do, actually. Part of it is in the Astrolabe framework, and one of the goals there is to abstract out things like snapshotting and restoring namespaces, volumes, etc., so that we can disconnect them from, you know, going to S3, for example, and take a source and a destination and just hook them together directly. So that's on the roadmap, and we can talk about that. And Rafael and...
M: Hi, this is Rafael. We've been working on this use case for a while at this point. What we have is what we call remote Velero: you have Velero running on a different cluster from the one where you actually take the backup and do the restore.
M: So basically, Velero runs in a management cluster, or whatever you call it, and it makes this connection over the API and writes to a dummy instance. In that case you don't persist the data, and then immediately the other Velero, running in a different namespace, starts to restore. You need to do some coordination, of course, but it removes at least the writing to disk, and it removes that need for all the clusters.
F: Yeah, at least in my use case, right now I have to do the backup to storage before I can restore, and if you guys can do that directly it will help a lot; I wouldn't have to do that, I'd just replicate between the two clusters.
J: ...you know, on a Kubernetes 1.11 cluster, for example, which is OpenShift 3.11. And the other aspect, which is what Fong's talking about, is the performance aspect. I mean, we've already found, for example, that one thing we're doing with the latest version of the Konveyor migration tooling is, instead of using restic to copy file systems to, you know, the S3 store and then out again on the restore, we're actually bypassing Velero for PVC copy and just doing an rsync, basically.
J: You know, having a separate controller that creates PVCs and then rsyncs between them, improving performance there by just doing a direct copy of PVC data from one cluster to the other. What Fong is talking about would allow some similar performance enhancements, doing the same with the Velero data.
J: Yeah, and on the PVC one, I'm saying that with the Konveyor Crane product, for example, where we do the migrations from one cluster to another, we've actually done that with PVCs: we're not using Velero to back up PVCs if the user decides to do the direct copy; instead we have a separate controller that creates PVCs based on the metadata and then does an rsync to copy the data straight from one to the other.
J: That's orchestrating another layer, kind of in addition, but then we use Velero for everything else, for all the, you know, cluster Kubernetes data. So for the Kubernetes side, everything goes through Velero: we're still doing the full backup to the backup storage location, and that storage location is also available in the restore cluster, which we then use to do the restore.
F: Yeah, so please keep me up to date on that project, because I'm interested in it; I might even be able to help if that is what's needed.
N: Yeah, I just have one question regarding this. When we say replication, just like you mentioned, does that mean that this replication will happen in the same site, I mean the same data center, or will it be across sites? Yes.
N: Yeah, maybe I don't know the details currently about how we back up through the cloud provider, but, for example, if you use a database, we still use the EBS snapshot to do the backup. With that, if you try to replicate it to another region, how can that snapshot be used in the other region?
D: Yeah, so EBS has EBS Direct APIs, which let you, for example, take a snapshot and then pull the data out of it using their API. We're working on that; I actually have a POC where I can extract the data, and the next step is to push it back into a snapshot.
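A rough sketch of the EBS Direct API read path Dave mentions (pulling blocks out of an EBS snapshot without attaching a volume), using the AWS SDK for Go v2; the snapshot ID is a placeholder and this is not the actual POC code.

```go
// Read the blocks of an EBS snapshot via the EBS Direct APIs.
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ebs"
)

func main() {
	ctx := context.TODO()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := ebs.NewFromConfig(cfg)
	snapshotID := aws.String("snap-0123456789abcdef0") // placeholder

	// First page only for brevity; a full implementation follows NextToken.
	blocks, err := client.ListSnapshotBlocks(ctx, &ebs.ListSnapshotBlocksInput{SnapshotId: snapshotID})
	if err != nil {
		panic(err)
	}
	for _, b := range blocks.Blocks {
		out, err := client.GetSnapshotBlock(ctx, &ebs.GetSnapshotBlockInput{
			SnapshotId: snapshotID,
			BlockIndex: b.BlockIndex,
			BlockToken: b.BlockToken,
		})
		if err != nil {
			panic(err)
		}
		data, _ := io.ReadAll(out.BlockData)
		out.BlockData.Close()
		fmt.Printf("block %d: %d bytes\n", *b.BlockIndex, len(data))
		// The write path back into a new snapshot uses StartSnapshot, PutSnapshotBlock,
		// and CompleteSnapshot, which is the "push it back" step mentioned above.
	}
}
```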
A: All right, so we'll follow up on that as well as it's coming along. Anything else? Anyone want to add to that discussion topic today?
A: All right, thank you all for joining. Have a fantastic rest of the week, and see you all next week. Bye, folks.