From YouTube: Velero Community Meeting - Sept 21, 2021
A
Hello, everyone, and welcome to the Velero community meeting / open discussion. Today is September 21st, 2021. We've got some status updates that we'll dig into, and then discussion topics. First up, we've got Dave.
B
Yeah, so just basically: working on the 1.7 release, plus some more work on upload progress; I'll try to get back to it. The Beijing team is out Monday and Tuesday, so they'll be back this evening. As far as I know, there are no release blockers found for 1.7, so we're going to go ahead and move towards the RC2 release just to get in some minor fixes. We'll run through another round of the automated testing, and we should be good.
C
Like we said, everything seems to be progressing well and looking good there. We're also spending some time looking at some of our internal build processes, which may ultimately affect some work going upstream: we're looking at different image registries, and there's a chance that might end up affecting where we push the open source images too. I think we've discussed that before; we haven't made any decisions yet, but those discussions are in the works, so there might be some changes there.
A
Nice. Any questions or comments for Bridget?
A
All
right,
we
do
have
some
discussion
topics.
I
don't
see
eleanor
on
here
right
now.
I've
already
pinged
her
so
we'll
see
if
she
shows
up
otherwise
I'll
defer
to
this
one.
For
next
time,
hey.
C
No, it hasn't been merged yet. We've kind of frozen commits to main; we're only merging in things which are release blockers. I think all the comments have been addressed there; I'll take another look, then ping folks to look at it again and try to get that merged as soon as we do the 1.7 GA.
D
Okay, just curious, guys: I think we are starting from... I'm going to sit back and work on that. So you already created a branch for it, right?
A
All right. So, for discussion topics: since I know, Rafael, that your topic usually brings up a lot of discussion, I'm going to let Scott go first with his discussion topic, and then we'll dive into yours.
E
Sure. So, the linked issue is one that was closed a few months ago with a comment saying we don't think we can fix it. The reason it came up again is that I saw a comment on it from someone apparently using it (I'm not sure if they're using Konveyor or the OADP side), and we got an issue posted on our side as well suggesting the same thing. Basically, the issue seems to show up when you're running a pod that sets runAsNonRoot: true.
E
That is, if you specify a container that uses a user name but no UID, it isn't allowed; it complains. But if your container has a UID, then as long as it's not zero, you're good. So the issue is that the restic restore fails anytime your pod has runAsNonRoot: true, and I guess the question was: could we simply switch to using a UID instead of the non-root user name, which is, I guess, what we're using now with the change to distroless?
E
Right, but this is the restic restore helper init container that's added to the pod on restore. When you restore a pod that has restic volumes to copy in, we have the restic restore helper init container that basically looks for that .velero file, just so that it knows that it's done. So it's really just an init container that has one job, which is to look for that one directory to be created.
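To make that concrete, here is a minimal sketch, assumed and simplified rather than Velero's actual helper source, of the kind of loop such an init container runs: block until the .velero sentinel path appears in each restored volume, then exit so the pod's own containers can start. The one-argument-per-volume CLI shape is an assumption.

```go
// restic-wait sketch: exit once every restored volume contains its
// .velero sentinel, which signals that restic finished copying data in.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// Hypothetical CLI shape: one argument per restored volume mount path.
	for _, vol := range os.Args[1:] {
		sentinel := filepath.Join(vol, ".velero")
		for {
			if _, err := os.Stat(sentinel); err == nil {
				break // restore of this volume is done
			}
			time.Sleep(time.Second)
		}
		fmt.Printf("volume %s restored\n", vol)
	}
}
```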
E
So if you have a pod that has runAsNonRoot: true, any containers you're adding need to have a UID specified, apparently, and so that restic container, the init restore helper, is failing to start. And it seems like in that case (although this isn't anything I've tested myself, just going from the comments in the issue) if you were to use the UID for that non-root user name that we're using now, instead of the name, it would work in that use case.
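A sketch of that proposed direction, assuming the helper is injected with client-go types; the function name, container name, and the 65534 UID are illustrative assumptions, not decisions from the meeting:

```go
// Giving the injected init container a numeric RunAsUser lets the kubelet
// verify runAsNonRoot: true; a user *name* baked into the image's USER
// directive cannot be checked for root-ness, so the pod fails to start.
package restore

import corev1 "k8s.io/api/core/v1"

func restoreHelperInitContainer(image string) corev1.Container {
	uid := int64(65534) // assumed non-root UID; any non-zero UID satisfies the check
	return corev1.Container{
		Name:  "restic-wait",
		Image: image,
		SecurityContext: &corev1.SecurityContext{
			RunAsUser: &uid,
		},
	}
}
```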
E
Since the images we're using are different in terms of base images than upstream, our fix would be independent of the upstream fix anyway. So whether we fix it or not is kind of separate from whether upstream fixes it.
E
Although I just wanted to get an idea here, because the issues are the same; it's just a question of how that would be handled in your Dockerfile. And I guess now, too, we don't have a separate Dockerfile for the restic restore helper, because with buildx we're using a kind of combined, shared Dockerfile for all the builds now, I think.
C
Yeah, there's just a single Dockerfile, and I think we just replace the BIN argument during make, and it copies in or builds the appropriate binary during the image build.
E
I mean, it's not something that we need to decide now, which is, I guess, why I'm bringing it up: it may be something that warrants some kind of investigation afterwards, just to figure out. It seems like just using the UID instead of the name should work, but there are always hidden downsides that you don't think about when you first talk about it, and I just don't...
E
So there's no need to do this change for the Velero image itself, for example, because that's running as its own pod from a deployment or daemon set that we specify.
E
So that workaround is only needed for this restic restore helper, because this is an image that's being inserted as an init container in user pods. And that might require a change to the buildx stuff, to be able to specify alternates or something, because we'd only want this to apply to that one container image, not to the various architectures of the Velero image.
E
And I think it's one of those things: when you're going through a hundred issues, you say, "Okay, I don't see how this would work," and you close it. Then someone else comes back and asks a more detailed question about it, and you go back and think and say, "Well, maybe that does work; let's think about it," which makes sense. I mean, that's kind of the whole process.
B
Okay, that's fine. So tell you what: I'll reopen it, I'll add the needs-investigation label, and...
E
If we do make a change, it will be separate anyway. But if we take action on our side, we'll update the issue accordingly, and then we can decide whether it makes sense at the Velero level as well or not. And, as I said, I don't even know whether the user that filed the issue for us and was commenting upstream is using our stuff with OADP or Konveyor, or whether he's using upstream, and kind of where he fits into this.
G
I'm going to try to keep this a little bit high level. Apologies: I just pushed changes to the pull request this morning, so I didn't give anyone time to review them offline. That's why I'm going to go over them online with you very quickly here. So, just as a refresher:
G
These are hooks to do pre- and post-backup and restore actions. One thing that drove the conversation last time is the use case of a Velero backup being finished triggering a Velero restore, not necessarily on the same cluster.
G
The high-level flow is to run the plugin after the Backup object itself is validated: we first make sure the backup is valid and doesn't clash with any existing names, and then we run the pre-backup actions. Now, after that, there were some changes to the proposal. For the post-backup action, as we discussed, we really want the backup to be persisted in object storage first. So that's pretty much what I changed here: after the backup is complete and we've persisted it to object storage, we run the post-backup actions.
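As a rough sketch of those two hook points, with assumed interface names and the Velero API types (the real design document may shape these differently):

```go
package hooks

import velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"

// PreBackupAction runs after the Backup object has passed validation but
// before any items are backed up.
type PreBackupAction interface {
	Execute(backup *velerov1.Backup) error
}

// PostBackupAction runs only after the backup has completed and its
// contents have been persisted to object storage.
type PostBackupAction interface {
	Execute(backup *velerov1.Backup) error
}
```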
B
Yes, I think the idea of having multiple backup logs is reasonable, and that'll let us do a couple of things, because I also have the same problem with upload progress: if the server restarts, we can't really append to the log files, because they're actually written as gzip files.
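The gzip detail is what makes simple appends awkward but multiple files easy: you cannot append raw bytes into the middle of an existing gzip stream, yet complete gzip files concatenate cleanly, since a reader decodes back-to-back members as one stream. A small self-contained illustration:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"log"
)

func main() {
	var buf bytes.Buffer

	// Write two independent gzip members back to back, as if two separate
	// log files had been catted together.
	for _, chunk := range []string{"backup log\n", "post-backup hook log\n"} {
		zw := gzip.NewWriter(&buf)
		zw.Write([]byte(chunk))
		zw.Close() // each Close finishes one complete gzip member
	}

	// A standard gzip reader consumes the concatenation as a single stream.
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	out, _ := io.ReadAll(zr)
	fmt.Print(string(out)) // prints both chunks
}
```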
G
Either way works for me. I thought of a quick one: don't change the existing PutBackup function, and instead create a method so you can individually save just one file.
B
Okay. And plus, they're also in temporary storage. The upload progress monitoring should keep going even if we restart the server, and if the container gets restarted, the log files are living in temp space, so I think they'll get wiped anyway. So I would say there are good reasons to have multiple log files, where we can have this post-backup stuff happening, and then cat them all together.
B
Well, it'd be good, because I think we want to have the logs of the upload progress monitoring: if it's hitting errors or something at the end, after the main backup is completed, it would be good to actually log those into the backup log. So we may as well have a mechanism for doing it for that, and we can do it for this as well.
B
I'm good with having multiple log files, so I can just go ahead and put something into the upload progress monitoring to do that, and I'll get some patches in pretty soon for the remainder of it. Okay, and then...
G
What I'm going to do here is change the design doc to refer to the in-progress uploads and then wait for them. Okay. Right, very quickly on the pre-restore action as well: I changed it based on the conversation last time.
G
The way I'm suggesting the pre-restore action here is to execute it when the Restore object is created and validated for semantics, but the backup is not yet fetched. During the pre-restore action we run a sync on the backup storage location, to make sure it refreshes so that whatever backup is coming will be there. Because, remember, we were talking about the case where you restore too fast and the backup is not yet seen because you're running on different clusters; you want to sync first.
Now, the thing is, the pre-restore action will execute before fetching the backup object. So of course you can't just ask "is my backup ready?"; you do not know at that time. But you can write some protection code in the pre-restore action: you can fetch the backup before the actual restore, and you can do some other stuff if you want. So that's a change from before. And finally, the post-restore action is very similar to what we discussed for the post-backup action.
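And the restore-side counterparts might look like the following, again with assumed names: the pre-restore hook fires after the Restore object passes validation but before the backup is fetched, so an implementation can trigger a storage-location sync or fetch the backup itself; the post-restore hook fires once the restore finishes.

```go
package hooks

import velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"

// PreRestoreAction runs after Restore validation, before the backup is
// fetched; a natural place to sync the backup storage location.
type PreRestoreAction interface {
	Execute(restore *velerov1.Restore) error
}

// PostRestoreAction runs after the restore completes, mirroring
// PostBackupAction on the backup side.
type PostRestoreAction interface {
	Execute(restore *velerov1.Restore) error
}
```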
G
Then we have to figure out how we're going to save those logs, because the logs for a restore are different from the logs for a backup, right? You don't have the same phases of uploading and so on; restore is simpler, for good or bad, in terms of phases compared to backup.
Those are the main changes to the design from last time, and you're going to have phases, right, for each execution of those plugins.
I'll leave it for everyone to let it sink in; during the week you can take a look and see if there are any changes, but we already have something to work with based on today's conversation.
A
Let's dive into some contributor shout-outs. Or, actually, before I do that: does anyone have anything else that they would like to bring up today as a discussion topic?
C
I think the first one is a new one; it's from Danfeng, who just added some changes to our E2E tests to make sure that we're properly waiting for snapshots whenever we're running on AWS. So thank you very much, Danfeng; that helps make the E2E tests more reliable. So thank you.