From YouTube: Velero Community Meeting/Open Discussion - June 1, 2021
A
Hello, everyone, and welcome to the Velero community meeting. Today is June 1st, 2021. If you could, please add yourself to the attendee list here so we know who's attending. I'm going to go through some status updates, and then we have some discussion topics about OpenShift, so that's going to be interesting.
A
Let's kick things off with some status updates, starting with Carlisia.
B
Yes, I cleaned up last week's release boards as far as my tasks were concerned and moved things to the new sprints, and I'm also updating this document, which is way overdue.
C
Okay, well, let's see. I was on vacation the last two weeks, so I'm catching up, and we're back onto the upload progress monitoring this week to start kicking that forward.
A
Awesome, thank you. Does anyone have any questions for Carlisia around the configuration here?
A
No
all
right
and
dave
welcome
back
glad
to
have
you
back
all
right,
brilliant.
D
Hi
everyone.
So
last
week
I
had
what
I
felt
was
group
meme,
with
following
scott
to
discuss
the
plug-in
versioning
design
talk.
I
felt
like
there
were
a
few
comments
to
be
added
just,
but
I
think
everyone
seemed
on
board
with
the
general
direction
that
was
going
so
I'm
currently
in
the
process
of
incorporating
those
comments
and
also
creating
the
pr
for
the
actual
design
docs,
but
it
to
the
repo.
So
I
should
have
that
up
later
today,.
A
Sounds
good
and
I
have
the
recording
there
as
well,
so
you
can
add
that
to
the
the
vr
information,
oh.
E
D
D
Yeah,
that's
the
general
process
yeah,
so
I
I
usually
we
will
create
the
design
dock
initially
in
github
and
and
have
the
discussion
prs
on
the
pr
there,
but
we're
trying
to.
I
tried
out
google
docs
just
because
I
feel
like
it's
easier
to
like
iterate
on
that
and
get
other
people's
feedback
more
easily
incorporated.
D
So,
yes,
I
think
the
design
dock
that's
in
google
docs
is
like
mostly
complete.
Yes,
it's
just
a
matter
of
like
putting
it
into
market
online
and
then
we'll
do
a
breakdown
of
the
next
steps
like
within
that
and
then
that
should
be
the
design
ready
to
go,
and
then
we
can
start
making
progress.
D
Well,
that's
great
to
hear
thanks
for
advancing
that
work
thanks
to
scotland
fong
for
their
for
their
input.
I
felt
like
it
was
a
good
meeting.
So
thanks
for
for
coming
along
and
chatting
about
it.
A
All
right,
let's
dive
into
the
discussion
topics.
F
Well,
yeah,
thank
you
about
this.
This
is
more
like
a
an
open
ship
issues,
but
I
think
there's
got
here
so
we
can
talk
about
it
here
and
also.
I
also
raised
a
concern
about
that,
was
that
when
we
back
up
a
namespay
in
openshift
that
it
executed
an
openshift
plugin-
and
we
observed
that
in
some
namespace
it,
it
have
a
lot
of
resources
that
openship
is
listening
to
and
and
and
we
will
need
it.
F
So
as
a
consequent,
we
can
see
that
the
total
time
to
back
up
a
namespace,
even
though
it
have
no
pvc,
it's
significantly
larger
than
to
execute
a
backup
of
the
same
similar
namespace
in
in
a
non-openship
environment,
for
example,
when
we
do
when
we
backup
a
namespace
with
2000
secret
and
on
the
normal
cluster
and
the
sorry.
F
So
and
a
similar
namespace
on
openshift,
we
can
see
significantly
different
between
a
few
seconds
of
total
backup
time
to
minutes
many
minutes,
like
maybe
six
or
seven
minutes
of
total
backup
time.
So
that
is
how
significant
it
is,
and
I
look
into
the
openshift
and
I
plug
in.
I
saw
that
the
common
plug-in
being
executing
the
comment
back
back
and
being
executed,
and
that
is
something
that
I
will
follow
up
with.
Maybe
scott.
F
However,
the
general
concern
is
this:
when
we
have
a
cluster
and
we
bring
backup
and
if
the
plug-in
happen
to
register
on
the
wrong
type
for
quote
unquote
right
on
the
wrong
type
of
entry,
so
that
it's
somehow
being
executed
on
each
of
these
items
that
cause
the
total
time,
because
we
are
backing
up
variable
in
you
know
in
linear
right,
we
we
don't
backing
up
item
in
parallel,
so
the
total
backup,
time
of
the
whole
namespace
being
linearly
dependent
on
these
plugins
and
and
it
take
it
might
take
a
very
long
time
for
backup
to
being
executed.
F
It
has
any
way
that
we
can
somehow
right
now.
I
don't
think
we
have
a
total
timeout
for
the
backup
or
so,
if
that's
in
a
way,
we
can
do
some
more
some
more
statistics
to
kind
of
show
up
like
when
we
look
at
the
backup
right,
we
can
say
how
long
it
takes
to
back
up
the
specific
type
of
item
or
anything
like
that.
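The kind of per-item-type statistics being asked for here could be gathered by accumulating wall-clock time by resource kind as the backup loops over items. The following is only an illustrative Python sketch of the idea, not Velero's actual Go implementation; the item list, the `backup_item` callback, and the report format are all invented for the example.

```python
import time
from collections import defaultdict

def backup_with_timing(items, backup_item):
    """Back up items sequentially (as the transcript describes Velero doing)
    while recording how much wall-clock time each resource kind consumed."""
    time_by_kind = defaultdict(float)
    count_by_kind = defaultdict(int)
    for kind, name in items:
        start = time.perf_counter()
        backup_item(kind, name)  # runs every registered plugin action for the item
        time_by_kind[kind] += time.perf_counter() - start
        count_by_kind[kind] += 1
    # Report kinds sorted by total time spent, worst offenders first.
    return sorted(
        ((kind, count_by_kind[kind], total) for kind, total in time_by_kind.items()),
        key=lambda row: row[2],
        reverse=True,
    )
```

A report like this would make it obvious when, say, 2000 Secrets each paying a small per-item plugin cost end up dominating the total backup time.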
F
That
is
my
my
so
it's
kind
of
help
the
developer.
When
we
look
at
the
backup
it
says.
Certainly,
you
see
that
the
valero
backup,
for
example,
take
like
10
minutes
20
minutes
to
back
up
some
namespace
compared
to
other
namespace.
Can
we
have
some
way
for
delapolo
to
see
hey?
Is
it
backup
that
long,
because
we're
spending
so
much
time
on
this
on
this
and
that
or
something
like
that,
so
that
is
something
that
I
it's
just
like
a
a
more
like
an
concern
more
than
any
question
or
anything
like
that.
G
You know, can we add some tools or some diagnostics along the way, somewhere, to give a better idea of whether certain plug-ins or certain aspects of the Velero backup are taking longer than others? But to go back to the first point, I would say it sounds like what you're talking about is not so much an OpenShift issue...
G
As
a
specific,
you
know
plug-in
issue,
which
is
to
say
that
our
080p
third-party
openshift
plug-ins,
you
mentioned
the
common
plug
and
that's
one
that
we
use
to
set
some
annotations
on
all
resources
to
kind
of
give
us
some
general
information
that
some
of
the
plugins
need
in
terms
of
the
openshift
version
for
backup
and
restore
and
we're
setting
registry
information
for
the
images,
and
so
it
might
be
interesting
to
see
you
know
because
you're
saying
non-openshift
takes.
You
know
a
few
seconds
and
openshift
takes
several
minutes.
G
It
sounds
to
me
like
it's
this
plug-in,
probably
that
you're
dealing
with
rather
than
you
know,
running
valero
and
openshift,
so
it
might
be
interesting
to
see
if
you
run
a
backup
without
the
these,
our
plug-ins
on
openshift.
You
might
see
that
same
few
seconds.
You
know,
in
other
words,
let's,
let's
figure
out
whether
we're
talking
on
an
open
shift,
performance,
question
or
a
plug-in
performance
question.
Do.
F
You
think
it's
a
good
idea
to
open
like
an
issue
on
open
chip,
oadp
and
then
discuss
about.
G
There may be some aspects to that, because what we're doing is setting some annotations on every resource, and I don't know if it's just because we're calling save on everything, and maybe there's not much we can do about that, or if there are certain things that plug-in is doing that are slower and could be improved performance-wise. So that's something to look into; I'm just looking at this now, for example.
G
I guess we should look into it. One possibility would be to take this plugin and figure out a way to streamline it to take less time, although I suspect that if you're saving and modifying every resource, that could still be a performance issue we need to look into. Do we need to do this for everything? I know when we created this plugin, we needed some of these annotations on more than one resource type, and when you're not doing it on just one resource type...
G
The
next
easiest
thing
is
to
do
everything,
and
maybe
we
need
a
more
targeted
approach.
That
is
certain
resource
types
needed,
I'm
just
not
sure
off
the
top
of
my
head
I'll
have
to
look
at
that
offline.
We
may
be
able
to
reduce
the
number
of
resources
that
we
apply
this
plug
into
that
could
help,
or
we
may
need
to
streamline
the
actions
within
the
plugin,
but
I
think
either
way,
that's
an
aspect
to
look
into.
You
know
go
ahead
and
enter
an
issue
on
the
oadp
things.
G
Yeah, and when you put in that issue, provide some information on how to reproduce it on our side as well: this kind of resource, this many of them, and this is the difference you've seen. There could also be environment questions; it may be specific to your environment, or it may be all OpenShift environments. We'll look into that.
C
Well,
you
know:
we've
got
some.
We've
got
a
bunch
of
stuff
on
sitting
in
the
in
the
backlog
of
like
metrics
and
stuff,
so
we
could
add
in
more
like
internal
stats
metrics,
I
think
that'd
be
worthwhile
and
we
could
put
that
up
there,
and
I
think
the
current
plan
is
used
prometheus
and
I
haven't
actually
used
prometheus.
So
would
like
prometheus
metrics,
give
us
the
kind
of
stuff
we're
looking
for
here,
or
should
we
be
looking
at
a
different
mechanism
for
tracking
this
kind
of
stuff.
F
Let's
just
let's
just
say
in
my
case
right
in
my
we
can
talk
about
my
case
first
and
then
we
can
try
to
generalize
to
make
sure
it
help
it
help
many
other
people
in
the
community.
So
one
specific
in
my
case,
like
I
want
to
interest
in
in
how
long
it
takes
valero,
I
mean
how
long
the
valero
backup
spent
in
the
grpc
card
in
the
network
call
that
it
interact
with
right.
F
So,
let's
just
say
if
the
the
total
time
that
is
spent
on
grpc
car
was
accumulated
to
about
eighty
percent
of
the
backup
time
and
it,
and
it
is
something
like
significant
right
and
other,
and
I
think
in
my
example,
without
making
grpc
call
if
I'm
backing
up
in
a
normal
cluster,
it's
only
take
a
few
seconds
to
back
up
the
whole
name
say
with
2000
secret.
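One cheap way to answer the "what fraction of the backup went to gRPC calls?" question is to wrap every outbound call and accumulate the time spent inside it, then compare against the total backup duration. This is a hedged stdlib Python sketch of the accounting idea only; Velero's real plugin transport is Go gRPC, and the names here are invented.

```python
import time

class CallTimer:
    """Accumulates wall-clock time spent inside wrapped calls, so the
    total can be compared against the overall backup duration."""
    def __init__(self):
        self.total = 0.0
        self.calls = 0

    def wrap(self, fn):
        def timed(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                self.total += time.perf_counter() - start
                self.calls += 1
        return timed

def call_fraction(timer, backup_duration):
    """Fraction of the backup spent inside timed (e.g. gRPC) calls."""
    return timer.total / backup_duration if backup_duration > 0 else 0.0
```

Reporting this fraction per backup would directly confirm or rule out the "eighty percent of the time is in plugin calls" hypothesis.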
D
So it's been a long time, but I've used Prometheus on previous projects in the past. My understanding is that it's more for looking at the performance of a process over a long period of time, and the question is how you break down the individual metrics to be assigned to a particular backup. I'm trying to remember all the Prometheus terms, but if you add in more labels to say this Prometheus metric is associated with this particular backup...
D
As
you
add
more
of
those,
I
think
that
impacts
the
overall
prometheus
performance.
So
it's
like
you,
don't
want
to
introduce
too
many
different
labels
for
like
different
metrics.
So
I
think
it's
probably
better
for
like
you,
you
could
gather
more
information
like
how
long
does
it
back?
D
How
long
do
backups
on
this
valero
installation
take
in
general
rather
than
this
specific
backup
took
this
much
time
or
this
specific
backup
took
or
spent
this
much
time
in
grpc
calls,
for
example,
but
I
could
go
back
through
previous
things
and
try
to
remember
like
the
the
particular
complexities
for
me
of
prometheus.
It's
been
a
while,
since
I've
worked
with
it,
but
it's
my
big
recollection.
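The cardinality concern raised here can be illustrated without Prometheus itself: each distinct combination of label values creates a new time series, so a per-backup-name label grows without bound while a per-resource-kind label stays small. A stdlib-only sketch (the registry class is a toy stand-in, not the real prometheus_client API):

```python
from collections import defaultdict

class MetricRegistry:
    """Toy stand-in for a metrics backend: one 'series' per distinct
    (metric name, label values) combination, as Prometheus does."""
    def __init__(self):
        self.series = defaultdict(float)

    def observe(self, metric, labels, value):
        self.series[(metric, tuple(sorted(labels.items())))] += value

    def series_count(self):
        return len(self.series)

registry = MetricRegistry()

# Unbounded label: one new series per backup name -> cardinality explosion.
for i in range(1000):
    registry.observe("backup_seconds", {"backup": f"backup-{i}"}, 1.0)

# Bounded label: resource kind only -> a handful of series no matter
# how many backups run.
for i in range(1000):
    registry.observe("item_backup_seconds", {"kind": "Secret"}, 0.01)
    registry.observe("item_backup_seconds", {"kind": "Pod"}, 0.02)
```

This is why the aggregate "how long do backups take in general, by resource kind" shape suggested above fits Prometheus better than per-backup labels.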
C
No,
I
think
this
is
a
good
idea
in
general,
so
I'll
go
ahead
and
kick
in
just
you
know
just
a
ticket
to
track
it,
but
what
we
really
need
is
for
someone
to
dig
in
and
say
what
actually,
how
can
we
actually
get
these
metrics
to
display
them,
because,
certainly
it's
pretty
easy
to
you
know
spit
out
a
log
message,
but
you
know
as
you're
saying
following
it's,
you
know
kind
of
a
pain
to
deal
with.
So
if
there's
some
system
that
we
could
plug
into
that
would
be.
G
One other thing to point to here that might be relevant: I know we had some similar concerns with our Konveyor migration project, and one of the guys on our team, Derek, ended up using Jaeger tracing to get some idea of, you know, this section took this many seconds or this many milliseconds, and putting that instrumentation in place.
G
I
wasn't
really
involved
in
that
effort,
but
but
that
that
was
something
that
that
helped
us
to
figure
out.
You
know,
I
think
in
our
in
our
case,
the
the
big
takeaway
for
the
first
round
of
improvement
came
from
that
came
from
you
know,
looking
into
sort
of
you
know,
kubernetes
client,
caching,
in
terms
of
you
know
tweaking
that,
but
but
that
that
kind
of
approach
may
be
relevant
here
again,
especially
in
this
question
of
you
know,
is
the
bottleneck.
G
I don't know that it's the calls. I suspect that in this particular example there's probably a combination of some efficiencies that need to be incorporated into that plug-in action itself, plus the fact that we might need to tweak it a bit and not call that plug-in action as many times. But again, there's the general question of having a way to track this down and say: why is this migration taking 10 minutes? Where is the time being spent?
G
And I can talk to Derek about it; he's on PTO right now, but I can get some idea of how that might be relevant on this side as well. As I said, I've looked at it, but I really wasn't involved in actually putting it in place.