From YouTube: Velero Community Meeting/Open Discussion - July 6, 2021
A: Hello everyone, and welcome to the Velero community meeting / open discussion. Today is July 6, 2021. Please add yourself to the attendee list. I have not added myself to the attendee list, so I will do that. Let's go through some status updates and then dive into discussion topics. First off we have Bridget.
B: Hi everyone. Last week I was on PTO, so I'm doing a little catch-up after that week. This week a lot of my focus is going to be on trying to catch up on some of the issue backlog and community support requests that have been coming in with the transition and changes on the team. That's something we haven't been giving as much attention to, so I'm going to try and rectify that this week.
B: I need to put in another bullet point here; I'll type it after I've finished talking. I have some changes to the plug-in design doc that were requested by Fong and Scott, which I need to push, so I will do that as well. Apologies, lots of different things going on in different directions, so apologies for keeping folks waiting on that.
A: Thank you, Bridget, for catching up on support, really appreciate it. Any questions or comments for Bridget?
C: Hey, this week same as last week: did some work on pulling together the 1.7 roadmap with Eleanor and the management team, and she's going to go through that. Upload progress is still going along. I did some more PoC work; I'll try and get some demos in here. I've got Astrolabe moving data directly from an EBS disk to a vSphere disk, so that's kind of fun. So, working on getting the videos and the explanations of that together for everybody.
A: All right. And Scott, if you want to add any updates here, please do so as well.

A: All right, let's dive into discussion topics. The first one we have is the revised 1.7 roadmap. Eleanor?
E: Yes, do you mind if I share my screen, or do you want me to... oh, perfect. Can everyone see my screen with the roadmap on it?
E: Great. So, in short, as I think folks know, we've had some maintainers move to other teams, and we are starting to have some folks join our Velero team and ramp up. Because of that, we've had to reconsider what we deliver for 1.7, and it did give us a chance to focus on some things that maybe we should have focused on more anyway. So first of all, we are laying out what 1.7 will ideally deliver.
E: We really think that the number one priority should be to improve Velero's technical health. That's a very wide topic, so really we see it zooming into three things. One is to streamline the release process: right now, my understanding is that it takes the engineers quite a while to release, and that's time taken away from community support and feature work, so let's streamline it using automated processes.
E: Likewise, the end-to-end tests are of course automated, but the running of them, my understanding is, is not automated. So our goal is to automate the running of the end-to-end tests on at least one cloud, probably AWS or vSphere.
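The automation goal above could be sketched as a scheduled GitHub Actions workflow. This is only an illustration of the idea being discussed: the workflow name, the make target, and the secret names are assumptions, not the project's actual setup.

```yaml
name: e2e-nightly            # hypothetical workflow name
on:
  schedule:
    - cron: "0 6 * * *"      # once a day, one of the cadences mentioned
jobs:
  e2e-aws:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run E2E suite against AWS
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.E2E_AWS_KEY_ID }}        # assumed secret names
          AWS_SECRET_ACCESS_KEY: ${{ secrets.E2E_AWS_SECRET }}
        run: make -C test/e2e run                                 # assumed make target
```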
E: Eventually we will cover other clouds, but from what I hear, differences between clouds are not where we see most of the bugs; we just want to run them regularly. Once we get that to happen (it sounds like possibly not for 1.7), we will work on figuring out when we run them: either once a day, or as part of the release pipeline, or before or after a PR, something like that.
E: But first we just want them to run in an automated fashion. And lastly, there are a lot of pre-release manual tests that are currently done, and obviously those are ideal candidates to turn into automated end-to-end tests. All of this will save our developers so much time, so they can do more good work elsewhere, and of course hopefully we'll be more likely to catch bugs and deliver better features. So that's why this is our number one priority. And then, for the other things that we still hope to deliver in 1.7:
E: We have wanted to do IPv6 testing for a while; this is also in the original roadmap. We think that going to IPv6 should be no problem at all for Velero, and more and more clusters are moving to IPv6, so we just want to test and make sure that it works fine. If there are issues, boy, do we want to know about them now rather than later. And note, this is specifically not going to be a dual-stack mode.
E: It won't be IPv4 and IPv6; it'll just be IPv6. Velero debug was something that Nolan was working on, and our new team members based in Beijing are going to start working on that. Distroless is also going to be in there as well. And Carvel installation: we're very lucky that Carlisia, even though she has moved to a different team, is still going to be doing that work, so we're very grateful for that. Plug-in timeouts: this is really a fun one; you can maybe update us.
E: We just wanted to note that, from what I understand, the plug-in versioning needs to be merged in first, and Bridget has more and more things pulling at her time, so it's reducing the time she can put into plug-in versioning. Maybe this note is even irrelevant now, so Fong or Bridget, you can update after; we're just calling that out. And then, from that original 1.6 roadmap, we had hoped to do:
E: CSI snapshots, bringing that to GA: we've decided to let that slip, for the simple reason that the next few versions of Kubernetes will still support in-tree drivers, so basically you can use other plugins rather than the CSI plug-in to do the same thing. Obviously we will not let this slip too long, or else we'll have major problems, but this will, I hope, be in 1.8.
E: Multiple-cluster support is something that we very much want to do. We are letting it slip because it feels less urgent than the other tasks listed, but I think there's a very good chance it'll get into 1.8. And the manifests for backup and restore, which is foundational work for so much else we want to do: we very much hope that will be in 1.8 so that we can do the other good things.
D: Just a quick question: I noticed that the v1beta1 CRD issue is not mentioned here. I know this was mentioned last week on the call; the hope was to get that dealt with prior to 1.7. I wasn't sure where that fit into this, whether that was going to be a 1.6.z release, or where it fit in.
B: Yeah, so we realized that 1.7 would come after the 1.22 Kubernetes release, so we'd need to have a patch release for the 1.6 series in place before that. I think that's our current plan.
E: And by the way, to clarify: of course we have 1.6.1 out, which fixes that bug that we had. For 1.6.2, there are just a few fixes we might want to do; that should come out pretty soon, we are hoping. And then 1.6.3 will be what you just discussed.
A: Just one question: for 1.7, do we have a rough target date?
E: We're going to say early fall for now. The reason, simply, is that as we ramp up our new team members, it's pulling time away from Dave and Bridget, so that affects these two items; plus our new team members are going to be working on these three, plus this. So we're hoping that in about a month, as they ramp up more, it'll be a lot easier to put a date on it. But if anyone is curious, or really depending on 1.7, feel free to ping me and I can try to give you an updated estimate.
E: So my plan is to wait a day or two. If we don't get any other feedback, then I will commit this (or someone, maybe Jonas, will help me commit this) as the new roadmap, and it'll replace the current one on our repo.
E: Okay, sure. I don't know where it came up, but I was having conversations with Bridget and Dave, and one thing that emerged was that I think David Lee said he was quite shocked by Velero's default backup storage time (I don't know if there's a more specific term in the industry): the time that we keep a backup before we delete it is set to 30 days by default.
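For reference, the retention window being discussed is the backup's TTL. A sketch of overriding the 30-day default on a single backup; the backup name and namespace selection here are illustrative:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-apps          # illustrative name
  namespace: velero
spec:
  includedNamespaces:
    - my-app                  # illustrative namespace
  ttl: 2160h0m0s              # 90 days, instead of the default 720h (30 days)
```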
E: My understanding is that users can change that. Dave, who's been in the industry for a while, told me that when he discovered it, it shocked him that his backups were suddenly deleted when he came back a month later. So I guess I just wanted to see: is this something that we should consider changing? Has anyone felt pain around this? Or is this irrelevant compared to the other, bigger kinds of bugs we're seeing?
D: I actually had noticed it, and again, up until recently our main use case was migration, so it really wasn't relevant, because there we tend to use the backups immediately. But now that we're doing more on the backup and restore side, it was something we really hadn't talked about. I was aware of it, though, because I had seen it.
D: I'm wondering whether we might, in addition to maybe lengthening that, think about some more flexible way of keeping a certain number of backups. But then, backups of what? Because you have different backups that aren't equivalent, and you'd need some way of identifying a group of backups where one supersedes the previous one and you want to keep a certain number around.
D: That kind of thing, I'm not really sure, but a more flexible scheme around that is something worth considering. In the absence of that, even just increasing that default might make some sense.
D: Yeah, and with snapshots in particular, depending on the plug-in, those tend to be incremental. So do we have issues where, if you delete an old backup, an old snapshot, that incremental chain comes into play and you don't have a complete snapshot? Or does the underlying storage handle that properly?
C: The expectation is that the underlying storage handles that. I know that was one of the issues we looked at when we did the vSphere plug-in. If it can't handle deleting an incremental in the middle, I consider the plug-in or the storage system to be broken.
C: It depends. The Kubernetes resources themselves: that's always a full backup. Restic, if you're using restic, is essentially incremental. The plug-ins: it depends on what's underneath them. For example, on vSphere right now we always do a full backup; EBS automatically does incrementals; Azure, I think you have to set incremental someplace to get that.
C: So it's a good point, but if the storage systems are designed properly, they won't make the incremental snapshots go bad. Restic should do this: if you delete the base, it has a whole structure inside to handle it. That's one of the reasons the vSphere plug-in does fulls every time, because we didn't want to build out that structure, so we rely on those. I mean, honestly, we can double-check that (we probably should), both for the scheduled deletion and the user-initiated deletion.
A: So we just have a 30-day retention period right now. If we're looking at doing something else as the default, would it be like one full monthly, and keep that for a year, or something like that?
C: I think those are really good options. Having a more configurable thing where, for example, in a schedule you could say, hey, apply this retention policy where monthlies get kept for a year, weeklies get kept for a month, dailies... these are all our classic backup scenarios, so that would be kind of cool. I think what we're just asking about right now is this 30-day removal.
C
You
know
it
kind
of
makes
sense,
depending
on
where
you're
thinking
about
your
kubernetes
data
from
because
I
think
a
lot
of
people
are
thinking
about
this
as
just
hey.
I
just
want
to
be
able
to
get
back
to
a
working
system
almost
where
I
was
at
as
quickly
as
possible.
In
that
case,
like
the
30
day,
retention
interval
makes
some
sense,
but
as
we
move
on
and
it's
like,
the
data
is
becoming
more
important
or
whatnot,
and
it's
just
like
I
don't
know
it's.
C
You
know
it's
kind
of
like
just
an
expectation
that
the
backup
system
is
keeping
things
longer.
At
least
automatically
I've
been
I've
been
bitten
by
this
in
the
past
in
other
systems,
not
systems,
but
you
know
admins
keeping
a
schedule,
a
turnover
schedule,
that
was
too
short
and
the
results
were
not
fun
and
nobody
was.
Nobody
was
happy
afterwards.
E: Well, I'm hearing that tentatively. Actually, what I was going to say, by the way, about the point Dave's making: another TKG PM (sorry, another Tanzu PM) who I work with, who handles security and compliance, actually had some questions about Velero. So I think Dave and I are going to meet with him, just to understand what large enterprises might expect from a compliance point of view.
E: So we may learn that there are certain features, or tweaks, that we can make to Velero that will make users more comfortable using it. I think the next step is maybe I'll create an issue just around this, enough to explain the thought process we're going through. And by the way, I like your point that maybe we have no default, and maybe the flag, every time you want to run a backup, is a required flag.
E: That is, you have to set the time, or explicitly not set it at all. I don't know how easy that is from an engineering perspective. All that's to say: I'll create the issue, and then maybe we can post on the Slack channels and see if anyone wants to give feedback. Is that a good next step? Great. It doesn't seem super urgent, but I really hate the idea of shocking people by deleting their backups.
D: The longer it's going to be around, the better, but you have to think about, okay, that means it's going to be deleted after, you know, 25 days or whatever. And I guess the question was asked: what's the downside to, just as a first step, making it 365 days? I think the only downside is that some people might run out of storage in their bucket.
D: But if someone's hitting storage issues, then they can set it to something lower explicitly, or clean it up manually, or something like that. That's kind of on the user, because now they're running out of storage. Maybe we need to think about some kind of monitoring or alerting there; I don't know if that makes sense, or if that's only for the user to deal with.
D
But
you
know
okay,
you're
back
you
know
the
bucket
you're
using
is
almost
full
because
you're
keeping
all
these
backups,
which
could
be
fine,
but
you
know.
C: Again, kind of an interesting feature would be a max storage setting or something for backups, and you could set an alarm, or an alert, probably. It's better not to have the system start automatically deleting things, but being able to monitor it would be cool.
E: Okay, in the interest of keeping the meeting running, I'm going to take my action items, and I will be done with this issue.
A: Thank you, Eleanor. Next up we've got Fong.
F: No, nothing to share on the screen. I just want to give some updates and ask some questions. Number one is the update on the plug-in versioning implementation, which I have started.
F: So I might have to have a conversation with Bridget to see if there's any way we can refactor the code a little bit, so that in the future, if we need to add more versions, when we change the version we don't have to change the code that much. There are a lot of places where we reference a version.
F: I mean the plug-in interface: all of these have to change, all of these have to be fitted to, you know, version two, and then when we have version three, we have that same change again, two to three, and so on and so forth. That causes a lot of change, and I wonder if it is a necessary change. So that is something that I would like to maybe discuss with Dave and Bridget and other people later.
F: But that is the current state. Right now I'm just trying to do the implementation, just to rough out the design, to see if I can find additional work that needs to be added into the design. Another thing that I want to talk about, while I'm implementing that versioning (I think I mentioned this in the design meeting), is that I would like to have a unit test, or at least a test setup, ready, so that when the whole change is ready, I can run through unit tests, system tests, and integration tests, with the whole setup, to guarantee that the change will not cause any regression to the existing plugins that we currently have out there. I don't have all the resources to test all of these plugins, so I wonder if, in our community, we have any resources or pathways already available for me to use for that purpose.
B: Well, I think we can definitely help out with the testing. As Eleanor pointed out in our roadmap, getting our end-to-end tests running on at least one cloud and building out the end-to-end test suite that we have will, I think, help with that. Whether that happens by the point you want to do this testing, I don't know if it'll be ready then, but if you have something that you want to test on a specific cloud with a specific plug-in...
B: Yeah, and I think as well, on your comment about so much code needing to be changed: I think we were aware of that when doing the design, because the implementation of all of the gRPC handling happens in so many places, and a lot of it is duplicated. I don't think there is a way around it; well, at least that's what I found when I was doing the initial investigation work.
F: What I saw was that we might have to refactor the code, and that might not be an easy task, to be honest. So right now I would just go with the design, and let me see how much code we need to change. At that point we can come up with some kind of plan to refactor the code, so that the next time we roll in a new plugin version, we don't have to change that much. That is something I hope we can achieve over this exercise of implementing the plugin versioning.
F: And I think one of...

C: We really need to reach out to them and make sure that they're involved in this, because, given the current architecture of the vSphere plug-in, this needs to go all the way through their stack as well.
F: Yeah. That's also touching the topic that we mentioned earlier, which is the timeout and plugin panic problem that I'm trying to solve; we mentioned that it was scheduled for release in Velero 1.7.
F: My comment on that one is that we have to implement this versioning first. After we have the versioning, we will be able to add the context to the plugin interface at that point.
F: If we feel ready, then we would add the timeout, because the Velero timeout relies on the versioned plugin that gives you that new interface. So I cannot say much about the Velero plugin panic problem until this versioning is implemented. So, as to whether or not it's going to make it into 1.7:
F: I think my call right now would be that it is very likely to slip until we have this versioning in. Okay, so that is my number one topic. The number two topic that I have is that currently we run into this bug where we have a priority class included in one cluster, but it's not in the other cluster.
F: So when we back up and restore to the other cluster, it fails, because it doesn't have the same priority class on the target cluster. From what I understand, when we enable the include-cluster-resources option for a namespace, the backup will back up all of the cluster resources being used by this namespace. It seems like there is a bug in that area: when we backed up, we did not back up the priority class.
F: I found an issue that looked close, so I have to double-check whether that is the same problem that I have or not, and I will create a new bug to track this issue. But I just want to bring it up here in case anyone in the community has heard about it or is aware of that problem.
D: So, when it's true, I think we're supposed to be including everything, whether or not it's relevant to the namespace. The case where things sometimes get included or not is when it's not set, and that's where only the specified cluster resources that we know about, or that a plug-in knows about, will get pulled in.
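The option being discussed corresponds to the backup spec's `includeClusterResources` field. A sketch of the three states; the backup name and namespace are illustrative:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: ns-backup             # illustrative name
  namespace: velero
spec:
  includedNamespaces:
    - my-app                  # illustrative namespace
  # true    -> back up all cluster-scoped resources
  # false   -> back up none of them
  # omitted -> back up only the cluster-scoped resources that
  #            Velero (or a plug-in) knows are related
  includeClusterResources: true
```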
D: Okay, so if it's null, what that means is that Velero pulls in certain resources that it knows about. Some of them are handled by Velero internally, and that's going to be things like PVs and CRD definitions and that kind of thing, and then you can write a plug-in that will pull in things that you know about: for example, a case where, as here, the priority class is needed by pods.
D: Right, so in your case it sounds like, if you have a pod that specifically references a priority class, you might need a plug-in that responds to pods: its AppliesTo returns pods, and then for each pod it locates the priority class, because it's not just all priority classes, it's the specific priority class that relates to that pod.
D: I think, although I don't know that particular type, you could have a plug-in that says, okay, for this pod, we look up the priority class based on something in the spec, and then we add that. Because one of the things a backup plugin can do is return additional items: it can say, also include these things. And so, when include-cluster-resources is set to false, that means...
D: They're not included, even if a pod plug-in says to include them. But if it's set to null, then the priority class the pod plug-in references does come in. And if this is something that's generally applicable, not just for your use case, it might make sense for this plug-in to be in Velero itself; that's, I guess, a different question.
D: If it's something that applies to anyone that uses this, it belongs upstream; but if it's specific to your use case, then you'd have a plug-in that you just install in your Velero instance. Some of these things that are deployed as plug-ins could still be in core Velero if they make sense for all Velero users. And I know we've done this on the OADP Red Hat side, where we might have certain things related to OpenShift, for example, where we want to pull in a certain cluster-scoped resource.
G: Hey, I just want to provide a brief update; I don't want to rehash the same conversation from last time. In the past week, we have Jay on the call; he was looking into basically using that install-velero flag that was brought up last call for our use case. We found a bug, so we submitted a PR to expose that option back to the Makefile and also fix one of the tests that wasn't using that flag.
G: Obviously, it sounds like she doesn't have any bandwidth to update that other PR, which makes a lot of sense. So I just wanted to raise it as a point that we probably need to do something with it before we can really start changing the end-to-end tests: that PR does break the install-velero flag, so it kind of makes the PR we posted null if it does get merged.
G: I know that last week we mentioned that we wouldn't want to merge that PR as it is, if it removed that flag, but she doesn't have the bandwidth to add it back in. We're obviously happy to refactor it and add it back in, if we decide that the best approach is to take the improvements that she made and get them merged in. Or, I don't know if it's possible, we could merge this to a feature branch and then work there, where we update her changes to support this flag first.
C: That's just sounding way too complicated. Let me get with her; we'll take over if necessary. We'll just take over the branch, make the changes, and push it in, because it's not okay to say, hey, I'm giving you a pull request, and no, I won't respond to changes that are being requested before it gets merged.
B: Okay, yeah. And I think, by default, I know I've made changes to other maintainers' branches before, so I think anyone who has write permissions to the repo should be able to push changes to those branches as well. So if need be, like Dave mentioned, we can take over the branch and make the changes that are necessary before merging.
A: All right, let me share my screen and we'll go through some shout-outs. Who wants to read the shout-outs today?
B: I can do it if no one else wants to. So this first one is a change from Wang Kai, who's one of our new team members based in Beijing, and this is enabling E2E tests to be run as part of our PR GitHub Actions flow. This is a huge step in automating our E2E tests, so thank you very much, Wang Kai; this is a really great addition.
B: So yes, hopefully now, whenever you look at any PR on the Velero repo, you'll see additional actions in there, similar to how we check the CRDs on various versions of Kubernetes as well.
B: Okay, so this next one is from Kyle. This was a feature that was added in 1.6.2, I believe: using the owner references from the schedule in the backup. I can't remember exactly how the feature works, but this change enables it to be used within the Helm chart, and from what I can tell it avoids some issues when using things like Argo CD. So thank you very much. And then I believe there's one more, from Eleanor, and I'll give it a plus one, so go ahead, Eleanor; I agree.
E: Yeah, I'll just say that I certainly monitor our two Slack channels, velero-users and velero-dev, and I try to answer the questions. Usually I can't; I'm increasing my own knowledge. But I've noticed in the last few days that JenTing has been answering a number of questions, and it makes me so happy to see that at this kind of staffing transition time.
E: I think we've all had a little bit of a harder time addressing the community needs, and it's so great to see him doing that. And I apologize, I don't monitor the PR reviews, but I'm so happy to hear, Scott, maybe you said you were doing PR reviews or something; thank you for leaning in, that's fantastic.
A: And yeah, just a quick update as well: now that we're in July, we'll switch things around with the community meetings. Next week we'll have our first community meeting in Asia-Pacific-friendly meeting hours, so it will be late at night here in the U.S. and early morning in the Beijing time zone. So look forward to getting some email blasts and calendar invites.