From YouTube: Velero Community Meeting - April 12, 2023
A: It's April 12th today. Okay, so for some status updates: for v1.11, we created the first RC release yesterday, on April 11th, and we will continue verifying the release by, for example, scanning for CVEs and running the E2E tests. We may create a further RC release first, and then we will be able to make the GA release. And for v1.12:
A: We have started to collect candidates, and currently, at least, we can use this link; it's now open for candidates. So if we have any requirements, we can just add the 1.12-candidate label to any issues. Currently we have various candidates; we are reviewing them, and the data mover will be the anchor feature for v1.12.
A: That's the overall status. And for some individual updates: myself, I will continue verifying the data mover POC, and I have done some work on it.

A: Okay, thanks. And Daniel?
C: Yeah, so the example plugin for the BackupItemAction and RestoreItemAction V2 looks like it does have the two approvals, so we can merge it. There was a comment because that PR moves golang up to 1.18 instead of 1.19, but I think at this point the thought was that we're going to just merge this, and then we'll do a separate PR to move 1.18 to 1.19. The reason it was 1.18...
C: ...is because this PR was actually created before we moved to 1.19, and I had to move it to 1.18 just for everything to build properly. So that was kind of the minimum version for the code in here, which is good enough for the example. But we should probably follow up with a later PR to update that to 1.19, to be consistent with the other repos.
A: Okay, so I think, Daniel, that means if we want to do this in this PR, we can do it; if not, we can go and merge it. Yeah.
C: Yeah, yeah. His comment was that we can go ahead and merge it. I think the thinking is, because we need to see an E2E test anyway, let's just merge it and then create a separate PR to move it to 1.19, because the 1.18 bump really wasn't the point of this PR; the point is to add the BackupItemAction V2. Yeah, I had to move it to 1.18 because it wouldn't build with 1.17, and that was before we moved to 1.19.
A: Okay, then we can do that in the following PR. So, if there are no further comments from everyone else, I think we can merge.

A: Okay, sounds good.
C: And the other thing is that I am still working on the E2E tests. I got pulled over into some other stuff, so I didn't make as much progress as I wanted to. We discussed last week that if these weren't going to be ready before RC1, then we would just hold off and submit these E2E tests post-release, so I think that's the plan right now.
A: Okay, sounds good. And next?

E: I'm working on running the nightly tests for v1.11, scanning some CVEs, and also adding a new E2E test, which is a scheduled backup creation test. That's all from me.
A: Okay, sounds good. And Daniel?

F: Hey, thanks. Sorry, yeah, I'm currently working on the plan for 1.12, and there are some bugs I opened while testing version 1.11, and I'm verifying them post-RC1. Thanks. Okay.
A: Thanks. Yeah, that's all for the individual updates. Next, for the discussion topics, we have two topics. The first one is from me; it's about the data mover design. You know, we have reached the RC phase for v1.11, so hopefully we can reach consensus on this design and merge it before the release of v1.11, so that we can start the work for v1.12.
A: So right now, I think we have done some internal discussion through some internal meetings, and now I think we understand it. So, in the comments for this PR, I think one major question is about, right...
A: It's about this one. This one is about whether we want to modify the current CSI plugin to integrate it with the BackupItemAction V2 and RestoreItemAction V2 APIs. And also, from the design, you can see that there is a generic restore workflow, so we will not need the volume snapshots to be kept, nor their related objects, like the VolumeSnapshot object and the VolumeSnapshotContent objects, to be preserved.
A: Unlike the current CSI plugin, where we retain them after the backup, we don't need that. So, after some investigation, we saw that it would bring some complexity if we still kept them while going with the current data mover design. So one proposal is that we want to remove them.
A: In the modified workflow, Velero will not take care of these VolumeSnapshot and VolumeSnapshotContent objects in its backup workflow. These objects will be delivered to the data mover, and their lifecycle is decided by the data mover. So it's like the data mover has its own logic to decide, by itself, whether to delete these objects and when to delete them. And possibly there will be two...
A: There will be two possibilities for the data mover, no matter whether it's the built-in data mover or a plugin data mover: if the data mover doesn't need the objects, it can just delete them after the backup; if it does need them, it needs to preserve the objects itself. So here, for the built-in data mover, or as the recommendation, we just recommend that we don't keep those objects.
A: Oh, okay. So, let's say it's not the data mover case: I think this flag helps to guard that. If it's not the data mover case, the flag will not be set, so everything will be kept. That is okay with the current plugin. Okay.
C: Right, that's what I mean. So obviously, if EnableCSI is not set, we're not going here at all. But in the case where we're using the CSI plugin but we're not using the data mover framework, then we create the volume snapshot, we return it as an additional item, and the plugin will wait, like now, until it's ready to use before it returns, because it won't be an asynchronous plugin.
A: Right. And yeah, actually, in the current code and workflow, the wait for the snapshot to be ready to use is done by Velero, right?
A: Right. No, actually, it's better for the plugin to do that. So we have created another issue to optimize the workflow of the CSI plugin under the BackupItemAction V2 workflow, so that is possible. Yeah.
C: But if we do that, there is one other consequence, and that is that any other action, any other plugin that relies on that snapshot being ready, can no longer rely on it being ready to use. Although I guess we're not waiting until the end of processing now, so that's, again, another change, because even in the current code we don't wait until it's ready to use during the backup processing.
A: Makes sense. Okay, so one more thing we want to clarify is about how to use the current data mover workflow, and why we don't need the CSI VolumeSnapshot and VolumeSnapshotContent objects. So here I have a list of what a plugin data mover needs to do to integrate with the current workflow. For backup:
A: These are normal things: to handle the DataUpload CRs correctly; to handle the state machine, like the phases and progress of the CR, correctly; and, if supported, to handle cancel requests. These are the normal steps, and the final one is to dispose of the volume snapshots, as well as their related objects, after the data is transferred. So this is what I mentioned: the data mover takes care of the volume snapshot and the related objects.
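To illustrate the state machine handling mentioned above, here is a minimal Go sketch of the kind of phase-transition check a plugin data mover's controller might apply to a DataUpload CR. The phase names and allowed transitions here are assumptions for illustration, loosely modeled on the discussion (new, in progress, completed, failed, canceled); they are not the actual Velero API.

```go
package main

import "fmt"

// Phase is a simplified stand-in for the DataUpload CR's status phase.
type Phase string

const (
	PhaseNew        Phase = "New"
	PhaseInProgress Phase = "InProgress"
	PhaseCompleted  Phase = "Completed"
	PhaseFailed     Phase = "Failed"
	PhaseCanceled   Phase = "Canceled"
)

// validTransitions encodes which phase changes the controller accepts;
// Completed, Failed, and Canceled are terminal in this sketch.
var validTransitions = map[Phase][]Phase{
	PhaseNew:        {PhaseInProgress, PhaseFailed, PhaseCanceled},
	PhaseInProgress: {PhaseCompleted, PhaseFailed, PhaseCanceled},
}

// CanTransition reports whether moving from one phase to another is allowed.
func CanTransition(from, to Phase) bool {
	for _, next := range validTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(PhaseNew, PhaseInProgress)) // true
	fmt.Println(CanTransition(PhaseCompleted, PhaseNew))  // false
}
```

A real controller would also need the cancel handling and snapshot disposal described above; this sketch only covers the phase bookkeeping.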
A: For restore, I think there is also a generic operation that the plugin data mover needs to do: to create a PV and then restore data to it, in any way. So here we don't limit how the data is restored to the PV. Actually, that is the expose operation, as we mentioned for the built-in data mover, but that expose operation belongs to the data mover itself.
A: We only declare how to do that for the built-in data mover, so the plugin data mover could also follow similar steps, like exposing it from the host path, or mounting it to a restore pod and doing it from inside the pod; but we don't limit that for the plugin anymore.
A: So here we are going to say that we want to create a PV, and we want the data mover to restore data to the PV. And then the final thing the plugin data mover needs to do is to set the claim reference on the provisioned PV, so that the PVC the user restores will always be bound to this PV. So we set this PV's claim reference to the PVC's name and namespace.
A: That is, I think, the necessary operation, and a very simple operation. And finally, set a label on the PV. With these two steps, the PVC will be able to bind to this PV. That's all for the restore. So I think, internally, after the discussion, we think that this workflow should work for all the data movers, all the data movers that want to use this workflow.
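As a rough sketch of those two restore-side steps (setting the claim reference and the label), here is what a plugin data mover might do in Go. The structs below are simplified stand-ins for the real Kubernetes PersistentVolume and ObjectReference types in k8s.io/api/core/v1, and the label key is made up for illustration; the actual design defines the real field names and label.

```go
package main

import "fmt"

// ObjectRef is a simplified stand-in for Kubernetes' ObjectReference,
// carrying just the fields the claimRef binding needs.
type ObjectRef struct {
	Name      string
	Namespace string
}

// PersistentVolume is a minimal mirror of the real PV object: labels
// plus a claim reference.
type PersistentVolume struct {
	Labels   map[string]string
	ClaimRef *ObjectRef
}

// BindToPVC performs the two steps described above: point the PV's
// claimRef at the target PVC, and set a label so the restore workflow
// can find the PV. The label key is hypothetical.
func BindToPVC(pv *PersistentVolume, pvcName, pvcNamespace string) {
	pv.ClaimRef = &ObjectRef{Name: pvcName, Namespace: pvcNamespace}
	if pv.Labels == nil {
		pv.Labels = map[string]string{}
	}
	pv.Labels["velero.io/restored-by-data-mover"] = "true"
}

func main() {
	pv := &PersistentVolume{}
	BindToPVC(pv, "my-app-data", "my-app")
	fmt.Println(pv.ClaimRef.Namespace + "/" + pv.ClaimRef.Name) // my-app/my-app-data
}
```

In a real data mover this would be an update call against the cluster API rather than an in-memory struct, but the binding logic is the same: Kubernetes binds a PVC to a PV whose claimRef already names it.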
A: So my question here is whether this workflow works, or whether we think this intended workflow is generic enough to work for all the data movers. For example, does this work for the OADP data mover?
C: We're maintaining CSI behavior, and then once we transition to using the framework, then we can, you know, change the way we configure it.
A: Oh, okay, okay. So it means, generally, that at least this workflow works for the OADP data mover, once we want to integrate with the current workflow, right? So it's just a question of when to merge into, or switch to, the current workflow.
C: I mean, I think we will need to make some changes to correspond to this, but that's all right, because it's part of the next release cycle, and we'll be working at the same time, so we'll be kind of participating with these as they change. So if we need to make any changes on our side before 1.12 is out, we'll have plenty of time to react to that.
C: But, you know, for example, the way the volume snapshots are created and deleted: we may have to make some changes associated with that. But just in general, it does look like, in the base case, we could use the existing data mover without plugging it in, as kind of a first step; it'll probably work with minor changes. But then it also looks like, longer term, we want to use the plug-in framework to actually, you know, make use of the new features as much as possible.
A: Oh yeah, the uploader for v1.12 will be Kopia only, so we will not integrate with Restic. Okay, yeah. And in the future we may have other built-in uploaders, but right now we have only one; also, we can treat that as an open question.
C: No, that's what we did, for example, with the CSI, AWS, and all those plugins. Originally, those three, you know, object stores were all built in, and then there was a change around, I think, 1.0, where those were all pulled out into plugins, and so now there is no built-in object store; everything, even the, you know, VMware-supported ones, are plugins.
C: It might be simpler in the long run to eventually have even the built-in data mover really just be a plug-in, so that it would interact with Velero in the same way that, like, an OADP one would, or, you know, some third-party one from Dell or from someone else would. But that's something we can deal with later, I guess. You know, once we get to the point where we want more than one built in, at that point it might just make sense to have everything use the same plugin framework. Yeah.
C: ...gRPC, because they happen to all be in the same Velero pod. But at the same time, in the first version of Velero, the AWS object store plug-in, for example, was also in Velero core, and at some point the then-maintainers team (I wasn't a maintainer at the time), you know, the team made the decision to treat all object stores in the same way, and so the AWS, GCP, and Azure plugins were pulled out of core and became their own separate plug-in repos.
C: ...you know, sort of internal data movers; I don't know, at that point, if it makes more sense. And again, I think the question of "is it a gRPC plug-in with a, you know, separate image" is a different question from "is this part of Velero", and "is it implemented using the same plugin interface".
A: I mean, yes. I think the original question was about whether we have a direct uploader.
C: Yeah, yeah, that was it, and I think part of the reason for that question was, for example, what we're doing on the OADP side: you know, we're actually using VolSync, which uses Restic as the uploader itself. But, you know, since right now, for the file system backup, we had the option of Kopia or Restic, I guess the main reason (I'm guessing) why we decided not to do Restic here is because we've already decided that we want to deprecate Restic.
F: Yeah, and I think with the current design you can stick with Restic and VolSync if you want to, because when we're talking about a plugin here, this pluggable data mover, that's not a gRPC plugin; that's a controller, and you can somehow wrap your code as a controller to handle the data mover CR, and you can continue using Restic, even though the internal data mover chooses to use Kopia.
F: A modified image? No, the whole idea is that if a user wants to use the non-internal data mover, he just installs the controller in another pod, but...
A: So what I'm seeing is that the VolSync solution is still selectable by the OADP data mover.
A: I think we have discussed all the aspects, and I think there's no major question left. So this one is about whether we need, or will, use the existing CSI plugin as a data mover plugin, and we have mentioned that we will refactor it to integrate with BackupItemAction V2, and we also want to do some refactoring of the current CSI plugin so that the data mover can also use it.
F: Yeah, I think we're going to leave this open for around another week or so, and before that, yeah, I want to say it again to make sure there's no misunderstanding, so that if we merge it, you also understand what you're going to do and you approve it.
G: Okay, thank you. And I think it's not just us as well; we might have different data mover users as well who want to use this workflow. So, keeping that in mind, I think this is pretty good, pretty generic. Okay.
F: Yeah, but before we merge it, I will confirm with you guys again, over Slack maybe, just to make sure this design, you know, makes sense for the different data mover providers, so that it is pluggable enough.
A: Okay, thanks, everyone. That's it for my topic, and then the next one is...
G: Yeah, this is something we had a customer ask about as well. So, as we know, the existing resource policy is already implemented, but with only the "none" and "update" options; we had kept the delete-and-recreate option as phase two, or future scope. So I was wondering: what do you guys think? Should we take this on in the next release? Would this be a good candidate?
C: And I just wanted to add to that too. There are actually two things that we put off from the original design that implemented this, and I guess this RFE is probably not as relevant as the design document that Shivam wrote...
C: ...more recently than this. I don't know if you've linked to that, but basically there are two things that we did not implement. One was in the policy, where we had the idea of, you know, "do nothing", which is the default; there was an "update", where we try to update resources, but we never try to recreate them.
C: So if there are immutable fields or immutable resources, then that update will fail. And then there's a delete-and-recreate option, which was the third thing. One reason we didn't implement it in the first pass is that it's kind of a riskier thing, because if Velero deletes the resource in the cluster and then fails to recreate it, then we've actually done damage in the cluster. So this would be an option users have to be very careful about. And, to follow on with that...
C: ...the other thing we did not implement here was this: we have the default policy, which applies to everything, but the original design also had a notion of specifying by resource type. So you could say, you know, the default is "update", but for pods I want to delete and recreate, and for secrets I don't want to touch them.
C: So there was a section in the design to override the policy on a per-resource basis. I think if we do the delete-and-recreate, we also want to do that, because I think most users are going to want to be very selective about what they are willing to delete and recreate. So I think we should do those two things together, rather than just the delete-and-recreate option.
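To make the per-resource override concrete, the lookup C describes could be sketched like this in Go. The policy names ("none" and "update" exist in Velero today; "deleteRecreate" is the proposed third option) and the map-based config shape are assumptions for illustration; the actual design document defines the real API.

```go
package main

import "fmt"

// Policy is an existing-resource policy value.
type Policy string

const (
	PolicyNone           Policy = "none"
	PolicyUpdate         Policy = "update"
	PolicyDeleteRecreate Policy = "deleteRecreate"
)

// RestoreConfig holds a cluster-wide default plus per-resource overrides,
// mirroring the "override the policy on a per-resource basis" idea.
type RestoreConfig struct {
	Default   Policy
	Overrides map[string]Policy // keyed by resource type, e.g. "pods"
}

// PolicyFor returns the policy to apply for a given resource type,
// preferring a per-resource override and falling back to the default.
func (c RestoreConfig) PolicyFor(resource string) Policy {
	if p, ok := c.Overrides[resource]; ok {
		return p
	}
	return c.Default
}

func main() {
	cfg := RestoreConfig{
		Default: PolicyUpdate,
		Overrides: map[string]Policy{
			"pods":    PolicyDeleteRecreate,
			"secrets": PolicyNone,
		},
	}
	fmt.Println(cfg.PolicyFor("pods"))        // deleteRecreate
	fmt.Println(cfg.PolicyFor("deployments")) // update
}
```

This keeps the riskier delete-and-recreate behavior opt-in per resource type, which matches the concern that users will want to be very selective about what they are willing to delete.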
F: So you're suggesting putting this into the 1.12 candidates? Because that one you're talking about was not on my radar.
C: These two parts that we're talking about now were kind of set aside. You know, we agreed that we weren't going to implement them in the first design, but they're kind of in the backlog for the future. We had not targeted them for a release, so we didn't say they were going to be in, you know, 1.10 or 1.11.
C: The reason we're mentioning it now is that it's actually possibly more than one customer we've run into that has, you know, said "hey". In fact, I think in one case, one of the people we were talking to also created, or commented on, one of the upstream issues as well, you know, on the Velero GitHub. But basically, we have other people...
C: ...noticing that when we restore something that's already in the cluster, we get that warning saying, you know, "resource already exists and is different from the backed-up version", and users are saying, you know, "why didn't Velero update that? Why didn't Velero, you know, fix it?" And obviously we have that update policy, which works for most cases, but, you know, if it's a pod, or if it's something else that we can't modify, then the delete option is really needed.
F: Okay, so would you mind opening another issue for that? It seems like an enhancement to the existing resource policy that has already been implemented.
C: Yeah, yeah, I think that would be better than this RFE, because this RFE was written, you know, a long time ago; it was 2018. And if you go and create a new issue...
C: Yeah, and then we can tag that with, you know, the 1.12-candidate label.
F: So, since you mentioned this one: are there any other issues or stuff you want to implement in 1.12? Because currently we are in the process of triaging the 1.12 candidates. I think we can combine your, you know, proposed work items together, try to track them internally, and discuss them with you guys. So that's the one for the existing resource policy; are there any other items?
F: Oh, you don't need to answer it right now, but I believe you all have the maintainer permission, so you can, you know, create the issue, label it as a 1.12 candidate, and ping us on Slack, and we can triage it, because we are having some internal discussions to try to audit them, and later we can...
F: ...you know, gather at the community meeting, or hold an ad hoc Zoom session to go through them. And, you know, finally, by the end of the feature freeze for 1.12, as with previous releases, we will have a relatively concrete workload planned. Would that work for you guys? Just in case, you can also, you know, have some offline discussion internally, within the Red Hat folks and your customers, to see what you want to do in 1.12.
G: I just have one question: hey, Daniel, do you guys have any plans for non-admin backups?
F: For non-admin? What do you mean? But currently, I believe we support, you know, running Velero with arbitrary permissions, just as long as it can handle this, right? They have enough permission to create the resources and do the reads or writes, calling the APIs. As long as they have sufficient permission, and they are not admin, I think that's doable.
F: Yeah, yeah. From my understanding of this problem, I think it's all about this: you use an account to do the reads and writes for certain resources during the workflow; whether it's admin or not...
F: ...is not really relevant. But normally, when you try to write any resources, you need relatively higher permissions.
H: We may, in the future, have, you know, some proposals that we will speak to in a community meeting. I'm not sure if we're really prepared to do that here. Yeah.
A: Yes, if we talk about multi-tenants...
F: I recall (I'm not sure whether it was you who asked, or other folks at Red Hat); I mean, Pradeep mentioned that he had some discussion with the Red Hat PMs, and you guys think we want to work on multi-tenancy. But there's a long list of, you know, work items in the backlog, and Pradeep said he convinced you guys that we can, you know, hold the multi-tenancy discussion and work on other stuff.
H: Yeah, I mean, you're probably correct. That information has probably changed a bit since then, and I'm not sure. I mean, we have some thoughts on various implementations that probably aren't much past just thoughts at this point, to be honest. So we could come back with something more concrete after we discuss it internally.
F: Yeah, and if you think multi-tenancy is a really, I mean, a high-priority requirement, make sure Pradeep is on the same page, because we have a regular sync with Pradeep, and Pradeep tells us about, you know, the results of the discussions that happened between him and the different external partners.
F: Sure, yeah, okay. Maybe, as easily as you can, you can ping him on the Kubernetes Slack, or somehow, just as long as he's on the same page. I think we are happy to have the discussion, but as for how to implement that, it may require a lot of discussion, especially for multi-tenancy, because that term seems to be pretty flexible; it means different things to different people. So we want to make sure that when we talk about multi-tenancy, we are all talking about the same thing.
A: Okay, if not, I think we can finish off here. Have a good day, or a good evening.