From YouTube: 2021-05-18 Rook Community Meeting
Description
No description was provided for this meeting.
A
Okay, the recording has started, and this is the May 18th, 2021 Rook community meeting. Letting some more folks into the call here now. All right, let's get started with our milestones here: 1.5.
A
B
A
Nice. So yeah, we don't have anything on the board at all, so it doesn't seem like there's a case yet for doing another patch release, unless maybe there's something on the 1.6 board that looks like it's worth backporting. But does anybody want to raise a hand for the necessity of a 1.5 patch release?
B
A
Yeah, I guess we do have some fixes that have been merged and are sitting in the release branch that have not gone out. So keeping a regular train going, like every couple of weeks, seems fairly reasonable. And then, if there are future fixes in the branch...
B
A
That makes... I mean, that does make sense. All right, let's take a look here at the 1.6 board.
A
Yep, and so we're working towards another patch for 1.6. Travis, do you want to talk about some of the highlights that'll be in that patch?
B
Let's see, I'm trying to see your screen... event reporting. Oh, this. I think this is an interesting one. So up till now we haven't been reporting events on our CRs, but now, with this change, we will. The most important thing being: if a reconcile fails in the operator, we'll be able to see a Kubernetes event on the cluster CR reporting that error. It's been kind of hidden; you had to look at the logs, or it's just been harder to see.
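(For illustration only: a minimal sketch, assuming a controller-runtime style operator that gets a record.EventRecorder from its manager, of how a failed reconcile could be surfaced as a Kubernetes event on the CR. The function name, reason strings, and wiring are assumptions for this example, not Rook's actual implementation.)

package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// reportReconcileResult attaches the outcome of a reconcile to the CR itself,
// so that describing the CephCluster shows the error instead of it being
// buried in the operator logs.
func reportReconcileResult(recorder record.EventRecorder, cluster runtime.Object, err error) {
	if err != nil {
		recorder.Eventf(cluster, corev1.EventTypeWarning, "ReconcileFailed",
			"failed to reconcile: %v", err)
		return
	}
	recorder.Event(cluster, corev1.EventTypeNormal, "ReconcileSucceeded",
		"cluster configuration is up to date")
}

(The recorder would typically come from the controller manager, for example mgr.GetEventRecorderFor("rook-ceph-cluster-controller").)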
A
I found the events to be really, really useful, especially because they do a really good job of capturing debugging information and, you know... "events" is the best way to say it, but milestones for an object's lifecycle, all in one place. And so I think it's much easier for people to access that, especially on the consumer side.
A
Instead of having to look at the logs for a pod to find troubleshooting and observability information. So I'm always a huge fan of events; I did change my tune on that about a year ago or so, and it's been super useful since then. So yeah, definitely a vote for events for sure.
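(On the consumer side, those events are what kubectl describe surfaces on the CR; here is a minimal client-go sketch of querying them directly, where the namespace "rook-ceph", the CR name, and the kubeconfig path are assumptions for the example.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// List only the events attached to the CephCluster CR named "rook-ceph".
	events, err := clientset.CoreV1().Events("rook-ceph").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=CephCluster,involvedObject.name=rook-ceph",
	})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		fmt.Printf("%s %s %s: %s\n", ev.LastTimestamp.Format("15:04:05"), ev.Type, ev.Reason, ev.Message)
	}
}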
D
C
A
Yeah, what would the scope of this be? Is this for all of the major Ceph objects, or just some highlighted ones, like at the cluster level?
B
Well, this PR is just the first implementation of the Ceph cluster events, the result of the CephCluster reconcile. But yeah, we want to add more events for each of the CRs' reconciles after this, not just the cluster, so we'll need follow-up PRs to do it for all the CRs.
B
Yeah, I mean, there are other fixes that will be in the patch release this week. I'm going to forget all of them on the spot, but it's all goodness, and if there's anything that really needs to be in the release...
D
A
Okay, if anybody has anything else to say for 1.6, then just holler right now; otherwise we can continue moving on. Okay, so for the community topic section, I just wanted to open the floor for folks to share some of their experiences from KubeCon.
A
Last time we cancelled the Rook community meeting in order to be able to focus on KubeCon and avoid long days for Europeans or crazy hours for North America folks. So did anybody want to share how some of the Rook sessions went? Travis, you did a lightning talk, we did a deep dive session as well, and the meet-the-maintainer track; does anyone want to share some of the learnings or observations from those?
B
One thing I wish would have been different, as far as what was supposed to happen: they said no Q&A for the lightning talks. I'm sure that's just because the speaker could easily be overwhelmed with tons of questions from the larger number of people that attend the lightning talks.
B
But people were asking the questions anyway, and I was doing my best to keep up with them. So it was good to see that and see the questions. People have questions, they want to know more; there's just a chasm in how you get them into the Rook community, like into the Rook Slack afterwards, to get them to follow up on questions. Because once they're out of that lightning block, I don't feel like we got any of them to join. Or maybe they did and I just didn't know who they were.
B
So that's all for me. I know Saturday was the deep dive talk. It was pretty early for Blaine. Blaine, you didn't make it, did you? It was just too early.
E
Oh yeah, it was just really, really early, yeah. So Tori had good things to say; I guess there were like 10 questions or so that he was able to answer.
A
Yeah, I mean, the value of those in a virtual setting is still, you know, somewhat questionable, so that's not a huge loss. I think the lightning talk and the meet-the-maintainer track stuff, I think those have more value for being able to connect with people and really share the message and answer questions, more than the booth does.
B
...to the next conference, where they're trying to do a hybrid event with some people in person, because in person is, I don't know, more effective for talking to people.
A
The halls will be pretty crazy. Okay, so let's move along then to just a quick update for the CFP; I think that's this week, on Sunday evening: the call for proposals for the next KubeCon. Because you don't get too long of a break in between KubeCons, folks, the next call for proposals is due on Sunday evening. So if you have some talks or some ideas you want to submit, I think Travis has linked it right here in the doc, and that will be for the KubeCon...
A
That's scheduled to be the hybrid event we were talking about, there in Los Angeles, California, this October. The lightning talks and maintainer stuff are always out of cycle, always done in a different track with a different process, so we do not have to have anything submitted for those by Sunday night, at least. All right, CI improvements. I know we've done a whole bunch of stuff to continue our migration off of Jenkins, which is awesome. Do you want to give us an update about that and all the details?
B
Yeah, so, lots of good progress. We're getting close to turning off Jenkins, I feel like. So where we are: master builds are publishing with the GitHub action, and what that means is, well, pretty soon we'll be able to do that for release builds as well. We're just working out one last kink with it, so it picks up the right tags, but yeah, we're publishing.
B
So what's next, to really be able to turn off Jenkins and go full-on with the GitHub Actions: we need to convert the NFS operator tests to run as an action; that's the last integration test that's not yet an action. We've got to do some little things like updating the badge that we show on GitHub to pick up those actions instead of Jenkins, then disable the Jenkins tests for PRs and the master branch, because then there will be nothing left that they need to run.
B
D
Missed it, okay; yeah, you took me off guard, Travis. So no, I think I was thinking for quite a while that we could add the badges, for sure, and that's definitely something that would be good to highlight. It's mostly cosmetic, but still, I think it's good to have. Yeah, I think we're on the right track, honestly, to move out of the Jenkins inconsistencies we have.
D
B
D
We gather the results, I think, but overall, yeah, it has proven to be much more reliable and stable and well integrated, even the reports. And I think this is also stepping a little bit down the path of running more tests against Ceph master, because I feel like we only start testing a newer release once it's like two weeks from being out, and I've always had that dream; I always wanted to be as close as possible to Ceph master, so that we could just quickly make sure we don't miss anything.
A
I'm definitely excited about being able to fully retire Jenkins as well.
A
It served well for years, but it also didn't.
A
Yeah, exactly. Cool, okay, sounds good; thanks for the effort on that, everybody. And then the next item here is: I took a quick look at that PR, Travis. Do you want to share a little bit more context here, and then maybe an explanation of some of the impact? That was something I would... yeah, yeah, motivation and impact.
A
B
Basically, for all these fields... the Cassandra operator only uses one of them; it only uses volume claim templates. And then the NFS operator, I don't think, used any of them. And if an operator doesn't need the types, does it really make sense to have the overlapping types, where if one operator wants to add a type, now the other one gets the type in its schema too, but then it doesn't use it? Which is kind of awkward.
B
A
And then, walk me through a little bit more on how, if we put this into... it just doesn't sound like something we'd backport, unless the migration is super simple. But say we release 1.7 and somebody does an upgrade to the 1.7 binaries. We're thinking that the struct, the serialization or deserialization of a CR, would be functionally equivalent in terms of its serialized JSON, or the way it's stored inside of the API server.
A
So the Go types, the packages that they're imported from, etc., might have changed, but serializing and deserializing on that round trip still matches the same schema. So it would round-trip just fine and go along without really even having to care about it. Do we have to do any sort of mutations or, you know, conversion hooks or anything like that?
B
Yeah, we're fine; you're right, because the JSON deserialization will just pick up the types in the new package. Even though the types moved packages, it's the same JSON underneath, and so I haven't found any conversion needed. The upgrade tests that we already have are working fine with this change. But yeah, I don't want to backport it; I just want to leave it for the 1.7 release.
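(A minimal sketch of why no conversion hook is needed: the serialized form stored by the API server is determined by the json tags, not by which Go package the struct lives in. The package layout and field names below are illustrative, not Rook's actual types.)

package main

import (
	"encoding/json"
	"fmt"
)

// OldSpec stands in for a type in the old shared package.
type OldSpec struct {
	UseAllNodes bool `json:"useAllNodes"`
	DeviceCount int  `json:"deviceCount"`
}

// NewSpec stands in for the same type after moving to a provider-specific
// package. The Go identity changed, but the json tags, and therefore the
// bytes stored by the API server, are identical.
type NewSpec struct {
	UseAllNodes bool `json:"useAllNodes"`
	DeviceCount int  `json:"deviceCount"`
}

func main() {
	stored, _ := json.Marshal(OldSpec{UseAllNodes: true, DeviceCount: 3})

	var upgraded NewSpec
	_ = json.Unmarshal(stored, &upgraded) // round-trips with no conversion hook
	fmt.Printf("%+v\n", upgraded)         // prints {UseAllNodes:true DeviceCount:3}
}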
B
A
Okay, right, that sounds good then. It feels really comfortable from that standpoint, and it's being exercised in our integration tests and the release tests in our master builds, like from 1.6 to current master, for both.
B
The upgrade tests will still need to be updated to go from 1.6 to master; today it's 1.5 to master, which is, I guess, even more telling that, yes, it should work. But yeah, maybe I'll make the change for the 1.6 upgrade test while I'm at it, so we can confirm that upgrade.
A
And that upgrade test would be exercised within this PR, right, with this code still in this branch that has not made it to master, right?
A
Right, okay! Well, that sounds good then, from a, you know, risk or migration standpoint. I think I do like that we still have some common functionality in terms of logic or functions that can be used by multiple providers, so that not every provider has to implement everything themselves.
A
I do like that part, and I can see a case here for not requiring each provider to take on more than they need, basically, or feeling kind of constrained by one provider wanting to add a type or a field to that shared type and then that having an impact on the other providers, etc. So I could see the point for that, yeah.
A
I think it is definitely kind of another step away from an idealized goal of having, you know, a lot of shared logic and shared types being a common framework foundation for a lot of providers. But that trend, you know, hasn't really been able to get the investment that we've wanted for quite a while now anyway, so it's not like we're taking a strong divergence from what we've been doing in practice recently, I suppose, right?
B
A
Yeah, that's cool, okay, cool. Yeah, okay, that makes sense to me, I suppose. Then, yeah, thanks for taking the time to explain a little bit more. I went in and looked at it this morning and I got a bit of an overview of what was moving around and stuff like that, but I also saw, you know, thousands of lines moving, and I was like: the community meeting is today, I'm just gonna have Travis explain it to all of us when we're all...
F
Yeah, I've been using Rook recently and have been running into a couple of situations where I can't get things resolved. I have some tickets open; I'm not sure what the process is. Is this something we're doing wrong, or what? Because, for example, if I look at ticket number 7756, you know, we collected the logs and everything else, but we don't seem to make any progress as far as resolving them. So I need your advice: what is it that I can do to speed up the process?
F
Yeah, as well, so we have a couple of tickets that we're blocked on. We're trying to bring up a cluster with Rook, and we seem to be blocked on two issues.
B
F
B
Yeah, so I guess sometimes we just need a reminder in Slack, you know, just to ping it again there, because we try to keep up on Slack on a daily basis. So if you don't see a response in Slack within 24 hours, you know, feel free to ping again and just say: hey, still struggling here.
F
I can bring up the Prometheus metrics but not the dashboard, for that instance. And then the other one is provisioning the PVCs: it doesn't go to the Bound state, it stays in Pending. So I captured all the logs that I think we needed; I'm not sure where the issue is.
C
D
C
F
C
Well, at least from a first look: you seem to have issues with the manager dashboard setup right now. Maybe an update will fix it; like you said, you're on Ceph version 15.2.8, so maybe it's just a Ceph update which resolves this, for whatever reason there is. On the other hand, Travis, remind me again: which 1.5 release was the latest one?
C
Okay, so those would be, at least from my side, the first two things to look into for troubleshooting. First, a Ceph version update to at least the latest 15.2 patch release... are we already at three? I don't know, we need to check the Docker image. And then, if that doesn't resolve it, updating to the latest patch release of version 1.5 (not 1.15, we're not that far yet) and seeing how it goes from there, because at least from the logs I see you're running on 1.5.6.
C
I don't know, just to throw it in: updating to the latest patch release and/or the Ceph version to the latest one maybe already fixes it. At least for a few customers and projects we have, we basically ask them, hey, which version are you on, and then we're like: could you try updating to the latest? And sometimes it magically resolves itself. So maybe try updating and then report back; let's see how it goes, maybe.
F
My concern is, like, okay: if we upgrade to the latest release, we might see some different problems. I mean, we're trying to take care of these issues... maybe, you know, now that we have the logs.
B
C
Oh well, depending on who you're asking: if it's just a development cluster, it might be the perfect place to check whether the new version fixes it. Or, if you're running the same version in a production environment, then if we see that the update fixes it in dev, we can look into pinpointing it. But it's, well...
F
C
Okay, so at least skimming over the logs, the one thing that is a bit, I would say, suspicious is that it failed to start up the first monitor in time, at least; or, well, it failed to form a quorum with it. So that normally points to something weird with the network, or maybe it really was just so slow to pull the image.
C
C
If you have the same setup in another environment, like in two environments: let's try updating Ceph to 15.2.11 or 15.2.12, whichever it is, and see, based on that, whether it fixes itself. If it doesn't, try updating to the latest version, the 1.5.11 Rook Ceph release, and see if that solves it. And if it doesn't solve it, then I think we should start from there.
C
Like I said, right now it could be something that's maybe already fixed in a patch release of Ceph or of our Rook Ceph operator. Maybe. Okay.
B
C
B
A
Awesome, yeah, thanks everybody. Okay, and then the next item here on the agenda, around the rate limiting. So yeah, we should not have any rate limiting at all. We are approved by Docker Hub to be part of the open source program; there shouldn't be any issues with that whatsoever. I have followed up a couple of times to get... they were supposed to put, like, a badge on the repos that are part of the open source program, and they still haven't followed up on that.
A
So it's hard to get a little conclusion to that one, but we should not be seeing any rate limiting within the Rook repos themselves. Did this end up being something somebody is seeing, or is it that, you know, they're pulling from a lot of different repos and a lot of different projects and they're seeing some rate limiting in general?
B
E
B
And then a couple of us just forgot that we shouldn't have rate limiting, and we were commenting: oh, maybe we should publish or push to Quay or whatever too. But wait, no, we shouldn't have rate limiting; we shouldn't need to do that. Let's keep Docker Hub and call it good, yeah. So this issue's closed, but if you have any more insight that might help, feel free to comment on that issue.
A
...any rate limiting at all for Rook.
A
B
Yeah, I think I've captured most of the features that we're planning on for 1.7, and so, if there are any other last-minute things, feel free to comment on the PR, and we can always edit it later, of course. But this captures the current plan for 1.7, and the themes section below is kind of the future things that could even make it into 1.7, but more likely will come after.
B
A
...whole computer now, okay. And then, yeah, so folks can finish up their feedback or any other last requests for the 1.7 roadmap PR here, 799.10. And then, Blaine, you had an agenda item that you wanted, for properly handling resource deletion.
E
Yeah, this is, I guess... the problem statement is that currently it's possible to delete the CephCluster from underneath its dependent resources, like an object store or a file system.
E
Or, like, a block pool. And so this is a design; there are two design documents in here.
E
One of them will be selected as, I guess, the end result, but yeah, the basic idea is that we shouldn't be deleting a Ceph cluster from underneath resources that are reliant on it. And the second design is more around actually codifying the dependency relationships between resources.
E
More broadly, so even being able to say that our, like, CephNFS resources can be dependent on anything that has a pool, or, you know, a Ceph pool in this case, and then also codifying that there are object buckets, and soon COSI buckets, that could be dependencies. The graph is actually a really nice way, I think, to illustrate that there are these various dependency relationships that exist because of the CRDs
E
we have in Ceph, so that we can effectively prevent users from deleting things when they have dependents.
E
A
That would have been awful, definitely, yeah. There's a really good image here as well, to kind of capture and help visualize what the dependencies are and how we can express them.
A
Well, so yeah, it looks like you've gotten some feedback from folks. Is there, like, an action item for that you want, Blaine, to drive to conclusion on this one, or is it soliciting... just eliciting more feedback, or making this more visible for the rest of the community?
E
I think visibility, and if anyone does have particular feedback, I guess just, yeah, let me know. I think most of the feedback is really around implementation, which I think Travis and Seb are on top of.
E
But if there are particular use cases, especially around... like, I think one of the big open questions that we have, and that we're currently deferring until later in the design, is storage classes, and whether or not the existence of a storage class that references a file system or block pool is something that should block deletion of that file system or block pool.
B
Yeah, and if I were to summarize the goal, in my mind at least: the goal is to prevent accidental deletion of resources, so people don't lose data accidentally. But at the same time, we don't want to prevent intentional deletion or cleanup of a cluster; we don't want to make it too painful to uninstall, basically. So that's kind of the thought that I want to keep in mind.
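(A minimal sketch of the dependency-check idea being discussed, assuming a controller-runtime client: before removing the CephCluster finalizer, the operator lists the CR kinds that rely on the cluster and blocks deletion while any remain, so accidental deletion is prevented but an empty namespace still uninstalls cleanly. The kind list and function are assumptions for illustration, not the chosen design.)

package controller

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// dependentListKinds names CR kinds that rely on a running CephCluster; the
// exact set is part of the design discussion and is assumed here.
var dependentListKinds = []schema.GroupVersionKind{
	{Group: "ceph.rook.io", Version: "v1", Kind: "CephBlockPoolList"},
	{Group: "ceph.rook.io", Version: "v1", Kind: "CephFilesystemList"},
	{Group: "ceph.rook.io", Version: "v1", Kind: "CephObjectStoreList"},
}

// clusterDependents returns the dependent resources still present in the
// cluster's namespace. A deletion handler could keep the finalizer in place
// (and emit a warning event) while this list is non-empty, and let the
// deletion proceed once it is empty.
func clusterDependents(ctx context.Context, c client.Client, namespace string) ([]string, error) {
	blockers := []string{}
	for _, gvk := range dependentListKinds {
		list := &unstructured.UnstructuredList{}
		list.SetGroupVersionKind(gvk)
		if err := c.List(ctx, list, client.InNamespace(namespace)); err != nil {
			return nil, fmt.Errorf("listing %s: %w", gvk.Kind, err)
		}
		for _, item := range list.Items {
			blockers = append(blockers, fmt.Sprintf("%s/%s", item.GetKind(), item.GetName()))
		}
	}
	return blockers, nil
}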
A
Okay, thanks for putting that together, Blaine. That was the... that's all of the agenda items that are in the doc here. Are there any other further comments or topics anybody wanted to get addressed here?
D
A
All right then, yep, thanks everybody for joining in today, and a good discussion, and we will follow up on Slack and in PRs on GitHub.