From YouTube: Velero Community Meeting - Nov 10, 2021
A: Hello everyone, and welcome to the Velero community meeting. Today is November 9th, 2021, and per usual we've got some status updates and discussion topics, and then some fantastic contributor shout-outs. So let's get started with status updates. First up we have Daniel.
C: Sure. So, let's see, I've been working on addressing reviews on the ItemSnapshotter PR. So thanks everybody for those: Daniel, Bridget, and J. Glick (I'm not sure of the first name there), definitely appreciated. I spent a fair amount of time last week working with the CAPI folks, for Cluster API, looking at backup and restore issues there. Essentially there are some issues right now with rolling back a cluster. So if we have a CAPI cluster that's managing other clusters and we take a backup...

C: Then we do some changes, and some infrastructure gets changed, and then we try to restore back to the old state. That's not working real well at the moment, so we're working on figuring out what the right strategies for that are going to be, and hopefully we'll come up with something. And then I'm also working on the Astrolabe demo stuff, and we're doing things like gRPC plugins in Astrolabe, so lots of fun.

C: We're trying to get to the point where we have a bunch of different things that work through Astrolabe. So right now we can do volume snapshots.

C: There was a demo where we could move a disk from AWS to vSphere using the Astrolabe APIs, and we're working on a Postgres PoC that'll take the Zalando operator and snapshot the database using Postgres backup rather than snapping the disks, and we're just about finished up with the restore part of that. What else? Bridget's working on CRD-based APIs, and I'm working on a plug-in mechanism so that we can have things come and go inside the Astrolabe framework itself, rather than linking them all together. Once we have some of those pieces working, we'll be able to demo the pieces individually, and then with the ItemSnapshotter.
A: Do you have a timeline? I'm just curious, because I would like...

C: We should be able to do some things; I think we can probably show some things in another couple of weeks, with the Postgres and the Kopia repository.

A: We'll see if we can schedule that for... is it the first week of December, or something like that?

C: Yeah, yeah, we can just put that in the community meeting and do a quick demo of where things are at. We still need to put together the whole, like, "here's the whole big shebang of everything," but just a quick demo would probably be fun.
B: Yeah, recently I've been working on the CSI support for the AWS driver, or the Velero AWS plugin.

B: While I was working on this, I saw an issue regarding a piece of code that was committed a few years ago. Shall I discuss it right now, or wait till the discussion topic section?

B: Okay, okay, so the issue is that Velero collects the availability zone via a label, but this doesn't work for PVs provisioned by CSI.

B: I'm going to discuss it later. Another issue is the pre- and post-hook issue, 4268: a developer from Red Hat pointed out that the PV snapshot is not reliably triggered between the pre and post hooks. Actually, while investigating this issue, I found some additional issues; currently I'm still doing some investigation.

B: It's really weird that even for one pod and one PV, if I write some file in the pre-backup hook, the new file is not backed up in the snapshot. Currently I'm not sure why. I discussed it with Scott, and he also did not know why, so I'm still doing some investigation on that.
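For readers following along: Velero's pod backup hooks are configured through annotations, and one plausible explanation for a file written in a pre-hook not showing up in the snapshot is page-cache buffering, where the write has not been flushed to the volume before the snapshot is cut. The sketch below uses Velero's documented hook annotation keys, but the container name, paths, and commands are made-up examples, not the repro from the issue being discussed.

```go
package main

import "fmt"

// Velero's documented pod backup-hook annotation keys. The commands are
// hypothetical: syncing the filesystem in the pre-hook forces buffered
// writes onto the volume before the PV snapshot is taken.
func main() {
	annotations := map[string]string{
		// Run before the snapshot: write a marker file, then flush it.
		"pre.hook.backup.velero.io/container": "app",
		"pre.hook.backup.velero.io/command":   `["/bin/sh", "-c", "date > /data/pre-hook-marker && sync"]`,
		// Run after the snapshot completes.
		"post.hook.backup.velero.io/container": "app",
		"post.hook.backup.velero.io/command":   `["/bin/sh", "-c", "rm -f /data/pre-hook-marker"]`,
	}
	for k, v := range annotations {
		fmt.Printf("%s: %s\n", k, v)
	}
}
```

If an explicit sync in the pre-hook doesn't change the behavior, that would point back at the hook-ordering problem described above.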
A: Okay, thank you, Daniel. Any other questions or comments for Daniel?

A: All right, Bridget.
D: Hi everyone. So last week I had a sync meeting with Fong to talk about the plug-in versioning and the work that he's been doing on that so far, which is really great to see. I've been experimenting with the branch a little bit, because I was trying to see if we could pull out the part that just does the code rearranging, rather than the next step, which is adding in all the context stuff.

D: And while I was working on that, I realized that there might be some things missing from the design doc in terms of where all the code should go. The plug-in code is pretty complex and there's lots to it, and I think there's a section I've missed in rearranging stuff for the versioning. So I'm going to try and figure that out, and then I'll update the design doc to encapsulate that.
D: ...that information. So, as Dave mentioned, we've been working on Astrolabe, and I've been working on the CRD design, experimenting a little bit with trying to add a controller into the Astrolabe server and the demo environment that we have, just to experiment with that and see what the best approach is going to be.
D: But that's all very much work in progress. And then we had a bug report from an internal team; it was something that had been reported by the community, but then I think they solved their issue and closed the bug, and then we kind of lost track of it. There was an issue with the CLI where, if you set up an additional backup storage location, the credential field always gets set.

D: Even if you didn't try to specify credentials, that then causes an issue later, because it tries to retrieve a secret with a name that's empty. So I've put in a fix for that, and I think I'd like to get that in for the 1.7.1 release if possible. I don't know where we're at with the timing for that, but that's what I've been working on.
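A minimal sketch of the shape of that fix, under the assumption that the bug was an always-populated credential field: only build the secret reference when the user actually supplied one, so the server never looks up a secret with an empty name. The helper name is hypothetical; SecretKeySelector is the standard Kubernetes type used for a BackupStorageLocation's credential reference.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// credentialFrom returns a secret reference only when the user supplied
// one; returning nil leaves the BackupStorageLocation's credential field
// unset instead of pointing at a secret with an empty name.
// (Hypothetical helper for illustration.)
func credentialFrom(secretName, secretKey string) *corev1.SecretKeySelector {
	if secretName == "" || secretKey == "" {
		return nil // nothing specified: fall back to the default credentials
	}
	return &corev1.SecretKeySelector{
		LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
		Key:                  secretKey,
	}
}

func main() {
	fmt.Println(credentialFrom("", ""))               // <nil>: field stays unset
	fmt.Println(credentialFrom("bsl-creds", "cloud")) // populated selector
}
```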
A: Awesome, thank you. So this...
A: Okay, okay, so yeah, because I was wondering why it was still open. Yes, cool.
A: All right, Venkat.
G: Oh, all right. I think the best way to explain this particular issue that I'm working on is by pulling up the RFE, "option to delete and recreate objects." The use case we care about is very similar to this. I almost think that the issue my team member Pradeep brought up is a duplicate; the only difference between these two issues is the solution.

G: So here it says... yeah, I'll just read the user story: "As a cluster operator, I want to use Ark (now called Velero) as a mechanism to keep two clusters in sync. This might be prod A and prod B or, alternatively, every night mirror production to staging, so that we have a fresh environment for testing in staging." And that's exactly the use case we, the Tanzu migrator team, have. And currently with Velero, what you...

G: What you have to do is, you can do backups on a schedule, but if you want to do a restore you're going to have to start from a fresh cluster, right? But we want to save time; we want to keep up a kind of dormant cluster. That's exactly it. Here they call it staging, but our team is going to call it dormant.
G: Exactly, and it saves time, because some of these clusters can be very big and it takes a lot of time to do a restore. So we want to save that time by doing the restore ahead of time, and then, when a disaster does happen, we want to be able to bring the cold one up really quick; it's as fast as scaling up.

G: In order to have that cluster prepared, we need to be constantly restoring, which means we need to do what we call incremental restores, though that's also kind of a misnomer. What you want is, you take the diff; you take the difference from the backup. So you have this cluster that's supposedly up to date with your production cluster, and then you take a backup of your production cluster and restore onto this cold cluster, which is... I'm sorry, does it...
C: I mean, the idea is that we could update resources, right, rather than... so right now we either add new resources or we leave them alone. Yes, and that's kind of the philosophy that was taken early on. Exactly. And yeah, there are several different things that might play into this. So one would be updating resources in place.

C: It gets ugly, for example with PVs. With file system backups it might be possible to patch things, but usually things are live. So if there's a volume that's live and actually in use and we change it, that will actually break a bunch of applications.

C: Like, say you have a Postgres database running and we just go and start overwriting its files; the database will most likely crash. It's not going to be happy. So that's one potential issue. Block volumes are even worse: if we start slamming block volumes in without unmounting and remounting the volume, we will crash the worker nodes; we'll actually get a kernel crash.
G: What does Restic do when it does incremental, like, snapshot stuff? What I'm...
C: I mean, backup versus restore are different things. On backup, all you're doing is looking for changes and storing them away. On restore, when you have something that exists and you're trying to bring it in sync, it's something you can do when the thing is unmounted or not in use, right? So, like, say you're using rsync; let's take rsync as an example.

C: Now, if you have two hosts, and you've got applications running on host A and applications running on host B, and you rsync from A to B with the applications running, bad things will happen. You know, if you have the cold cluster like you're...

C: ...talking about, where B is not working, then you can do things like update via the file system and change files, and the applications aren't running, so they don't care, right? And as long as it looks good when they start, they're good. With block volumes, if we're working at the block level, the volumes have to be disconnected from the worker nodes.

C: Be mindful of... so, like your case, where you're saying, hey, we're going to shut down; for example, the applications would be scaled down to zero. Then, ideally, the PVs would be detached at that point, because there are no pods running, so we can do things like update the PVs and update the Kubernetes resources. There are still issues with things like running controllers.

C: That's the update issue, and then the other issue is: say you delete something in your source cluster. When we restore, should we delete it from the destination cluster?
C: Yeah, so we'd have to add that. So it's probably something where we're better off looking at how we can share infrastructure and build a sync application, rather than a backup/restore application, because my experience has been that people don't expect backup/restore applications to delete data or overwrite things for them, and when it does happen they get really bent out of shape, because they weren't expecting it. I had this even with a desktop backup: I built a desktop backup product, and one of the features in it was clone, which we defined as "we're going to make your destination disk look exactly the same as your source disk."

C: And people would use this and they'd say, hey, why did you delete all the files on my destination disk? It's like, well, because you cloned it; you wanted it to look exactly the same. What did you expect it to do? And they're like, well, I didn't expect it to delete all my files. And I wound up with like five layers of click-through: "do you really understand this is going to delete stuff?"

C: So it's not an easy question, is the problem. It's definitely something we can discuss, and we can look at what the right place is: whether it's part of backup/restore, or whether we're really looking more towards, hey, let's make Kubernetes sync, and go down that path.
E: In our backup storage... I mean, in our backup and restore solution, we have all of that taken care of at a little bit higher layer than the app level, so we only see whatever we need to restore: the metadata.
E: However, when you are talking about restoring to the original namespace, where the application is running and using the PVC, then we do exactly what Dave just mentioned: first we have to scale down the application, so that nothing is running at all.
E: We guarantee that. But the problem is that that is not like an in-place restore; you have to shut it down, and that may interrupt the customer's use, and probably that might not be...
E: ...a restore scenario, compared to restoring everything to a new namespace and just, you know, shifting over to that namespace. That's just my opinion; it depends.
G: I was thinking, you know, not to address all the problems with the solution to this issue; all the problems meaning having to scale down all the pods to zero, and, okay, that's probably the biggest issue.

G: Maybe, since we as a team are introducing hooks (we have the pre-backup and post-restore hooks that we're introducing), we're expecting that if you want to use this flag, maybe a force-restore flag, you're going to have to know the implications of using it. No? You don't even want an option for force? You don't want to...
G: Like, a lot to think about.
G: I can't remember what those fields are, but it strips those away to compare the two objects; if they're different, then it throws a warning. But I'd say, instead of a warning, just go ahead and update the resource.
C: Well, no, because there are multiple layers now, right? So we may have an operator that relies on another operator. Say, for example, we have the Harbor operator working with the Postgres operator, and on side A we updated the Harbor thing, which then updated the Postgres thing; okay, good, so these are all reconciled properly.

C: Now we do backup and restore, and if we, for example, change the Postgres resource before we update the Harbor resource, what's the reconciliation going to look like when we do change the Harbor resource? That's a change, and will Harbor be able to reconcile it properly? I don't know; it's just hypothetical, right? It's...
G: Yeah, that makes sense. Okay, cool, I'll keep thinking about it with my team, and then we can discuss it more outside this meeting.
C: Yeah, maybe we can start an epic or something, because I think it's worthwhile to think about. I mean, certainly the sync case is pretty cool, and whether or not that's a Velero thing, or whether we share infrastructure to have a separate sync utility, I think we need to at least work through the issues that we can see before we start going too far down that path.
G: Right, and it seems so easy, right? It's like, oh, come on, just update the resource; but there's so much more to it than that.
C: Yeah, it's a little trickier, unfortunately. Maybe there's just...
A: What are the next steps here?
G: Talk with my team about the issues that Dave and Fong brought up today, see if they have any ideas around it, and then go back and forth with Dave on those ideas, yeah.
C: When Eleanor's back, which should be next week, why don't we start making this an epic that would just have an investigation: figure out what a good solution for doing incrementals would be, or, you know, let's start listing out what the actual use cases are. Because I think there's one use case, which is, hey, we want to keep two clusters in sync, which is a little different from backup and restore. And then there's things like, I want to roll...

C: ...my cluster back to a known state; is that feasible or sensible? Because one is moving forward, right: we're bringing B up to date with A. And then there's the roll-backwards case: we're actually going to take things back in time and update everything so it's what it was before.
A: Awesome, thank you both. All right, next up we've got Fong.
E: Yeah, it's more like another issue, but I actually want to ask whether there's any update the Velero team has on the vulnerability issue. Like, anything; we talked about this, and we are thinking about doing it in 1.7.1, but I just want to get any updates on that, if you guys have any.
B: November, yeah, it's around the end of November. And I want to point out, as for the CVE fixes: I think there's one CVE with a relatively high score in the list shown, and for that one we really depend on Dave and team to fix it. We need to double-check before we release 1.7.1, but if there is no fix, that CVE will still be there.
B: Yeah, can I share my screen?
B: Yeah, so this is regarding how Velero collects the availability zone info when it's taking a snapshot of a PV. Currently, the way Velero works, when it takes a snapshot of the PV it will try to get the availability zone from the label on the PV.
B: Here in the volume snapshotter, in the create-volume-from-snapshot function, the create volume request requires the volume availability zone, especially for AWS, because you want to create an EBS volume.

B: That will cause the snapshot spec to look like this.

B: It doesn't have the volume availability zone, and then this empty string will be passed to the AWS plugin to create a volume, and it will fail. So the current code doesn't work for...

B: ...you know, the current way to gather the availability zone information doesn't work for CSI PVs.
B: The label is not AWS-specific, but currently I'm thinking the solution is to add a backup item action to modify the PV spec: read the node affinity information and add the labels, so that when this chunk of code is triggered the label is present and the thing can move on. Because a different cloud provider may have different requirements for the availability zone; for example, for some cloud provider, maybe it's okay if you don't have this information and the PV will still work, and they may have different ways to get the availability zone information if it's needed.
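A rough illustration of the backup item action idea being described: pull the zone out of the CSI PV's node-affinity terms and copy it into the zone label that the existing snapshot code already reads. The topology keys listed are assumptions about which keys a driver might set (the generic topology.kubernetes.io/zone, the legacy failure-domain key, and the EBS CSI driver's key); this is a sketch, not the actual plugin code mentioned in the meeting.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// zoneFromNodeAffinity walks a CSI PV's node-affinity terms looking for a
// zone topology key, so the zone can be copied into the label Velero's
// snapshot code already reads. Illustrative sketch only.
func zoneFromNodeAffinity(pv *corev1.PersistentVolume) string {
	if pv.Spec.NodeAffinity == nil || pv.Spec.NodeAffinity.Required == nil {
		return ""
	}
	zoneKeys := map[string]bool{
		"topology.kubernetes.io/zone":            true, // GA topology key
		"failure-domain.beta.kubernetes.io/zone": true, // legacy key
		"topology.ebs.csi.aws.com/zone":          true, // EBS CSI driver key
	}
	for _, term := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
		for _, expr := range term.MatchExpressions {
			if zoneKeys[expr.Key] && len(expr.Values) > 0 {
				return expr.Values[0]
			}
		}
	}
	return ""
}

func main() {
	pv := &corev1.PersistentVolume{} // imagine this came from the backup
	if zone := zoneFromNodeAffinity(pv); zone != "" {
		if pv.Labels == nil {
			pv.Labels = map[string]string{}
		}
		// The label the existing snapshot path looks up.
		pv.Labels["topology.kubernetes.io/zone"] = zone
	}
	fmt.Println(pv.Labels)
}
```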
B: I also asked Nolan why this decision was made, and he's also not sure, because this was committed before he joined the team. I also tried to reach out to the author of the particular commit, but got no response, so I think maybe we should move on with what we think is right.

B: Yeah, before that, I mean, we check whether the PV was provisioned by the in-tree driver, like this one.
C: So my question would be: do we need to add... you know, would it be something we can just put into the core code?
B: Yeah, the issue is that, like I mentioned, it may only be required for the AWS plugin, and we'd also need to look for an AWS- or cloud-provider-specific key in the node affinity requirement. I'm not sure if that's the right thing to do if we put it in the core code. I had written some code to update the AWS plugin to add this backup item action.
B: I will also think it through with Wenkai offline, because maybe there's duplication of code like this already; I'm not sure how to share code across plugins, so there may be some duplicated code, right?
C: No; eventually we would want to merge this into the PVC CSI stuff, so I wouldn't worry too much about how clean this code is, as long as it works.
B: Another impact of this approach that I want to discuss with you guys is that the... I hope you can see my screen; I'm typing here.
D: Yes, I can do that. So this was a change from Frankie, who's on the call here; thank you very much, Frankie, for adding this. This was, I believe, an issue where Velero used to support having wildcards in the excludes, and then there were some changes that went in.

D: When I reviewed that code I didn't even realize that we supported wildcards in the excludes, so thank you very much for spotting that and fixing it.
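For anyone unfamiliar with the feature in question: include/exclude filters like these are typically matched as shell-style globs rather than literal names. Below is a small self-contained sketch of wildcard-aware exclude matching; it is illustrative and not Velero's actual filter implementation.

```go
package main

import (
	"fmt"
	"path"
)

// excluded reports whether name matches any of the exclude patterns,
// treating each pattern as a shell-style glob ("*" matches any run of
// characters). Sketch only; real filters have more rules than this.
func excluded(name string, patterns []string) bool {
	for _, p := range patterns {
		if ok, err := path.Match(p, name); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"*.velero.io", "events"}
	for _, r := range []string{"backups.velero.io", "events", "pods"} {
		fmt.Printf("%-20s excluded=%v\n", r, excluded(r, patterns))
	}
}
```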
D: This one is from Ali Patel. I didn't review this one, but it looked like there was an issue in the pager function that we're using for listing things when we're paging through results from the Kubernetes API. I'm not sure what the issue was here; this is where I find that I can't read under pressure when I try to read the PR descriptions.
D: Sorry, I can't read and parse that under pressure, but thank you for fixing the bug in the pager function that we're using for the Kubernetes API. So thank you very much. Yeah, so I think it's just...
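For context on what a pager does here: large list calls against the Kubernetes API are split into pages via a limit and a continue token, and the client loops until the token comes back empty. The sketch below shows that loop shape with a made-up listPage function standing in for the API call; it is not the code from the PR under discussion.

```go
package main

import "fmt"

// listPage is a stand-in for a paged Kubernetes list call: it returns one
// page of items plus a continue token ("" means no more pages).
// Hypothetical function for illustration.
func listPage(token string, limit int) (items []string, next string, err error) {
	all := []string{"pod-a", "pod-b", "pod-c", "pod-d", "pod-e"}
	start := 0
	if token != "" {
		fmt.Sscanf(token, "%d", &start)
	}
	end := start + limit
	if end >= len(all) {
		return all[start:], "", nil
	}
	return all[start:end], fmt.Sprint(end), nil
}

func main() {
	var results []string
	token := ""
	for {
		page, next, err := listPage(token, 2)
		if err != nil {
			panic(err)
		}
		results = append(results, page...)
		if next == "" {
			break // final page reached
		}
		token = next // pass the continue token back, like a real pager does
	}
	fmt.Println(results)
}
```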
A: I think this is the one that got fixed.
D: So I think this was updating the sample in the Helm chart to also show how to enable the CSI plug-in; there's a flag that needs to be set, and then also showing how to add the init container for the CSI plug-in. So if that's what you want to use, it's hopefully easier now to enable. So thank you very much for that.
A: Awesome, yeah, that is it for contributor shout-outs this week. Thank you as always, Bridget, for being my helper here, and thank you to everyone who has contributed to the project. Thank you, everyone on the call, and have a fantastic rest of the week. See you all next week. Bye, folks.