From YouTube: Velero Community Meeting - October 8, 2019
A: Hi everyone, and welcome to this episode of the Velero community meeting and open discussion. With us today we've got the Velero team and the community, of course. We're going to dive right into it and go over some status updates, we have some discussion topics, and I believe Steve wants to do a demo of some new functionality as well.
C: So I've been looking at the issues with generated CRDs on Kubernetes 1.13 and below, and it seems like the issues are mostly around Kubernetes not initially supporting the nullable field in the schema. It doesn't seem like there's a good way around it, apart from changing the metav1.Time types to pointers. I think that would mitigate most of the issues: that way the client will just omit those fields, rather than setting them to null, when it creates objects, which seems like the right way.
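A minimal sketch of the pointer change being discussed, with a field name borrowed from Velero's Backup status purely for illustration:

```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Before: a value-typed metav1.Time serializes as null when unset, and the
// generated CRD schema on Kubernetes 1.13 and below has no way to declare
// the field nullable, so the apiserver's validation rejects the object.
type StatusBefore struct {
	StartTimestamp metav1.Time `json:"startTimestamp"`
}

// After: with a pointer plus omitempty, the client omits the field entirely
// when it is nil instead of sending null, so the schema never sees a null.
type StatusAfter struct {
	StartTimestamp *metav1.Time `json:"startTimestamp,omitempty"`
}
```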
C: It's kind of odd that the client is actually sending those as null anyway. I tried to see if there was a faster way to fix this in the meantime, but it doesn't seem like there is; there's no way to just disable the validation. I tried forcing the type using kubebuilder annotations, but since the OpenAPI schema doesn't support a null type, that doesn't work. So I think our only option here really is to switch to pointers for the metav1.Time types.
D: I mean, I don't think we really have an official policy; we've generally made a best effort to retain as much backwards compatibility as we can. I can't remember what version we currently say we support. I think it's whatever version has Deployments in the apps group, since we made that switch, which is, I don't know, 1.9?
C: Yeah, yeah. Let me go back and check on 1.9. 1.9 has the security issue, and I think 1.10 might have one as well that wasn't patched. Kubernetes only supports the last three releases, but that's not really realistic, because things like the managed Kubernetes offerings take some time to update as well. So yeah, I can have a look at 1.9 and 1.10 and see how they compare, things like that.
D: That sounds good.

C: And it may be that we have a way to just revert... sorry, there is another option that I forgot, which is: if we detect an older version of Kubernetes, we just skip the validations on the CRDs when we go and create them. That might be a good way of keeping that backwards compatibility.
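A sketch of what that could look like. The 1.14 cutoff follows from the 1.13-and-below issue discussed above, and dropping the entire validation schema (rather than just the affected fields) is an assumption for illustration, not a settled design:

```go
package crdcompat

import (
	"fmt"
	"strconv"
	"strings"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/client-go/discovery"
)

// stripValidationIfOld clears the OpenAPI validation schema from a CRD when
// the target cluster predates nullable-field support, so creation succeeds
// on older clusters at the cost of server-side validation.
func stripValidationIfOld(crd *apiextv1beta1.CustomResourceDefinition, dc discovery.DiscoveryInterface) error {
	info, err := dc.ServerVersion()
	if err != nil {
		return fmt.Errorf("getting server version: %v", err)
	}
	// Minor can carry a suffix like "13+" on some managed providers.
	minor, err := strconv.Atoi(strings.TrimSuffix(info.Minor, "+"))
	if err != nil {
		return fmt.Errorf("parsing server minor version %q: %v", info.Minor, err)
	}
	if info.Major == "1" && minor < 14 {
		crd.Spec.Validation = nil
	}
	return nil
}
```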
E: There is an updated version of the velero-plugin-example. There are also the updates that had to happen for the move to the new GitHub organization, and finally, I have the AWS plugin building an image, at least locally; I need to figure out how to properly push it to a repository. What I'm going to do next is move the documentation into this plugin project.
D: Makes sense. I know we touched base on this last week, but the only open PR I'm aware of right now that impacts the cloud providers is the Azure one that's in review. So hopefully, once we get that merged, the cloud provider code there will be kind of frozen.
F: I've been working on the KubeCon talk, and doing some internal testing on our vSphere CNS product with restic, making sure that all plays nice. Also, I've done some initial work on getting a Docker Hub registry set up for Velero, so it will be moving from the GCR space to Docker Hub. No timeline on that currently, and I think we should probably have at least a couple of months of overlap where both GCR and Docker Hub have images available. But it's there.
F: The organization exists, we just haven't started pushing anything to it yet. I think last week I mentioned I'll push maybe the 0.11, 1.0, and 1.1 images up to Docker Hub just so they're there, so people who are on old Ark images can migrate, but we're not going to move all the old versions over.
F: The other thing is, with this Docker Hub org we can both narrow the scope, so we just have Velero images in there (right now we have a ton of images in the heptio-images bucket), and we can also expand it: since we're bundling the cloud provider plugins as images, you can get those in the same spot as your Velero image.
D: All right, let's move over to Steve. Yeah, the main things I've been working on in the last couple of days: the first one is that we had a contributor from Microsoft put up a PR adding support in the Azure plugins for the China and German clouds. In Azure, there's a US public cloud, there's a China cloud, the German cloud, maybe one or two others, and they all have different endpoints. Our current implementation of the Azure plugin only supported the US cloud, so this PR added support for the other ones.
D: I've just been helping get a couple of the feedback items addressed, so hopefully that's ready for merge relatively soon. I've also been working on a prototype of taking the restic functionality and putting it behind the volume snapshotter plugin interface. I'm going to talk about this more in a minute, but for those of you who are familiar with the restic integration, it kind of has its own user experience.
D
It
also
has
its
own
code
paths
within
bolero,
that's
kind
of
totally
separate
from
the
other
volume
snapshot
or
so
like
the
EBS
snapshot
or
and
lazar
and
GCP
snapshot
and
we've.
You
know,
we've
had
a
lot
of
discussions
since
the
rustic
functionality
has
been
added
about
possibly
looking
at
ways
to
to
make
both
the
code
and
the
UX
for
rustic
more
more
similar
to
the
other
volume
snapshot,
implementations.
D: So I guess maybe I'll just start with the demo and kind of show what it looks like, and then we can talk some more about it. Let me share my screen here. You know, standard full disclaimers: this is really just a prototype. There are a bunch of things that are hard-coded or otherwise hacked in various ways, so don't take this as working code whatsoever.
D: Here's what's in this namespace: pods, PVCs. I've got a basic deployment in this namespace, just running a single pod which isn't really doing anything apart from using this one persistent volume claim. I'm running on GKE right now, so that claim is basically just a GCE persistent disk. And if I actually exec into the pod in this namespace...
D: The PVC is just mounted in this directory, pvc-1, and if I look at what's in here, I've got a hello-world file and I've also got the Velero codebase checked out. So I'm just going to back up that workload using the restic volume snapshotter. I'll just say velero backup create demo --include-namespaces ns1, and let me actually stream the logs down here for the server pod, so you can sort of see what's happening.
D: Now I'll just do a describe on it with the --details flag. Normally, if you're using restic, you'll have a section at the bottom here that says, I forget if it says restic backups or pod volume backups, but you can see you don't get that with this one. Instead, you just get the normal persistent volume snapshots section: we have the persistent volume that was backed up, and then there's the snapshot ID field.
D: Normally, if you're using, say, the GCP snapshotter, you'd get some GCP identifier here, but this snapshot ID is actually a restic snapshot ID, and I can show that in the restic repo itself. Basically, 0bd here is the snapshot which was just created, and I can even run a restic command just to list what's in that snapshot; yeah, we get basically a listing of that whole Velero codebase. So, so far, we've got a restic snapshot.
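For context, a minimal sketch of what putting restic behind the volume snapshotter plugin interface might look like. The interface is copied locally (and abbreviated) so the sketch is self-contained, and launchWorkerPod is a hypothetical stand-in for the prototype's worker-pod machinery:

```go
package resticsnap

import "fmt"

// VolumeSnapshotter mirrors part of Velero's volume snapshotter plugin
// interface; the real one also has CreateVolumeFromSnapshot, GetVolumeID,
// SetVolumeID, and GetVolumeInfo.
type VolumeSnapshotter interface {
	Init(config map[string]string) error
	CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (snapshotID string, err error)
	DeleteSnapshot(snapshotID string) error
}

// resticSnapshotter is a hypothetical implementation backed by restic
// rather than a cloud provider's snapshot API.
type resticSnapshotter struct {
	repoIdentifier string
}

func (s *resticSnapshotter) Init(config map[string]string) error {
	s.repoIdentifier = config["repoIdentifier"]
	return nil
}

// CreateSnapshot runs a restic backup instead of calling a cloud API and
// returns the restic snapshot ID, which is why a restic ID shows up in the
// snapshot ID field of the backup describe output above.
func (s *resticSnapshotter) CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (string, error) {
	// launchWorkerPod (hypothetical) schedules a pod on the node holding
	// the volume's data, runs `restic backup`, and returns the snapshot ID.
	snapshotID, err := launchWorkerPod(s.repoIdentifier, volumeID, "backup")
	if err != nil {
		return "", fmt.Errorf("restic backup of volume %s failed: %v", volumeID, err)
	}
	return snapshotID, nil
}

func (s *resticSnapshotter) DeleteSnapshot(snapshotID string) error {
	_, err := launchWorkerPod(s.repoIdentifier, snapshotID, "forget")
	return err
}

// launchWorkerPod is a placeholder; the prototype would use client-go here.
func launchWorkerPod(repo, target, op string) (string, error) {
	return "", fmt.Errorf("not implemented: restic %s of %s against %s", op, target, repo)
}
```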
D: Normally, backups are taken through a restic daemon set: the main Velero server creates a PodVolumeBackup custom resource, the restic daemon set is watching for new PodVolumeBackups being created, and the daemon set pod that's running on the same node as the volume to be backed up actually triggers the restic backup. In this prototype I changed how that works a little bit, so there is no longer a daemon set.
D: If you look at what's running in the Velero namespace, there's no restic daemon set. Instead, what I'm doing is having the volume snapshotter plugin actually spin up kind of an on-demand worker pod, which is specifically scheduled to run on the appropriate node, and that worker pod just has a hostPath mount. So it's still using the approach of mounting the /var/lib/kubelet/pods subdirectory to get access to the data.
D: Potentially, yes. What I actually have it doing now is, I think, mounting the subdirectory of /var/lib/kubelet/pods for the pod that's using the PV we're backing up. So at least we're not mounting every single pod's subdirectory; we're narrowing it down to one pod. We might be able to narrow it down even further, so it's only mounting the specific directory for the persistent volume that we're trying to back up.
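A sketch of the kind of pod spec such a plugin could create: pinned to the right node, with a hostPath mount narrowed to the one pod's kubelet directory. The image, names, and restic arguments are placeholders, not the prototype's actual values:

```go
package workerpod

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildWorkerPod returns an on-demand worker pod for backing up one volume.
func buildWorkerPod(nodeName, podUID, resticRepo string) *corev1.Pod {
	hostPathType := corev1.HostPathDirectory
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "velero-restic-worker-",
			Namespace:    "velero",
		},
		Spec: corev1.PodSpec{
			// Pin to the node that has the volume's data on disk.
			NodeName:      nodeName,
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "restic",
				Image:   "velero/velero:latest", // placeholder
				Command: []string{"restic", "backup", "--repo", resticRepo, "/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "pod-volumes",
					MountPath: "/data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "pod-volumes",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						// Only this one pod's volumes, not all of
						// /var/lib/kubelet/pods.
						Path: "/var/lib/kubelet/pods/" + podUID + "/volumes",
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}
```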
D: And that's kind of the challenge, that's definitely a challenge with running a backup: where do you get access to the data? I guess the other approach you could take is using a sidecar container that actually runs in the pod that you're trying to back up.
D: ...and the entire restore is done. So let's just take a look at what pods exist in that namespace: it's up and running, so I'll just exec back into that pod now and go into that PVC subdirectory, and we get our hello-world file and our Velero codebase. So everything got restored. If you look at what happened at the bottom here with the PVs, I can kind of walk through this and explain what's happening during the restore.
D
So
the
first
thing
is
that
we
spin
up
a
PV
and
it's
actually
claimed
by
a
persistent
volume
claim
in
the
velaro
namespace.
So
we
spin
up
this,
this
worker
pod
in
the
velaro
namespace
and
we
essentially
dynamically
provision
a
new
empty,
persistent
volume
to
restore
into
which
is
consistent
with
with
the
existing
rustic
implementation.
But
then,
within
this
worker
pod
we
we
run
a
rustic
restore
so
that
actually
restores
all
the
data
into
this
persistent
volume.
D
So
then
we
we
essentially
delete
the
persistent
volume
and
persistent
volume
claim
that
were
used
by
the
Valero
namespace,
and
so
we're
left
with
a
cloud
volume
that
actually
has
data
on
it.
And
then,
from
that
point
we
just
go
ahead
with
recreating
a
persistent
volume
that
specifically
uses
that
cloud
volume
that
we
just
populated
and
then
we
just
let
the
the
workload
you
know
essentially
set
up
a
PV
C
to
claim
that
persistent
volume
and
to
mount
it
into
the
pot
as
usual.
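Putting those steps together, the restore side could be outlined roughly like this; every helper here is a hypothetical stand-in for the prototype's internals:

```go
package restoreflow

import corev1 "k8s.io/api/core/v1"

// Hypothetical helpers standing in for the prototype's internals.
func provisionScratchPVC(ns string) (*corev1.PersistentVolumeClaim, *corev1.PersistentVolume, error) {
	return &corev1.PersistentVolumeClaim{}, &corev1.PersistentVolume{}, nil
}
func runResticRestore(pvc *corev1.PersistentVolumeClaim, snapshotID string) error { return nil }
func releaseScratch(pvc *corev1.PersistentVolumeClaim, pv *corev1.PersistentVolume) error { return nil }
func recreateWorkloadPV(src corev1.PersistentVolumeSource) error { return nil }

// restoreFlow sketches the restore sequence described above.
func restoreFlow(snapshotID string) error {
	// 1. Dynamically provision a new, empty volume by creating a PVC in the
	//    velero namespace and letting the provisioner create a cloud volume.
	pvc, pv, err := provisionScratchPVC("velero")
	if err != nil {
		return err
	}
	// 2. A worker pod mounts the PVC and runs `restic restore` into it; no
	//    hostPath mount is needed on the restore side.
	if err := runResticRestore(pvc, snapshotID); err != nil {
		return err
	}
	// 3. Delete the scratch PVC and PV; the PV's reclaim policy must be
	//    Retain so the populated cloud volume survives.
	if err := releaseScratch(pvc, pv); err != nil {
		return err
	}
	// 4. Recreate a PV that points at that cloud volume; the workload's PVC
	//    then claims it and mounts it into the pod as usual.
	return recreateWorkloadPV(pv.Spec.PersistentVolumeSource)
}
```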
D: Well, the design doc that's out there was for introducing worker pods for backups and restores themselves, which I still think makes sense, but that doc led me to think about whether there's a way we could use a similar approach for restic backups and restores, and that definitely led me to the approach that's being used here.
D: You could potentially take essentially all of the code that we currently have inside Velero for dealing with restic and move it behind the volume snapshotter interface, but you'd still need separate daemon sets running and you'd need custom CRDs. These would all be a necessary part of having the plugin run, but they wouldn't actually be able to be deployed as part of the plugin. With this approach, the plugin itself is just spinning up the worker pods.
D
So
it's
a
it's
all
kind
of
self-contained,
and
there
are
some
nice
things
about
using
that
approach,
I
mean
on
the
on
the
restore
I
guess.
The
other
thing
that
I
didn't
point
out
here.
Is
that
because
we're
actually
just
we
set
up
a
worker
pod,
we
create
a
PVC
and
we
do
a
restore
into
that
PVC.
We
don't
need
to
do
any
kind
of
host
path
mount
to
do
the
restore.
We
don't
need
any
kind
of
a
container
to
wait
for
the
restore
to
be
completed.
D
We
just
I
run
one
pod
that
populates
the
volume
and
then
we
essentially
you
know
let
the
workload
just
spin
up
with
that
pre-populated
volume.
So
the
flow
is
a
little
bit
nicer.
We
it
means
we
never
need
a
reed
right
amount
of
the
host
path,
which
I
think
it's
definitely
nice
from
a
kind
of
a
data
protection
perspective.
F: I guess the difference there would be that we would have to figure out how we play nice with the PVBs, the pod volume backups and pod volume restores, because those are currently kind of core data types for Velero, but in this paradigm they move into a plugin while the core of Velero still uses them, right? Although your output on the describe just showed the PVs, so, yeah.
D: So the volume snapshotter prototype is not using the PVB or PVR types at all. Instead of using those to signal that a restic operation needs to happen, it's basically just creating a worker pod whose command to run is essentially restic backup or restic restore, so we skip those types entirely. That does bring up the question of, you know...
F: Something you had mentioned before: I think you said that of the survey respondents, only three people were using non-PVC volumes, like scratch dirs or emptyDirs. From the implementation side, this is sounding a little simpler to me, but it does mean we can't back up stuff that's directly on a pod, correct? It has to be through a PVC and PV, which reduces the tool's flexibility, correct?
D: And that's the biggest thing. That's definitely the biggest drawback from a user perspective: we would be removing the functionality of being able to back up arbitrary pod volumes. I think there are lots of benefits to having it fit into the kind of standard snapshot model, so I'm definitely looking for user input on whether dropping support for that is a major issue or not.
F: I think at the time, our volume snapshotter plugin API was fairly new. If I remember right, the plugin architecture came along in 0.7 and restic support was 0.9, and in some ways it was trying to bootstrap some supporting things that weren't already in the plugin model, and we had fewer people flexing the plugin model to see what it could do.
D: There's a question: if we go with this approach, is that still something that core Velero should do? Because theoretically, if you move all this restic backup and restore code behind a plugin interface, you can move it out of the core Velero repo, and now Velero doesn't necessarily need to know about restic anymore.
D
We
could
also
spin
it
out
into
its
own
separate
deployment,
and
just
you
know,
if
folks
are
running
rustic,
they
could
you
know
Claro
install,
could
set
up
the
separate
deployment.
That's
running
this
controller,
so
I'm
not
totally
sure
what
makes
sense
they're
there.
You
know
the
bring
your
own
repository
model
is
is
appealing
from
a
kind
of
simplicity
of
code
perspective
in
Valero,
but
it
may
also
be
you
know,
less
less
convenient
for
users.
F: ...a restic deployment that is just for managing restic repos doesn't really have a whole lot to do with Velero; Velero can consume those repos. That might be an interesting thing for us to consume; I don't know if anything exists for that. But I do know, from our perspective, there are users that want to kind of dig into restic and get the full support out of it, and they want to be able to go and manage a repo.
D
You
know
as
an
implementation
detail
behind
the
interface
so
that
theoretical
users
wouldn't
even
really
need
to
know
about
rustic,
but
I
think
we
found
that
that's
just
not
practical,
like
it's
a
very
leaky
abstraction
and
you
know
you
definitely
require
some
in-depth
knowledge
of
what's
going
on
with
rustic,
and
so
given
that
I
wonder
if
it's
you
know,
if
it
makes
more
sense
to
just
say,
hey,
you
can
create
your
repos
wherever
you
want,
you
can
manage
them.
However,
you
want.
D
You
just
need
to
give
Valero
a
you
know
the
identifier
to
your
repo
credentials
to
access
it,
which
means
both
you
know,
potentially
like
s3
credentials,
as
well
as
the
encryption
password.
If
you
put
that
out
in
a
secret
and
configure
your
backup
storage
location,
to
specify
what
that
secret
is
then
fill
out,
can
connect
to
it
for
taking
backups
and
restores.
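A sketch of what that bring-your-own-repo configuration might look like, with everything restic needs gathered into one secret. The key names are restic's standard environment variables, but the idea of a backup storage location referencing such a secret is hypothetical; no such wiring existed at the time of this discussion:

```go
package repoconfig

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// repoCredentialsSecret bundles the repo identifier, the object store
// credentials, and the restic encryption password into a single secret
// that a backup storage location could point at.
func repoCredentialsSecret() *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-restic-repo",
			Namespace: "velero",
		},
		StringData: map[string]string{
			// The repo identifier, e.g. an s3-backed restic repository.
			"RESTIC_REPOSITORY": "s3:s3.amazonaws.com/my-bucket/my-repo",
			// Credentials for the repo's backing object store.
			"AWS_ACCESS_KEY_ID":     "<access-key>",
			"AWS_SECRET_ACCESS_KEY": "<secret-key>",
			// The repo's encryption password.
			"RESTIC_PASSWORD": "<encryption-password>",
		},
	}
}
```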
F: If the deployment of that proxy server existed in the cluster, that's something we might be able to use. I think the API was slightly different, though; I can't remember if it was an S3-like HTTP API or not, but it was basically a way to not give every pod credentials to the bucket. You say: hey, everyone goes through the proxy, and the proxy is able to write to the bucket. Yeah, that would probably be a little different.
D: Sure, yeah. I've kind of asked the question on the Slack channel before and didn't get any response, but we can certainly make a bigger push to solicit input on this, because I agree: given that it's potentially going to break or remove functionality for certain users, we need to make sure that it makes sense.
D: One other thing we may run into is that we may need to add some parameters to some of the volume snapshotter interfaces, and I need to think about how we can do that in a way that doesn't break existing things, or whether we're going to need to cut a 2.0 to be able to do it. But that's just one of the details I need to figure out.
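For what it's worth, one generic Go pattern for adding parameters without breaking existing implementations is an optional second interface checked at the call site. This is only a sketch of the general idea; Velero plugins actually run out-of-process over gRPC, so the real mechanism would be version negotiation at the protocol level rather than a type assertion:

```go
package plugincompat

// Snapshotter stands in for the existing plugin interface.
type Snapshotter interface {
	CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (string, error)
}

// SnapshotterWithOptions is a hypothetical extension carrying new parameters.
type SnapshotterWithOptions interface {
	CreateSnapshotWithOptions(volumeID, volumeAZ string, tags, opts map[string]string) (string, error)
}

// createSnapshot prefers the extended method when the implementation
// provides it and falls back to the original signature otherwise, so
// existing plugins keep working unchanged.
func createSnapshot(s Snapshotter, volumeID, az string, tags, opts map[string]string) (string, error) {
	if v2, ok := s.(SnapshotterWithOptions); ok {
		return v2.CreateSnapshotWithOptions(volumeID, az, tags, opts)
	}
	return s.CreateSnapshot(volumeID, az, tags)
}
```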
D: I can go through these, since I added them here. Yeah, we've had a few good contributions in the last week or two. The first one is from jaywong101, who added support for Workload Identity on GKE. This is a new feature that GKE is offering; it's kind of like kube2iam for AWS, in that you can actually use GCP service accounts for pod identity within your GKE clusters.
D
It's
still
a
beta
feature
as
far
as
I
know
in
gke,
but
we
have
the
support
for
it
now.
So
thanks
a
lot
Jay
Wong
101
for
adding
support
there,
and
then
we
had
Boxey
who
added
support
within
our
Asha
plugins
for
doing
cross,
subscription,
backups
and
restores
so
we've
we've
had
support
for
doing
kind
of
cross
resource
group
backups
and
restores
in
Azure,
but
this
actually
enhances
it
so
that
you
can.
D
You
can
create
backups
in
a
totally
separate
subscription
and
then
restore
them
into
a
different
subscription
as
well,
and
that
works
for
both
for
the
standard
kubernetes
metadata,
as
well
as
persistent
volume
snapshots.
So
that's
a
great
addition.
So
thanks
a
lot
Boxey
and
then
spiff,
CS
I,
think
this
is
the
second
PR
that
we've
gotten
in
a
couple
of
weeks
from
this
contributor,
so
I
added
some
more
documentation
around
the
schedule.
Custom
resource
to
bring
the
documentation
they're
up
to
kind
of
in
line
with
the
backup
and
restore
documentation,
so
really
appreciate
that.
F: I wanted to toss another one in there: Anthony, who's on the call, thank you for digging into a lot of work on making Velero a component of a larger backup system, particularly the work around making backup storage locations optional to start. That's going to be really helpful once we get that merged in.
I: Yeah, thanks, Nolan. Right now I'm still working with you guys on the review for those, but we have the integration going on, so any issues we find, we'll just work with you guys and get the integration working well. Thanks for your help as well.
J: Hi Steve, hi everyone, this is Vincent. I'm a new employee at Dell EMC, and right now I'm working with Anthony to focus on getting some new features into Velero. I really like this community around Velero and I've really enjoyed it. And thanks to Steve and Nolan for giving me feedback on my pull request.
D: Yeah, I'd say we've kind of been waiting, because we're planning to extract the internal plugins into their own repositories, so we've been holding off until we're able to get the separate repos stood up and get those images published. But I think we could consider cutting an alpha now; I mean, we have had a bunch of PRs merged since 1.1.
D: Okay, sounds good. Well, yeah, we'll decide if we want to do an alpha or not. I mean, there are always the master-tagged images; those let you try out the current latest master build, although they're liable to be broken at any given point, since it's the active development branch.
A: Awesome, thank you so much for your feedback and questions here. All right, I think that's it for today. Thank you all for joining the Velero community call and open discussion, thank you to the Velero team, and thank you to the Velero community. You all rock! Have a fantastic week, everyone; this recording will be up on YouTube shortly. Have a good one, everyone!