From YouTube: Unleashing TiDB 7.1: Stability, Performance, and Beyond
Description
Agenda
[00:00] Introductions & highlights of TiDB 7.1
[20:21] Multi-value index
[24:05] Multi-Rocks
[35:40] Resource control
[46:21] Concurrency framework
[48:59] Wrap-up
A: It looks like we're being live-streamed, so welcome, everybody, to another meetup this month. Happy to have Sam Dillard, one of our lead product managers at PingCAP. Sam, happy Monday, hope you had a good weekend, and I'll let you share the screen and get things started.
B: Right on, thanks, Ray, pleasure to be here. I'm just making sure this can be as big as possible. Is the Zoom stuff in the way?
B: No, this is good. Okay, cool. So thanks, everyone, for joining, and for those who are tuning in later with the recording, thank you as well. I'm Sam, as Ray mentioned. I've been with PingCAP for almost a year now, brought in to do some broad product management across the platform, and I'm here to deliver some information about what's contained in the 7.1 release, which is our latest long-term stable release.
These long-term stable releases are code-supported in full for three years, plus an extra year for critical security vulnerabilities and things like that, so you have three to four years of support on these releases. We'll have many more stable releases between now and then, but if for any reason you cannot upgrade, that's what you're dealing with. So the agenda for the webinar today is pretty simple: I wanted to cover the key differences
B
The
the
key
differences
between
versions,
6.5
and
7.1,
and
the
reason
I
want
to
do
that
is
6.5.
Was
the
the
long-term
stable
release
prior
to
7.1,
so
in
between
there
we
had
6.6,
7.0
and
7.1
and
I
think
because
most
of
our
customers,
and
even
some
of
our
community
users
only
use
the
long-term
stable
releases.
It's
more
important
to
talk
about
the
difference
kind
of
skip
over
the
the
non-stable
releases
and
talk
about
that
Delta.
So
that's
what
I'll
do
I'll
provide
a
little
bit
of
some
sign
posting
about
which
versions.
B
Some
of
these
features
actually
came
in
if
you're
interested
in
that.
But
the
point
here
is
to
talk
about:
what's
changed
in
a
major
way
since
the
last
long
term,
stable
or
LTS
release.
The next thing I'll do is talk about four key features that came in this delta in more detail, so I'll talk about the problems we wanted to solve and how we solved them with those features.
The headliners are these three. Resource control is essentially an ability for operators of TiDB to provide resource isolation to their own classified groups of workloads. However they decide to group or classify those workloads, they can put them into resource quotas, with resource allotments and a priority, which will largely increase stability across a cluster, especially if there are multiple workloads running on that same cluster. This is a huge feature.
B
Nobody
really
does
this
today
in
this
space.
So
this
is
a
big
differentiator
as
well,
so
I'm
really
excited
about
it.
I
have
a
Blog
coming
out
about
it
as
well
in
the
next
few
weeks,
so
stay
tuned.
For
that,
the
second
one
is
distributed
online,
ddl
and
6.5.
There
was
a
drastic
speed
up
of
online
ddl,
specifically
ad
index,
which
is
a
pretty
costly
operation
in
in
between
6.5
and
7.1.
B
We
released
an
experimental
version
of
a
of
that
in
a
distributed
manner,
so
that
drastically
sped
up
that
operation
as
well,
and
then
that
will
be
GA
in
coming
releases,
but
just
know
that
in
7.1
you
have
this
experimental
version
of
this
feature
that
you
can
turn
on
and
see
how
fast
you
can
do
ad
indexes
online
and
then
lastly
multi-rocks,
which
is
actually
an
internal
terminology
for
something
we
call
partitioned,
raft
key
value
engine,
and
so
that
might
be
a
a
kind
of
word
vomit
right
now,
so
I'm
going
to
hold
off
but
I'm
going
to
talk
about
that
one
in
depth
later.
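As a sketch of what turning on the 6.5 speed-up looks like (assuming the `tidb_ddl_enable_fast_reorg` system variable; the table and index names are illustrative):

```sql
-- Fast online ADD INDEX (GA in 6.5): backfills the index via bulk
-- ingest rather than transactional row-by-row writes.
SET GLOBAL tidb_ddl_enable_fast_reorg = ON;

-- The DDL itself is issued as usual and stays online throughout.
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
```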
So first, I want to group these by category. Between 6.5 and 7.1, the biggest product domain we focused on was performance, and performance in this case includes performance stability as well: speeding up operations, but also making sure that performance actually remains stable and sort of flat. By the way, anything in green is going to be talked about later.
So the first thing is the Multi-Rocks thing I was talking about. This is largely an architectural change at the fundamental level of the storage. It's not on by default right now, because it's experimental as of 6.6, but it is a huge change that lends itself to a lot of performance improvements for writes and for scaling in and out, and I'll talk about that.
This is probably the biggest change, with the most measurable benefits to our users, and you'll see how that works soon. We added a new lock-conflict wake-up algorithm. Essentially, in scenarios where you have write-heavy workloads aimed at the same keys, you'll have lock contention.
B
You'll
have
requests
that
maybe
end
up
enqueued
or
waiting
for
a
lock
to
wake
up,
and
so,
when
those
when
the
lock
is
released,
there
may
be
several
queued
requests
and
the
way
that
we
we
dequeue.
Those
requests
was
improved
dramatically
to
improve
upon
tail
latencies
a
lot
in
in
right,
heavy
scenarios,
especially
that
have
like
hot
spot
keys.
So
that's
a
huge
one
more
to
reduce
tail
latency.
we added a feature we call load-based replica reads, which is GA in this release, 7.1. What that means is that if you have a read-heavy workload with hotspotting on a single node, that is, a single key range on a node, then TiKV, the storage engine, will quickly recognize the situation and offload reads to a replica, or follower, in a way that maintains snapshot
isolation as well. In those circumstances, that will reduce tail latency quite a bit. So that would be a read-heavy workload. I mentioned 50 to 200 percent: all three of these performance optimizations fell within that range in the circumstances they apply to, and I think the load-based replica read was the one that improved things by about a hundred percent, so it doubled throughput, or halved, I guess, the tail latency in these situations.
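A sketch of the knob involved (assuming the `tidb_load_based_replica_read_threshold` system variable; the value shown is illustrative):

```sql
-- Load-based replica reads: if the leader's estimated queue wait
-- exceeds this threshold, the read is redirected to a follower
-- while still preserving snapshot isolation.
SET GLOBAL tidb_load_based_replica_read_threshold = '1s';
```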
You're probably familiar with this to some degree if you're here, but we've had a prepared plan cache for years, which means that with prepared statements, the plans can be cached so that the query optimizer doesn't have to figure out the plan every single time the query comes through. That drastically speeds up performance in a lot of cases. With the non-prepared, or general, plan cache, the plan cache can apply to non-prepared statements, with some limitations.
B
Currently
there
are
certain
queries
that
that
don't
apply,
but
most
queries
do
apply
here
and
this
this
caches
at
the
session
level.
In
future
releases,
we
will
have
fewer
limitations
on
which
queries
this
will
apply
to
and
we'll
make
it
instance
level,
so
that
the
catch
applies
to
more
queries
overall.
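A sketch of trying it (assuming the `tidb_enable_non_prepared_plan_cache` session variable and the `last_plan_from_cache` status variable):

```sql
-- Cache plans for ordinary statements, not just prepared ones.
SET SESSION tidb_enable_non_prepared_plan_cache = ON;

SELECT * FROM orders WHERE customer_id = 42;
-- Returns 1 if the previous statement's plan came from the cache.
SELECT @@last_plan_from_cache;
```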
B
And
lastly,
here
is
the
we
call
batch
co-processor
type
KV
tasks.
So,
if
you're
familiar
with
idb,
you
understand
that
it's
a
disaggregated
architecture,
so
tidyb
servers
will
send
requests
to
Thai
KB
servers,
which
are
storage,
engines,
store
storage
nodes.
This
this
feature
essentially
batches
those
requests
if
there
are
going
to
be
multiple
requests
to
the
same
TI
KV
nodes.
So if you have a request to one region of data on a TiKV node, and another region of data on the same TiKV node, and maybe a hundred others, all of those, the 102 requests I just alluded to, would be batched into one single gRPC request to the node. That drastically reduces cluster traffic, reduces latency, and increases overall performance quite a bit. That was released GA in 6.6.
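A sketch of the related knob (assuming `tidb_store_batch_size` is the variable that controls this coalescing; 0 would disable it):

```sql
-- How many coprocessor tasks bound for the same TiKV node get
-- coalesced into a single gRPC request.
SET SESSION tidb_store_batch_size = 4;
```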
We also increased TiCDC throughput to Kafka by architecturally changing that pipeline. When TiCDC is collecting data from the storage nodes of TiDB, it will do so in a distributed manner when the downstream is Kafka. In future releases, when the downstream is TiDB, we'll do the same thing; that's more complicated, so we released the Kafka version first, especially since that's actually the most common use case for TiCDC. And if you're not familiar with TiCDC,
it's PingCAP's, or TiDB's, changefeed and change-data-capture tool. And lastly in the performance section, the fast DDL. I mentioned this already, but we added parallel execution to the ADD INDEX operation, and parallel execution will apply to more operations going forward, which I'll talk about. On the SQL side, this category is essentially defined by MySQL-compatibility features and general SQL language features, the kind you've seen in other SQL databases, that are important.
B
So
the
first
thing
we
G8
and
6.6,
or
the
foreign
keys
or
foreign
key
constraints,
which
basically
you're
probably
familiar
with
this,
but
in
case
you're,
not
foreign
key
constraints,
basically
enforce
consistency
between
tables
when
two
when
two
tables
depend
on
each
other.
You
know
you
can't.
You
cannot
write
data
to
a
child
table
if
it
doesn't
have
values
for
the
parent
table,
and
so
it
it
basically
enforces
at
the
database
level
some
business
logic
that
you
want
to
enforce
that.
B
Maybe
you
don't
want
to
have
you
don't
want
to
keep
track
of
in
your
your
client
applications?
This
is.
This
is
a
huge
thing.
Also,
if
you're,
just
maybe
foreign
Keys
may
not
even
be
important
to
you,
but
you
may
be
migrating
from
my
Sequel
and
have
foreign
keys,
and
you
just
want
to.
You
may
get
rid
of
them
later,
but
it
just
makes
migration
compatibility
more
smooth,
so
that
you,
you
know
that
we
don't
kind
of
throw
up
on
the
on
the
foreign
key
stuff
that
you
have
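A minimal sketch of the constraint in action (table names are illustrative):

```sql
CREATE TABLE customers (
    id BIGINT PRIMARY KEY
);

CREATE TABLE orders (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT,
    -- Child rows must reference an existing parent row.
    CONSTRAINT fk_customer FOREIGN KEY (customer_id)
        REFERENCES customers (id)
);

-- Fails with a foreign key violation: customer 42 does not exist.
INSERT INTO orders VALUES (1, 42);
```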
TTL stands for time to live; this was GA'd in 7.0. TTL is essentially being able to apply an expiration date, or an expiration age, I should say, to the rows of a table. You define it at the table level and it's enforced at the row level, so it'll expire, and then delete, rows that have aged out of the window you define. You could have a table that's constantly appending new data, where you don't want to keep all the data around forever.
B
Tidy
will
release
that
data,
make
it
basically
available
for
deletion
and
then
actually
delete
it,
so
frees
up
storage
space.
It
also
improves
on
on
query
performance,
because
it
just
reduces
the
number
of
of
rows
that
need
to
be
scanned
in
a
lot
of
cases,
especially
in
large
large
queries.
B
Larger
tables
also
mean,
if
you're
familiar
with
this,
with
the
architecture
of
Tidy,
it
means
more
regions
or,
in
our
case
key
ranges
in
in
Thai
KV
nodes,
which
can
impact
scalability
and
general
cluster
performance.
So
this
you
know,
if
possible,
it's
better
to
maintain
the
size
of
your
tables
and
keep
them
smaller.
If
you
can,
it
doesn't
require
you
take.
There
are
very,
very
large
tables
in
tidy
B,
but
when
it,
when
you
can
manage
their
size
and
TTL
helps
you
do
that
and
then
TTL.
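A minimal sketch of a TTL table (the 90-day window and table name are illustrative):

```sql
-- Rows whose created_at is older than 90 days are expired and
-- deleted by a background job.
CREATE TABLE sessions (
    id         BIGINT PRIMARY KEY AUTO_RANDOM,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) TTL = created_at + INTERVAL 90 DAY
  TTL_ENABLE = 'ON';
```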
Lastly here is the multi-value index. MySQL supports this. It's essentially a JSON index: it allows you to store JSON data and then create an expression index on a JSON array within that JSON blob. So if you're storing a JSON array or a JSON object, you can create an index that traverses the JSON tree into a JSON array and indexes the values in that array to the key of the row.
B
So
it's
an
N
to
one
index,
so
it's
similar
to
the
secondary
indexes
that
we
already
had,
but
this
is
an
end
to
one
index
that
allows
you
to
check
for
the
for
the
presence
of
any
value
inside
of
an
array
that's
contained
in
a
row
and
I
will.
This
is
when
I
I
forgot
to
Green
this
one,
but
that
one
will
be
talked
about
in
a
little
bit
more
depth
later
as
well.
There may be some things you have to change, but it's case by case at the moment. If that is the case, it will be documented, or you can reach out to us and we can help you through it too.
Yeah, right on. Okay, so from the perspective of stability, scalability, and operability, the last three categories: we added resource control. Again, I'll talk about that later, so I won't harp on it now, but it's in green, and you'll learn more about my favorite feature of the release. TiFlash, if you're not familiar, is basically the analytical engine, the storage alternative to TiKV: TiKV is the row-oriented storage and TiFlash is the columnar storage.
B
So
if
you're
doing
any
kind
of
Full,
Table
scans
or
or
you
know
like
you
want
to
do
scans-
that
don't
require
an
index
or
you're
doing
any
kind
of
analytical
queries
against
your
your
Raw.
B
You
know
system
of
record
data
tie
flash
is
a
good
opportunity,
is
a
good
option
for
you
to
add
to
your
storage
and
that
engine
added
spill
to
disk,
which
essentially
adds
to
stability
of
large
queries
against
against
those
nodes
that
was
released
generally
available
in
7.0.
So
it'll,
it's
really
it's
generally
available
now
in
7.1
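A sketch of the spill thresholds (assuming the `tidb_max_bytes_before_tiflash_external_*` session variables from 7.0; values are illustrative, in bytes):

```sql
-- Once a TiFlash operator crosses the threshold, it spills to disk
-- instead of failing the query under memory pressure.
SET SESSION tidb_max_bytes_before_tiflash_external_group_by = 10737418240; -- 10 GiB
SET SESSION tidb_max_bytes_before_tiflash_external_sort     = 10737418240;
SET SESSION tidb_max_bytes_before_tiflash_external_join     = 10737418240;
```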
For scalability: this could also be where Multi-Rocks, the partitioned Raft KV engine, goes; it could fit in this category as well, but it was in the performance section. I'll just add that TiFlash also got a re-architecture, not quite as significant, but still significant: we disaggregated the compute and the storage. The TiDB server is already the compute layer, but TiFlash itself has a compute layer within it to do massively parallel processing, MPP. We originally disaggregated it in an experimental way, but we GA'd it just recently to decouple that compute
B
With
with
this
with
the
story,
those
can
be
scaled
separately
and
then,
with
that
change,
came
compatibility
with
Amazon
S3
compatible
storage.
So
you
can
offload
your
your
Thai
flash
data
into
S3
to
cheapen
in
the
best
sense
of
the
word,
the
storage
of
that
data,
and
by
separating
the
compute
and
storage,
a
lot
of
the
Computing
is
happening
outside
of
that
storage
engine,
so
being
the
S3
doesn't
necessarily
slow
down
the
operations
overall.
We added the LOAD DATA feature, which is syntax compatible with MySQL, but we've actually added to it since then. I feel like I should have updated the slide; I think, or maybe it's in 7.2, I can't remember, you'll have to forgive me on that one: we added IMPORT INTO, which takes the TiDB Lightning import tool and makes it part of TiDB itself. That basically integrates it into TiDB, lets it use all of Lightning's features, and lets it reach out to remote storage like S3 to grab files of data and import them into TiDB, which really speeds up getting started with TiDB, for one, and also ongoing imports.
B
If
you're
offloading
data
into
CSV
files
and
object
storage,
you
can
be,
you
can
load
those
into
idb
much
more
easily
now
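A sketch of both paths (bucket, paths, and table are illustrative; per the talk, IMPORT INTO may have landed in 7.2 rather than 7.1):

```sql
-- LOAD DATA, MySQL-compatible syntax with an S3 source:
LOAD DATA INFILE 's3://my-bucket/exports/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- IMPORT INTO: the Lightning-backed bulk path, integrated into TiDB.
IMPORT INTO users FROM 's3://my-bucket/exports/*.csv';
```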
And then we added REORGANIZE PARTITION, which is essentially (oh, I have a typo there) syntactic sugar for managing partitions.
Okay, so I think this is the more fun part: we'll talk about the details of the really cool features. Wait, I just saw the comments go up; is there another question?
A: No, that was just my comment. So, okay.
B: No worries. Okay, so the first is the multi-value index. I talked about this a little already, but the problem we wanted to solve was that, prior to this, users with this kind of data basically had a choice. They could accept very expensive queries, where they would have to traverse the column the JSON data was in and walk the JSON data to check for the values in the array in question every single time they ran the query, which is basically infeasible. I
B
Don't
think
anyone
could
successfully
do
this
at
scale
and
return
and
have
a
query
return
in
a
reasonable
amount
of
time,
if
not
at
all,
right
like
sometimes
these
queries
would
time
out
or
even
crash
a
node.
These
are
expensive
queries
without
an
index,
so
tidy
B
wasn't
really
a
good
solution
for
these
kinds
of
use.
Cases
where
you
have
to
do
kind
of
a
quote-unquote
membership
check
which
is
sort
of
similar
or
akin
to
some
full
text
search
capabilities.
B
The
other
thing,
too,
is
is,
if
you
know,
if
they
still
wanted
to
use
tidy
B
and
they
knew
they
couldn't
use
this
index,
they
might
have
to
refactor
their
their
schema
completely
or
or,
and
probably
as
well
as
their
almost
definitely
their
client
application
code
too,
which
is
just
a
huge
undertaking,
so
another
hindrance
of
adopting
tidy
B
for
use.
Cases
like
this.
In this case, this is the whole JSON blob, and "city" is the JSON array of cities. Let's say these cities are places a person has ever lived or is currently living, and we want to ask of the data: which people have ever lived in a certain city? Say we want to ask which people have ever lived in the city of Albany.
B
So
we
would
create
an
index
that
uses
an
expression
like
this,
which
basically
traverses
the
very
simple
Json
tree
to
the
city,
key
which
returns.
Essentially,
you
know
the
the
the
expression
would
return
the
array
itself
and
then
the
we
index
the
array
to
the
key.
So
we
have
this
mult,
this
multi-value
to
key
index
where
we
we
see
that
the
key
is
present
in
multiple
places,
but
the
cities
can
be.
B
It
can
also
be
duplicated
right,
so
we
have
before
searching
for
Albany.
We
would
walk
this
index
to
find
these
two
cases
of
Albany
and
because
it's
ordered,
we
know
that
once
we
hit
Beijing
Beijing,
that's
another
typo.
Sorry,
then
that's
all
of
the
Albany
occurrences,
and
so
we
would
then
return
quickly
key
one
and
three.
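The slide's example translates into SQL roughly like this (schema and data are illustrative; the syntax mirrors MySQL 8.0's multi-valued indexes):

```sql
CREATE TABLE people (
    id   BIGINT PRIMARY KEY,
    info JSON,
    -- Index every element of the $.city array against the row key.
    INDEX idx_city ((CAST(info->'$.city' AS CHAR(64) ARRAY)))
);

INSERT INTO people VALUES
    (1, '{"city": ["Albany", "Beijing"]}'),
    (2, '{"city": ["Chicago"]}'),
    (3, '{"city": ["Albany"]}');

-- "Which people have ever lived in Albany?" Walks the ordered
-- index entries and returns keys 1 and 3.
SELECT id FROM people WHERE 'Albany' MEMBER OF (info->'$.city');
```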
Okay, on to the big architectural change. This is arguably one of the two largest changes that have happened, if not the largest. It is, by the way, not a feature you're required to use at the moment: it's off by default, and it's not technically generally available, even though it is complete. There are various reasons we haven't made it GA yet, but in the next stable release it will be GA, and probably on by default, though only in fresh clusters. We don't want you doing in-place upgrades to this new architecture
B
Quite
yet
is
speaking
to
the
In-Place
upgrades
question.
This
will
be
more
complicated
and
we
hope
to
in
the
future,
have
a
better
migration
path,
but
right
now
it's
not
it
there's
a
there's,
a
leap,
so
it's
generally
better
to
have
to
put
this
new
architecture
into
place
on
a
fresh
cluster
that
disclaimer
aside
I
want
to
talk
about
the
problems
that
we
wanted
to
solve
with
this
architecture.
So
the
the
first
problem
is
that
in
this
distributed,
SQL
engine
tidy
B,
we
expect
to
see
very
large
applications.
B
Data
intensive
applications
and
many
of
them
may
be
in
the
same
cluster,
so
clusters
are
expected
to
be
fairly
large
or
hold
a
lot
of
data.
The
more
data
in
the
cluster.
Historically
speaking,
the
more
regions
and
by
regions
we
mean
key
ranges.
This
is
an
internal
term,
so
it's
not
like
an
availability
region.
B
This
is
a
region
of
data,
a
contiguous
data
on
disk
and
more
of
those
regions
there
are
and
the
more
regions
there
are
the
more
heartbeats
there
are
from
the
raft
algorithm
within
the
cluster
and
the
more
heartbeats
there
are
the
closer
to
topping
out
the
size
of
your
cluster
you
get
to
so
basically,
there
was
a
theoretical
or
practical
maximum
size
of
a
cluster,
and
so
we
want
we
want
to
solve
that
problem
so
mark
that
one.
B
The
second
problem
is
that
there's
one
single
log
structured
emergency
LSM
tree
for
the
key
value
data
that
you're
storing
on
the
taikibi
engines.
When
you
have
one
single
LSM
tree
for
various
reasons,
you
will
see
quite
a
bit
of
right
amplification.
That's
a
this
is
kind
of
a
classic
downside
of
the
LSM
tree.
The
LSM
tree
is
a
really
really
good
algorithm
and
it's
really
the
de
facto
one
for
most
nosql
and
even
some
SQL
storage
engines
like
like
Petty
B.
But there is this downside of write amplification, due to the way it does compactions. In addition, snapshotting the data, whether for backup-and-restore purposes or for cluster scale-in and scale-out, can take quite a while, because it's one tree that has to be walked, and it has to look through a single set of files for data to snapshot. So bookmark that problem as well.
B
Similar
on
the
having
one
LSM
tree
means
that
all
region
data
is
contained
in
the
same
files,
same
set
of
files,
so
any
reads,
writes
or
compactions
on
that
data
affect
all
the
other
regions
of
the
data
within
that
node,
so
that
can
there's
there's
some
workload
disk.
I
o
interference
there
and
then
lastly,
I
don't
think
I
mentioned
this.
The
current
architecture
or
the
historical
architecture
was
that
taikivi's
core
storage
engine
is
rocksdb,
there's
a
single
Rocks
DVD.
RocksDB implements, for serializability, a global mutex on the write operation, basically what we call the apply phase, to make sure data is written in a consistent order. With that, writes at the node level are actually sequential, so there was room for us to speed this up.
So the solution, and hopefully this diagram makes sense, was to have a RocksDB engine per region of data. We have many RocksDB engines, or instances, within the same TiKV node. That may seem wild to you, but I'll talk about the benefits here.
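For reference, a sketch of how this is selected (assuming the `storage.engine` item in the TiKV configuration file; experimental as of 6.6, and for fresh clusters only):

```toml
# TiKV config: one RocksDB instance per region instead of one per node.
[storage]
engine = "partitioned-raft-kv"
```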
The first is that we have multiple LSM trees. Going back to the LSM tree problem I mentioned: there's less data to traverse, for one, so snapshotting is a lot faster, which affects backup and restore speeds. And if you look down here in the bottom right, these are some tests we ran about five or six months ago.
B
These
have
been
improved
since
and
become
more
stable
since,
and
we
have
some
large
customers
in
overseas
that
have
been
using
this
in
an
experimental
way
and
I've
noticed
incredible
benefits.
The
scale
out,
speed
and
scale
in
speed
were
drastically
improved,
so
we
have
at
least
a
5x
Improvement
on
scale
out,
speed
and
scale
and
speed
at
you
know,
on
a
moderate
size,
cluster
and
I
think
I
believe
the
the
the
benefit
is
only
greater
the
larger
the
cluster
you
go,
so
that's
huge
right.
Also, with multiple LSM trees there's less workload interference. We talked about how a single tree allows workload interference, but now that each region has its own set of files, anything that happens, any write, read, or compaction on a region, does not affect any of the other regions. It's totally isolated.
B
Another
thing
is
that,
because
there's
multiple
rocks
DBS
now
remember
that
Global
mutex
I
mentioned
that
makes
rights
sequential
well.
Now
every
region
can
have
one
of
those
global
music
taxes
right
so
that
so
basically
you
know
tidyb
as
a
whole.
Cluster
was
already
multi-write.
You
could
write
to
multiple
regions.
There
are
multiple
nodes
at
once,
but
now
at
the
node
level
we
can
write
parallel
as
well.
B
So
the
right
throughput,
if
you
see
in
this
graph,
was
also
increased
enormously,
so
I
think
up
to
300
percent
now
is
where
is
where
that
is
so
it's
a
pretty
big
Improvement
on
on
right,
throughput,
with
really
basically
no
degradation
anywhere
else.
B
Also
in
having
multiple
rocks
DB
instances,
we've
we've
reduced
the
the
right
amplification,
so
there's
less
compaction
of
cold
data
happening,
because
when
you
kick
off
a
compaction
event
on
region-specific
data,
it's
not
compacting
all
the
other
regions
data
at
the
same
time.
So
this
is
It's
isolating
compactions
as
well,
and
then
also
we
removed
the
right
head
log,
so
that's
in
in
less
sense,
but
but
still
an
impactful
one
that
this
will
reduce
that
sequential.
I/O as well. The last thing, and I'm kind of going in reverse order of the problems I mentioned: this part hasn't been developed yet, but if you see this "dynamic region size" text here, this feature will allow us to make region sizes dynamic, as the name suggests. That means regions can be treated as cold or hot. Say you have a region of data, a bunch of keys, that is less requested:
B
They're
written
two,
less
they're
read
from
less
that
region
is
going
to
be
considered
cold
and
any
region
that's
considered.
Cold
can
be
merged
into
a
larger
cold
region
so
that
we
can
reduce
the
total
number
of
regions
which
will
reduce
the
number
of
heartbeats
and
then
therefore
increase
the
total
size
of
the
cluster
that
we
can
have.
B
So
for
many
reasons,
the
cluster
size
is
going
to
increase
a
lot
and
we
expect,
by
the
end
of
this
year
at
least.
We
hope
to
have
well
over
the
support
well
over
one
petabyte
of
data
in
a
single
online
cluster.
So
if
you're,
if
you
have
multiple
applications
using
hundreds
of
terabytes
of
data,
you
could
put
those
all
in
the
same
cluster.
That's
a
that's!
A
a
really
big
change
and
fairly
fairly
unheard
of
at
this.
At
the
moment,
from
a
from
online
data.
A: Sam, a quick question that came up: are these 7.1 features you're talking about available for open-source users only, or are they available for cloud users as well?
B: Yeah, that's a great question. Everything I'm talking about here is, or is going to be, available in the kernel, which is the open-source version. As for how that applies to cloud: we have two cloud offerings. We have a dedicated cloud, which is where most of our cloud customers are today; that is essentially the hosted version of the kernel, the open-source version, with some
B
You
know
some
Cloud
specific
features
here
and
there,
but
the
way
that
works
is
that
will,
by
default
the
version,
the
default
version
will
be
the
latest
LTS
version,
and
you
can
request
the
non-stable
versions
if
you
want,
but
the
default
will
be
the
latest
LTS
version.
So
today
the
dedicated
Cloud
version
is
this
version.
7.1
multi-rox
will
be
there,
but
it's
off
by
default.
Remember
so
this
you'd
have
to
specifically
you
know
at
a
fresh
dedicated
cluster.
B
The
second
Cloud
offering
we
have
is
serverless,
which
is
which
is
ready
to
be
used,
and
that
is
has
a
similar
architecture
to
multi-rox,
but
it's
actually
kind
of
different.
It's
a
different,
it's
a
sort
of
a
different
code
base
and
is
more
geared
towards
a
serverless
cert
the
serverless
circumstances.
Yeah, you bet. Oh, and the last thing here, sorry: because of this new architecture, we've not only enabled the future ability to size regions dynamically, we've also enabled the ability to offload storage into S3. Each of these RocksDB instances can have a different storage device associated with it, so we can have a sort of tiered storage, which will also drastically improve storage costs overall without really hindering performance much.
B
Okay,
two
more
so
the
and
the
last
one's
very
short.
So, another problem-and-solution pattern here. With a cluster like TiDB, which is distributed and horizontally scalable, one of the main reasons you might want to use a distributed SQL engine is to consolidate different workloads, consolidate different applications, or have many different teams or users operating on the same data.
B
Even
if
that
data
is
online
and
is
actually
your
system
of
record
or
your
source
of
Truth,
and
because
it's
online
and
it's
a
system
of
record
and
source
of
truth,
and
you
often
have
critical
and
strict
slas
associated
with
the
workloads
on
that
cluster.
It's
very
important
not
to
have
any
interference
with
those
workloads
with
that
those
workloads
ability
to
access
data,
so
problem
number
one
is
that
foreground
traffic
like
online
data?
B
You
know
your
online
applications
that
are
customer
facing
you:
don't
want
them
to
be
interfered
with
by
background
jobs
like
adding
indexed,
is
ddl
or
doing
Auto
analyze,
which
is
the
cluster's
way
of
gathering
statistics
to
improve
to
to
help
the
query,
Optimizer
or
doing
Imports
through
backup
and
restore
compactions
right.
All
these
background,
jobs
can
have
a
an
effect
on
the
foreground
traffic
because
they're
using
the
same
resources,
so
it
the
problem
gets
worse
too,
if
they're
happening
at
the
same
time.
If a compaction event is going on, and then you run an ADD INDEX without really being aware of that, and then an auto-analyze gets kicked off or something like that, you can end up stacking a bunch of background processes that may interfere with your foreground traffic, and the problem gets worse as they stack. So we needed a way to deal with that.
Similarly, we needed to figure out how parallel foreground workloads can avoid each other, so that application one, application two, and, you know, user five can all operate on the same cluster with basically a guarantee of not interfering with each other's resources; the applications can maintain their SLAs and the user can still access whatever is left. And then lastly, even if you solve workload isolation, how do you prioritize different workloads when there still are resource constraints?
We'll address all of these with the solution, resource control by resource groups. Fundamentally, there are two mechanisms to this resource-control solution.
One is SQL flow control, which is essentially the TiDB server deciding which requests to send to the TiKV storage nodes; I'll talk about how it does that in a second, but that's determined by the resource groups and the allotment of resources in each group. The second is storage task scheduling, which happens in the TiKV nodes directly. That is also determined by the resource group, but specifically by the priority set on the resource group.
So you can create a resource group and set a quota of resources. The unit of resources is what we call request units per second. Request units (RUs) are an abstraction of CPU, memory, and disk I/O; request units per second is basically how quickly those units, or tokens, are backfilled. So how data-intensive your workloads are determines how fast you need to replenish your bucket of request units.
When you set your resource group's request units per second, say to 50,000, that's a backfill of 50,000 request units per second. If you use 45,000, you'll always stay replenished; forty-five thousand per second would be a very data-intensive workload, but by setting the quota to 50,000 you know you can always run that workload without, really, any fear of interference.
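To pick a sensible quota, there's a statement for estimating the cluster's overall RU capacity (assuming the `CALIBRATE RESOURCE` syntax from this release; the window and workload shown are illustrative):

```sql
-- Estimate capacity from actual usage over a past window...
CALIBRATE RESOURCE START_TIME '2023-07-10 10:00:00' DURATION '30m';

-- ...or from a built-in workload profile.
CALIBRATE RESOURCE WORKLOAD TPCC;
```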
B
A
resource
Group
can
be
assigned
to
any
of
the
following
three
things:
a
user
on
the
left
hand,
side
here,
a
user,
a
session
or
down
to
the
statement
level.
So
a
user
is
going
to
be.
You
know,
obviously,
a
a
ad
hoc
analytical
user,
maybe
but
also
applications
or
different
workloads
systems.
Like
you
know,
analytical
ETL
data
pipelines
right
could
be
a
user
that
you
would
assign
to
a
resource.
Group
a
session
can
also
be
assigned
to
a
resource
Group.
obviously, that assignment ends when the session ends. And then even a statement: you can provide a hint in a single query that makes it use a particular resource group, and you can do that in your application. So if you have one workload designed for resource group one, but within that same workload there's one query you actually want to run in a different resource group,
you can have that hint override the user-level resource group and use a different one. An application can reach into multiple resource groups if it needs to. And then lastly, you can set a priority on a resource group, low, medium, or high, and that priority determines how each group's requests are scheduled when they're enqueued in TiKV nodes, when they begin to stack up.
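A sketch of the three binding levels (user, group, and table names are illustrative):

```sql
-- User level: new sessions for this user run in the batch group.
ALTER USER 'etl_user'@'%' RESOURCE GROUP batch;

-- Session level: override for the current connection only.
SET RESOURCE GROUP batch;

-- Statement level: a hint overrides the user/session binding.
SELECT /*+ RESOURCE_GROUP(batch) */ COUNT(*) FROM orders;
```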
So here's how it looks in practice; it's as simple as this, on the left, to create resource groups. In this example we're creating three. We have an "oltp" resource group (really boring names, sorry), which is our application traffic: an online, business-critical application with strict SLAs.
Similarly, we create a "batch" resource group, which doesn't require as much. This is probably an analytics pipeline, say a Spark workload that takes some data out of TiDB, munges it, and throws it somewhere else: into a Parquet file, or into an analytics system like Domo, Tableau, or QuickSight. It's not as critical as the OLTP workload,
but when there's data available, these queries can be pretty big, so it'd be nice if this group could burst as well and go outside its quota when resources are available. Now, since both of these workloads are burstable, they're both going to use whatever is available, which means they can find themselves in a resource-contention situation.
That's where priority comes into play: setting the OLTP workload to high priority means it wins over the batch workload, which is what we want. And then lastly we have an "hr" workload, and, nothing against HR, but it's going to be a deterministic, predictable workload.
It doesn't have any latency SLAs; we just need to make sure it finishes, not that it finishes quickly, and we don't expect unpredictable traffic from it. So we give it a non-burstable quota of 400 RUs per second and call it a day.
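The three groups from the slide look roughly like this (the HR quota matches the talk; the other quotas are illustrative):

```sql
CREATE RESOURCE GROUP IF NOT EXISTS oltp
    RU_PER_SEC = 50000 PRIORITY = HIGH BURSTABLE;

CREATE RESOURCE GROUP IF NOT EXISTS batch
    RU_PER_SEC = 20000 PRIORITY = MEDIUM BURSTABLE;

-- Non-burstable: HR gets a fixed trickle and nothing more.
CREATE RESOURCE GROUP IF NOT EXISTS hr
    RU_PER_SEC = 400;
```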
The way this plays out, we're going to measure from two different angles.
In phase one, our critical workload is at full bore; it's the high-traffic period, so we want it to do as much as it can, as fast as it can. We see that in green here (notice the legend): it's running at high QPS, queries per second. At the same time, its tail latency is low and flat, which is exactly what we want.
Meanwhile, the batch workload and the HR workload are running at fairly low QPS, with some latency spikes. Obviously we don't want latency spikes in any workload if we can avoid them, but the point is that in these situations, where there may be resource constraints and you would otherwise need to scale up your cluster, we're still getting the business-logic enforcement we want: this green line stays low and flat.
In phase two, the OLTP traffic drops off. Maybe it's overnight, or the weekend, who knows, and because of that some resources free up. Because the batch workload is burstable, it can spike up and work as fast as it needs to, and its latency comes down. The HR workload, which isn't burstable and so couldn't step into that capacity anyway, continues at a low QPS with somewhat variable latency, depending on what's happening in the cluster.
In phase three, we ratchet back up to the high OLTP throughput, and we see the pattern from phase one repeat: the whole time, the OLTP workload stayed low-latency and flat, consistently low, which is exactly what we wanted. There's some negative impact on the other workloads, but this green line here is exactly what we want. It means we've achieved our goals.
And the last feature I want to drill into, and this one's the simplest to explain, is the concurrency framework. Online DDL is an expensive operation; there's really no other way to shake it. Having online DDL is really nice, it's convenient, it's friendly to users, but it can interrupt things. So we want it to be,
obviously, resource-controlled, which goes back to the feature from before, but we also want it to happen as fast as it can without overloading specific nodes.
The second problem is that you could parallelize this on a single TiDB node, but in general, doing the whole operation in one TiDB node is too heavy for that node. If you can't avoid the work, you want to distribute it and make it more even, so that each TiDB node keeps a fairly low chance of hitting something like an out-of-memory error.
So the solution was to distribute the work across TiDB nodes. This framework will first apply to ADD INDEX, to DDL operations, but the same framework is eventually going to support really all background tasks. Any background task I mentioned earlier, TTL (time to live), DDL obviously, backup and restore, these kinds of things are going to be conducted in a parallel manner across the cluster, making them faster and also evenly distributing
the work done to carry them out. Again, right now it's just ADD INDEX, but we'd love for you to give it a try and let us know how much faster it really is than what you're used to, or, if you're used to TiDB, how much faster it is than a prior version.
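A sketch of trying the distributed path (assuming the `tidb_enable_dist_task` variable that gates the 7.1 experimental framework; the table is illustrative):

```sql
-- Spread the ADD INDEX backfill across TiDB nodes instead of one.
SET GLOBAL tidb_enable_dist_task = ON;

ALTER TABLE orders ADD INDEX idx_created (created_at);
```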
A: Yeah, one question; I'm looking at both YouTube and Zoom. Do we share our product roadmap or plans on our webpage somewhere? That's a question.
B: Yes, we do. I'm trying to think if we do it for cloud. I believe we do, but for serverless cloud, I mean, serverless is updated every two weeks; that's an agile-developed offering, so I'm not sure a roadmap is necessary there. But yes, for the kernel and the open source, which would also apply to Dedicated Cloud:
there is a TiDB roadmap page in the documentation that basically looks out over the next two LTS releases. Depending on where in the year you're looking at it, the roadmap should roughly show you the key features we expect to be in the next LTS, and then, a little looser but still planned, the features we expect in the LTS following, which would be six months after that. So the roadmap covers between seven and twelve months at a time, basically.
A: Cool, yeah. I just pasted the link in both chats, on YouTube and Zoom, so hopefully people can check it out. And like all of our documentation pages, if you have questions you can open issues, and you can even directly edit the page; if you find anything you want to change, you can send a pull request.
B: Yep. So this is the roadmap, in case people are wondering. I actually maintain this,
for the most part. It will be updated in the next week or so; we just released our mid-year LTS release, so we're waiting on final approval from the documentation team, but basically this column will shift to the left and we'll have new stuff put here. So it will have the end-of-calendar-year LTS release in this column, because that's going to be the next LTS release, and then the following one will be the mid-year release next year.
That's kind of the format of this roadmap, and it's grouped by those same product domains I showed earlier. So if you're interested in reliability, scalability, and performance, the first two domains will be of interest to you. We'll also show future releases; that column is a lot softer of a commitment, really not committed at all, but it gives you an idea of which direction we're headed,
so you can get a feel for that as well. And of course, we're open to feedback on this stuff. That's the benefit of having a community and an open-source product: we can learn from the community if we've missed something, or if we plan to go in a direction that's fairly meaningless to the community, we want to know about it. So let us know.
A: Cool, yeah. I like the fact that this is part of the docs now. I think it may have been part of GitHub before, like a readme file, but I think this is the right home for it.
B: For nine years, and it's my first time managing something public like that. I think it speaks to some transparency, which I'm proud of.
A: No, that's good. I mean, I think it's the best way to solicit feedback, too, versus having it in slides where you don't know how to get a hold of people with questions. So, cool. One last question that I'm seeing; I'm assuming this is related to when you talked about Multi-Rocks and using S3 as backup storage. The question is: do we support S3-compatible storage for backups?
B: Yes: S3-compatible storage is supported for backups and restore, and then for those imports as well. As far as having hot data stored in S3, that applies to TiFlash, as I mentioned in the re-architecture of TiFlash,
if you caught that. And then in the future, because Multi-Rocks enables this (we're in the design phase right now), we'll soon be able to store your hot application data in S3 where applicable, which is very exciting from a billing standpoint.
A: Yep, cool, awesome. Well, thank you, Sam, and just making sure all questions are answered. Thanks to the people who joined live on Zoom or the YouTube live stream, and also to the people who will be watching the recording later on. Sam, we'll plan on having you back for the next stable release by the end of the year, so we'll book you now.