A: Thank you, hello everyone! Welcome to Cloud Native Live, where we dive deep into the code behind cloud native. I'm Annie Talvasto, I'm a CNCF Ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. We also hope to see you at KubeCon next week — you can still register, so grab those tickets and get into the cloud native space next week as well. Perfect — and this week we have Hanlin here with us to talk about how to build a multi-cloud database as a service, a very exciting topic for today's Cloud Native Live. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct — basically, please be respectful of all of your fellow participants as well as the presenters. That being said, I'll hand it over to Hanlin to kick off today's presentation.
B: All right, thanks. Hi, hello everyone! My name is Hanlin, and it's a pleasure to be here today for the CNCF webinar. Oh — can you hear me? Okay.
B: Sure. My team is working on building a managed multi-cloud TiDB-as-a-service product, and TiDB Operator is one of the fundamental building blocks for making that happen. In today's presentation I'm going to give a live demo of how to create and manage a TiDB cluster using TiDB Operator on a Kubernetes cluster.
B: Okay, let me change the slide. Okay — before I jump into the details of the Kubernetes operator and other cloud native technologies, please allow me to give a brief introduction to TiDB itself. TiDB is a MySQL-compatible SQL database. By MySQL compatibility, I mean you can connect to a TiDB cluster the same way you connect to a MySQL instance. But different from a traditional OLTP database that runs on a single instance, TiDB is a distributed system.
B: Traditionally, when a database accumulated excessive data, we needed to shard the database, and it could be challenging to manage those shards. Using TiDB, on the other hand, you don't need to worry about sharding — TiDB manages that for you. TiDB is inspired by Google's Spanner, and it is also built on top of a KV store. In this case the KV store is something called TiKV; TiKV is a distributed KV store powered by RocksDB.
B: For our KV store, the key range can be huge, and we would reach a scale limit if we put all the keys onto a single machine. To address that scale issue, we split the entire key space into multiple contiguous key ranges, and we call them regions. The concept is similar to shards. Regions are distributed across different TiKV instances — and that's a superficial description of how we built a distributed KV store.
B: Oh, I'm seeing the comments — okay, what's the difference from Vitess? Okay, so Vitess is a shard management system: it helps people manage different shards, but at the back end I think it's still using MySQL. TiDB, different from Vitess, doesn't have any shard concept at all. Because Vitess still uses sharding technology, there are some limitations — you probably cannot do certain join operations, things like that.
B: Yep, no worries. So when TiDB receives a SQL statement, it will parse the SQL and generate a query plan, and it will determine in which regions the required data is located. Now the question is: which TiKV instance holds those regions? This is where PD comes into play. PD stands for Placement Driver, and one of its core functions is to maintain the mapping from regions to TiKV instances.
B: Finally, in this diagram there's a component called TiFlash. So what is TiFlash? TiFlash is basically a columnar storage engine optimized for analytical processing. With TiFlash, TiDB is capable of handling analytical workloads without interfering with ongoing transactional workloads. Okay, I'll change the slide.
B: Okay, now I think we have some basic knowledge of TiDB. So what is TiDB Operator, and why is it useful for us? Well, as we just saw, TiDB has many different components, and managing them could be tedious and error-prone.
B: TiDB Operator is a tool for managing TiDB in a Kubernetes cluster. (Hi, Maha!) Similar to other operators in the Kubernetes ecosystem, we provide a set of CRDs, so the user can simply describe the desired state for a TiDB cluster, and the operator will automatically drive the cluster to its desired state. TiDB Operator can do lifecycle management for a TiDB cluster, set up monitoring, set up data change capture clusters, and so on.
B: Okay, let me change the slide. Okay, so here's the plan for today's live demo. First, we will cover the installation of TiDB Operator. Then we will label the nodes in a pre-created Kubernetes cluster, so that each component can be scheduled to a dedicated node. Then we will create a TiDB cluster by applying a TidbCluster custom resource. After the cluster is up and running, we will run the TPC-C benchmark against that cluster, and we will log into TiDB to check out the newly added database.
B: Next, we will access Grafana and the TiDB Dashboard to see the metrics being collected. We will also try scaling out and scaling in the cluster and check out the changes on the dashboards. Finally, we will clean up the resources we've just created. Notice that this is a live demo, so things could go wrong. Any questions before the demo?
B: Sure, I think I do have a Slack link — let me find it. Oh sorry, I do have a link for the demo; I'll put it in the private chat.
A: Perfect, we can get it there, and I think we can also include it in the materials that people get at the event and so forth. But also, if you are watching this live and your internet is poor or whatnot, you can always watch this session afterwards, on demand, on the CNCF YouTube channel.
A: It's going to be added there immediately, so you can either look at the slides afterwards or just watch the session played back exactly as it happened — so no worries there as well. And then there was another question, from Laurentinus.
A: Could you share use cases that one could accomplish using TiDB that cannot be accomplished with another DB?
B: Okay — use cases you can accomplish using TiDB that you cannot with another DB. Well, I think that really depends. There are different DBs on the market, and some are using similar technology — CockroachDB and others are similar to TiDB. I think the major difference is from traditional OLTP databases, for example MySQL or Postgres: those databases are, by default, pretty much single-instance databases.
B: So you need to manage your sharding and other stuff, and that could be tedious, and it could also be very challenging in some cases. For TiDB, I think the benefit is that it's very scalable, and you don't need to worry about managing the shards on your own — I think that's one of the major benefits. Another feature is that it is marketed as an OLTP — sorry, HTAP — database.
B: So it can handle hybrid workloads, transactional and analytical, because after some years of development people added an analytical processing data source, something called TiFlash, which I just mentioned in the slides. That means, when you are handling both transactional and analytical workloads, those two kinds of workloads won't really interfere with each other, so you won't see a very obvious performance drop if you are doing analytical processing and at the same time running transactions. Something like that.
A: Perfect, sounds good. Then there was another question, kind of continuing on that topic, I guess. A viewer asks: I'm curious about scaling and disaster recovery capabilities.
B: Okay — I think most of the disaster recovery capability is actually brought by Kubernetes. Kubernetes has components like StatefulSets and Deployments, so we use those. For example — we typically see this in our production environment — sometimes TiDB pods get OOM-killed.
B: Basically, when we are handling some big queries, it takes a lot of memory to do the compute, and sometimes we didn't assign enough memory and the pod just crashed. With the help of Kubernetes, the pod will be brought back up automatically. I think that's what cloud native brings to us, yeah.
A: Perfect. Then Jorge asks: TiDB uses PVs to store data, right? Or is it like a middleman between apps and DBs?
B: Oh, I think the answer depends. In our use case we use TiDB as a cloud native database and we deploy it in Kubernetes; when we deploy in Kubernetes, yes, it is using PVs as the data store. For some other use cases, people just deploy TiDB on bare-metal machines — they are not using Kubernetes at all — and in those cases people use local disks or something similar.
B: Oh okay — the number of nodes. I think there are several limitations. In our use case we are deploying the TiDB cluster on Kubernetes, and Kubernetes — I forgot the exact number — does have a node limitation, so that's one thing. And in our typical deployment there are several different components — TiDB, PD, and others — all of which can have multiple replicas, and we basically assign a single pod to a single machine.
B: So I think one limitation could be that the TiDB cluster size is bounded by the number of nodes that can join a Kubernetes cluster. Apart from that, there is probably also a limitation on how many nodes can join a TiDB cluster itself, but I'd need to check the exact number, yeah.
A: Yeah, no worries. Then we have idsploit asking: how does it handle cluster and node upgrades, where nodes need to be refreshed with new nodes or node groups?
B: Okay — handling cluster upgrades when nodes need to be refreshed. Let's see. I think upgrades are actually a little bit tricky. In our practice there are rolling upgrades: what we do is change the version number in the TidbCluster CR object. It can be a little bit tricky, but basically that will trigger a rolling upgrade, you know.
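For reference, a version upgrade is typically triggered by editing the version field in the TidbCluster CR — a minimal sketch, where the cluster name and version value are illustrative:

```yaml
# Bumping spec.version makes TiDB Operator roll the components pod by pod.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo
spec:
  version: v6.5.0   # change this to trigger a rolling upgrade
```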
B: No worries. Okay, so I'm going to try to switch to the demo screen. Okay — the font size. Do I need to increase the font size?
B: I see — so, on scaling: I think it can scale both horizontally and vertically. Typically we will install something called the VPA — that stands for Vertical Pod Autoscaler — because sometimes we want to increase the memory or CPU for, for example, the TiDB pods automatically. So that's one thing, but it can also scale out.
B: It can also scale out. To scale out, I think we just need to change the replica numbers in the TidbCluster spec to make it happen. Yeah.
B: I think there are a bunch of them, but that's probably not really related to today's demo. Yeah.
B: Cool — maybe, since the demo will take some time to load the data, I will kick off the demo and answer questions during the wait. Okay.
B: Cool, so I think we've covered the plan for today. Let me switch to this terminal. Okay — so I've already created a Kubernetes cluster.
B: kubectl get nodes — so, I've already created a Kubernetes cluster on GKE with seven worker nodes before this presentation, to simplify the demo. The cluster nodes are all located in us-central1-a — that's the availability zone. In a typical production scenario we would like to spread the workloads across multiple AZs, but for the sake of the demo we will use a single-zone cluster. And the first thing we want to do is install TiDB Operator.
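The install commands themselves weren't picked up by the transcription; the standard TiDB Operator setup is roughly the following — apply the CRDs, then install the Helm chart (the version pin here is illustrative):

```shell
# Install the TiDB Operator CRDs, then the operator itself via Helm
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.0/manifests/crd.yaml
helm repo add pingcap https://charts.pingcap.org/
helm install tidb-operator pingcap/tidb-operator \
  --namespace tidb-admin --create-namespace
```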
B: The next thing we want to do — oh, in this cluster we have seven nodes, and we want to assign one node for PD, two nodes for TiDB, and four nodes for TiKV. To tell the scheduler how to do the assignment, we need to label the nodes. So first, let's label the first node for PD.
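The labeling runs along these lines — the label key and node names are placeholders, and the real scheduling also relies on matching nodeSelector entries in the cluster spec:

```shell
# One node for PD, two for TiDB, four for TiKV
kubectl label node node-1 dedicated=pd
kubectl label node node-2 node-3 dedicated=tidb
kubectl label node node-4 node-5 node-6 node-7 dedicated=tikv
```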
B: Okay, the node is now labeled, and we will label the following two nodes for TiDB.
B: That means we are labeling correctly. The next thing we want to do is create a namespace for the TiDB cluster, so that the TiDB cluster resources will be located in that namespace. We'll call the namespace demo.
B: Now the namespace is created, and for the sake of the demo we will switch the default context to demo, so in the remainder of the code lab we won't need to type -n demo again and again. Okay. The next thing we want to do is download the sample TiDB cluster YAML, and we would like to make some changes to that YAML.
B: The first thing we want to change is the replica number for PD: by default it's three, and for this demo we want to make it one. We also don't need that much storage — for example, 10 gigabytes for PD is too much for this demo, so we make it five. And for the storage size for TiKV, we don't need 100.
B: We'll just make it smaller — 10 gigabytes is enough. And the final thing we want to change is for TiDB. Sorry, let me locate the TiDB service first — okay, here we go. To access TiDB with a MySQL client, we would like to expose the TiDB service via a load balancer.
B: Here, by default, it's using the internal one, but we want to access it externally in this demo, so we make it external.
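Put together, the edited fields of the TidbCluster manifest look roughly like this — abridged sketch; the field names follow the TidbCluster CRD, and the values are the ones chosen for the demo:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo
spec:
  pd:
    replicas: 1           # down from the default of 3
    requests:
      storage: 5Gi        # 10Gi is more than this demo needs
  tikv:
    replicas: 3
    requests:
      storage: 10Gi       # down from 100Gi
  tidb:
    replicas: 2
    service:
      type: LoadBalancer  # expose TiDB externally instead of the internal default
```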
B: I think that's all the changes we need to make to this TidbCluster object. Let's create it — kubectl create -f tidb-cluster.yaml — and in the second half of the screen, kubectl get pods -w, and we should see the pods being created. It will only take a few minutes for those pods to come up.
B: I think at this point the node is pulling some images, and after the PD instance is ready it will try to spin up the TiKV pods.
B: What we can do now is see what's inside the database. I'm just grabbing the endpoint for the TiDB cluster from the service, and then I'm going to connect to TiDB using the MySQL client.
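Grabbing the endpoint and connecting looks roughly like this — the service name is a guess based on the cluster name, and 4000 is TiDB's default MySQL-protocol port:

```shell
# Read the LoadBalancer IP of the TiDB service, then connect with a MySQL client
TIDB_HOST=$(kubectl get svc demo-tidb \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
mysql -h "$TIDB_HOST" -P 4000 -u root
```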
B: Okay, so the next thing I want to do is create the monitoring components for the cluster. I think this is the monitoring tool — let me first apply it, and then I will explain what's inside.
B: It's on GitHub — let me pull up the monitoring tool's YAML on GitHub.
B: Okay, maybe I'll just show the raw YAML. Basically, you can see from this spec that TidbMonitor is a wrapper on top of Prometheus and Grafana. Apart from those two components, there's an initializer, which has some built-in logic to load the configurations into Grafana and Prometheus — it's as simple as that. This is a simple configuration, but for production you would probably want more fine-grained configuration, like the configuration for passwords, or how long you want to persist the metrics in Prometheus — these are all configurable with fine-grained settings. Okay, so now the pods are up and running. Let's try to run the TPC-C benchmark against the cluster, so I will create that.
B: Let me create a new screen for that. Let's first grab the endpoint IP, and then run this command to load the data.
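The load command is presumably go-tpc's TPC-C prepare step — something like the following, where the warehouse count is illustrative and `$TIDB_HOST` holds the TiDB LoadBalancer IP:

```shell
# Load TPC-C data into the cluster, then run the benchmark against it
go-tpc tpcc -H "$TIDB_HOST" -P 4000 --warehouses 4 prepare
go-tpc tpcc -H "$TIDB_HOST" -P 4000 --warehouses 4 run
```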
B: Okay — as we are seeing some output, I think it is loading the data. We can switch back to the MySQL client, and at this point, when we show the databases, we see there's a new database, tpcc, being created here. Use tpcc — I think there are some tables created in that database. Show tables — and then select some rows; let's look at the items.
B: Let's see five items. Okay, you can see there is some fresh data being added to the database. It will take a while for the data to be loaded, so the next thing we want to do is check out the Grafana dashboard and also the TiDB Dashboard.
B: Let's check out the services in the system. We can see that in the default namespace only the TiDB service — that's the MySQL endpoint — is exposed using a load balancer, while the other services are all ClusterIPs. That means those services are not accessible from outside the cluster, so to connect to them we need to do a port-forward. By running this command, we are basically forwarding the Grafana service to our local port 3000.
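The port-forward is a standard kubectl command; assuming the monitor resource is named demo-monitor (the service name is a guess), it would be something like:

```shell
# Forward the in-cluster Grafana service to localhost:3000
kubectl port-forward svc/demo-monitor-grafana 3000:3000
```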
B: Okay, so now we are seeing Grafana. By default the configuration uses the default password — which is a weak password, to be honest, and we skipped the change-password step. In this dashboard you can see there are some built-in defaults for the TiDB cluster, and we can check out the TiDB details — sorry, let me change this one.
B: Oh wait, I'm typing — TiDB details. We can expand the cluster, and you can see that the cluster has just been created. Let me change the duration to 15 minutes and refresh every five seconds. You can see the cluster has just been created, and right after it was created we started seeing some load, because we are running TPC-C against it. And from this panel you can see the metrics for regions on TiKV — every instance has 39 regions.
B: I think that is because, by default, the data is replicated to three TiKV instances, so every instance has all the regions. And although we have 39 regions in total, each TiKV instance is the leader for only around one third of the regions — something like that.
B: Okay, then let's check out the TiDB Dashboard. So this is the TiDB Dashboard, and we are seeing — okay, there are some queries in the default view, and we can check out the SQL statements. The TiDB Dashboard is mostly used for troubleshooting: there's a view for SQL statements, where we can see the details — some statistics for each individual SQL statement — and we can also check the slow queries.
B: Slow queries shows statistics for the queries that take too long to finish — we are seeing that these typically take around 300 milliseconds. By clicking on a query we can actually see — oh okay, this one is not very typical; let's look at an update, something like this, yeah. For example, if we check out one of the SQL statements, we can see a detailed query plan for that statement, and this information is very useful for performance tuning.
B: Okay, so that's a very basic tour of Grafana and the TiDB Dashboard. Let's go back to the screen — the data is still being loaded. It takes some time to load the data, and since we are still loading, that's the reason we are not seeing statements like SELECT in the SQL statements section: basically all the SQL statements are inserts and writes to the database, because we are currently loading data.
B: Okay, well, it takes quite some time to load the data. At this point — originally we had 39 regions; now we have one more region, because we are continuously writing data. What we can do at this point is scale out the cluster by changing the number of TiKV replicas. So let's try to scale out the cluster.
B: Let's first check the current number of replicas in the system. Originally, in the TiKV spec — oh sorry, in the TidbCluster spec — we specified three instances; we're just confirming that we have three instances for TiKV. Then we can do a patch here: we create a patch to change the number to four, and you see that instantly a new TiKV pod is being created here. And we can check the replica number here.
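The patch can be expressed as a one-line kubectl merge patch against the TidbCluster object — `tc` is the CRD short name, and the cluster name demo is assumed:

```shell
# Scale TiKV from 3 to 4 replicas; the operator starts the new pod immediately
kubectl patch tc demo --type merge -p '{"spec":{"tikv":{"replicas":4}}}'
```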
B: So for scaling out, typically the change happens instantly after it is applied; it then takes some time for the TiKV pod to come up and be running, and there will be some latency in the dashboard — later on we are supposed to see the new TiKV in the dashboard.
A: No worries — while there's some time, we do have a few audience questions that we can get to, if that sounds good? Okay. So there was a question from Savage: can we use TiDB with older Java versions, like 8? Will it be compatible or supported?
B: I think — I'm not an expert on that, but my understanding is that TiDB has spent quite some time working on compatibility with MySQL. So if your JDK can connect to MySQL, I think it can connect to TiDB in the same way, yeah.
A: Great. And then idsploit had a comment — I think they asked before about the cluster node upgrades and so forth — and they continued: "tricky" is the word when it comes to DBs, StatefulSets, PVs, and Kubernetes, but a clear plan or process would be nice, especially with node refreshes and upgrades.
B: Yeah, yeah, okay — the upgrade story does need some expertise. Sorry, I don't really have the expertise on that myself, so I can probably get back to my team and find more resources on that.
A: Perfect. And then we had a person asking: is there a benchmark available in comparison to MySQL, MariaDB, and so forth?
B: Actually, I'm not sure about that, and I think such a benchmark could be — I won't say tricky, but it would be a little bit unfair, because MySQL is a single-instance database, so it doesn't need to worry about the latencies between different components, while TiDB is distributed.
B: TiDB is very scalable — it can handle maybe not petabytes, but terabytes of data — while MySQL would be hard-pressed to handle that much data; on relatively small datasets, though, MySQL's performance could be better. So it's really a question of performance versus scalability, you know.
B: I think the official website talks about that trade-off, but I'm not sure whether they have published benchmarks against different databases, yeah.
A: Perfect. And then there were, I think, two people asking about the Git link, so that they can follow along with the steps that you're doing.
A: I think someone maybe linked to that already, but if you have the Git link you can always share it with the attendees via the chat as well. And then there was a question — Santosh asks: we should be having a TiDB driver to connect from Java, right? But then I think someone in the chat answered: since TiDB is compatible with the MySQL protocol and syntax, you can use MySQL drivers to connect to TiDB — if you want to expand on that.
A: And then, yeah, people are asking for the Git link. And then a question: is it free to use for production environments, without any licensing complications, question mark?
A: Yeah, and then a question was asked about the architecture diagram: it uses RocksDB behind the scenes?
B: Yeah, sure. So let's get back to the panel. We can see that, after some latency, the new TiKV-3 node has come up and is running, and we've also finally loaded all the data. So instead of preparing — loading — the data, we can now actually run the test against the cluster.
B: And from the dashboard — let's try to refresh it. Okay, now, as we are running TPC-C against it, we are seeing some SELECT statements there.
B: Okay, so there are many, many things to tweak here, and there's something like query plans.
B: Let me check — okay, now, as we are seeing some output, that means TPC-C is really running. And coming back to the Grafana dashboard, we are seeing the new TiKV node, and I think there's some balancing work going on between the TiKV instances — the TiKV-3 node is picking up leaders for the regions gradually. Now there's one thing we want to try: scaling in the cluster by one node.
B: Different from the scale-out operation, which happens instantly — for scaling in the cluster, let's say if we deleted the TiKV-3 node directly, some of the leaders would not have been evicted, so that would be a problem: there would be some disruption to the workloads, and we would see a difference in queries per second, right? So what TiDB Operator does is: it will not directly remove the TiKV-3 node.
B: Okay, we can see that the change is applied, but on the bottom half of the screen the TiKV pods have not changed yet, right? Because at this point it's trying to evict the region leaders from that node, and it will take some time to evict all of them. But after the number of leaders on TiKV-3 drops to zero, we are supposed to see the pod being terminated — and we are seeing that: okay, now it's being terminated.
B: Yeah — let's go back to the TiDB Dashboard. We can refresh the statements, and we are seeing more and more SELECTs at this point. We can try the slow queries and see — okay, these ANALYZE statements are taking long, but you can see that a SELECT statement can also take seconds to finish. And at this point, I think this information could be useful for performance tuning later on, yeah.
B: Okay, now going back to the screen, we can see that the TiKV-3 node is now down, because we are not seeing its metrics here. Okay — that means the node has actually been removed.
B: Let's see how many pods are still here — okay, now we can see that only three TiKV instances are kept. I think that's the main part of my demo. Before I remove all the resources — any questions? I'll take some questions here. Yeah.
B: How do we handle split-brain? Okay — I think, from my understanding (maybe it's not correct), TiKV —
B: It's basically using Raft to do the consensus. So, for example, when you want to write some data to the database, there's only one TiKV instance taking the write request. Depending on the keys you are writing, the write can be scheduled to one particular TiKV node, but there's only one TiKV node taking that request — and if that node is down, then a re-election will happen, a new leader will be elected, and the writes will be shifted
B: — to that TiKV node. I think that's the overall theory. So yes — since there's only one instance taking the writes, I think there won't be a split-brain. Okay, yeah.
A: Yeah — there are no other audience questions currently that we haven't answered so far, but —
A: We have a few minutes still left, if anyone has questions, so please do write them in. Was there any resource, Hanlin, that you wanted people to know about going forward — regarding KubeCon or something — where people could learn more about these things, possibly?
B: Yeah, I've just shared the GitHub gist link in the chat with the staff, so later on I think we can also share this GitHub gist that I used. Yeah, perfect.
A: Yeah — so if there are any questions from the audience, now is the time to ask them. But did you have something coming up at KubeCon? I think you mentioned it before.
B: Yeah, sure — so here's a little advertisement. KubeCon is around the corner, and at KubeCon my colleagues will host some presentations. If you are interested in TiKV, please be sure to visit the booth, and also, if you are interested in chaos engineering — Chaos Mesh is also a project initiated by PingCAP — be sure to visit that booth as well. I think that's the information I have so far.
A: Perfect — great next steps and resources that people can utilize going forward. And we had a question from the audience: could it scale up vertically without downtime?
B: Yes, yes — like I mentioned before, in our environment we set up something like a VPA to scale up, for example, the memory and CPU for the pods. Yeah, that can happen, but you need to configure the VPA on your own.
A: Perfect. We do still have a few minutes, so if anyone has any questions, type them out right now and send them soon. Is there anything else you still want to share?
B: Yeah — you simply remove the cluster, and you can see that the pods are gone. You also need to delete the TidbMonitor — yeah, the monitor resource is now gone — and then you need to remove the PVCs and the persistent volumes. And I think that's it; that's basically what I wanted to cover in this presentation. Yeah.
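The cleanup steps described amount to roughly the following — the resource names and label are placeholders matching the demo cluster:

```shell
# Delete the cluster and monitor CRs, then the leftover PVCs
kubectl delete tc demo
kubectl delete tidbmonitor demo
kubectl delete pvc -l app.kubernetes.io/instance=demo
```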
A: Perfect. Yes, we have a comment from the audience on scaling vertically without downtime — they say: I think a scale-out strategy. Okay, yeah.
A: Perfect, thank you so much. As no new questions have popped up, we can start wrapping up. Thank you everyone for joining the latest episode of Cloud Native Live. It was great to have a session about how to build a multi-cloud database as a service, and I really loved the interaction and questions from the audience — so many of them, always great to see. As always, we bring you the latest cloud native code every Wednesday for the next few weeks.