From YouTube: Red Hat Advanced Cluster Management Presents: Cluster Lifecycle w/ Submariner & Storage / DR Use
Description
RHACM is a central management tool that provides capabilities to create clusters at scale. In doing so, new challenges and opportunities arise, like multi-cluster networking and DR. We demonstrate new technologies like Submariner out of the box (coming to RHACM 2.2 GA in March), and begin to dive into storage / DR use cases like using CockroachDB and Scribe.
A
Good morning, good evening, good afternoon, wherever you're hailing from. Welcome to another episode of Red Hat Advanced Cluster Management Presents. I am Chris Short, executive producer of OpenShift.tv. I am joined by not the entire RHACM team, but close. Scott Berens likes to bring the heat on these calls, so hold on tight, and Scott, I'll let you take it away with all the intros and everything.

B
Grab your hats, Chris, you know we like to bring the thunder. Maybe bring the pain for those of you in the northeast under six feet of snow. We're going to try to heat things up today with RHACM and some thrilling lifecycle scenarios around clusters and Submariner and disaster recovery, so we've really packed in a dream team. We keep setting the bar higher and higher, and your content keeps getting better and better, so we're just trying to keep up, and I just keep having to find brighter and smarter and more talented people to bring in here. So I'm going to kind of move around.

C
Sure, Scott, thanks. So my name is Randy Bruno Piberger, and I'm a software engineer from the Cluster Lifecycle squad here at Red Hat. Cluster Lifecycle is one of a few squads working on Advanced Cluster Management, and today I'll be introducing Red Hat Advanced Cluster Management. Nice.
D
Ryan Cook, I'm with the Office of the CTO. Today I'm going to actually be showing off a new application we're working on called Scribe. What Scribe's going to allow you to do is move data between clusters, do disaster recovery, and do fan-outs to multiple clusters.
E
And I'm Rafa.
B
Basically told someone to write it, but a huge brain trust of information there. We have another individual that hopefully is going to join after he literally puts out a fire, and that's Michael Elder, our Senior Distinguished Engineer in RHACM. He's been here since the start of RHACM, when it was back under IBM's purview as MCM, so he has been in this space for a couple of years. Hopefully it's all good on his end and the fire alarms that we were hearing in the pre-call roll-up were just false alarms. So with that, let's turn the mic over to Randy, who's going to kind of set the table on cluster lifecycle, make sure people understand what we're doing in that space, and bring them back up to speed with RHACM building up clusters. Go ahead.
C
Very cool. So, as I said before, I'll be talking a little bit about Red Hat Advanced Cluster Management from the perspective of a squad member from Cluster Lifecycle, and this will involve version 2.2 of our release, which is the most recent.

C
Very good, a little ahead of myself, but let's get it going, right? So Advanced Cluster Management is just one part of the larger OpenShift Container Platform, right? But by building on top of and leveraging the OpenShift Container Platform, ACM is able to bridge the gap between high-level container control via orchestration and, basically, the next logical step, which is infrastructure-level control of cluster scaling, workload and application management, policy-based governance, and monitoring, all from a single console.

C
Exactly. So, essentially, ACM will be leveraging one cluster as a focal point for control, and we call this the hub cluster; that's where ACM is installed. And through a process called importing, we then put other clusters under the purview of the hub, and we call these our managed clusters, right?

C
So with Advanced Cluster Management, we can import or manage Kubernetes clusters from major cloud providers like IBM, Amazon, Microsoft, and Google, and we're also capable of provisioning and installing all those clusters from the context of that hub cluster control plane, so from a single place. And you can also provision and create VMware clusters from that control plane.

C
Sorry, bare metal assets or VMware resources, essentially on-premises resources, also from your hub cluster. And from your hub cluster you can view your managed cluster health, you can deploy applications, you can enforce policy, and you can also target your clusters for upgrade, deletion, or de-provisioning.
C
So a lot's going on from this control plane, and we wanted to give our developers the most amount of control and a sensible amount of ability to dive deep, but also have a high level of control from the console, so they don't have to do too much legwork.

C
And today we're going to be provisioning a cluster and then upgrading it. Right now I have two clusters imported: my local cluster, which is basically the hub, an abstraction of the hub managing itself, and one that I provisioned earlier. Now I'm going to create another one.

B
This is great, so you've already got two on Amazon, right, on AWS: your local and then the one you imported. So you can run through create; you're quickly getting into the scope where you can have multiple clusters that might need to talk to each other, right? They might need some multi-cluster networking, right? Okay, cool, so show us what's happening here.
C
Absolutely. So you can see here, this is our creation wizard. Like I mentioned earlier, we wanted to abstract a lot of this process away, so that it's very easy to just grab and go and create a cluster, but we also want to give our developers the ability to access the YAML and to further customize how they want the installation to go. So right now I'm going to provision another AWS cluster.

C
I have my provider connection, which is just my credentials for AWS, already set up, and if I wanted to, I could go ahead and jump into my install-config YAML and make further edits on the YAML directly, or the cluster YAML. I don't need to, so I'm just going to go ahead and create.
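For context, a minimal sketch of the kind of install-config YAML the creation wizard exposes for an AWS cluster; the name, domain, region, and replica counts here are illustrative assumptions, not the values used in the demo:

    apiVersion: v1
    metadata:
      name: demo-cluster            # assumed cluster name
    baseDomain: example.com         # assumed base domain
    platform:
      aws:
        region: us-east-1           # assumed region
    controlPlane:
      name: master
      replicas: 3
    compute:
      - name: worker
        replicas: 3
    pullSecret: ""                  # supplied by the provider connection
    sshKey: ""                      # supplied by the provider connection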
C
And that's a good point. This release, we spent a lot of time, most of our time, realigning the UI so that we matched the PatternFly styling that was already happening in OCP, and created, not synergy, but connected styling through the product. Yeah.

C
So yeah, this is going to take some time to provision. Luckily, I already have this nice little demo cluster sitting around, and we can quite easily see when our cluster has an upgrade available.

C
You can jump directly to the OCP console for the cluster, but you can also upgrade from your RHACM console by clicking this upgrade button and selecting your next version, right? And once I do this, it's going to send a signal to the cluster itself and then the table will update. This actually takes a moment, but once it gets status back from the cluster that the upgrade is happening, this status will change to upgrading, in process or in progress. You'll see it here as well, and that takes just a few seconds.
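Under the covers, what that upgrade button ultimately changes on the managed cluster is its ClusterVersion resource; a rough sketch of the effective edit, with an assumed update channel and target version since the demo doesn't show the exact numbers:

    apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    metadata:
      name: version
    spec:
      channel: stable-4.6           # assumed update channel
      desiredUpdate:
        version: 4.6.16             # assumed target version picked in the upgrade dialog

Once the cluster version operator reports progress back, that is the status RHACM reflects in the table as "upgrading".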
B
So it's all integrated: tight finish, look and feel; create, upgrade, destroy, detach, all the lifecycle things that you can do. You're at a point, Randy, where you've demonstrated a new problem, you know, like a new opportunity, we call it, where we can take advantage of these clusters and do multi-cluster things.

B
I see Michael here, video on. Hopefully everything's okay in his house; there was a fire alarm going off, yeah.

F
Sure, my name is Michael Elder. I work in engineering on ACM, and I wanted to introduce this concept from what Randy showed us, what ACM can do managing across many clusters; we're going to see some really neat examples of what you can do with this many clusters. But I wanted to set up this concept of Submariner before we kind of see that in the flow. Does that still make sense, Scott?

F
And I still have a habit of calling it "sub-MARE-iner", and I know a lot of folks like to call it "sub-muh-REEN-er", and it wouldn't be a good community project name if there weren't multiple opinions on how to express it. So, you know, leave your opinion in the comments and let us know how you think it should be pronounced. But this project, whatever you want to call it, brings a neat capability. It lets us actually think about:
F
Okay, I've got many clusters available. Clusters are only as important as the workloads they run, so how do you make them more valuable for workloads, make them able to connect and communicate more effectively? Whereas, if you think about a traditional Kubernetes cluster, you've got networking that extends across the cluster, right? Pods have a certain network, services within that cluster have a certain network, and even though pods and containers run on different nodes within the cluster, they have a consistent networking layer. Submariner just extends that concept out and says, hey:

F
We should be able to do this across any number of clusters. In this picture we've only got two, but clearly, as Randy has already shown you, it's really easy to stand up many clusters and bring them under management, and with Submariner you'll be able to actually bring those clusters into their own consistent networking domain. Under the covers, the way this works is through... and let me go ahead and put this in present mode, in case some of these pictures and icons haven't come across clearly. We've got clusters...

E
Oh, we went ahead one slide, yeah. So this one, yes: Submariner gives you a way to create a network tunnel between clusters. In particular, as you know, with OpenShift we create an overlay SDN; that's where the pod network is and where the pods get an IP from, right? So now with Submariner you can create a network channel between these SDNs in different clusters. It's like putting a router, a switch-router, between these two SDNs, and now magically the pods can talk to each other.

E
So if you have a service in one cluster, you can discover it from another cluster using a DNS call, so it's very similar to what already happens in OpenShift. You just get a different domain to do multi, you know, cross-cluster DNS lookups, and then you get load balancing also. So if you open a TCP connection to a service IP, you get load-balanced to the pods behind that service, but that service and the pods may live in another cluster.
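To make that cross-cluster DNS lookup concrete, here is a minimal sketch of how a service is typically exported with the multi-cluster services API that recent Submariner releases implement; the service name and namespace are assumptions for illustration:

    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceExport
    metadata:
      name: cockroachdb-public        # assumed name of an existing Service in this cluster
      namespace: cockroachdb          # assumed namespace

    # From a pod in any connected cluster, the exported service then resolves under the
    # clusterset domain rather than the usual cluster-local one, for example:
    #   cockroachdb-public.cockroachdb.svc.clusterset.local

Opening a TCP connection to that name load-balances across the backing pods even when they live in another cluster, which is the behavior Rafa describes above.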
E
So you pick some nodes to be the gateway, okay, and the network channel is implemented as an IPsec channel.

E
So these nodes need to be routable between each other; the rest of the nodes don't have to be routable. And then the operator, the Submariner operator, changes a little bit of the routing rules on the normal worker nodes, so that when you try to connect to a pod in another cluster, the communication flows through the gateways.

E
Real quick, so you should be able to see my screen with my presentation here. I want to present this idea of cloud-native disaster recovery. This is an idea I've been reasoning about over the last months, and I'm coming up with this definition to draw a little bit of contrast between traditional disaster recovery and cloud-native disaster recovery. So, in traditional disaster recovery, in most cases, for disaster detection there is a human decision behind triggering the disaster procedure.
E
So we see that something is going wrong, and we decide, okay, it's time to kick off the disaster recovery procedure; it's a human decision. In cloud-native, we would like that to be completely autonomous, so the system detects that one of the data centers is down and reacts to that. The disaster recovery procedure itself, in traditional disaster recovery, can be automated, and sometimes it is automated, but many times, from what I see, it's a mix of machine and human actions.

E
We want the RTO and RPO to be either zero or near zero, so we don't have discontinuity in service and we are always completely consistent; we never lose state. And when I was explaining this to another audience, they told me all these statements are too bold: can we really be at zero?

E
Absolutely zero data loss. Also, traditionally the process owner for defining this disaster recovery procedure is around the storage team, okay, but in the future I think this responsibility will move to the application team, and they will be responsible for deciding how their application needs to deal with disaster recovery. And then, traditionally, the main technical capabilities that enabled the disaster recovery procedure were found in storage, right? So backups, volume sync, this kind of capability. But in the future, I think the capabilities will be found in networking and east-west communication.

E
Communication is very important for this new wave of applications to establish clustering, so the database can cluster across different regions, and Submariner is a way to establish this east-west communication capability when you're running on OpenShift. And then a global load balancer that can sense the health of the application it is load balancing, so a smart global load balancer that can do this autonomous disaster detection: these are the capabilities that we're looking for. So, going ahead with the demo.
E
This is the infrastructure that I prepared. So here, if you can see my pointer, I have a control cluster. I just learned that I should call it a hub cluster, but it's the same concept. It's where Submariner is running.

E
It's what I use to set up the other clusters, which are running in three different regions in AWS, and I also have on this cluster a community global load balancer operator, which will observe what these clusters are doing and program Route 53, okay, which is our global balancer; it's the DNS in AWS. And then on these clusters, what I've prepared is the channel, okay, the Submariner channel, so they're all connected.

E
The feature that, yes, Randy was showing a minute ago: yesterday these clusters were not up to the latest version, so I selected all of them and then I clicked this button here, which is now grayed out, and I just kicked off the upgrade for all of them at the same time.

E
Okay, so I was saying, I selected these three clusters and upgraded them with just a single click; it was really cool. So next, what I did on these clusters: I deployed CockroachDB.
E
CockroachDB is a NewSQL database that allows you to deal with these geographically distributed situations, and it's able to maintain consistency and availability when some of its instances go down. Okay, it has other interesting features, like linear scalability and essentially full consistency. When there is an outage, it's able to reorganize the internal data structures that it manages. So, the way I deployed it:

E
there are three instances per region for local leader election, so there is a local quorum, and then there are obviously three regions, so for a total of nine instances of CockroachDB that form a logical database, a logical instance of a database. So we can see it here.

E
Oh, I can't change it down. Okay, so I have it here. This is a web application served by the global load balancer, so I'm not sure exactly which region is giving me this UI.

B
Rafa, can you do us a favor? Sometimes the audio is getting a little loud when the microphone is too close.

B
But if you leave it at your chest, Chris can adjust the volume so that it's perfect for the listening audience. Thank you all.

E
Right, so I have a client per region that is pumping transactions to the database. Okay, this is the TPC-C standard benchmark test for transactional databases, so you can use this to measure how good a deployment of CockroachDB is, but we don't care about that now.
E
We just want clients to be doing some operations on the database, and what we're going to do is take out a region, by isolating the VPC on which the OpenShift cluster and CockroachDB are installed, and we are going to observe how the cluster, the big CockroachDB cluster, reacts. Okay, so this is where the demo may go wrong, so let's hope everything is fine.

E
So, as you see, we are pumping these transactions. I don't need to explain exactly how the test works, but you see that we are getting stats on the latency. Now, on this monitor here, on this console, I'm going to run the command that isolates the cluster in the third region. Okay, so I'm going around.

E
So, no zoom, I think, unless we... here we go.

E
Just to make sure, let's see that the variables have been set.

E
Okay, so these two commands will add deny rules for all ingress and egress traffic on the VPC in which one of the three clusters is running. I think it's the one in the west region.
E
So now, as expected, you see we're not receiving a stream of logs from this pod anymore. This pod was running inside that cluster, and we lost the connection. For these two you may see some errors for a while, but they recover pretty quickly. You see, this one had one error for a second, and also this one had an error, but they're now continuing to work correctly.

E
If we go back to the console, we see that CockroachDB has detected that it has a problem with this region here, and in a minute it's going to decide that the region is actually down. But besides that, CockroachDB is still available and still continues to work.

E
So, no intervention on my side, except creating the disaster, right? But the service continued to work, because it autonomously was able to recover: no downtime, no data loss.

E
When the original data center is recovered, do we re-swing the workload back to where it used to be? Instead, here we're going to do the same thing: we're going to re-enable the traffic between the regions and the third region, and we're going to see that the cluster just autonomously recovers without us having to do anything. So also the operation of returning to normality is handled automatically. Let me grab the script here. So again, we are now adding an allow rule for ingress and egress in that VPC.

E
Okay, so the cluster here should sense this. Well, my client has now, you know, exited, because this pod died and now it realized it, and so this client died, because this test is not resilient to this kind of failure. But if you had a client that was actually receiving traffic from a global load balancer, like a front end, when the front end comes back up it will start serving again. Okay, and here CockroachDB should come back and start recovering these under-replicated ranges.

E
These "ranges" are, I think, the word that CockroachDB uses for its own partitions, and, as you can see, these other clients have kept working without any issues.

E
I don't know if we have time to wait for these under-replicated ranges to recover fully. Yeah, Scott, if you want, we can come back to it later today.
B
I think we should let it percolate, and then, I think, there's another opportunity; we'll come back to it later. Yeah, that's... it's tremendous, I mean, the work you've done to build that out. Knowing that RHACM includes the Submariner tech preview code, knowing how that's all coming together with multi-cluster, opens up a lot of opportunities.

D
I just built out these clusters in the last... I just finished a couple of minutes ago, so let the panic and fun begin.

D
Yeah, see, I don't have a fire alarm; I just have my child next door blasting music, so I have my own little disaster going on. So, just to kind of set the context of where I'm going to start off today: I come from an ops background, and a lot of us have been there.

D
The pager goes off at three in the morning and you have to respond. Very early on, when I first saw ACM, it brought in some capabilities that would have made my life a lot easier back in the day: just the ability to go from one data center to another without intervention.
D
So, what Scribe really is, and what I'll be showing today, is rsync within Scribe. What we're going to do is have one site serving kind of as our primary data center and another site that will be our failover data center, and what this rsync capability is going to do is replicate our storage, just over and over, on whatever schedule we'd like to set. For this demonstration today I set a pretty aggressive schedule of every two minutes: send the data from Virginia to Ohio.

D
So it's really cool just to see, you know, how fast your mean time to recovery could be, and, you know, that's really dependent on your application as well. One thing I do want to add in as well, and I know Chris Short will be very excited about this: the replication is completely managed by YAML. You just take the YAML, shove it into ACM, ACM takes over, and so it's that GitOps capability of managed storage replication. That's awesome. And so what are some use cases?

D
I've talked about disaster recovery, but, you know, you might have a data center where you're, say, for example, running OpenShift Container Storage, and you might have a provider running gp2; those kind of mismatch when it comes to storage. But with Scribe you can actually set up the replication between those two completely different storage classes.
D
All right, and so today I'm going to show off this amazing DokuWiki site. I'm just going to write a quick hello to everybody, and what we're going to do is fail this application over from Virginia to Ohio. And, as I showed off at the beginning of this call, this is a brand new cluster, so what we need to do is add in our failover data center; right now I only have a primary location.

D
So we're going to generate the command to kind of add our cluster in. I'm going to copy this, and when I go back to my terminal... I believe this testing is good enough; we'll just accept it as is. If it breaks, it makes the stream much more fun.

D
Okay, so what we're going to do is add our failover cluster in. This is going to take a couple of minutes, just to get all of the, you know, kind of required components placed, so in the meantime let's go ahead and take a look at what we're actually going to do and work that one out. So, as I was saying at the beginning, we're going to kind of use RHACM's cluster placement capabilities.
D
As you see here, we have cluster replicas set to one, and I'm going to try to zoom this in even further. What this means is: if my application were to fail on the one cluster on which it's running, please run it somewhere else than where it is currently running. To me, this is the most powerful extra admin added to your team ever; it's someone that's going to say, hey, we need to switch over, without you actually needing to wake up at three in the morning and switch over.
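That "cluster replicas set to one" is the RHACM placement rule the application's subscription points at. A minimal sketch of roughly what it looks like; the name, namespace, and label selector are assumptions for illustration:

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: dokuwiki-placement        # assumed name
      namespace: dokuwiki             # assumed namespace
    spec:
      clusterReplicas: 1              # keep the app on exactly one matching cluster at a time
      clusterSelector:
        matchLabels:
          usage: dokuwiki             # assumed label shared by the primary and failover clusters

If the cluster currently selected becomes unavailable, the placement decision moves to another cluster matching the selector, which is the automatic switch-over Ryan is describing.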
D
So, what are we going to do with our data? We're going to take our primary, the sub-m1 cluster, which is using the gp2 CSI storage class, and we're going to send the data over to our sub-m2 cluster running OCS.

D
So, as you see, our little checkbox turned green, and so let's go ahead and take a look at what was actually created. Going down here... went to the wrong one, the joys of life. Let's look at our actual YAML for the resource sample.

D
And the important thing to see here is that at our destination we're going to use OCS for our volume snapshot class, as well as for our storage class to, you know, write our application to. And then down here at the very bottom, I hope you can see it... for some reason... there we go: there is the Submariner cluster IP address that allows me to be accessed between the clusters using Submariner, and then finally you'll see an SSH key, because we're using rsync; it is SSH replication between the two sites.
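For reference, a minimal sketch of roughly what that destination resource looks like in Scribe's v1alpha1 API (the project has since been renamed VolSync); the resource name, capacity, and OCS class names here are assumptions for illustration, not values copied from the demo:

    apiVersion: scribe.backube/v1alpha1
    kind: ReplicationDestination
    metadata:
      name: dokuwiki-destination       # assumed name
      namespace: dokuwiki              # assumed namespace
    spec:
      rsync:
        serviceType: ClusterIP         # the service reached across clusters via Submariner
        copyMethod: Snapshot           # snapshot each received copy so it can be restored from
        capacity: 2Gi                  # assumed size of the replicated volume
        accessModes: [ReadWriteOnce]
        storageClassName: ocs-storagecluster-ceph-rbd                       # assumed OCS storage class
        volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass     # assumed OCS snapshot class

The controller reports the address of that service and generates the SSH key pair as a secret, which is what gets copied over to the source cluster in the next steps.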
D
And this repository will be available after the call, for everybody to kind of play with, or poke at, and see everything in there. So we'll get our replication... So, as I said earlier, we are going to be incredibly aggressive with it, and we're going to try to replicate our data every two minutes. This is the fun part about the schedule.

D
Depending on your application, you might not need to replicate it every two minutes, but, you know, it's a simple Linux cron expression to be able to establish that relationship. And then, lastly, we're going to take an SSH key from the failover cluster, that's a secret that's created by Scribe, and we're going to bring it over to our source cluster.

D
You can actually, by default, bring your own keys, but I didn't want to load those into Git and then have my repository out there, and then during the demonstration somebody decides to mess with my cluster out in the world when they're scanning for keys. So let's go ahead and get secrets for DokuWiki in the context of failover.

D
As you see, it is an SSH pub key, and here's the secret. I'm going to scrub out a couple of fields before I load it in.

D
So, what this is going to do is: ACM is going to find the replication for the primary and then deploy all of those components onto our primary cluster. So, going back to ACM now, this will update momentarily and we should see a primary cluster, and it's going to get the SSH key and the replication source.
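And a minimal sketch of roughly what that replication source looks like on the primary cluster; again, the names, the address, and the secret name are placeholder assumptions:

    apiVersion: scribe.backube/v1alpha1
    kind: ReplicationSource
    metadata:
      name: dokuwiki-source            # assumed name
      namespace: dokuwiki
    spec:
      sourcePVC: dokuwiki              # assumed name of the PVC backing the wiki
      trigger:
        schedule: "*/2 * * * *"        # the aggressive every-two-minutes cron schedule from the demo
      rsync:
        sshKeys: dokuwiki-rsync-ssh-key    # assumed name of the secret copied over from the failover cluster
        address: 10.1.2.3                  # placeholder for the Submariner-reachable ClusterIP of the destination service
        copyMethod: Snapshot               # snapshot the source PVC, then rsync its contents to the destination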
D
Hey, I'm just absolutely blown away by, you know, the ability that ACM gives me to place all of my stuff without me having to place it. I feel like I was somebody's GitOps at one point in my life. You've lived that, you've been that; I've been there. So, okay, our replication source has been created, and what this is going to do... I'm going to split my terminal, just so that hopefully I can see when the pod starts, and then we can actually see the replication taking place.
A
I wish I knew what I'm seeing in there.
D
All right, so at some point it's going to be 148 or 248... 248, and this should start... just waiting, waiting, waiting. I feel like the waiting is actually more intense when it's live than, you know...

D
So let's troubleshoot live and see if we can see what's going on. Okay, so "waiting for snapshot to be bound"... it did the work while I was panicking. So let's see: oc get pvc -n dokuwiki.

F
Speaking of good dad jokes: a watched PVC never replicates.

D
So, while I dance in the background, why don't we... Rafa, if you want to show the populated CockroachDB, let's do that. Yeah, I'll have something in a moment.

E
All right, I'll take the screen again.
E
Yeah, so a few minutes after we stopped watching this, it actually recognized that those clusters were back alive, and, if you remember, we had about, I think, 80 under-replicated ranges, and very quickly they came back to being fully replicated and healthy. So the cluster... I'm not a CockroachDB expert, so I'm probably not doing it justice describing how it works, but once it recognizes that those nodes that were not reachable are back online...

E
So, if you want to be able to replicate this demo, we have all the information here. I'm not sure, Chris, how we can share these links, but...

E
Ryan, right, thanks for reminding me that I actually wanted to thank them, because they have been very helpful in advising me on how to set up this CockroachDB cluster to run the demo, and how to run the TPC-C test to get good results. It was, overall, a wonderful collaboration. Maybe, while we're waiting, a point that maybe I didn't make well enough: these pods all talk to each other; they don't talk through a load balancer.

B
Yeah, it's been interesting. I mean, I've been watching your work from afar, but seeing the improvements, and then understanding the approach, articulating the performance characteristics along the way, troubleshooting some of the latencies, but getting to your original goal, you know, slide two, which you've talked about, which is no human intervention, right, RTO and RPO near zero. I mean, you're really pushing that model, which I think is pretty unique. It's awesome to see it coming together with these tools.
D
All right, I don't know if my mic's still hot... is my mic still hot? I can hear you. Yep, okay, all right, cool. So, it's actually helpful to you to define a volume snapshot class if you intend to use one, so...

D
The issues were around the volume snapshot class not being, kind of, defined. So, if you look here...

D
Okay, so we should have that. If I don't get a volume snapshot in a second, then we may have to just go to questions.
F
It is highly available across availability zones, so out of the box, when you deploy it, it will take advantage of anti-affinity across zones within the cluster. If you are in a hyperscaler cloud, it'll put a set of pods across different AZs; if you don't have AZs, it'll at least spread them across different nodes. But really the question was asking about multiple data centers or regions, high availability across multiple regions, and for that pattern you can have two hubs that are sourcing the same policies from GitHub or from an object store.

F
We saw that pattern with a subscription coming out of a Git repo, or we've seen that in prior Twitch streams, so you can definitely do that. What we don't have out of the box today is the agent failover behavior. So if I have a cluster in region one attached to hub one, and a cluster in region two attached to hub two, they won't automatically fail over to the other hub. We've experimented with architectures for that in the past, but the complexity around it made us shy away from trying to put it in the product.

F
But if you're interested in that, or have, you know, real use cases where you're trying to do that, reach out to us and let us know and we'll work with you there. And then the other question was about service mesh and Submariner and how those two things are related. Now, really, they're somewhat orthogonal, right? Service mesh brings a set of capabilities around registration and discovery, and Submariner is really about establishing the network bridge. I wonder, Rafa, you know, based on the applications and what you've done with CockroachDB...
E
I think, well, I see the service mesh mostly working at layer seven, and Submariner working at layer three. It's true that you can also use the service mesh for doing TCP, but really where it shines is layer seven, yeah. Our product roadmap has multi-cluster support for the service mesh this year, sometime probably in the second part of the year; we haven't yet explored

E
whether Submariner is a requirement for that or not. It may be that it isn't, so if you just need to do layer seven, you may be able to do it without Submariner. But I think, in the end, used together they will allow you to handle both the layer 3 and 4, you know, policies, network policies, and the layer seven configurations, so they definitely can work together. I have not heard yet of use cases where they are needed together.
A
So we got a question on Discord yesterday and I'd like to ask it, if you don't mind. A user is having problems with distributing secrets: "I imported an existing cluster. When I do that, though, there is no ClusterDeployment to reference in my SyncSet or SelectorSyncSet. Am I just missing something?"

F
Great question. So, if you're using Hive, SyncSets are a way to push content down. Just to kind of make sure we're on the same page with what's going on there: the ClusterDeployment API object is what we use when we create or provision a cluster. You create one, and behind the scenes Hive will go off, have a provisioning job, and you're off to the races.

F
We use a SyncSet to deliver the initial agent payload into that cluster, but once the agent from open-cluster-management connects back to the hub, it's a pull model: it's connecting back to the hub and asking for state to apply, and we do that purposefully, so if there's ever a disconnect, that cluster can continue to enforce policy or other desired state.

F
So if you simply import a cluster that wasn't created through a Hive ClusterDeployment, then you can actually use a couple of different ways to deliver content to it. In the product we really focus on that concept of a subscription, so you saw Ryan use that to deliver content down, and in that model the subscription comes from a source, a GitHub repo, an object store, a Helm repository, etc., and you match it to a placement rule, and ACM will automatically deliver that content into the target cluster.
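A minimal sketch of that channel, subscription, and placement-rule pattern, with illustrative names and a hypothetical Git URL; the layout follows ACM's apps.open-cluster-management.io API, but treat the specific values as assumptions:

    apiVersion: apps.open-cluster-management.io/v1
    kind: Channel
    metadata:
      name: demo-repo                      # assumed channel name
      namespace: demo-app
    spec:
      type: Git
      pathname: https://github.com/example/demo-app.git    # hypothetical repository URL
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: Subscription
    metadata:
      name: demo-app
      namespace: demo-app
    spec:
      channel: demo-app/demo-repo          # <channel namespace>/<channel name>
      placement:
        placementRef:
          kind: PlacementRule
          name: demo-placement             # a PlacementRule like the clusterReplicas: 1 example earlier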
F
You can also use a policy. So if it's something that you just want every cluster to have, as sort of a foundational bit of config that's not specific to a particular application, a policy works in a very similar way. If you want to go underneath the hood, you can take advantage of an API object called ManifestWork. That's a very low-level primitive that we use to deliver part of the agent mechanism, and ManifestWork is declared in the open-cluster-management API repo.

F
So it's not as flexible as policies or apps, which really use placement rules to dynamically place and adjust content. But once you've imported a cluster, SyncSets won't work, because they don't have the same information that they would have for a Hive cluster deployment. What we typically walk users through is using policies and apps.

E
So you need to find a way to have a source of truth for your secrets that all of your clusters can be connected to, and from where you can source all of your secrets. To solve this problem, I personally like HashiCorp Vault a lot; I think it's a good tool for solving this problem. I'll stop there.
F
I agree with that; I think Vault's a very handy tool, yeah. You can also use an object store bucket and create an ACM channel to the object store bucket, and you can protect the object store bucket, so it's not storing secrets in Git. Another technique I've seen is using the sealed secret API, which is a community API.

F
You encrypt the payload with the public key of a particular server, and when the sealed secret arrives at the server it's decrypted with the private key, so it's easy to protect it for a particular cluster. We're interested in feedback: one of the things we have kicked around in the labs is creating a multi-cluster sealed secret, where you'd encrypt it once with a key on the hub and then, based on placement behavior...
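For illustration, a sealed secret from the community SealedSecrets project looks roughly like this once the payload has been encrypted with the target cluster's public key; the name and namespace are assumptions and the ciphertext is a placeholder:

    apiVersion: bitnami.com/v1alpha1
    kind: SealedSecret
    metadata:
      name: dokuwiki-rsync-ssh-key     # assumed name
      namespace: dokuwiki              # assumed namespace
    spec:
      encryptedData:
        id_rsa: AgB3...                # placeholder ciphertext, produced with the cluster's public key
      template:
        metadata:
          name: dokuwiki-rsync-ssh-key

Only the controller on that particular cluster holds the private key needed to turn it back into a regular Secret, which is the property Michael is describing.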
A
Yeah, I feel like we could get Christian Hernandez from GitOps Happy Hour and us all together to talk about secrets in GitOps, just in general, and see how, you know, that conversation pans out. That might be a good opportunity.

B
Hey, that was a fun day. Rafa and Ryan, I appreciate you guys, and Randy, thanks for the demo, you know, to kick us off. You guys are phenomenal. Chris, thanks for hosting us. Do we want to... Ryan, does anything else happen on your screen, or anything we want to look at, or are we all done there?

A
Be prepared... perfect, I will, I'll gird something, so yeah, thanks all. I believe I'll share the link to this in Discord, so the person that asked on Discord... I think they're watching, but I don't know for sure. I think that covers it as far as questions go. Anything else anybody wants to share before we sign off here?

F
Appreciate everyone's time, and a good conversation, yeah.

A
Until next time, yeah. Yeah: cluster, deploy, fire alarm, go. So thank you all very much for joining us today. Thank you all for watching out there; it's been a very fun day of streaming here on OpenShift.tv. Be sure to check us out tomorrow morning, starting out with the Level Up Hour at 9 a.m. Eastern, and we'll be rocking and rolling from there. When in doubt, check our calendar; I'll drop a link to that in chat right now. And until next time, everyone, stay safe out there.