From YouTube: Replatforming Legacy Packaged Applications Block-by-Block w/ Minecraft Dynatrace OpenShift Commons
Description
Replatforming Legacy Packaged Applications: Block-by-Block with Minecraft (Dynatrace)
OpenShift Commons Operator Hour
OpenShift Commons Briefing
November 4, 2020
B: Yeah, so I've actually been with Dynatrace for almost seven years now, so I'm a long-timer. I was a customer for a couple of years before that; I originally brought over my background in cloud and big data.
B
So
I
was
a
crazy
person
who
decided
to
use
apm
solutions
to
understand
what
was
going
on
with
custom
java
mapreduce
like
for
folks
that
remember
when
hadoop
was
a
thing,
and
you
know
one
of
the
things
that
you
know
I
did
many
many
years
ago
for
those
of
you
on
this
call
that
are,
you
know,
viewing
the
recording
here
that
might
remember
openshift
v2,
I
was
the
creator
of
the
the
dynastress
openshift
v2
cartridge
that
actually
allowed
us
to
inject
our
atmon
agent
into
jboth
cartridges.
B
B
A
You
what
makes
you
that
so
you
already
beat
me
to
it
that
that
that
was
a
long
time
ago.
Things
are
certainly
a
lot
different.
I
mean
we
made,
we
kind
of
really
bet
the
farm
on.
You
know
making
the
big
switch
to
100
kubernetes,
and
I
think
that
was
the
right
choice,
because
it's
really
starting
to
get
you
know
kubernetes,
obviously
has
become.
You
know
the
mainstream
way
for
for
doing
these
sort
of
things.
A
So
you
put
together
when,
when
we
were
talking
with
your
with
your
people
at
dynatrace,
we
were
like
hey
we
want
to.
We
want
to
have
you
guys,
come
on
and
be
a
part
of
our
show
today,
but
we're
not
looking
for
you
know
really
in
the
weaves
demos
of
okay.
Here,
let
me
pull
up
a
terminal
window
and
let's
edit
this
config
file
together
and
see
how
thrilling
it
is,
and
so
you
actually
put
together
a
discussion
here.
Something
involving
minecraft
is
that
correct.
B
Yep,
so
so
you
know,
I,
I
have
a
couple
of
different
things
that
that
I'll
be
talking
us
through
today,
but
you
know
given
given
the
context
of
of
the
year
and
and
how
it's
been.
A
lot
of
folks
like
myself,
have
been
kind
of
spending
time,
upgrading
our
our
home
labs,
and
you
know
finding
interesting
things
to
do
to
to
occupy
ourselves
when
we
can't
really
go
anywhere
anymore.
So
this
is.
A
Well,
that's
that
that's
pretty
cool!
I
I
would
like
to
say
that
you
know
we're.
We
are
really
happy
to
have
dinah
trace
on
here
and
and
thank
you
marcy
for
for
lining
up
michael
villager.
A
And,
specifically,
you
know
red
hat
open
shift,
because
when
customers
want
to
put
you
know
their,
I
t
into
production
in
a
multi-cloud
world,
everyone
wants
to
make
sure
that
that
that
it
works
and
it
gets
it's
supportable.
So
you
know
kudos
to
dynatrace
for
being
one
of
our
long
time,
partners
working
with
us
to
to
test
and
integrate
their
software
with
the
openshift
platform.
I
think
that
that
really
helps
customers
be
able
to.
A
Having
said
that,
with
my
gratuitous
plug
for
for
dynatrace
and
how
much
we
love
you
guys,
why
don't
you
get
us
started
on
the
content
that
you
have
mike.
B
For
sure,
thanks
again,
thanks
for
the
very
kind
words
leading
into
this,
it's.
A
Actually,
I
mean
it,
it's
easy
right.
I
mean
we,
you
know,
dynatrace
is
probably
one
of
our
closer
or
closest
software
partners.
We
work
with.
I
mean
I
bump
into
marcy
at
just
about
every
event
we've
ever
been
to,
and
you
know
the
cool
thing
about
doing
this,
especially
when
we
reached
out
to
marcy
and
said
hey.
Would
you
guys
like
to
be
part
of
our
tv
show?
A
B
Yeah
for
sure,
so
I'm
going
to
just
go
ahead
and
kind
of
get
into
it
here
and,
and
hopefully
everybody's
you
know,
seeing
the
screen
so
just
kind
of
an
overview
of
of
you
know
what
what
I'm
going
to
talk
about
here
and
again,
like
I
mentioned
before
this
was
this-
was
kind
of
the
genesis
of
a
a
couple
of
months
of
work
of
some
things
that
was
kind
of,
like
my
my
my
quarantine
project,
to
to
keep
myself
occupied
when
when
things
were,
you
know
not
looking
all
that
great
earlier
in
the
year
and
it
was
really
kind
of
a
an
interesting.
B
You
know
and-
and
that's
why
I
think
this
is
actually
a
really
great
talk
for
literally
today
right,
because
it's
something
that's
going
to
be
fun.
It's
going
to
be
a
little
light-hearted,
I'm
going
to
go
into
the
weeds
just
a
little
bit
when
I
start
talking
about
you
know
the
the
kubernetes
cpi
and
the
csi
and
stuff
like
that.
But
overall
it's
basically
like
how
can
I
play
a
game
with
kubernetes
right?
B
So
it's
a
it's
a
fun
kind
of
topic
that
I
think
is
is
going
to
be
a
little
light-hearted
just
given
how
chaotic
everything
is.
However,
while
it
is
fun,
I
actually
think
it's
relevant
for
some
of
the
problems
that
that
folks
are
encountering
now.
B
You
know
when
you
are
taking
something
that
is
perhaps
a
piece
of
commercial
office
off-the-shelf
software
and
you're,
trying
to
run
that
in
your
openshift
environment
right,
so
we're
gonna
kind
of
talk
a
little
bit
about
you
know
my
own
internal
modernization
journey
that
I've
taken
over
the
the
many
many
years
that
I've
been
providing
a
number
of
of
minecraft
instances
to
my
to
my
friends
to
collaborate
on
and
then
you
know,
moving
all
of
that
into
ocp
and
then
kind
of
at
the
end.
B
I'm
going
to
talk
about
you
know:
tangentially
related
things
around
trying
to
procure
hardware
and
stuff
like
that
when
the
worldwide
supply
chain
was
almost
completely
disconnected
so
there's
some
fun
little
fun
little
learnings
there
too,
and
and
maybe
fun
isn't
the
right
word.
But
you'll
you'll
find
out
more
when
we
get
to
that
right.
B
All
right,
so
why
minecraft
as
an
example
right,
it's
java,
based
which
is
terrific,
but
it's
closed
source
right,
so
this
isn't
a
piece
of
of
open
source
software,
one
of
the
things
that
architecturally
is
really
fascinating
about
minecraft
is
it's
a
multiplayer
game,
but
it's
effectively
single
threaded
alright.
So
what
that
means
is
everything
that
happens
in
the
game
is
all
attempting
to
happen
in
this
50
millisecond
tick.
B
The
the
game
is
designed
such
that
it
tries
to
maintain
this
20
tick
per
second
tick
rate,
and
everything
that
you
need
to
do
has
to
happen
within
that
50
millisecond.
Then
that
includes
all
of
the
players
on
the
servers
on
the
server
either
placing
or
breaking
blocks
what
the
blocks
are
doing
right.
So
is
it
a
piece
of
glowstone
that's
lit
up?
Is
it
redstone
logic?
That
is,
you
know,
making
machines?
Do
things?
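The tick budget described above can be made concrete with a little arithmetic. This is an illustrative sketch, not anything from the talk; the 110 ms tick duration is a made-up figure, chosen so the resulting backlog matches the 24-second lag mentioned later in the discussion.

```python
# Sketch of the fixed-tick budget: a Minecraft-style server targets
# 20 ticks per second, i.e. a 50 ms budget per tick.
TICKS_PER_SECOND = 20
TICK_BUDGET_MS = 1000 / TICKS_PER_SECOND  # 50 ms

def backlog_after(tick_times_ms):
    """Return how many milliseconds of game time the server has fallen
    behind after processing the given ticks: any tick that overruns its
    50 ms budget pushes every later tick back by the overrun."""
    backlog = 0.0
    for t in tick_times_ms:
        backlog += max(0.0, t - TICK_BUDGET_MS)
    return backlog

# 400 ticks (20 seconds of game time) that each take 110 ms instead of 50:
# the server ends up 24 seconds (24000 ms) behind.
print(backlog_after([110] * 400))  # -> 24000.0
```

This is why overruns compound: the game never gets extra budget back, so sustained 60 ms ticks fall further and further behind until the server starts skipping ticks.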
B
Is
it
you
know
something
in
modded
minecraft,
which
is
crazy
and
that's
kind
of
what
I'm
talking
about
today
in
modded
minecraft,
you
have
things
like
computers
that
are
inside
of
minecraft
running
lua
script
right.
Some
crazy
person
created
a
mod
that
runs
inside
of
the
minecraft
jvm
and
actually
spawns
kvm
virtual
machines
that
you
can
control
from
inside
of
minecraft.
B
You
know
there's
another
mod
out
there
that
actually
lets
you
administer
your
kubernetes
cluster
from
inside
of
minecraft.
You
know
representing
pods
as
pigs
and
chickens
and
so
on.
Inside
of
the
minecraft
instance.
It's
all
totally
fascinating,
but
that
same
50,
millisecond
tick
also
has
to
represent
what
all
of
the
monsters
and
things
like
that
in
the
game
are
doing,
and
I
use
monster
to
kind
of
mean.
B
One
of
the
things
that's
really
fascinating
is
with
with
modded
minecraft
again
there's
all
this
extra
behavior
that
have
to
still
occur
inside
of
that
50
millisecond
game,
tick!
Right
what
happens
if
your
actions
take
longer
than
50
milliseconds?
B
Is
they
start
to
back
up
and
eventually
they
will
be
skipped,
and
sometimes
this
can
get
really
bad
and
you
might
end
up
skipping
several
seconds
worth
of
changes
to
the
game
world
right.
So
you
know
if
you're
sitting
here
and
you're
down
in
a
cave
and
you're
breaking
block
to
try-
and
you
know
you
know-
to
try
and
get
to
some
diamonds
or
something
like
that
or
gold.
What's
going
to
end
up
happening,
is
this
server
will
reset
back
to
the
state?
B
It
was
a
couple
of
seconds
ago
and
all
of
a
sudden,
those
blocks
that
you
broke
will
reappear
again
or
the
block
that
you
play
will
all
of
a
sudden
disappear
right.
You
know,
and
and
and
folks
complain
about
that
as
as
viewed
as
lag
it's
a
pretty
common
thing,
everybody
knows
about
it,
everybody
kind
of
gripes
about
it.
The
other
interesting
thing
here
as
well
is
that
this
is
a
pretty.
B
This
is
kind
of
sort
of
a
almost
a
worst
case.
Example
for
monetization,
because
there's
really
significant
persistent
disk
requirements
here,
so
the
minecraft
world
itself
is
like
several
gigs,
and
you
know
the
acta
to
that
data
needs
to
be
pretty
low
latency
and
then
you
need
some
place
to
put
backups
as
well,
which
are
also
pretty
large.
B
So
it's
all
you
know
you
could
almost
think
of
it,
maybe
as
a
little
bit
more
analogous
to
you
know
something
like
a
database
as
opposed
to
you
know
something
that
perhaps
a
more
modern
application
right.
So
I
thought
it
was
a
really
great
kind,
of
example,
of
you
know
how
to
modernize
something,
how
to
re-platform
something
where
you
can't
actually
adjust
the
code
and
something
that
really
barely
holds
it
together
to
begin
with
right.
B
So
what
were
my
my
early
steps
towards
modernization
all
right,
so
the
first
things
that
I
did
way
back
when
this
was.
You
know
a
project
that
I
undertook
many
many
years
ago
to
stand
up
a
file
server
in
my
basement
I
was
like
okay,
you
know,
as
as
many
other
technologists
do.
B: ...a Xeon CPU, ECC memory, all those good server, enterprise things. So I built myself a server that I ran in my basement, and I had my Minecraft workload running in a virtual machine. I was using ZFS as my file system, and I was using a number of...
B
Traditional
hard
drives
to
store
that
data
right,
and
this
worked
for
a
couple
of
years,
but
as
modded
minecraft
started
to
get
worse
and
worse
and
all
the
things
that
they
were
starting
to
to
slam
into
that
all
those
things
that
were
trying
to
happen
in
that
50
millisecond
game
loop,
it
just
it
just
wasn't
working
out
right.
This
is
a
common
error
message
that
you
would
see
when
when
something
like
that
happened
right,
you
see
this
error
message
in
the
log
thing.
B
Can't
keep
up,
did
the
time
change
or
is
the
server
overloaded
right
and
the
answer
is.
The
server
is
always
overloaded
right-
and
in
this
case,
when
I
mentioned
before,
the
server
was
actually
almost
24
seconds
behind
what
was
actually
supposed
to
be
happening.
So
when
it
realizes.
B
Traditional
hard
drives,
oh
right.
So
when
we
talk
about
spinning
rust,
we
we
we're
basically
talking
about
traditional
hard
drive,
because
the
platters
are
metal
right
and
and
if
hard
drive
actually
rusts,
that's
probably
bad.
I
don't
know
how
that
would
actually
happen,
but
what
you
kind
of
sort
of
jokingly
refer
to
you
know
old-fashioned
hard
drives
as
spinning
rust.
B
Yeah,
well,
all
those
things
are
definitely
happening,
but
again
this
is
a
more
traditional
phrase
to
refer
to
our
old-fashioned
hard
drive
right.
So
obviously,
my
basement
is
not
a
real
climate,
controlled
data
center
with,
like
you,
know,
halon
and
all
that
other
fancy
stuff.
So
you
know
it's
a
worst
case
scenario.
B
You
know
literally
having
the
vacuum
bugs
out
of
the
servers
every
once
in
a
while,
because
it's
warm
and
they
like
to
go
there.
Anyways
yeah,
you
know
kind
of
it
as
I
eventually,
but
this
is
like
a
a
real
example
of
when
things
bog
down
the
user.
Experience
is
terrible
because
it's
like
the
whatever
you
did
for
the
last
24
seconds
just
didn't
happen
right.
B: So Docker started to be a thing that people were talking about, and I'm like, okay, Docker is a nice thing to do here, because it's still going to allow me to sort of isolate things from the underlying host. I really didn't want this running on the actual host itself, because I had people from the public internet connecting into my Minecraft game, so it's not necessarily...
B
You
know
the
the
same
type
of
standards
that
you
might
have
for
a
real
piece
of
enterprise
software.
When
it
comes
to
security,
you
know,
jvm
does
a
pretty
good
job
of
handling
some
of
that,
but
for
the
most
part
it
was
not
something
that
I
wanted
running
pure
bare
metal
all
right,
but
docker
got
me
too
close
to
bare
metal
performance.
B
There
is
a
little
bit
of
overhead
that
is
arguable
and
discussed
quite
frequently
on
the
internet,
but
docker
allowed
me
to
have
the
isolation
and
allowed
me
to
have
near
bare
metal
performance,
and
then
it
was
a
lot
easier
for
me
to
allow
the
minecraft
instance
to
have
access
to
an
ssd
that
I
have
in
the
server
and
then
also
the
what
I
now
consider
slow,
zfs
storage
that
I
had
in
that
server
as
well
right.
B
So
it
allowed
me
to
take
the
world
itself
run
that
on
an
ssd
with
all
the
great
benefits
of
that.
But
then
it
allowed
me
to
use
the
slower
and
cheaper
hard
drive
for
backup
right.
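That SSD/HDD split can be sketched as a `docker run` invocation. This is illustrative only, not the speaker's actual command: the host paths, port, environment variable, and image name are all placeholders.

```shell
# World data on the fast SSD, backups on the slower ZFS pool.
# Bind mounts, matching the setup described (Docker volumes came later).
docker run -d --name minecraft \
  -p 25565:25565 \
  -v /mnt/ssd/minecraft/world:/data/world \
  -v /mnt/zfs/minecraft/backups:/data/backups \
  -e JVM_OPTS="-Xmx10G" \
  example/modded-minecraft:latest
```

The design point is simply that each mount can live on whatever storage tier suits its access pattern: low-latency for the live world, cheap and large for backups.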
B
So
I
will
mention
as
well.
You
know
again
running
things
via
docker
fairly
traditional.
You
know
type
of
implementation
there,
but
I
do
want
to
kind
of
another.
You
know
make
a
joke
poke
upon
it.
Myself
like
this.
That's
a
really
old
way
to
do
this,
because
I
was
still
using
bite
mounts,
because
this
was
something
that
I
did
before.
B
We
even
had
volumes
in
docker
right
so
again,
as
I
started
looking
into
this
and
and
kind
of
starting
my
project
and
realizing
that
I
was
still
still
experiencing
a
lot
of
those
kind
of
issues
in
the
environment.
Even
when
I
had
that
close
to
bare
metal
performance,
you
know
with
docker,
I
was
still
having
problems.
My
users
were
complaining
that
there
was
lag.
You
know
anytime.
I
had
more
than
two
people
on
the
server
at
once.
It
was
still
a
pretty
bad
experience,
so
I
dropped
on
a
trace
on
it.
B
One
of
the
nice
things
about
you
know,
working
for
dynatrace
is.
I
can
actually
deploy
down
a
trace
in
my
home
lab
and
when,
when
we
started
kind
of
seeing
the
explosion
of
docker
having
the
one
agent
on
the
underlying
host
allowed
me
to
automatically
monitor
everything
that
was
running
at
the
docker
container,
without
having
to
figure
out,
you
know
how
to
add
one
agent
to
the
container
files
with
them
and
all
that
other
garbage.
It
basically
just
worked,
which
was
nice
I
didn't
have
to.
B
I
didn't
have
to
mess
with
it
right,
but
this
you
know
basically
what
I
what
I
did
here
is
kind
of.
You
know.
I
guess
you
could
consider
the
step
two.
I
don't
know,
but
I
assessed
my
current
states
to
kind
of
see
you
know
what
what's
the
footprint
of
my
modded
minecraft
instance
right.
This
is
the
same
thing
that
you
would
kind
of
do
if
you're
looking
to
move
a
piece
of
more
traditional
software,
and
I
could
see
that
I
was
pretty
much
consuming
an
entire
core
just
about
24
7..
B
You
know
in
a
in
a
12
core
machine,
that's
about
six
to
eight
percent,
and
then
we
can
see
as
well
that
the
memory
utilization
is
crazy
and
even
with
that
much
memory
allocated,
we
still
have
some
pretty
significant
gc
pauses
on
occasion
as
well
right,
so
we're
using
almost
10
gigs
of
of
memory
and
an
entire
core
of
a
12
core
cpu
right.
B
So
I
had
a
good
sorry.
Oh.
B
So
the
next
thing
that
I
wanted
to
do
is
I
wanted
to
actually
understand
how
long
a
tick
actually
take
right,
and
this
is
kind
of
a
fascinating
process
with
with
something
like
minecraft,
because,
again
looking
at
this,
like,
we
would
with
a
piece
of
commercial
office
off-the-shelf
software.
B
You
know
we're
not
gonna
have
access
to
the
source
code
right
and
it's
even
worse
with
minecraft,
because
you
know
all
of
the
the
functions
and
classes
and
things
like
that
are
actually
obfuscated
right,
because
mojang
didn't
want
folk
to
actually
easily
understand
what
was
going
on
here,
but
because
minecraft
became
so
popular
with
the
modding
community
around
changing
how
minecraft
operated
and
adding
all
this
extra
functionality
to
it.
B
There's
the
mod
coder
pack
right,
which
actually,
on
a
regular
basis,
export
a
csv
of
de-obfuscated
function,
names
and
things
like
that,
and
then,
additionally,
I
was
able
to
use
dynatrace
to
actually
crack
ppu
utilization
right
on
a
kind
of
method
by
method
basis,
and
I
was
able
to
find
that
this
you
know
function
underscore
71,
217
e
was
pretty
significant
when
it
comes
to
cpu
consumption
and
then
cross
referencing
that
with
the
mob
coder
pack,
I
found
that,
yes,
that
was
basically
the
best
representation
of
the
master
tech
threat
right
so
again
using
dynatrace.
B
Then
I
could.
I
could
basically
tell
dynatrace
hey.
Normally
our
transactions
start
with
some
sort
of
web
request
right.
That's
what
the
you
know.
Most
modern
architectures
are
doing,
but
here's
an
example
of
something
that
isn't
actually
speaking
http
right.
So
I
define
an
entry
point
manually
based
on
that
you
know:
function,
71,
27,
217,
p,
right
and
now
dynastrace
is
going
to
stay
every
time.
B
It's
a
that's
a
new
transaction
right,
so
that
allows
me
to
you
know
better
understand
the
the
response
time
for
those
ticks
and
understand
that
transaction
rate
there
right-
and
we
can
actually
see
very
easily
here
in
this
environment
that
those
slowest
five
percent
of
ticks
were
pretty
darn
close
to
50,
milliseconds,
literally
all
the
time
right.
So,
regardless
of
whether
or
not
anybody
was
even
on
the
server,
we
were
pretty
close
to
that
50
millisecond
point
all
the
time
right.
So
something
had
to
be
done.
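The "slowest five percent of ticks" figure above is just a 95th percentile over tick durations. A minimal sketch of that calculation follows; the sample data is made up for illustration, not taken from the talk.

```python
# Summarize tick durations the way the dashboard does: find the value
# below which 95% of ticks fall (nearest-rank percentile).
def percentile(samples, p):
    """Nearest-rank percentile of a non-empty sample list, p in [0, 100]."""
    ordered = sorted(samples)
    rank = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[rank]

# 100 made-up tick durations in ms: mostly healthy, with a slow tail
# hovering at the 50 ms budget.
ticks_ms = [30] * 90 + [48, 49, 49, 50, 50, 51, 52, 55, 60, 72]
print(percentile(ticks_ms, 95))  # -> 50
```

When the p95 sits at the budget even with nobody logged in, any extra load immediately pushes ticks over 50 ms, which is exactly the situation being described.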
B
I
had
to
you
know,
move
this
forward
to
some
more
modern
hardware
right,
so
you
know
what
what
better
choice
than
something
like
openshift
container
platform
right.
I
wanted
the
advantage
right.
I
wanted
a
good
excuse
to
update
my
home
lab.
I
wanted
to
move
forward
from
a
pretty
darn
old
xeon
to
take
a
look
at
you
know
our
new
epic,
rome
cpus,
which
everybody
was
kind
of
talking
about
it,
was
the
new
hotness
at
the
beginning
of
the
year.
B
A
My
question
for
you,
I
noticed
when
you
when
you
were
listing
out.
You
know
your
operating
systems
that
you
were
using.
You
know
earlier
on,
you
listed,
you
know
ubuntu,
as
as
a
as
a
you
know,
an
upstream
project.
Why?
Wouldn't
you
use
native
kubernetes
for
this,
as
opposed
to
open
shift.
B
Well,
you
know,
that's
a
that's
a
great
question.
You
know
I'm
actually
using
openshift
because
kind
of
my
job
to
investigate
the
the
the
capabilities
of
openshift
as
opposed
to
you
know
some
of
the
other.
You
know
kubernetes
offerings.
B: One of the things that I found fascinating was how OpenShift is secure by default and kind of forces you to follow some best practices. I actually found that out the hard way, in a few slides, when I talk about my new Dockerfile: I found that what worked on other Kubernetes flavors actually didn't work on OCP, and that's because I wasn't following best practices.
A: Oh, I've got this one here. So Justin asked a question a couple of minutes ago: he'd be very curious if any JVM optimization comes out of this, to get rid of the GC and heap-size hog.
B
Yep
yep,
so
that's
it.
That's
a
great
question
where
I
am
at
now
with
it
is
the
result
of
some
pretty
hefty
optimization
efforts,
just
in
order
to
get
it
to
run
on
my
old
hardware
that
I
haven't
revisited.
B: However, folks in the community have found some pretty good performance improvements moving to alternative JVMs. Right now I'm using OpenJDK, and so one of the things, if time allows in the future, is I want to look at some of the other JVMs that have been known to work, to see if some of those improve things. I seem to recall somebody saying that Graal, for example, actually works really well for them.
A
Okay,
just
one
one
last
one,
then
I'll,
then
I'll
stop
interrupting
chris
wants
to
know
or
what
shari
wants
to
know
is
michael
using
ocp
on
ubuntu,
which
version,
etc,
etc.
B
Ubuntu
is
the
old,
the
old
platform
right,
that's
the
old
platform,
so
now
that
I'm
using
ocp
right,
I'm
using
ocp
on
vsphere,
right
and
you're
kind
of
talking
about
some
of
these
things
here,
because
one
of
the
things
that
I
really
found
fascinating
will
be
my
next
slide,
where
I
talk
about
how
to
get
ocp
running
really
nicely
on
top
of
vsphere,
but
I
am
since
it's
ocp,
I've
deployed
4.5.11
with
the
installer
right,
so
it's
still
coreos
under
the
cover
right.
So
ubuntu
is
not
a
part
of
this.
B
This
particular
deployment
anymore.
It's
all
all
red
hat
all
the
time
except
for
the
v
sphere,
part,
but.
B
Cool
awesome
all
right
so
when
it
came
time
to
build
out
this,
this
fancy
schmancy
new
home
lab,
which
again
is
a
pretty
it's
a
beefy
home
lab.
I'm
I'm
not
gonna
lie,
but
you
know
I
wanted
to
take
advantage
of
vsan
because
the
the
vsan
kind
of
felt
familiar
to
me
based
on
again
that
that
kind
of
hadoop
experience
of
keeping
the
compute
and
storage
together.
B
So
I
kind
of
wanted
to
experiment
with
a
so-called
hyper-converged
infrastructure
right
and
I
wanted
to
do
all
flash
v
sand
because
it's
20
20.
So,
let's
you
know,
take
spinning
rust
out
of
the
picture
and
I
was
able
to
source.
You
know.
Vsphere
is
pretty
particular
about
the
hardware
you
use.
B
It
complains
pretty
heavily
if
you
use
something,
that's
not
on
the
hardware
compatibility
list,
so
I
wanted
to
be
certain
that,
at
the
very
least
with
all
flash
vsan,
you
basically
have
a
cache
drive
and
you
know
what's
effectively
the
storage
drive
right
and
I
wanted
to
make
sure
at
the
very
least
that
that
the
cash
tier
was
on
the
hcl.
B
So
I
was
able
to
find
some
used
intel
ssd
on
ebay,
and
then
I
used
kind
of
garbage
tier
ssds
for
the
capacity
tier
and
vsphere
complains
about
it,
but
it
actually
worked.
It
was
also
an
opportunity
to
upgrade
to
10
gigabit
networking
which,
which
I'm
going
to
talk
about
a
little
later
too,
because
that
was
not
without
its
challenges
right.
B
So
I've
got
this
fancy
schmancy
vsan
cluster
right.
So
now
I
run
around
an
ocp
on
it,
and
this
is
where
things
kind
of
get
fascinating,
because
I
think
we're
at
a
we're
at
a
unique
kind
of
threshold
or
crossroads.
Here.
I
don't
know
if
I
want
to
say
crossroads,
but
we're
the
kubernetes
community
is
kind
of
at
an
interesting
point,
because
every
kubernetes
deployment
is
going
to
have
a
cpi.
That's
the
cloud
provider
interface.
B
That's
what
allows
kubernetes
to
work
with
all
the
underlying
pieces
of
uriah
right!
That's
how
it
you
know
worked
with.
You
know
the
the
storage
and
all
that
kind
of
other
thing
right.
So
now
we
have
this
fascinating
time
where
you
have
the
entry
ppi,
which
is
what
part
of
core
kubernetes
and
you
have
the
out
of
tree
cpi,
which
is
something
that's
provided
by
the
cloud
provider
right.
So
in
this
case,
vmware
has
their
own
out
of
tree
cpi,
which
allows
them
to
control
the
release
caden
right.
B: Now you have this new Container Storage Interface, the CSI: this is a new way of abstracting the storage from the container orchestrator, and it works hand in hand with the CPI to provision storage. So when I need a Kubernetes volume and I want to dynamically allocate it, the CSI now is what's going to handle talking to vSphere, creating that new piece of storage, and mounting it on all the nodes.
B
So
that's
kind
of
the
new
fancy
way
to
do
this
with
was
it
v
b,
sphere,
6.7,
u3
and
beyond?
I
think
it
is
I'm
using
vsphere
7,
but
this
is
basically
and
then
you're
actually
going
to
see
all
those
volumes
in
the
vsphere
ui
as
well
and
vsphere
will
tell
you
you
know
which
pod
you
know
a
lot
of
information
around
how
that
storage
is
being
mounted
inside
of
kubernetes.
So
it's
kind
of
a
great
integration
piece
there
and
it
really
works
really
well
inside
of
ocp
right.
B
So
it's
not
something.
That's
in
ocp
out
of
the
box
again,
because
you
know
vmware
is-
is
responsible
kind
of
for
for
distributing
the
the
cpi
in
the
csi,
but
is
a
fairly
trivial
process
to
get
this
up
and
running.
B
I
was
actually
expecting
it
to
be
more
difficult
than
it
was
because
I
I
had
attempted
to
do
this
with
another
kubernetes
offering
like
seven
or
eight
months
ago,
I'm
kind
of,
and
it
was
something
that
I
had
a
lot
of
difficulty
with,
but
luckily
the
community
has
been
all
over
getting
the
new
out
of
tree
cpi
working
inside
of
ocp.
B
So
I
found
some
great
instructions
that
I
that
I've
linked
to
here
I
did
have
to
make
a
couple
of
changes
to
the
underlying
vms,
because
the
the
openshift
installer
creates
vms,
with
kind
of
an
older
compatibility
mode
for
vsphere.
I
think
it
was
version.
13,
I
think,
sounds
right.
B
So
I
had
to
upgrade
that
and
then
there's
also
kind
of
a
toggle
that
you
need
to
enable
for
all
the
vms,
which
you
know
that
uuid
just
kind
of
gives
a
little
bit
more
context
around
which
vm
is
mounting,
which
piece
of
storage
everywhere
and
then
it's
just
a
couple
of
you
know:
oc
commands
to
create
secret
to
apply
some
manifests.
B
You
know
creating
some
roles
and
all
those
fascinating
things
and
then
a
controller
and
a
demon
set
and
then
basically
you're
giving
it
your
vsphere
information,
and
then
that
allows
this
new
out
of
tree
cpi
to
talk
to
vsphere,
to
provision
the
things
that
you
need
in
your
cluster
right
again.
So
now
that
I've
got
the
out
of
trace
dpi
deployed
and
I
have
access
to
the
new
vsphere
csi.
B
It
allows
me
to
kind
of
create
new
storage
glasses
right,
so
I
create
two
storage
glasses,
one,
that's
the
vsan
flash
and
one
that's
my
old
spinning,
rust
vfs
exported
via
nfs
via
vsan.
So
that's
obviously
not
super
performant,
but
you
can
see
here
that
it's
pretty
simple
to
roll
this
out.
You
basically
just
give
it
the
datastore,
url
and
you're
good
to
go
right,
but
I
do
want
to
call
something
out
here,
pretty
specifically
right.
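For reference, a storage class backed by the vSphere CSI driver looks roughly like this. This is a sketch, not the manifest from the talk: the class name is illustrative and the `datastoreurl` value is a placeholder you would copy from your own vSphere UI.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-flash
provisioner: csi.vsphere.vmware.com   # the out-of-tree vSphere CSI driver
parameters:
  # Placeholder datastore URL; copy the real value from the vSphere UI.
  datastoreurl: "ds:///vmfs/volumes/vsan:0000000000000000-000000000000/"
```

The `provisioner` field is what routes volume requests to the CSI driver; everything else is driver-specific configuration.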
B
Do
you
want
to
make
sure
that
that
you're,
using
this
this
new
one
and
one
of
the
interesting
things
is
this
is
all
still
new
enough
that
sometimes
you'll
find
instructions
referring
to
the
old
way
as
opposed
to
the
new
way
right
and
that's
the
great
thing
about
technology
is.
If
you,
google
things,
you
can
find
all
sorts
of
conflicting
answers,
so
you
have
to
kind
of
use.
B
Your
head
once
in
a
while
right,
so
obviously,
then
you
know
what
I
was
able
to
do
here
is
created
a
couple
of
pvcs
right.
The
pvc
are
what's
going
to
allow
those
volumes
to
get
created
dynamically
because
ain't,
nobody
got
time
to
reprovision
storage.
That
just
sounds
crazy
to
me
and
that's
not
why
I
moved
to
kubernetes.
B
So
this
allows
me
to
to
basically
just
let
all
that
underlying
tech
provision,
the
storage
for
me.
I
just
tell
who
is
what
I
need
right?
I
need
20
gigs
of
fast
storage
and
I
need
100
gigs
of
flow
storage.
Who
really
needs
to
go
figure
that
out
for
me,
and
it
did.
It
was
great
right,
so
those
persistent
volumes
they
get
mounted
as
volumes
in
my
deployment
right.
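The two claims described here might be sketched like this. The claim and storage class names are illustrative assumptions (they would have to match the classes created earlier); the sizes follow the 20-gig/100-gig figures from the talk.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: world-claim            # fast storage for the live world data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsan-flash
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-claim           # slow, cheap storage for backups
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: zfs-nfs
  resources:
    requests:
      storage: 100Gi
```

Applying these is all it takes for dynamic provisioning: the CSI driver sees the claim, creates the volume in vSphere, and binds it.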
B: So in my manifest for my app, I just say: hey, take that world claim, which was the fast storage, and mount that as the Minecraft data volume, and I want you to put that in /home/minecraft/Enigmatica2/world. That's actually old; I thought I changed that, but I had actually moved that mount path to /data/enigmatica2, and you'll see that when I show my new Dockerfile here in a second. But basically, what's going to happen here is the same way that I did things in Docker.
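The wiring described above, as a Deployment pod-template fragment; the image name is a placeholder, and the claim and volume names are illustrative, not copied from the talk.

```yaml
# Pod template fragment: mount the fast-storage PVC at the world's path.
spec:
  containers:
    - name: minecraft
      image: example/modded-minecraft:latest   # placeholder image
      volumeMounts:
        - name: minecraft-data
          mountPath: /data/enigmatica2         # the updated mount path
  volumes:
    - name: minecraft-data
      persistentVolumeClaim:
        claimName: world-claim                 # the fast-storage PVC
```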
B: I've now basically got this storage that needs to be persistent, mounted at that file-system path inside of my pod. So again, as you asked me earlier, why use OCP for this stuff: one of the other really interesting things that I encountered as I was going through this...
B: ...is that what I had done and experimented with on some other Kubernetes distributions didn't actually work in OCP, and that's because so many things are secure by default with OCP. I had a lot of file-system errors due to the way that a random UID gets assigned to the process that's running inside of the container.
B: So I had to fight that a little bit, and I'm sure there is maybe a better way to do this, but what I did is I just chmod 775'd all the files that my process needs to have access to, and that got me through it. But as a part of this, revisiting my Dockerfile actually resulted in me adopting some other best practices and so on, and, don't laugh at me, I went from a 1.8-gig Docker image down to a 600-meg Docker image. And 600 megs is about as good as it's going to get, because the unzipped server files are actually about that big. And again, this is something where I'm actually using the upstream OpenJDK image, and as I experiment with some other JVMs and things like that...
B: I might start to experiment with that a little bit more, but for now this works, and simple is best when it comes to things like that. And then, if we look at the deployment in full, there are a couple of interesting things that I do want to call out here.
B: This is a monolith that you can't scale out. If we went back, you'd see that the PVC is ReadWriteOnce, and that's because we can't have multiple processes writing to the same storage. It's basically one replica, and that's it: we can't scale this one out. You can scale it up, which is kind of sort of what I did here by getting some new hardware, but you can't scale out in this instance.
B: Oh, yeah, yeah. I mean, you can actually do an `oc get sc` to list the storage classes. You can do the same thing to get the persistent volumes as well, or if you're old-fashioned, or maybe a little bit more used to kubectl, you can do the same thing with kubectl.
B: Great. Well, you know, I may now have some questions, as there are little things that I've kind of experienced throughout this as well, yeah.
B
Sorry,
okay,
that's
fine
too,
and
and
obviously
I'm
making
all
my
own
friends
at
red
hat
as
well.
So,
like
you
know,
talking
to
kevin,
bear
and
things
like
that.
So
I'm
sure
I
can
get
my
own
answers
right.
So
so,
as
I
look
at
my
my
my
deployment
manifest
here,
you
know
there's
a
couple
of
other
things
that
I
can
improve.
B
You
know
some
of
the
environment.
Variables
might
be
nice
to
be
in
a
config
map
or
something
like
that
or
or
maybe
even
as
a
secret,
because
you
know,
for
example,
the
default
op
is
sort
of
kind
of
secret,
like
maybe
that
would
be
better
served
in
a
secret
one
of
the
other
interesting
things
that
I
had
to
do
recently
as
well
was
you'll
see
this
manifest
is
actually
just
kind
of
a
standard,
docker
hub
type
of
image.
B
I
I
did
move
to
temporarily
move
to
harbor,
because
I
have
a
harbor
instance
running
in
my
basement,
but
I
think
I
might
you
know,
make
that
an
image
stream
as
well,
just
due
to
all
the
changes
that
have
happened
with
docker
and
you
know
only
being
able
to
fetch
a
certain
number
of
images
per
hour
or
whatever
it
is,
and
and
if
things
don't
get
accessed
in
a
while
they
get
deleted.
So
I'm
kind
of
experimenting
with
with
some
other
ways
to
to
deal
with
that.
B: So obviously, the fact that I'm on OCP now means that I had to iterate over my images a couple of times before I got something that I liked the way it worked. All right, awesome. So now I've got that manifest all set up, and then I basically get a JVM that nobody can access from outside of the cluster. And since this is a multiplayer game server, by itself it's not really doing anything valuable without being able to connect to it.
B: So I experimented with a couple of different things here, but our kind of normal ingress controllers, with HAProxy and stuff like that, don't necessarily make sense here, because Minecraft isn't HTTP, so it's not really going to work that way. I'm deploying this to my basement, so it's not sitting on GCP, AWS, Azure, and so on, so I don't have a real load balancer.
B
Yet. I'm sort of kind of thinking about trying to buy an F5 from eBay or something like that, but I don't have a real load balancer yet, so I used MetalLB. MetalLB is, you know, pretty much the kind of thing that folks use in this type of scenario. As I got through what I was doing here, I did find that somebody at Red Hat actually built an operator to manage this, and that was actually something that I didn't see until yesterday evening.
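Exposing the server through MetalLB could then be as simple as a LoadBalancer Service; this is an illustrative sketch (the name and selector are assumptions, though 25565 is Minecraft's standard server port):

```yaml
# Hypothetical sketch: expose the Minecraft pod via a LoadBalancer
# Service. With MetalLB installed, the Service is assigned an IP
# from MetalLB's configured address pool on the local network.
apiVersion: v1
kind: Service
metadata:
  name: minecraft                 # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: minecraft                # must match the Deployment's pod labels
  ports:
    - name: minecraft
      protocol: TCP
      port: 25565                 # Minecraft's default server port
      targetPort: 25565
```

Because this is plain TCP rather than HTTP, a Service of this type works where the router/ingress path would not.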
B
Don't actually do this. I included this as an example, but I really, really hate when people just kubectl apply a file directly from GitHub, or a file from the internet, or, you know, pipe curl to bash or whatever. We always include that in our directions, as tech companies and so on, but what we really hope is that somebody actually downloads that file and looks at it first before they apply it to their cluster, yeah, right. So I've got that.
B
You know, it's written this way to make things nice and concise, right, but let's not actually do this anymore; it's just not good. All right. So again, as I mentioned before, I've got that load balancer available, making those requests to Minecraft available via a private IP.
B
So then I just use my Ubiquiti EdgeRouter to provide access to that port via NAT. And if anybody has any questions about Ubiquiti hardware, please let me know as well, as everybody is doing the same thing as me and upgrading their home networks. Some of my friends at VMware have started to do this with Ubiquiti hardware, and I'm a huge fan, so I'll leave that one out there as well, right.
B
So one of the other things we'll talk about here, and we're almost done, is Kubernetes requests and limits, right. So, as we take a legacy application and we want to move it into a cluster like this, one that is hopefully going to be doing more than just running something like Minecraft, you want to make sure that all these things live together nicely, and you want to make sure that Kubernetes is able to place the workloads on the nodes that can actually support that workload, right.
B
So one of the fascinating things here, especially since a lot of other folks have been using the quarantine to build clusters of Raspberry Pis and things like that, is that Minecraft requires so much memory. You know, I've got to go out there and I've got to say, hey, I need at least 11 gigs, and I want to limit this to 12, right. So what that means is, if all my worker nodes are 8-gig worker nodes, this will never deploy, because the scheduler is never going to find a node to run it.
B
The interesting thing, then, when we start to talk about requests, which is what the scheduler uses to place workloads, versus limits, is what happens when you hit them: if you have your memory limit set and you exceed that, it's going to kill your pod, right. If your CPU limit is hit, it's just going to slow it down, right.
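Putting the figures from the talk together, the container's resources stanza might look like this sketch (the exact manifest wasn't shown, so treat the layout as illustrative):

```yaml
# Hypothetical sketch of the container's resources stanza, using the
# figures mentioned in the talk. The scheduler places the pod based on
# requests; exceeding the memory limit gets the pod OOM-killed, while
# hitting the CPU limit only throttles it.
resources:
  requests:
    memory: "11Gi"   # scheduler needs a node with this much allocatable memory
    cpu: "1500m"     # a core and a half
  limits:
    memory: "12Gi"   # exceeding this kills the pod
    cpu: "2000m"     # hitting this throttles; was 1500m before being bumped
```

On a cluster whose workers each have 8Gi of memory, an 11Gi request can never be satisfied, which is exactly why the pod would sit unscheduled.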
B
The nice thing here is that, obviously, again, I work for Dynatrace, so I'm using Dynatrace in my Kubernetes environment. I'm actually utilizing our Red Hat certified Operator to monitor these workloads, and with that one deployment into my cluster I can now monitor this the same way that I did back in the Docker world, and now I can also track my CPU throttling, right.
B
Technically it's both, because there is a Helm chart that will deploy the Operator, right.
B
So, you know, it's turtles all the way down, and what that Operator is going to do for us is just the traditional Operator benefits, right: codifying all that knowledge around deploying OneAgent to a Kubernetes cluster. And again, to what you talked about kind of in the beginning, Dynatrace has been working really closely with Red Hat kind of since day one of the Operator Framework.
B
You know, it has since been jointly certified, and it's jointly supported by both of our organizations. But what the Operator is doing is actually rolling OneAgent out to all your worker nodes, right, and once OneAgent is on the worker node it's going to automatically inject into every application, every pod, on that OCP environment, right. Cool.
B
It's just there, part of the platform, watching everything, right. And again, what's fascinating about this is that just because the application doesn't speak HTTP, we can monitor that too, right; you just have to tell it what represents the transaction, right. But in this context, well, now I'm able to validate that my re-platforming efforts were successful, in that I've set these limits to a sane value, because I can start to see if there was CPU throttling occurring, right.
B
So in this particular screenshot, I can see that there was quite a bit of CPU throttling occurring, right. So that's what kind of got me to the point where I was starting to bump that limit up, right, because, based on what I saw earlier, I was like, oh, okay, a core and a half should be fine. So it was set to 1500m, but I needed to increase that to two entire cores.
B
All right, so in the current state, everything is great. I've got a nice little chart there at the bottom: that's the master tick thread response time for the new environment in teal, I think that is, I'm not great with colors, and then the orange one is the old instance. Now, the old instance doesn't have anybody on it anymore, so that's basically 15 milliseconds with nobody on it, and the new instance down below was like seven and a half.
B
You know, five to seven and a half milliseconds with a handful of folks on it, right. And now I'm going to be alerted by Dynatrace if that response time is ever degraded, and then I can do cool things like, you know, dive into the methods that are part of the master tick thread that are causing trouble. Like, for example, I did have a problem a couple of weeks ago, and I was able to use it to find out that the root cause of my problem was frogs.
B
I had frogs in my Minecraft world that were added by a mod called Quark, and for whatever reason the AI responsible for governing the frog behavior was acting up, and it was taking like 80 percent of that master tick thread. So, as the Minecraft admin, I had to hop onto my server and kill all the frogs in the entire world. So that was kind of an interesting example, right.
B
So, you know, we've only got a couple of minutes left here, because we've got a couple of great questions kind of along the way. Some of the things that were fascinating about this whole process: sourcing hardware. You know, I started this effort
B
you know, kind of back in March, and sourcing hardware when all this stuff was going down back at the beginning of the year was really difficult. And even though I was buying, like, real, legit, server-grade hardware, I still had a fascinating amount of hardware problems, like a DOA CPU that took a month and a half to get a replacement.
B
I actually had, unrelated, a CPU socket that actually melted, taking yet another CPU with it. And for the first time in 20 years I had bad cabling that was negatively impacting things, and oddly enough it was not cables that I built; it's actually cables that were pre-made. So 10-gigabit networking, even for, like, a short two-meter cable, was still really picky about cable quality. And then, again, I want to call this out: this is my Dockerfile, right.
B
There are many Dockerfiles out there, but this one is mine. It's probably not the best. There actually is a Red Hat example about this that I found, and it uses the itzg minecraft-server Dockerfile, and he also has some more generic ways of deploying Minecraft as a StatefulSet and so on. But he's got some weird things going on in his Dockerfile, and I wanted to kind of simplify to the max.
B
So I wanted something that I would definitely understand throughout the process, but I will probably migrate this to that Dockerfile, because it's a lot more advanced and has some capabilities that I'm missing, right.
B
If anybody has any questions about this, or wants to hop on the Discord where I have the server information, please DM me on Twitter at mikevillager. We've got a couple of different kind of Red Hat-related calls to action here. The cool thing about Dynatrace is that Dynatrace and Red Hat have been, you know, working together very closely for quite some time now, so we are listed on the Red Hat Marketplace: you can initiate a free trial via the Marketplace, or you can buy Dynatrace via the Marketplace as well.
B
The link on the left is a white paper that we've kind of created that's similar to what I talked about here today, and it's about, you know, how Dynatrace can help you accelerate your migration to OpenShift. And then we've got a customer story available on the right, where we talk about some of the things that we were able to do to help the modernization efforts at Porsche, which is a brand that I'm a big fan of. And that is the end of my content.
A
B
All right, so there are a couple of key points, kind of like a 90-second overview here, just the real high-level points. So you want to monitor your existing application to understand what the footprint of that app is, and also, if possible, understand the dependencies for that application, right. And then, you know, have some place to put the app, right.
B
And then you want to make sure that you are utilizing something, hopefully Dynatrace, to understand that that effort has not been for naught, that things are actually working great throughout that re-platforming or migration process, because if you kind of go through this effort and end up pissing all your users off, like, that's really no fun for anybody. Not sure if I answered that question.
A
B
A
Really cool. So again, if people want to get connected with your Minecraft instance, you want them to reach out to you on Twitter.
B
A
Right, well, hey, this has been probably one of the more unique shows we've done here. I thank you so much for putting that together, and I don't even want to ask, you know, what your investment is in your home lab, but yeah. Don't.
A
No, no. Anyways, thanks for coming. Sorry you had to go in there and kill all the frogs, but someone had to do it, you know, and to make.