From YouTube: Red Hat OpenShift Dedicated + Google Cloud Platform: The leading enterprise container platform
Description
Sathish Balakrishnan, Director, OpenShift Online, Red Hat, and Martin Buhr, Product Manager, Google Cloud Platform, speak in this breakout session at Red Hat Summit 2017.
Together, Google and Red Hat have helped make Kubernetes the most popular, fastest-growing container management platform. Come learn how OpenShift Dedicated (Red Hat's managed OpenShift service) on Google's container-optimized cloud can help you realize the benefits of Kubernetes, containers, and cloud-native application patterns.
https://www.redhat.com/en/summit/2017/agenda/session
A: So thanks for joining the session on OpenShift Dedicated and Google Cloud Platform. I know it's the last session, so we'll go through this as quickly as we can. But thank you, really, for coming. I think we didn't expect this many people would show up, so we are thrilled.

A: Actually, we thought it was going to be just the two of us — and we've known each other for a long time and don't want to spend any more time together. So welcome to OpenShift Summit — it really looks like an OpenShift Summit; we've been making a whole lot of announcements. What I wanted to do is center this talk on OpenShift Dedicated. How many people here have heard about OpenShift Dedicated? Okay, that's actually slightly better — we haven't really been promoting this.
A: This is an offering that we launched at the beginning of last year, and we've been running it in production for the last 16 months now. So what we'll be doing is talking about OpenShift Dedicated and then talking about Google Cloud. Google's been a great partner of ours for, I think, four years now — RHEL has been running well on Google Cloud for more than four years, we've been running OpenShift on it for more than two years, and we launched Dedicated on Google at the end of last year.

A: So I just want to talk about that, and I'll get started here. This is more for people that don't know what OpenShift is and may not know the multiple offerings that we have. I think everybody here probably knows about it, but if you don't: we did announce on Tuesday — this Tuesday — OpenShift Online, the next generation of OpenShift Online, which went GA. So you can actually go to openshift.com and sign up for an OpenShift account that's based on Kubernetes and Docker.
A: It's the world's first multi-tenant Kubernetes platform that has been launched. Then there is OpenShift Dedicated — think of OpenShift Dedicated as a single-tenant OpenShift that's managed and operated by Red Hat in a public cloud. And we have OpenShift Container Platform, which is actually self-managed. The difference between Dedicated and Container Platform is that Dedicated is single-tenant and managed by Red Hat, while Container Platform is managed by the customer.

A: I'm sure you guys are familiar with this slide; you've seen it. What you haven't seen is what we have in red and green on the very left of the slide, where we tell you what we manage. So when you get Dedicated, what happens is we manage the entire OpenShift platform for you; the only thing that you really need to manage is the application that's running on OpenShift.
A: So this is a slide that our marketing group put together about what OpenShift Dedicated delivers. It's the same thing: we take care of the OpenShift support subscription, and we even take care of the middleware that runs in it. So all the middleware products that we have — JBoss, Fuse, EAP — all of those things run on top of OpenShift; BRMS, BPMS, all of them run on top of OpenShift Dedicated, and all of those things are still our responsibility.

A: So if you have a new image of EAP that comes in, we will actually make sure that gets upgraded. The only thing you are responsible for, again, is app development — while in a traditional model, if you were to do it on premise, you are really responsible for pretty much everything. So that's just telling you how you can free up resources for app development and how you can actually get to market faster. Everybody knows about digital transformation and the business value of expediting that process.
A: It's about the applications that you guys are building, and I think one differentiator here is that even if you're using a public cloud provider, you are still responsible for everything from the operating system upward, right? The difference here is that even though it runs on a public cloud, we take care of all of those things for you.

A: So what are the main features of OpenShift Dedicated? OpenShift Dedicated, being based on OpenShift version 3, already inherits all the features that are in OpenShift Container Platform. On top of that, what are the things you're getting? One is you get an isolated public platform that's professionally managed. We actually have operators that are working with the developers in scrum teams, so we know what's happening with the product and what updates are coming in.
A: We actually test this many times over. You get Red Hat support, and you get connect-and-extend for local services. So if you have, let's say, an Oracle database in your data center and you want to connect it, we can actually connect it through the VPC and VPN mechanisms that all of the cloud providers have. We also support pretty much all regions across the cloud providers; with Google,

A: we actually support all of the regions there — and they've been great in adding a lot more regions — and we support all of the zones. So if you want to run something in Singapore, you can run it there; if you want to run it in Tokyo, same kind of thing. So if you want to be close to the customer or close to your developers, that's something that's possible. And it's completely built on open standards — it's open source. Everything that we do here is open source; in fact, our ops team…
A: I want to talk about two things on this slide. One is a telco customer that's using OpenShift Dedicated. They have their own digital store where they have all the activations — they sell phones, they sell all of those things — and they use OpenShift Dedicated. It's been a really good success story for us: they really release software every day. In fact, they do 11 deployments a day versus the one deployment

A: they used to do every two weeks. Think of a web store: right now you have the Samsung Galaxy 8 coming, the iPhone 8 — all of those things — so they've got to keep changing things all the time, the business is really demanding, and they have to make this very efficient. So they actually run this, and both the ops and the dev teams are more satisfied, because they get the resources they want.
A: There is no more infighting, right? It's not that they tried to develop a DevOps process; it's just what happened, because you remove all of the friction between Dev and Ops by providing a platform that's on demand and completely managed. We also moved them across versions. One of the things that we've heard from customers is that if you really want to get the benefit of OpenShift, you want to be on the latest version of OpenShift — and we actually release software every three months.

A: Another thing is that they actually get seamless upgrades. They're continuing to move — we have moved them to 3.4, and next we're scheduling 3.5. They have already migrated 121 applications to OpenShift, and they've been a customer for, I think, less than six months. So that again talks about giving developers a platform that's agile, that's nimble, that gives self-service.
A: If they like it, they make a lot of things happen — and there's a 30% cost savings over their on-premise solution, just on the infrastructure alone. That hasn't even accounted for the people, because the people are still part of the organization; they've already seen a realization of 30 percent cost savings. The second thing I want to talk about is the ROI summary. This is a great paper that our marketing team worked with IDC to build.

A: This is a study of multiple customers of OpenShift that have deployed it on-premise — the benefits of Dedicated are actually much more than this, but this is just for somebody that put it out in their own data center. They had a 531 percent ROI over five years, and developer benefits of $1.9 million. I think the biggest thing is that you don't have to wait four years to get your payback — they got it in eight months.
A: So I think it's a very easy sell, and that's probably a testament to all the customers that are here: the benefits that you realize out of OpenShift come really quickly, within the first year, plus a bunch of other productivity numbers per hundred developers — business productivity and IT productivity gains. This is something that's available on our website: go to openshift.com and just search for the ROI summary and you will get it. You can also look it up on our Twitter feed.
A: We actually have it as a pinned tweet. So now I want to talk about Google and Red Hat. We've been working together for more than four years — since Google Cloud launched, RHEL has been running there. How many of you here are users of OpenShift 2? So we have one person — a couple of people — that have done OpenShift 2. OpenShift 2 was in our own racks; we had our own broker.

A: We had our own gears, as we called the containers at the time. So OpenShift has been there for six years now, and we had our own version of OpenShift which had its own container technology and its own orchestration layer. Then — you guys remember, back in September of 2013 — we recognized that Docker was a really important thing, and we were the first large enterprise to embrace Docker.
A: At the same time, we were actually going to re-architect the whole platform on Docker, and we were looking for an orchestration technology when Google approached us and talked about what they were going to do with Kubernetes. That's how we started working with Google — I would say since the founding of Kubernetes. Between Google and Red Hat, you can look at the contributions: we contribute pretty much 60 to 70 percent — sixty-five percent — of the code to Kubernetes.

A: So you actually get most of Kubernetes from us, and we do the same thing on the Docker side — Red Hat is the largest external contributor to Docker. So we've had a relationship with Google, and we have been working with them on getting all our products onto Google Cloud. Since we announced OpenShift Dedicated on Google, there's been a lot of traction for that offering, especially because of the things that Google's been doing on their cloud.
B: What I want to give you guys a sense of — I think you've got a good idea of what's going on in OpenShift and in your application — so I'm going to spend most of the time not at the application layer, but on what it's running on, and then on the additional services that, once your application is running, can supercharge that application. So that's the direction we're going to go: how it exists and why. I'll move through these relatively quickly.
B: Some history. Does anyone run anything on Google Cloud Platform today in the room? Perfect — so I'll assume, generally speaking, that this is probably relatively new to you. One of the things, education-wise — and if you came down and did some coding with us, we're very aware of this — is that our biggest challenge largely is just people coming and using it, because almost no one had done the code labs at our booth.

B: Okay — for all of you, thank you — it's all new. So I think a lot of it is just an awareness question: not whether we can do it, but just "oh, we actually have it." So I'll spend a little bit of time there. The first thing to understand, from Google's standpoint, is that our infrastructure itself is based on containers, and has been since, I don't know, 10 or 12 years ago. What that means is anything and everything that you would run on
B: Google runs in a container, including VMs. So the container is the baseline platform on which we run. Every one of these products touches a billion users a day, right? So we're really in the process of building an amazing platform — amazing places for people to do their development and run their infrastructure.

B: That gives you the background in terms of what we're delivering: Google runs everything in containers, we launch about two billion containers a week, and those are all managed. I think it's important to say why that matters, because you're thinking, "create more containers, more numbers — that doesn't mean a whole lot to me." So let me give you some ideas of what happens when you put a VM, for example, in a container. (By the way, this is the marketing-required Google Cloud slide.)
B: So there are probably four primary places where we're seeing all of our customers come to us. One, they're getting an innovation curve that they don't get anywhere else — and hopefully when you walk away from here that's something you feel pretty comfortable with; I'll put up some validation as to why that is. They come to us for data, because of how we handle and process it.

B: We bring insights out of data faster, more efficiently, and more uniquely than anyone else can. We can deliver new apps faster — we're partnering really closely with Red Hat to make sure you can develop applications quickly, then quickly scale them globally if you need to, with services underneath them to move you forward, so those applications get supercharged. And then we spend a lot of time, energy, and resources to make sure we deliver best-in-class support.
B: So, as Sathish has gone through for OpenShift Dedicated, we have an entire team that sits behind them and makes sure the infrastructure is running great. If they have any questions, we're locked arm-in-arm like this — we have a helpdesk to go back and forth to make sure that everything runs perfectly for our customers. So that's it; let's walk through why OpenShift Dedicated, and why Google — given that the advantage of the platform is you can run it anywhere.
B: That's why you would go to the platform — it allows you to run on any platform — so why here? First off, some things are just better together. Let's be honest: James Kirk and Spock — it doesn't matter what age, they're always better together. Milk and chocolate chip cookies are awesome, whether you want to dunk them or guzzle the milk after — I have a really dry mouth — they're delicious. Chocolate and peanut butter, same way. So let's look at it from a back-and-forth standpoint.
B: Red Hat clearly has leadership in open source in the enterprise, and while we spend a lot of time — we have amazing infrastructure — we clearly do not have the years and years of experience or the customer base that Red Hat does, or the experience of delivering those unique things to customers. They have deep customer relationships and are obviously a very trusted partner, as you guys highlight. On the flip side, we have deep experience with containers, having run containers for 12 years, as I said, at scale, and we've only ever done everything in the cloud.

B: So that's the only way we know our data center: a data center is a machine, and a machine is the data center — they're one and the same as far as we see them. And then the last piece that's a really great place is the data and the machine learning. So let me jump into that. Believe it or not, containers still all sit in VMs — our containers still sit on machines. So while serverless says it's got no servers, it does run on servers.
B: So we should probably spend a little bit of time on why performance is different. Let's go through a couple of things that matter to you, the developer. First off, a really straightforward thing is VM boot time: a container boots faster, and, not surprisingly, a VM in a container boots a lot faster than if you try to do that just on its own. This is great from a development standpoint, it's really good if you're scaling a large number of things, and it's most awesome when you're doing test and dev.
B: So let's spend some time there. From a test-and-dev standpoint, obviously, the more you ramp up and down, those little pieces matter a lot. We'll get into why we do things — there's a little thing behind the scenes called live migration. When you have to patch a host, you might need to move workloads off. So when we found Heartbleed — Google discovered it — we were able to patch those underlying hosts, patch everything underneath, and move you, live, to somewhere else. The other thing that's nice from a performance standpoint — anyone run anything on a public cloud?
B: Anyone willing to admit they run something on a public cloud that's not Google? You can do it, raise your hand — who runs AWS? There we go. Anyone familiar with the term "noisy neighbor"? A couple, okay. The noisy-neighbor problem happens when Netflix shows up on your VM — and you don't know it's Netflix, but you're thinking, "oh my gosh, my performance is terrible, but I have no access to the underlying host." So what do you do?
B: Well, if your VM is running in a container and Netflix shows up, you know what I do? I'll just move you to another host — because of the performance, because they're eating up the network, I'm able to move you to another host in the same zone, all of the above — and your performance instantly is significantly better. So we can do that live migration with no operational overhead, and again, that's why you run on our infrastructure. I'm checking on my time — oh, I've got so much time.
B
You
have
so
much
time
with
me,
secondarily
persistent
disk,
so
we
have
a
very
different
network
differentiate
network
on
the
back
and
what
that
means
is
you
can
attach
up
228
volumes.
This
means
you
can
put
4
from
an
open
shift
standpoint.
What
this
means,
as
you
can
put
very
large
clusters
right
and
the
bigger
the
cluster,
the
more
efficient
you
can.
Pack
it,
the
more
efficient
can
run
it,
etc.
B
Right,
and
so
this
is
an
order
of
magnitude,
it's
probably
2
or
3
X
larger
than
any
other
public
cloud
same
with
the
amount
of
terabytes
of
disk
you
persistent
disk
you
can
attach
to
it.
We've
got
obviously
dynamic
sizing,
which
is
really
nice
and
encrypted
by
default,
either
your
keys
or
ours
so
bring
your
own
keys
encrypt.
All
you
want
everything
at
Google
on
our
platform
is
encrypted
at
transit
and
at
rest.
So
then
we
do
a
price,
for
these
are
the
other
cloud
workloads
and
everything
and
other
lying
VMs.
B
To
give
you
a
sense
for
those
of
you
that
have
not
run
something
in
a
public
cloud,
you
have
to
choose
the
sizing
and
the
volume.
Almost
every
time
you
go
in
and
choose
it
so
I
want
a
you
know:
I
want
a
16
way
machine,
but
especially
in
this
space,
as
you
think
about
what
happens
when
you
go
to
16
way
to
next
step
is
a
32
Way.
B: With us, you can choose your cores and choose your memory, and you get exactly the right size every time; you only pay for what you use. However, there are other very nice things, like: hey, by the way, if you run a VM for the entire month, rather than you pre-paying for stuff, what we're going to say is, "that's excellent — you ran it for the whole month, we're going to give you a discount." So you can kind of get an idea here — it might be a little bit of an eye chart, so just think of it

B: this way: if I have a VM and I run it the whole month, Google is going to give me a 40% discount off the base rate, and it comes in on a step-wise function. However, it's not just that — we also like the game of Tetris.
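As a rough illustration of the step-wise discount just described, here is a sketch in Python. The tier boundaries, multipliers, and the `sustained_use_cost` helper name are illustrative inventions for this transcript, not Google's published rates — with these particular tiers the full-month effective discount works out to 30%, while the talk quotes roughly 40%:

```python
# Illustrative sketch of a step-wise sustained-use discount.
# Tier boundaries and multipliers below are made-up round numbers,
# not Google's actual published billing rates.

def sustained_use_cost(hours_used, base_hourly_rate, hours_in_month=720):
    """Bill each quarter of the month's usage at a progressively lower rate."""
    # (fraction-of-month boundary, multiplier applied to usage in that tier)
    tiers = [(0.25, 1.00), (0.50, 0.80), (0.75, 0.60), (1.00, 0.40)]
    cost = 0.0
    prev = 0.0
    for boundary, multiplier in tiers:
        tier_hours = (boundary - prev) * hours_in_month
        billable = max(0.0, min(hours_used - prev * hours_in_month, tier_hours))
        cost += billable * base_hourly_rate * multiplier
        prev = boundary
    return cost

full = sustained_use_cost(720, 0.10)  # VM ran the whole month
base = 720 * 0.10                     # undiscounted price
print(f"effective discount: {1 - full / base:.0%}")  # → effective discount: 30%
```

The point of the step-wise shape is that the discount kicks in automatically as usage accumulates, with no up-front reservation.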
B: So let me explain what you're seeing here. There's a sustained-use discount: if you run it for the entire month, you get the 40% discount. But what happens if you're doing a bunch of test and dev? In that case, you're going to have a bunch of, say, 4-vCPU instances that you turn on, and they come off, and you turn them on and off again. If you look at the Tetris chart here, each of those is perhaps a 4-vCPU instance that you've turned on and off — three, four, five different ones.
B
What
we
do,
that's
our
actual
use,
that's
what
it
look
like
if
you're
paying
per
BM.
What
we
do
is
at
the
end
of
the
month.
We
create
inferred
instances,
so
we
take
it,
we
stack
them,
you'll
see
we
split
them
up
down
below
and
then
we
shove
them
together
and
what
you'll
see?
Let
me
do
this
down
here.
You
may
read
no
VM
for
the
month,
but
we
took
one
and
three
like
excellent.
B
You
ran
that
the
whole
month
we're
going
to
put
those
together
and
give
you
a
40%
discount,
because
guess
what
we're
running
in
containers
and
we
just
packed
them.
So
if
that's,
how
we
do
it,
your
bar
price
should
look.
Your
PI
should
like,
like
my
price
right.
Maybe
you
shouldn't
have
a
PhD
in
figuring
out
whether
you
should
have
a
you
know,
an
RI
or
someone
else
figure
out
exactly
how
you
handle
a
spot
instance,
but,
speaking
of
spot
instances,
do
I
get
there.
B
Oh,
you
didn't
even
get
spot
instances,
so
we
have
so
have
preemptable
VMs
will
give
you
an
80%
discount.
If
you
want
to
run
your
workload
in
a
way
that
in
60
seconds
we
could
take
it
away.
So
do
not
put
production
work
cases
in
a
preemptable
vm.
That's
public
announcement
right,
but
you
know
exactly
what
it
costs
we'll
give
you
an
80%
discount
on
that.
B
We've
heard
you
guys,
and
we
say:
hey
one
of
us:
we
really
were
in
Google
and
we're
willing
to
make
a
committed
use,
but
we
kind
of
would
be
nice
if
you
know
maybe
six
months
from
now,
I
had
a
for
you
instance,
but
I
don't
really
want
to
buy
a
for.
You
instance
for
entire
three
years
on
a
machine-type,
that's
going
to
get
old
that
I'm
going
to
be
stuck
to
I
mean
no
one's
ever
done
that.
B
But
if,
in
theory,
if
you
had
written
that
problem,
that's
not
a
good
place
to
be
what
you
really
want
to
buy
is
I
want
to
make
a
commitment
to
that.
But
I
want
to
be
able
to
move
those
CPUs
anywhere
I
want,
you
might
need
to
run
them
over
in
Europe
or
I
might
need
to
run,
run
a
zone
or
I
might
need
to
go
from
a
four-way
to
a
32
Way
machine.
B
Excellent
will
give
you
this
kind
of
you
just
want
to
commit
to
the
long
term,
but
you
can
run
it
any
way
any
shape
any
way.
You
want
right.
So
let's
go
there.
So
that's
kind
of
base
I.
Think
of
that
is.
This
is
the
world
that
you
are
running
on
if
you're
running
OpenShift
dedicate
or
enterprise
or
any
version
of
OpenShift
or
any
Red
Hat
product
on
Google,
that's
kind
of
what
you
see.
Secondarily,
let's
go
into
what
else
this
is
nice.
We
have
a
little
thing
called
a
global
network.
B
We
have
a
little
we've
spent
a
little
bit
of
money
on
a
network
because
we
have
this
little
thing
called
YouTube
and
search,
so
we've
now
externalized
that
for
everyone,
so
no
one
has
ever
heard.
The
most
important
thing
is
in
terms
of
yours
is
have
a
data
center
everywhere
you
could
possibly
have
it
right.
It
doesn't
matter
how
big
it
is
just
as
long
as
I
stamped
it
out
in
every
possible
place
in
the
world.
Well,
except
that
thing,
is
that
a
user
right?
B
Well,
the
thing
for
today
is
number
of
hops
on
the
Internet.
It's
not
how
close
you
are
like
if
I
jump
on
the
internet
I
have
to
go
from
South
America
the
US
to
back,
because
we
don't
know
how
to
do
routing
or
we
don't
have
any
of
that.
That's
going
to
still
slow
you
down,
so
we
have
network
edge
presences
everywhere
and
we
own
all
of
this
network
right.
So
we
own
the
dark
fiber
that
connects
our
data
centers.
We
have
a
global
load
balancer,
yes,
a
global
load,
balancer
anywhere.
B
It
hits
on
the
edge
any
one
of
these
points
of
presents
or
hunt
points
of
presence
that
are
built
out
for
YouTube.
Now,
in
our
cloud
platform,
customer
goes
ISP
1/2,
it's
Google
Network
one
hop
on
the
dark
fiber
to
your
data
center.
In
order
to
your
to
your
openshift
workload,
one
shap
back
right,
the
customer
for
hops,
always
only
for
hops
and
we're
doing
all
the
directing
from
anywhere
on
the
edge
that
it
hits
anywhere
in
the
globe
right.
So,
but
we
do
care
cuz.
B
It
doesn't
matter
that
you
have
a
bunch
of
oh
there's,
our
global
load,
balancer,
so
million
queries
per
second
no
warming
time.
So
you
don't
have
to
call
up,
you
know,
call
up
your
provider
and
say
I'm
going
to
have
a
hype
traffic
load.
Can
you
please
make
sure
you
put
a
bunch
of
h.a
proxies
in
front
of
them
name?
Something
else
perfect.
Please
do
this?
No,
no,
no,
always
they're
always
available
on
every
point
in
the
world
right
in
front
of
your
application.
B
The
other
thing
is
yes,
we
have
a
large
number,
so
this
is
a
little
bit
of
an
eye
chart
to
be
totally
transparent.
You
can
see
a
few
of
these.
Let
me
do
it
this
way.
The
purple
are
regions
and
zones
that
we
already
have
right.
These
are
up
and
running,
so
we
have
good
global
coverage.
However,
in
the
next
12
months,
we're
also
going
to
be
opening
ones
in
u.s.
we're
at
some
palo
finland,
Frankford,
netherlands,
London
Montreal,
someone
asked
for
Montreal
Canada
done,
Sydney,
Singapore
and
Mumbai.
B
The
other
part
is
when
you
get
open
to.
If
you
get
access
to
all
the
api's
and
all
the
data
pieces
that
you
would
get-
and
let
me
spend
a
little
bit
of
time,
so
this
is
kind
of
how
they
all
come
together.
We've
talked
a
little
bit
about.
We
talked
a
lot
about
the
open,
a
Red
Hat
OpenShift,
how
that
fits.
It
runs
on
top
of
our
cloud
platform.
You
get
to
the
services
anywhere
a
nice
part,
and
somebody
would
encourage
you
to
mend
a
lot
of
time.
B
Looking
at
is
the
work
that
we're
doing
with
Red
Hat
around
the
open
service
broker.
Api
right.
This
means
that
any
you
can
have
access
to
the
services,
both
Google
and
any
provider.
This
works
together
with
us
and
there's
a
bunch
of
them.
Their
services
are
then
extensible
out
of
OpenShift
anywhere
in
the
Gulf's.
Are
the
consistent
integrated,
easy
to
use
same
credentials?
All
the
above?
B
This
has
been
something
that's
been
a
challenge
kind
of
across
the
industry,
and
this
is
one
we're
working
with
a
large
number
of
providers
together
with
so
you
can
take
a
look
at
that
that
will
be
coming
through
and
if
you're
curious
how
that
works,
we're
working
together
with
Red
Hat
being
the
kubernetes
community.
You
can
look
upstream
see
where
it's
headed.
It
should
be
towards
the
end
of
year.
You
can
see
exactly
when
that'll
land
inside
of
OpenShift,
and
that
is
that
and
I
have
ended
with
some
time
for
10
minutes
for
questions.
B: The intention is — because we do bin packing, and because the VM sits in a container — if it's the same size, it ends up looking the same overall to Google from a packing standpoint. So think of it as a 4-vCPU instance: as long as it's the same kind of 4-vCPU instance, yeah, we'll pack that together for you and then give the discount overall. That's something you should expect — and we have a pricing calculator you can take a look at. Most of our customers
B
Seeing
about
on
the
core
infrastructure,
so
the
BM
base-level
see
but
a
40-ish
percent
price
performance
of
Google
over
other
providers
right.
So
those
would
end
up.
So,
if
you're
functionally,
if
you
think
about
that
you
on
and
off
for
about
12
hours
on
12
hours
off,
you
kind
of
walk
that
through
for
the
entire
month
and
those
would
be
aggregated
into
roughly
maybe
a
50%
and
you
get
then
we'd
apply
that
margin
at
a
50%,
consistent
utilization
number
we
do
have
been
offering
that's
called,
contain,
Google
container
engine,
that's
correct,
but
it
doesn't
have.
B
Let
me
go
back
into
this
stepped
up.
Dub
dub,
dub,
dub,
dub
I'll
get
there,
but
it's
worth
having
a
slide
to
speak
to
this
yeah.
So
when
you
look
at
this,
there's
a
whole
bunch
of
stuff
in
here
in
particular,
so
we'll
have
like
storage
registry
like
networking,
but
there's
a
lot
of
some
of
the
work
here.
A
lot
of
the
cluster
services
that
and
then
everything
here
of
course
right
in
terms
of
the
deployment,
the
CI
CD
and
everything
that's
a
full
platform
when
you
come
to
it
but
yeah.
So.
A: Yeah — so with OpenShift we manage Kubernetes, so there is no configuration that they are doing on Kubernetes, while on GKE you will have to do some of those things. But you can actually take your OpenShift Container Platform that's on premise and move it to Google Cloud and manage it on your own — you will just run it on GCE, not on a GKE cluster.
B: Federation is still a work in progress — something like that, was that correct? I think cluster federation will be in 1.7, yes, around September. So in September, when that's out — I think it's a very valid question; it's one that we talk about all the time, how we see that working for everyone. It's a good question. I don't think we have — yeah…
A: There are a few technical challenges just because of the access that you have. Because we are the cluster admin, and you can do a lot of things on OpenShift on-premise versus on GKE, there will be some things that you may not be able to do. But in theory it's the same Kubernetes and it should be possible — with software everything is possible — it just requires a little bit of work to make it happen. I think we'll be able to address it.
A: We already have a few blog posts where we have shown how OpenShift Dedicated integrates with Google services. If you go to blog.openshift.com you'll find some of those integrations. Those integrations exist and work even without the service broker; it's just that you may have to do a bunch of manual configuration to make that happen, and we have outlined those steps, of course. When we get the service broker it's much easier, because these are just services in a catalog that you can pre-select, and the binding happens automatically. Yeah, they
B
Work
really
well
together.
There
might
be
a
little
bit
extra
work
today
then
there
be
when
we
get
to
the
open
service
broker,
and
then
it's
just
an
API,
API
and
API
it'll
be
consistent
API
so
that
you
handle
things
like
credentialing
and
security
and
everything
the
same
and
that
you
can
do
it
today.
It's
just
a
little
bit
of
extra
work
and
we
do
it
far
across
stack
drivers,
obviously
big
one,
all
the
sequel
stuff
spanner,
which
is
our
global
database.
The
world's
largest
database
gives
you
sequel,
sequel,
syntax,
with
no
sequel
scale.
B
So
all
those
services
are
one
big
query.
Our
data
warehouse
is
all
everything
integrated,
and
so
we
do
that
regularly
on
the
OpenShift
roadshows
we
do
with
Red
Hat,
so
it'll
do
road
shows
where
we'll
get
you
guys
up
and
running
comfortable
with
OpenShift,
along
with
Google
services
and
and
you
work
with
both
in
in
a
day
will
actually
probably
be
back.
I
think
we're
going
to
be
back
in
end
of
this
quarter,
beginning
next
quarter
in
Boston.
B: And then I'm going to bring up something for you guys — we also have something for you to get started with. Perfect, there we go. So there's a reference architecture and everything for OpenShift on GCP — you can Google it, and if you want me to Google that for you, let me know. What you'll find — we just launched this this week — is a little test drive.
B
So
if
you
want
to
get
started
in
and
working
on
it
there's
a
multiple
ways:
there's
openshift
dedicated
on
GCP
ask
Red
Hat
and
you
Red
Hat
wrapping
everything
that's
up
and
running.
If
you
just
want
to
come
and
play
with
it.
All
of
these
will
feel
the
exact
same.
So
I'll
run
the
same.
There's
just
some
different
aspects
for
you
jump
in
and
get
your
hands
wet
hands
dirty
on
it.
So
this
is
openshift
run
on
GCP,
you're
off
and
running
it'll
set
up
an
environment
for
you
and
you
get.
B
You
know
a
couple
days
to
play
with
it,
and
then
you
know
you
can
work
with
red.
If
that's
hey,
that's
working
well
for
you
perfect,
you're
off
and
running,
so
it's
real
good
for
people
just
kind
of
get
there.
What's
it
look
like
without
going
through
all
the
process
necessarily
of
having
to
set
it
up,
it's
just
for
dev
that
goes
in
and
says
now:
I
see
how
it
works.
So
just
a
mother
stuff
that
we're
up
to
so.
A
In
your
example,
you
are
assuming
that
that
node,
that
you
have
on
open
share
and
on
Google
Cloud
as
part
of
open
shift
cluster,
that
you're
running
on
premise.
His
question
is
it's
a
completely
different
kubernetes
that
is
running
can
I
have
something
that
orchestrates
between
one
kubernetes
on
openshift
under
kubernetes,
on
GK
yeah.
B
So
that
thing
and
then,
if
you're,
looking
for
an
H
a
so
that
you
have
the
thing:
that's
it's
if
you're
doing,
multiple
again
back
to
where
you're
doing
Federation
something
we've
learned
in
seeing
lots
and
running
billions
and
containers.
Is
your
network
between
data
centers,
pretty
dang
important?
So
certainly
that's
something
to
be
looking
at
if
you're
trying
to
run
them
in
an
H,
a
constantly
load
balance
space.
B
So
if
you
want
to
run
in
two
zones
in
Google,
obviously
that
dark
fiber
the
connectivity
between
the
zones
outside
regions,
to
my
terms
correctly,
the
connectivity
between
the
regions
would
be
particularly
differentiated.
So
certainly,
if
you're
looking
for
an
H,
a
cluster
of
openshift
that
we
at
work
saw
that
Google
is
a
pretty
pretty
dang
good
fit
for
that.
So
good
lime
between
you
and
probably
some
people
in
a
plane
and
or
a
beer,
so
guys
they
really
do
appreciate
the
time.