From YouTube: Preparing for Uncertainty with Red Hat and Google
Description
In this talk from the OpenShift partner theater at Red Hat Summit 2018, hear Brandon Jung from Google talk through the current state and future of the Red Hat and Google partnership on OpenShift.
Okay, sounds like I'll get started, and I'm told other people will start rolling back in, so anyhow. I was asked to come chat a little bit about what Red Hat and Google have been doing, specifically in the OpenShift arena. So what I want to do is give you a little bit of a sense of the work we're doing, and perhaps a few details about where we're headed. First, a couple of things as a heads-up.
If you have not yet visited, head on down to our booth down the way. You can sign up, you can do some test labs, and you can pick up an AI kit when you come on down. That's our lab, and you'll see a whole bunch of other stuff we're doing. There's a keynote tomorrow that will additionally talk a lot about how you get started with machine learning. Today I'm going to set the foundation for tomorrow's keynote, which will give you an idea of what we're doing.
So we look at the world in a very similar way to how Red Hat sees it, and that's very much open first, open always, open everywhere. We work with Red Hat in every one of the open source communities you hear them talk about — the big one, of course, being Kubernetes, so we'll spend a little time there. The question people usually ask is: you're here at Red Hat Summit, obviously you made that choice — why would you work with Google and Red Hat?
Why these two, and how might they actually fit together? First off, this is a long-established relationship, one that goes all the way back to the core Linux kernel. If you were to ask, "By the way, how did these little cgroups and containers show up?" — that was a Google contribution into core Linux.
And we've been working with Red Hat ever since. There's a bit of history from where we came: all the way back to the world of KVM, through all the work we've contributed into CloudForms and into Ansible, and obviously all the Kubernetes work with OpenShift. So the engineering is there, and I think this is actually quite drastically different from the engagement models
you may see with other providers. If you look at a lot of other providers, Red Hat does an awesome job of getting OpenShift to run on their platforms. One of the interesting things, if you look back historically, is that OpenShift Dedicated has been running on Google since 2016, and that didn't require much engineering on Red Hat's part — because, surprise, surprise, we already do that.
If you want to know what the two teams are doing, jump into GitHub and you'll find 20 different projects we're working on right now, just under the Kubernetes special interest groups that Google and Red Hat are co-leading. We do that engineering every single week; every single week we have those tie-outs to make sure the products align and that we're headed in the right direction. That just means when Red Hat shows up and wants to run anything on Google, it functions — because that's what our engineering has been all along.
So there's not a separate team to do it; it's mainstreamed into how we execute. We have a very similar vision in terms of DevOps, and a lot of investment there — the investment, as I said, in Ansible and so on. To give you a sense of the size of the open-source commits: in Kubernetes, which is now the largest project on GitHub, the two companies together contribute 45% of the code. Google does about 38% right now, and Red Hat is by far the second-largest contributor.
Then look at the Linux kernel, which is huge; between the two companies we're doing 10% of that, and it cuts across the board. You can even look over at OpenStack — Google contributes to OpenStack. You might ask why. Again, it's because we want to reach those customers and meet them exactly where they are, and a lot of them are running OpenStack. A lot of you obviously are running OpenStack. So there's a lot of community-powered innovation, and it cuts across the big areas. We've talked a bit about cloud native.
This goes all the way down to the Fedora project, into the core Linux product, to gRPC for networking — the big ones you're used to — but all the way out to machine learning, and those are the areas we'll spend some time on: the Apache projects like Apache Beam, and TensorFlow.
Those are big areas that Google has long since invested in, and two of those are primary Google projects as well: TensorFlow, the second-largest project on GitHub, and Beam, the universal runner that lets you run on any Hadoop distribution. That said, let's jump into the fun stuff. The other question that comes up — and I now have a few people here, so I can engage —
is, I suspect, in some ways this: Why Google? Why our platform? What does it do? Why is it different? What does this co-engineering actually mean? First off: just flat-out best performance — flat-out best performance that comes from the co-engineering we do, but it's also benchmarked. So if you're looking for anything, there's an open source benchmarking tool called Cloud Benchmarker.
You can get to the Cloud Benchmarker site if you want to look at those. These are open source measurements of all the important things you might want to do: anything from the boot times of your Linux kernel, to the speed of your cluster, to your network throughput. These are all pieces that everyone cares about and that are really important. It's something we've invested in, something we do very well, and it can be benchmarked in the open any time you want — and it's best in class. One last piece — there we go.
We've also invested a lot in security, so I'll spend some time there. Reliability: we own our own network, and we have best-in-class operations. The world that YouTube sits on is one that you get to leverage as well. So, a couple of pictures here. This picture is relatively easy at first: it looks like everyone else's diagram showing how many zones and how many regions you have. The big blue dots with little numbers in them are regions that exist today.
Functionally, I think about the only interesting place where we don't have good coverage from a core data-center standpoint is Africa. The blue ones are already built out and ready to go; the white ones are a couple more that will land. We build about ten of these a year, so this will just keep multiplying.
That's interesting, but I think it's pretty similar to everyone else. The more important piece to look at — particularly when you think about a global application, or the best-in-class performance you're trying to deliver to your retail customer — is all those little small dots. Those are the points of presence, and all those points of presence come from an investment in YouTube, which means they're in every place a customer exists.
You're only one hop from getting onto Google's network: you're with an ISP, and it's one hop for the customer to hit the point of presence. From that point of presence you're on dark fiber directly into a Google data center; the request is processed in the Google data center, goes back out to the edge at the POP, and back to the customer. That's four hops, and that does not exist anywhere else in the world — not even close. On top of this, the best CDN in the world.
You have the best CDN, the fastest network, fully encrypted in transit — I didn't say at rest; in transit. You have the best security, best speed, best solution, the best place to run your OpenShift, bar none. That network takes huge investment; it has also required years and years of refining, and that's a piece you get to leverage. The other topic that comes up a lot — I've talked a bit about performance, but let's be honest: we also care a whole lot about price.
If you look at regular pricing, a standard provider will give you a list price with no work whatsoever on your part, and this isn't really as obvious as it sounds, so let me walk through it a little. We have a whole bunch of things — per-second billing, committed-use discounts, other pieces that don't exist elsewhere — and on average, for most customers:
if you run a VM for the month, for example, we give you a 30% discount, because you ran it for the month and that was the most efficient use of the platform. It saves us money, so you should save money — and you shouldn't have to do any magic reserved instances, you shouldn't have to buy unique pricing mechanisms, and you shouldn't have to have an economist on staff in order to do it. There are a couple of other pieces that are pretty important if OpenShift matters to you.
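As a rough sketch of how that kind of automatic, run-it-and-forget-it discount works: the tier multipliers below follow the commonly documented sustained-use scheme (each successive quarter of the month billed at 100%, 80%, 60%, then 40% of list, which nets out to 30% off a full month), but the hourly rate is a made-up placeholder, not a published price.

```python
def sustained_use_cost(hours_used, hours_in_month, hourly_rate):
    """Effective cost under a tiered sustained-use discount.

    Each successive quarter of the month is billed at 100%, 80%,
    60%, then 40% of the base rate, so a full month nets out to a
    30% discount with no reservations or upfront commitment.
    """
    multipliers = [1.0, 0.8, 0.6, 0.4]
    quarter = hours_in_month / 4
    cost = 0.0
    remaining = hours_used
    for m in multipliers:
        billable = min(remaining, quarter)
        cost += billable * m * hourly_rate
        remaining -= billable
        if remaining <= 0:
            break
    return cost

# Hypothetical $0.05/hr VM, run for a full 720-hour month:
full_month = sustained_use_cost(720, 720, 0.05)  # 30% off list
list_price = 720 * 0.05
```

Note that a VM used for only the first quarter of the month pays full list price; the discount only kicks in as usage accumulates.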
That also applies to your data tier: you can resize the disks on the fly. That's a non-trivial process — pretty hard to do — but also really important. Boot-up times: second to none. And custom machine types — has anyone looked at what you actually buy when you buy in the cloud? You buy a 2-vCPU, a 4-vCPU, an 8, a 16 — it goes in powers of two, and then you can choose some combination of high CPU or low CPU.
A custom machine type is exactly what it says: if you want a 14-way machine, with any amount of memory you want on it and any disk you want on it, that's exactly what you get.
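Since a custom shape is priced as a sum of its parts, you can compare it against rounding up to the next power-of-two size. The per-vCPU and per-GB rates below are hypothetical placeholders for illustration, not published GCP prices.

```python
# Hypothetical unit rates; real custom machine type pricing is
# published per region and will differ from these numbers.
VCPU_HOURLY = 0.033
GB_RAM_HOURLY = 0.0045

def custom_machine_hourly(vcpus, memory_gb):
    """Hourly price of a custom shape: pay per vCPU and per GB of
    RAM, instead of rounding up to the next predefined type."""
    return vcpus * VCPU_HOURLY + memory_gb * GB_RAM_HOURLY

# The 14-way machine from the talk vs. rounding up to 16 vCPUs:
custom_14 = custom_machine_hourly(14, 52)
rounded_16 = custom_machine_hourly(16, 60)
```

The point of the comparison: with only power-of-two sizes, a 14-vCPU workload pays for two vCPUs and the extra RAM it never uses.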
Now, there are a couple of reasons we're able to do that, and it comes down, functionally, to the way Google is architected. In the keynote yesterday there were a lot of questions about what's called KubeVirt, and KubeVirt is a VM running in a container. Great idea.
But some people say, "That's just not how you do it." Well, the funny thing is, everything in Google that's a VM runs in a container, and we did it for a really interesting reason. To tell the story, go back about six years, to when we first decided we were going to roll out VMs. We had a really easy decision — or a tough decision,
I should say. We could do what every other public cloud does — the way you usually get to VMs is to lay down bare metal and throw VMs on top, and you're off and running. The big problem was that nothing in Google ran on a VM; everything was already container-based. So we made a very fundamental decision when we went about this, and it took a lot more engineering and a little more time to get to market: we took VMs and we put them in containers.
These aren't Docker containers, but you can think of them as relatively similar. Google had already orchestrated this: we have something that runs our containers on our platform called Borg, and what Borg does is schedule between 2 and 3 billion containers a day. As it schedules those, it figures out the very best host to run each one on, and it does all that work by taking VMs and putting them in containers.
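Borg's real scheduler weighs far more signals than this, but the core placement idea — pick a feasible host that leaves the least stranded capacity — can be sketched in a few lines. The host names and capacities here are invented for illustration.

```python
def pick_host(hosts, cpu_req, mem_req):
    """Best-fit placement sketch: among hosts with enough free CPU
    and memory, choose the one with the least leftover capacity,
    keeping large hosts free for large workloads.  (Illustrative
    only -- a real cluster scheduler considers many more signals.)
    """
    best, best_slack = None, None
    for name, (cpu_free, mem_free) in hosts.items():
        if cpu_free >= cpu_req and mem_free >= mem_req:
            slack = (cpu_free - cpu_req) + (mem_free - mem_req)
            if best_slack is None or slack < best_slack:
                best, best_slack = name, slack
    return best  # None if nothing fits

# Free (vCPU, GB RAM) per host -- invented example data:
hosts = {"h1": (16, 64), "h2": (4, 16), "h3": (8, 32)}
```

A 4-vCPU / 16 GB request fits "h2" exactly, leaving the bigger hosts untouched for bigger VMs.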
If you don't know what a noisy neighbor is: in a virtualized environment, it's when someone — particularly someone like a Netflix — happens to land on the same host you're on. That's not good news, because Netflix is going to use every last resource they can, particularly on the network side. What we generally see with customers when that happens is that they end up with a noisy neighbor, but they don't know why.
They can't easily tell why it's there, because of course these are shared resources. That goes back to why we did the work around a container-based world: we put VMs into containers because it allows us to resize them and move them. So we can take an application running in a VM, streaming a 1080p service, and seamlessly move it — in the VM — to a completely different host in the same region.
This problem is slightly frightening. If you're not familiar with what you're looking at, this is Spectre and Meltdown. These are core, low-level issues that showed up publicly early this year — well, in reality, they showed up over a year ago. They sit down at the level of your CPU and BIOS, they are really, really dangerous, and they're extremely difficult to patch. An interesting bit of information: the team that found these exploits is a team called Project Zero at Google.
On top of that — that's kind of your base: what do we have, why does it matter, is the engine solid, what does your transmission look like. Once you have your engine and transmission, let's go ahead and throw this up. You've seen this a ton of times; we've heard all about OpenShift all week. So I'm going to spend my last couple of minutes detailing this slide, which I know seems a little dense.
Let me make sure I can explain what you're seeing. At the center here is core OpenShift — your core OpenShift environment. It runs on top of Compute Engine, so all the things we talked about earlier — the custom VMs, the boot times, the throughput — all the things that make sure these applications have a killer experience — sit on top of our infrastructure. They can leverage the global Container Registry, which sits with our persistent storage, and then, of course, there's the whole networking layer on the top right.
So you get the same thing you've always gotten; it's just been abstracted. We've worked with Red Hat — they do an awesome job — so these are things you would never see in your OpenShift deployment; you just take advantage of what you just heard, because it already runs there. That's powerful, but the real power of open is what's on the right: your services. We're pretty familiar with what a service broker is, but while having a service broker is useful, actually getting access to services is the tricky part. So Red Hat and Google went a different way.
The process was a little different: the Open Service Broker API. In this case I'll break the news to you: every Google service that you would want, you get instantaneous access to from your applications — on-prem, or if you're running on top of Google. The use case we're seeing with customers — you'll see this again with Kohl's tomorrow — is that they love the data services, they love our machine learning, they love the APIs.
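The broker mechanics behind that are standardized: a platform provisions a service instance by sending a small JSON document to the broker, per the Open Service Broker API spec (`PUT /v2/service_instances/{instance_id}`). The service and plan IDs below are made-up placeholders, not real Google broker GUIDs.

```python
import json

def provision_request(service_id, plan_id, parameters=None):
    """Body of an Open Service Broker provisioning request: the
    platform names the service and plan it wants, plus optional
    service-specific parameters."""
    body = {
        "service_id": service_id,
        "plan_id": plan_id,
        "parameters": parameters or {},
    }
    return json.dumps(body)

# Hypothetical request for a messaging service with a custom topic:
req = provision_request(
    service_id="example-pubsub-service",
    plan_id="example-default-plan",
    parameters={"topic": "orders"},
)
```

Because the request shape is the same for every broker, OpenShift's service catalog can provision a Google service, an SAP service, or an IBM service through one mechanism.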
They love all the things they want access to, and they get seamless access; there's nothing that company ever had to do, because Google and Red Hat and SAP and IBM all worked together on this Open Service Broker. It serves all of these services, any time you want, from anywhere you want. So now, when you want to supercharge that application, or you want to say, "Hey, developer, would you like to be able to put that in another language?" — perfect: one API call to the best translation in the business.
If you want to look at and analyze, say, speech-to-text: perfect, one API call and you've got it. Same thing if you're running on the platform — although I wouldn't suggest you run your data tier separately; don't put your SQL instance in Google and your application on-prem, try to keep those together. But if you decide you want it on Google, you also get access, without any work, to some really very powerful tools like Spanner.
For reference, if you are a SQL user: Spanner is what happens when you take a NoSQL database that can run anywhere on the globe and give it SQL consistency. You always had to take a hit in terms of a database — I could be consistent, but I couldn't be geographically distributed. Spanner is a global database. It's the one that sits underneath all of Google — the granddaddy of all databases.
It allows you to write simple SQL and get commits, within milliseconds, on opposite sides of the globe at literally the same time. You don't have to manage it, you don't have to do any work — it's done, ready to go — and it's something you can surface and take advantage of for your app and your users. Running on Google, they have frictionless access to all of that right through the service broker, so it now fully feels like a first-party service, on-prem or with GCP — or, for that matter,
if you're running on another cloud and you like the services we have, you're welcome to access those; they're easy to reach from anywhere. This is really powerful: what we're seeing is a ton of work with customers around taking great applications and making them that much better — giving developers exactly what they want with no extra work. The other thing this also solves is RBAC — the role-based access controls, the consistency of who gets what. All of that is extremely important.
And all of that is obviously baked into the work we already did around the service broker. So I will pause — one last thing, since we'll spend more time tomorrow and I'm almost at the end of my time. If you want to play with these APIs, or you want to do anything specific, again, we have a booth, and we'd love to give you a kit — it's a Google Home kit with a Raspberry Pi that you can win. All you have to do is a couple of coding exercises, and you're off and running.
We'll spend much more time tomorrow going through this with you at the keynote. Since everyone here is a Red Hat customer — I'm just being fully transparent — it's easy: if you're a Red Hat customer, it all runs on Google today, and it's the best place to run your RHEL. You want per-second billing on RHEL? It's on Google. You want to bring your RHEL license to Google? Done. You use Ansible? Best place to run Ansible. Same thing with your OpenShift. All of that is easy, ready to go, and has been for a long time.
Last piece — a place to come run this. For anyone new to OpenShift, let me put it another way: who has used OpenShift? Who has not used OpenShift? Okay, so this might be a fun place for you: the Test Drive. It sets up an entire environment — one click, all set up. You have an entire OpenShift environment in a Test Drive; it's all built for you, you're ready to go — build whatever you want and take it through.
The Test Drive requires no cost, nothing — sign up, and you're off and running, with all this stuff integrated in there. So none of the day-one OpenShift work: OpenShift is awesome, but it takes some day-one work to set it up. This means you can go straight to day two and get to play with it — off and running, no problem whatsoever. So I will stop and pause with the last couple of seconds: any questions anyone brought that they would want answered?