From YouTube: OpenShift on AWS - Kubernetes, Open Service Broker and Customer Obsession - David Duncan, Amazon
Description
OpenShift Commons Gathering December 5th 2017 Austin, Texas
David Duncan, Amazon Web Services.
Yeah, so I'm just going to throw out a few statistics here, things I thought were very interesting about adoption in the container space and Kubernetes. First off, at DockerCon I was struck by the number of applications that are containerized today and by how much is happening; it's just such an active space. Datadog had an article where they talked about adoption over just the past year and the increase in the number of Docker images being used in production, and RightScale found very much the same. I like that metric, because I felt it targeted the enterprise space very directly.
But the one that really impressed me was from the CNCF survey, where it looked like, with just native adoption of Kubernetes, about 63% of those workloads were ending up on AWS. So, looking at that information, and at the strong effort we've had with Red Hat over quite a long time now, we were working on putting together more of a picture of what OpenShift looks like, as Kubernetes was getting wider adoption across not just enterprise business but a lot of mid-market business as well.
Working over the past year or so, and as announced at Summit, we were building an integration of the Open Service Broker API in collaboration with Red Hat. In our joint work, we put together a group of services that are natively available to Red Hat OpenShift customers through the OpenShift console. That work is built around the OpenShift Ansible Service Broker: in the collaboration, CloudFormation templates were created and then run within the framework of the OpenShift Ansible Service Broker.
A lot of that work went into building an immediate user experience, so that even without knowing everything about the command line you can use those service brokers right away from the console. I'm proud of this photo, because it shows that a lot of this work was done using OpenShift Origin, too. There were ten services that we focused on.
If you heard any of the early Open Service Broker API discussions, one of the first surfaces to attack was the database, so RDS, the Amazon Relational Database Service, was one of our first projects for adoption. Early examples of the Open Service Broker in action showed RDS MySQL being used, but analytics services like Amazon Athena, Amazon EMR, and Amazon Redshift have all since become part of the first grouping of tools exposed today through the Service Broker API.
So where are all the rest? That's what you want to know. In the fullness of time we would like to see all of those services in the Open Service Broker, but we would really like to work on them in the order that is most useful to you, so we're looking forward to feedback on what should be next in the adoption line.
We think this offers you incredible agility inside of the infrastructure, so we'd like to see how you use it and to get more detail. You may have noticed that I have only a few statistics. The reason I don't have any statistics that show OpenShift specifically is that I need to hear more from the users: I can't see what you're deploying, I can only see that something has been deployed. I know there are millions of containers out there, millions of images being deployed, and I'm sure there is a very large number of OpenShift deployments among them. I'd love to hear more feedback from you about how you're using Red Hat OpenShift or OpenShift Origin in your environment. You have a global footprint, you can deploy pretty much anywhere, and we feel we have a great shared security model.
From the instance up, from the host up, we know you're using OpenShift and your own tools to secure your environment, and we're handling things from the instance down, with multiple industry certifications and best practices for security. And again, the partner ecosystem we have includes basically everything that's in the Red Hat suite, but we also have thousands of partners, including technology partners who are adding software, machine-image products, and services inside of AWS.
I remember the gentleman from Telus saying that they had only a few people inside their organization who were AWS experts; I'm sure they only had a few people who were OpenShift experts as well. We want to make sure we're empowering people in organizations that have neither, or only one, of those kinds of expertise, and in a very quick way. So inside of Amazon we have the AWS QuickStart team, which is made up of partner solutions architects, similar to myself, but also a whole host of people from the community who integrate and collaborate on those quick starts, so that they can better understand how they run and make sure that they run in the best possible way. We wanted customers to be able to quickly and efficiently deploy an OpenShift environment of their choice for experimentation and then, with a single click in the CloudFormation console, roll back the entire implementation.
This represents basically the architecture that gets deployed. It is based on the Red Hat reference architecture: we worked pretty closely with Scott Collier and his associates on the Red Hat reference architecture team, and they in turn worked in tandem with the OpenShift Online team to find best practices for deploying OpenShift on AWS. That included some interesting things, like leveraging the Elastic Load Balancer both to balance load across the application nodes and to provide a front end for the applications deployed within your projects.
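In CloudFormation terms, a router-facing load balancer along the lines the reference architecture describes might look roughly like the classic ELB resource below. This is a hypothetical sketch: the resource and parameter names (`RouterELB`, `PublicSubnet`) are illustrative and not taken from the actual QuickStart.

```yaml
# Illustrative fragment only, not the QuickStart's actual template.
RouterELB:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  Properties:
    Subnets:
      - !Ref PublicSubnet          # assumed parameter name
    Listeners:
      # Pass TLS through to the OpenShift routers on the app nodes,
      # which terminate it per route.
      - LoadBalancerPort: "443"
        InstancePort: "443"
        Protocol: TCP
    HealthCheck:
      Target: TCP:443
      Interval: "10"
      Timeout: "5"
      HealthyThreshold: "3"
      UnhealthyThreshold: "5"
```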
You'll see on here that we used serverless functions. We used a serverless function to ensure that we were getting the SSH credentials across to each one of these individual nodes, and we auto-generated those credentials. The reason we used that serverless function outside of the OpenShift environment is that it gave us an opportunity to do the full cleanup: we're deploying that stack, the stack is fully functional, but the wrapper around it is native.
That's still around today. Looking forward, our partner hybrid team is going to carry this into OpenShift 3.7, where we'll incorporate the AWS Service Broker, some native logging and metrics, and some changes to the way we use Route 53 for DNS. And there is one more thing I'm really excited about: we have a best practice for scaling out nodes that was put together by the QuickStart team. Well, I say the QuickStart team.
The other thing about this work is that the pull request I've pointed to on the slide was done by Andrew Glenn, who is one of our support engineers. So not only is the QuickStart team collaborating with members of support, but support has also built a community of practice around OpenShift and is getting a much better understanding of how to work with it.
Some of the interesting demands we found in building the auto-scaling models were that, with our lifecycle hooks, we had about four minutes to finish our job, and that didn't necessarily always gel with the time needed for the Ansible scripts to play out. So as we used those Ansible scripts we were also improving them: we worked with the Red Hat Ansible team, giving them feedback and improving the openshift-ansible scripts.
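On AWS, a window like the roughly four minutes mentioned here corresponds to a lifecycle hook's HeartbeatTimeout: the hook holds a launching node in a pending state until the configuration scripts complete or the timeout expires, and an instance can record lifecycle-action heartbeats to extend the window when playbooks run long. A hypothetical fragment (the names are illustrative, not the QuickStart's) might look like:

```yaml
# Illustrative fragment only, not the QuickStart's actual template.
NodeLaunchHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref OpenShiftNodeGroup   # assumed name
    LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
    HeartbeatTimeout: 240     # the ~4-minute window from the talk
    DefaultResult: ABANDON    # give up if the scripts never report in
```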
A
We're
using
CloudFormation
templates
as
a
basis
for
the
QuickStart
we're
using
Identity
and
Access
Management
to
to
do
fine-grained
security
around
those
specific
specific
elements
of
the
the
environment,
so
that
we
can,
you
know,
maintain
the
least
privilege
we're
using
the
virtual
private
cloud
to
ensure
that
the
end,
the
deployments
that
we're
doing
are
as
localized
as
possible
and
non-routable,
where
necessary,
but
routable.
Where,
where
you
require
your
applications
to
be
exposed
specifically
through
the
eeob
and
then
we're
using
serverless
functions,
they
are
native
for
serverless
functions
to
support
the
deployments.
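As a rough illustration of the least-privilege idea, an IAM policy attached to a node role would allow only the specific calls that role actually needs. The actions below are examples chosen for illustration, not the QuickStart's actual policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NodeReadOnlyExample",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "elasticloadbalancing:DescribeLoadBalancers"
      ],
      "Resource": "*"
    }
  ]
}
```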