From YouTube: Learning How to Pronounce Kubernetes to Production in 3 Months! by Sheriff Mohamed, GolfNow
Description
Learning How to Pronounce Kubernetes to Production in 3 Months!
Presented by Sheriff Mohamed, GolfNow & Josh Chandler, golfchannel.com
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
Sheriff: How you doing, everybody? Can I hear— good, come on. All right, yeah, let's get some life in here. So we're gonna tell you a little bit about our journey into Kubernetes and how we went from learning it — learning how to pronounce it — to production in about three months' time or so. We're from GolfNow, which is a branch of Golf Channel, which is also a branch of NBC Sports, and it keeps going from there — I'm going to spare you all those details.
So who are we? I am Sheriff Mohamed, director of architecture at GolfNow. I've been with the company seven years — actually, today is my seven-year anniversary. Yeah, there you go. Thank you. And Josh Chandler, senior software engineer on the architecture team as well; he's been here about two-plus years.

So what do we do at GolfNow? We are an e-commerce platform for tee times. We sell tee times — for military, things like that, at nice discounts, obviously, for them. We have our counterparts in Europe, called TeeOff Times. We also have a Groupon-like application called Deal Caddy to sell golf clubs, gloves, things like that. We also help with operations — point of sale for the golf courses, electronic tee sheets. We help courses from mom-and-pop shops all the way up to resorts with their operations. We also offer services like phone-answering services, 24/7, 365.
So we've been growing like crazy — I think we've bought about 10 companies or so in the past couple of years — so it's been awesome. And that takes us into: growth is awesome, right? You're making tons of money, you've got people popping up from anywhere and everywhere, even from places you don't even realize you would have, and everybody's high-fiving each other; the sales team is like, yeah, we're killing it — and it's great. However, on the technology side, we're thinking: wait a second. Growth is hard.
We've got to deal with twenty-plus-second page load times, making sure our website is up and running, our infrastructure getting hammered, things like that — sleepless nights, of course. So with all that, we actually went through a lot of redesigning of our code. We had a lot of issues with our code — stored procedures not being very efficient — so we fixed a lot of that stuff.
So a few months after that, Steve — sitting over there in the corner, our VP of Technology — comes to me and says: I want you to make a platform that is geo-load-based; if a data center goes down, the user doesn't notice. Go! I'm like: oh my — all right, awesome, yeah, we can do that, no problem. And a few seconds later it kind of sunk in, and I'm like, wait a second — what do I have to do? I kind of calmed down from there and realized: all right, what were our pain points? Why did he ask for this? Why do we want to do this?
We had just fixed a ton of code, right, but we were still growing. There was still a lot coming down the pipe for us, and we needed to do something else. We needed to learn how we could scale in a much more efficient way, so we could actually sleep — because we went for about two to three, four years or so not sleeping at night while our infrastructure was getting hit pretty hard.
A
So
the
lay
of
the
land
of
when
we
started
looked
a
little
bit
like
this,
where
we
were
West
Coast
datacenter
only
then
we
had
a
dr
over
on
the
east
coast
and
then
we
had
a
one-way
data
sink
that
would
happen
every
15
minutes
or
so,
and
it
was
great,
but
the
problem
was:
is
that
we'd
have
to
build
an
identical
data
center,
so
that
means
double
the
cost.
Double
the
time
of
deployment
double
the
time
of
test.
Then
it's
become
a
reactive
thing,
not
a
proactive
saying.
So if the data center on the West goes down, we have to scramble for a couple of hours, or whatever the case may be, to make sure the East is okay. So that was at least the lay of our infrastructure. Then our application: it was a monolith. It was the textbook example of reusability that you learned in computer science — it shared everything. The problem with sharing everything is that when we made a change—
—as you all know, there would be a huge ripple effect across everything in all our systems, which means we'd have to test everything, all the time, all over again, pre-release, which means our cadence for releases was a lot slower. So from there we knew that we needed to break it apart, and we wanted to go towards more of a microservice architecture. And of course, when something went down — and yes, that includes a database — we wanted it to go down in isolation.
So at this point, after kind of knowing at a high level what direction we wanted to go, we knew that we wanted to go containers — Docker and all these things. But the problem was, containers and Docker were great at a local level, on a dev machine: Josh and I passed containers back and forth, it was awesome, it made things a lot easier in that sense. However, we had a lot of questions. How is this going to go to production? How will this scale? How are we going to schedule this?
Yeah, we looked at a couple of different ones — Mesosphere, actually, and DC/OS — but then we stumbled on this thing called Kuber-something-or-other, and we were like, okay, this seems kind of promising. When we discovered Kubernetes, this was about, I think, September of last year; it was at a 1.0.4 or 1.0.5 version, and we realized that this was probably the direction we wanted to go. We were really heavy on Mesos; we were about to go in that direction.
However, we noticed that it wasn't as easy for us to put it in different cloud providers if we wanted to. We were pretty much stuck in AWS — at least at that time, we were pretty much stuck in AWS with DC/OS. We wanted to make it so that we could go GCE, AWS, private, wherever, and Kubernetes kind of gave us that light.
It gave us that ability, and we kind of shifted gears from there. And at exactly around that time we had this battle between Kubernetes and Cloud Foundry. We were tasked with: okay, go ahead and build out a Kubernetes cluster — and another team was building out a Cloud Foundry cluster — to see at the end of about three months or so which was going to be more production-ready, how we could use it, and things like that.
A
So,
as
we
were
going
through
this
journey,
we
realized
towards
I,
don't
know
two
and
a
half
months
or
so
into
it.
Cloud
Foundry
wasn't
gonna
work
for
us
for
our
particularly
use
cases
and
the
way
we
wanted
to
use
it.
We
wanted
it
to
be
able
to
run
docker
because
we
had
already
containerized
and
docker
eyes
a
couple
of
our
applications
as
it
was
at
that
time
and
we
wanted
to
actually
go
a
level
deeper
and
containerize
our
data
tier,
which
was
crazy
at
the
time,
but
we
wanted
everything
to
the
same.
But it was all running on Heroku, Compose, and a bunch of other third-party services for the data tier and everything like that. So it was a perfect candidate for us to bring over into the new platform we were building. And I'm very competitive, and a little crazy I guess, and I wanted to prove that this new platform was definitely the way we wanted to go — that it's the wave of the future for us.
A
Obviously
we
chant
when
we
moved
one,
we
make
sure
everything
was
running,
let
it
run
for
a
couple
days
and
then
move
the
next
and
so
on.
So
it
took
us
about
a
week
or
so
and
nobody
noticed
which
was
awesome.
Obviously
only
person
I
really
knew
I,
really
the
two
of
us
and
a
couple
other
engineers
that
worked
with
us
and
Steve,
because
we
had
to
tell
him
obviously
so
we
kept
gaining
confidence
kept
going
from.
From there we started moving our RabbitMQ queuing service, our Redis, and just kept going, and in about — I think it was a two-to-three-week time frame — we moved the entire infrastructure that had been running on third-party SaaS over into our new platform. That's actually running in GCE — and no, not GKE, because I've had that question many times.
So life was good at that point, but that didn't really solve the complete requirement that was given to us at the beginning. This was just proving out that we could have these different clusters all around the world. We had this in one data center; now we had to get into designing how we would geographically distribute all this data. Obviously, we could go the easy way and use Mongo's replica sets across the globe and let it do its thing, and we'd be done.
A
However,
we
didn't
want
to
marry
ourselves
to
one
particular
data
store
because
everything
changes
and
it's
constantly
going
and
moving
forward
and
we
want
to
be
able
to
move
forward
with
it.
We
don't
want
to
be
stuck
anywhere,
so
we
needed
to
devise
a
different
way
of
moving
all
this
data
around.
So
what
we
came
up
with
was,
let's
put
Kafka
on
top
of
all
of
this.
Let's have a service that can watch the oplog on whatever data store we use, take that op and push it up to Kafka, and then from there we can use MirrorMaker and move all the operational logs, basically, all over the globe — over to the different data centers that need that data and don't already have it, because the data originates in a particular data center and then has to move across to the others.
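The oplog-tailing idea can be sketched roughly like this (a Python illustration for brevity — the real service was written in Node.js and later Go; the field names follow MongoDB's oplog format, and the plain dict stands in for both the Kafka topic and the receiving data store):

```python
import json

def oplog_to_message(entry):
    """Convert a MongoDB-style oplog entry into a keyed message.

    Keying by (namespace, document id) keeps replication idempotent:
    replaying the stream just re-applies the latest op per document.
    """
    key = f"{entry['ns']}:{entry['o']['_id']}"
    value = json.dumps({"op": entry["op"], "ts": entry["ts"], "doc": entry["o"]})
    return key, value

def apply_message(store, key, value):
    """Apply a replicated op in a remote data center.

    'store' is a dict standing in for the destination data tier.
    """
    msg = json.loads(value)
    if msg["op"] == "d":           # delete
        store.pop(key, None)
    else:                          # insert / update
        store[key] = msg["doc"]
    return store
```

In the real pipeline, the producer side publishes to a local Kafka cluster and MirrorMaker fans the topic out to the other regions, where a consumer runs the apply step.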
Josh: So after, you know, day two of KubeCon, I'm sure this is going to be kind of a familiar workflow to most of you, but we'll just walk through it very briefly. The life cycle starts, of course, with an engineer who wants to make a change to anything. In fact, you'll probably see some of these messages popping up as we go through our demo, which we're going to do in a second, as soon as somebody pushes to GitHub.
Jenkins' first order of operation is to build that code and run an automated test, whatever that may be — npm build— sorry, npm install, go build — run the automated test and make sure it succeeds. If in fact it does, then we'll have a Docker build, and it will deploy straight out to quay.io, our container repository. Then, on the culmination of that operation, we'll push that out to the staging Kubernetes instance that we have. After that's done, we'll have Jenkins file a ticket on our ticket-tracking system, which will inform a QA engineer:
Hey, go check this out, we just made a change. So they'll be informed as such, they'll interface with that code out there in the QA Kubernetes cluster, and they'll sign off on the ticket. That pushes back to Jenkins — hey, mission complete — and then we'll roll that out to the production Kubernetes cluster. Currently we have that at the push of a button; we could make it automated if we wanted to — it doesn't really matter, we're just kind of trying to get things off the ground.
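That promotion flow — build and test, Docker build, push to the registry, deploy to staging, QA sign-off, then a push-button production rollout — can be sketched as a small gate function (a Python illustration; the stage names are ours, not GolfNow's actual Jenkins configuration):

```python
# Stages of the pipeline described above; the final one is gated on a
# human QA sign-off (the "push of a button").
PIPELINE = ["build_and_test", "docker_build", "push_image",
            "deploy_staging", "qa_signoff", "deploy_production"]

def next_stage(completed, qa_approved=False):
    """Return the next stage to run, or None if we're waiting or done."""
    for stage in PIPELINE:
        if stage in completed:
            continue
        if stage == "deploy_production" and not qa_approved:
            return None          # hold until the ticket is signed off
        return stage
    return None
```

Flipping `qa_approved` to always-true is all it would take to make the production deploy fully automated, which matches the "we could make this automated if we wanted to" remark.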
There we go. There's really not much to Toybox — it's a toy; it really just counts hits. That's what you see here: every time you go to the page, it'll count a hit and log it back, and of course you can see we've got all that version history and stuff like that. In fact, we have two builds of this: one is running in our staging instance, the other is running in our production infrastructure.
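The actual Toybox code isn't shown in the talk, but a hit-counting toy like the one described fits in a few lines (a stand-in sketch using only the Python standard library, not the real demo app):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import count

hits = count(1)  # process-wide hit counter

class ToyBox(BaseHTTPRequestHandler):
    """A toy service: every GET bumps the counter and reports it."""
    def do_GET(self):
        body = f"hit {next(hits)}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet
```

Starting it with `HTTPServer(("", 8080), ToyBox).serve_forever()` and pointing a requester loop at it is enough to watch traffic rise and fall through a rolling update, as in the demo.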
It's the exact same code, and it does the exact same thing. Before I get into the demo, I just kind of wanted to give you an overview of what this Jenkins stuff looks like. So we haven't gone the whole Jenkinsfile route yet — we did just recently up-version Jenkins, and we are using pipelines — but we've got things distilled down to a whole "just tell me what you want and which interface or cluster — okay, great, here's your configuration for it."
We have things about the tooling that you need to build things out, environments that you want to use, source control management — one day maybe we won't be on Git, and I may want to change that over. So we've done this and flattened this view out across most of our build properties, and it's worked out very, very well for us; we're really happy with it, by and large.
So if we flash back to the application: as these pods start to spin up — I just scaled up this requester module, which is essentially making curls against it over and over again — you'll see that the traffic is kind of spiraling out of control, and that's not very good. Just to up the ante on that, I'm going to do the exact same thing in production — because demo gremlins, you know, never happen — and it would also kind of be nice to see what transpires when we do this rollout.
Okay, so the thing I'm going to do here is put in some Redis, and that's going to be to do some rate limiting over the top. It's one thing that we really, really like about our microservice architecture, insofar as everything is so easy, so smooth. In fact, when I wrote this out in the first place, before I did all this syntactic sugar over the top of the process, it really took five minutes to put it all together — which is something I don't think we could say for our traditional monolithic build.
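Redis-backed rate limiting of the kind Josh describes is usually the classic fixed-window counter (the INCR-plus-EXPIRE pattern). A minimal sketch, with a plain dict standing in for Redis and made-up limits:

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter, the INCR+EXPIRE pattern you'd run on Redis.

    A dict stands in for Redis here; the real thing replaces the counter
    bookkeeping with an INCR on a per-window key plus an EXPIRE.
    """
    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit          # allowed requests per window
        self.window = window_seconds
        self.clock = clock          # injectable for testing
        self.counters = {}

    def allow(self, client_id):
        # One counter per (client, window) pair; the window index changes
        # every self.window seconds, which is what EXPIRE achieves in Redis.
        window_key = (client_id, int(self.clock()) // self.window)
        count = self.counters.get(window_key, 0) + 1
        self.counters[window_key] = count
        return count <= self.limit
```

The limit and window values here are illustrative, not the ones used in the demo.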
We have kind of moved away from actually talking to machines directly; of late we've been getting into the whole ChatOps thing and building out bots. In fact, instead of going into Jenkins to finish this all out, I'm going to talk to our bot. His name is Rick Jenkins, and he's essentially our interface into Jenkins itself — I'll talk a little more about where he came from in just a minute. But that build has completed — I'm skipping over a couple of things. Oh, so sorry.
If you look over here at Toybox, you'll notice the request traffic has really tailed off. The deployment happened to the QA cluster, just like that, and commensurately you're going to see the exact same thing over here on the production edition. We're taking a whole lot of flak right now — a lot of traffic — as the rolling update occurs, but give it a little bit of time, and suddenly we get the super cool edition. Same thing here for QA.
So let's take back over. Okay — I was going to talk about some of the supporting application infrastructure. When we came into this, as Sheriff mentioned, we inherited that company, TeeLeader, and they came from Heroku. Heroku's got all of its nice features and things like that, so we kind of wanted to get those things off the ground in Kubernetes as well.
One of the first things we did was a HireFire equivalent that we call Wissen, which is essentially responsible for monitoring queues in RabbitMQ — their sizes — and then scaling workers up when the time comes. So we have settings that are quite a bit like HireFire, insofar as that goes. We use Pingdom a lot internally to monitor applications, but we wanted a way to look at things like MongoDB and Elasticsearch semantically. Instead of just saying, you know, can I get to this thing, can I do a DNS lookup — that's not enough.
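The HireFire-style rule behind a queue-watcher like Wissen boils down to "one worker per N queued messages, clamped to a floor and a ceiling." A sketch (the thresholds are illustrative — a real controller would read the queue depth from the RabbitMQ management API and scale the worker deployment to match):

```python
def desired_workers(queue_depth, msgs_per_worker=100,
                    min_workers=1, max_workers=20):
    """HireFire-style scaling rule: one worker per N queued messages.

    The result is clamped so we never scale to zero and never run away
    under a backlog spike. All three knobs are made-up defaults.
    """
    wanted = -(-queue_depth // msgs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, wanted))
```

The controller loop around this would poll queue sizes on an interval and apply the result, the same way HireFire scales Heroku dynos.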
We wanted to see something that's more about the semantics of how that application works: I should be able to log into a cluster, run some query, get a result back, and everything should work out from top to bottom — which Pingdom doesn't really give you right out of the box. So we wrote an app called Thief, and it does exactly that. We have interfaces for MongoDB, for Elasticsearch, for RabbitMQ itself.
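A "semantic" check of that kind is a round-trip probe — write a unique document, read it back, clean up — rather than a port ping or DNS lookup. A sketch against a minimal store interface (the in-memory `DictStore` is purely illustrative, standing in for a real MongoDB or Elasticsearch client):

```python
import time
import uuid

def semantic_check(store, timeout=5.0):
    """Round-trip probe: write a unique doc, read it back, clean up.

    'store' is anything with put/get/delete; backing those with a real
    insert/search/delete makes this a MongoDB or Elasticsearch check.
    Passing means the datastore semantically works end to end, not
    merely that its port answers.
    """
    probe_id = f"healthcheck-{uuid.uuid4()}"
    started = time.monotonic()
    try:
        store.put(probe_id, {"ts": started})
        doc = store.get(probe_id)
        ok = doc is not None and doc["ts"] == started
    finally:
        store.delete(probe_id)
    return ok and (time.monotonic() - started) < timeout

class DictStore:
    """In-memory stand-in for the data tier, for illustration only."""
    def __init__(self):
        self.docs = {}
    def put(self, key, doc):
        self.docs[key] = doc
    def get(self, key):
        return self.docs.get(key)
    def delete(self, key):
        self.docs.pop(key, None)
```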
That thing is really kind of meant to abstract away the concepts of, you know, your particular API. We kind of wanted to do a Hubot-like thing, but for the new age of Slack. So this Amy framework is something we're hoping to open source in the near future. We have a Twitter handle for this — That Real Rick Jenkins — if you want to follow along. We think it's a really great project; you'll really get a whole lot out of it.
What else have we been doing? Backups of our entire cluster. We've written a lot of things to be able to just kind of snapshot the state of things and push it into buckets out there in GCE. That's been very good. In fact, we have saved ourselves on numerous occasions: oh no, we obliterated this object — what are we gonna do? Let's go out to the bucket and just replicate it, and there you go. Or if we want to replay into a particular cluster, we can do that.
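The snapshot idea — dump every object's manifest as JSON under a timestamped key, restore by fetching and re-applying one manifest — can be sketched like this (the returned dict stands in for a GCS bucket, and the key layout is our own invention, not GolfNow's):

```python
import json

def snapshot_resources(resources, taken_at):
    """Serialize cluster objects to JSON under timestamped keys.

    'resources' is a list of manifest dicts (the shape `kubectl get -o json`
    returns per object); the mapping of key -> JSON is what would be
    uploaded to a bucket.
    """
    backup = {}
    for manifest in resources:
        meta = manifest["metadata"]
        key = "{}/{}/{}/{}.json".format(
            taken_at, manifest["kind"].lower(),
            meta.get("namespace", "default"), meta["name"])
        backup[key] = json.dumps(manifest, sort_keys=True)
    return backup

def restore_object(backup, key):
    """Fetch one object's manifest back out of the snapshot to re-apply."""
    return json.loads(backup[key])
```

Restoring a single obliterated object is then just a key lookup plus a `kubectl apply` of the recovered manifest; replaying a whole cluster walks every key under one timestamp.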
One very big thing we did with our cluster: we were trying to get some geo-service data that we were paying for — it's very expensive, prohibitively expensive — and we wanted to sub that out. We were tasked with doing that, and in about a week, with this microservice architecture, we were able to get one of these services off the ground with our own data store. It saved a lot of money — a lot of money — and we were able to survive.
Sheriff: Okay guys, so we're gonna talk a little bit about our lessons learned — what we've done, what we wish we hadn't done, that kind of stuff. One of the big things, as Josh mentioned, is the chat bot that we created. We had been trying to create a UI, because the dashboard just wasn't there yet — it was the early version of the UI; it didn't have deployments and all that stuff — so we were gonna start trying to create our own and give it back.
But you can't always do that, so the bot's been great — that's been a big lesson learned for us. Another lesson learned: at the very beginning, when we first started, we wanted to have a production cluster and a QA cluster. We understood the point of namespaces and all that good stuff; however, since we had to be PCI compliant, we weren't sure how that was gonna go over with all the auditors, so we figured, okay, let's just separate them completely —
— even physically. So when we started out with that, unfortunately, we used the kube-up script at the time, and we didn't realize that the kube-up script is only going to create one subnet for you to run the cluster in. When we spun up the second one, it was on the same subnet. So we were load testing, playing around with this thing, and we were getting dropped packets, 20-second load times — like, what the hell is wrong with this community?
Why do they all like Kubernetes when this thing is not even working? So one morning I woke up in a cold sweat and had this epiphany: let's kill this other cluster — because after digging deep into how the networking worked, I realized it might be this. So we killed the cluster, and as soon as that happened, that thing was snappy, it was quick. It was back to the point where, okay, now we get it — we get why everybody loves Kubernetes as much as they do.
So we had to fix the kube-up scripts, from our perspective, to create a new subnet for us when we deployed into GCE at the beginning as well.
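Catching that mistake up front takes only the standard library — two clusters must not share or nest their CIDR ranges (a Python sketch; the CIDRs below are examples, not the actual network layout):

```python
import ipaddress

def overlapping_subnets(cidrs):
    """Return every pair of CIDRs that overlap.

    Running this over the subnets of all planned clusters would have
    flagged a second cluster landing on the first one's network, the
    failure mode described above.
    """
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b))
            for i, a in enumerate(nets)
            for b in nets[i + 1:]
            if a.overlaps(b)]
```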
Also, because we had started off with C# and .NET — everything was C#, and today there still is a lot of C# in there — there was no .NET Core, there was no containerization of that language, so we were trying to work towards that. That's been kind of a long-winded lesson learned for us; we're still working through it.
Now we actually have a few engineers on the core team who are starting to rewrite a lot of our services to utilize that. And then there was this one day we were trying to use an RBAC plugin so we could have authorization, and this person — I don't want to name any names — this idiot wanted to try and do it on a production cluster, and—
So yes, it was me, the idiot. I thought I knew what I was doing: I installed the RBAC plugin, tried to get it up and running, and brought down our entire cluster — and this was the production cluster, mind you. This was part of my crazy side, where, you know, I was being a little too ambitious. With that, the great thing was that we had created a config backup mechanism that takes our entire Kubernetes infrastructure, outputs it as JSON files, and puts them into a bucket for us somewhere.
So I learned at that point that we needed a QA cluster — and try not to use alpha applications within a production cluster. With all that, I mean, the point is that a lot of the lessons we learned were because we created problems for ourselves, and a lot of it was because of networking and things like that. So if you haven't had an issue with networking on Kubernetes, you have not used Kubernetes — you have not exercised it enough yet. So those are our big lessons learned, I think.
So, to wrap it up, I want to share some stats with you guys on this migration and how much we've actually done in such a short period of time — and really with about one and a half resources, because I was on it about half the time, Josh has been on it a lot of the time, full time, plus a couple of ancillary developers who were helping us out. We're running about 200-plus services within all our clusters.
We're running four clusters. We have a Golf Channel cluster — Golf Channel is actually hosted through Kubernetes. We have a GolfNow cluster, which is running some of this TeeLeader — it's actually running all of the TeeLeader application, which is going to be kind of like a central microservice.
Everything is going to move towards that direction. Then we have a QA/staging cluster, and the last cluster we have is a cluster in the EU, with our counterparts over there, to also kind of get into all this. We're running about 30-plus different replica sets — MongoDB replica sets; these are all different clusters for different microservices. The idea we had was: for a microservice to be truly a microservice, you need to also make sure your database is essentially a microservice in itself as well.
We have about 3 terabytes of managed data — and this is, again, running our whole data tier within a containerized environment: ephemeral containers, all that good stuff, using persistent disks within Google, obviously. And lastly, our lone Kafka cluster, which will grow based on the slide I showed you guys earlier with the geo data sync — Kafka has been working great for us, actually.
Another stat that I forgot to put in here: we have about two or three RabbitMQ clusters, also, to kind of federate a lot of our messages that are going all over the place — again, so we don't have a single point of failure, all the good stuff. So with that, we appreciate you guys listening to us — if you have a few questions…
So the question was: in the geo data sync design that we have, does it assume that there's no overlap in the data? Yes, it does, because every data center is going to generate its own tee times for that particular region. So we've got a European region that's going to generate the tee times for the European region, and then that data is going to go across to the US.
So if there's a person in the US who's planning a trip over to Europe, their tee times don't necessarily need to be minute-by-minute; they don't need to be there right away. They're probably looking two weeks, three weeks, a month, or a couple of months out, so that data doesn't really churn as much as data that's going to expire soon, where the tee times are coming up. So yes, it does assume that the data being generated all over the place is unique.
So, through our tests — originally we wrote the service that pulled the data out of the oplog and pushed it over in Node.js, and that was taking quite a bit of time; if I remember, it was taking five to ten seconds, if I'm not mistaken. And that, to us, was not going to be scalable; it was running at about—
I think — actually, now this is sad, let me bring it back — I think the stat was that it was moving about 3,000 or so data points per second, and that, to us, was not going to scale. It was not going to work for us, since we're probably going to start even lower, actually. So based on what we were looking at, what we were seeing in our traces, we noticed that it was actually Node that was causing the issue.
Node was not able to keep up with that constant barrage of data, because of its asynchronous nature and single-threadedness and all that good stuff. So we realized at that point: Node is awesome, but Node is awesome for web-facing, consumer-facing types of deals. If you're trying to do a lot of heavy lifting in the background — we went with Go. We tried Go; this was our first time ever using Go, and it was kind of like, all right—
Well, let's play around with Go. We tried it, we rewrote it in Go, and the thing jumped up to like 10 to 15,000 a second — and it was literally our first application in Go, so there's plenty of room to make it better and faster and all that good stuff. So at that point we realized: all right, all back-end, heavy-lifting types of applications need to be at a lower level.
So the question was: how do we manage our database cluster — our clusters? It's in GCE right now; our goal is to eventually get it into a private cloud. The point, like I said, was that we wanted it to be able to be lifted up and moved into any data center we wanted — GCE, AWS, whatever, as well as private. So at the moment, right now, we're only in GCE.
The goal is to get into private so that we can have the federation control plane to, you know, go in and out. So we use the kube-up script — we just use what's in Kubernetes right now, with the kube-up script. And yeah, I know there's kubeadm, which we were actually going to try to exercise in our private cloud and then further utilize in our actual public clouds.
The why-not-GKE question: the reason we didn't go GKE was, again, so that we're cloud-agnostic — cloud-provider agnostic. GKE is awesome; we actually played around with it, we used it, and the beauty of it was we didn't have to deal with the master, right. However, it wasn't conducive to the point of us being agnostic to the cloud, and again, it was just to prove that we can actually go anywhere we want to in the future.
We may just say, all right, GKE for GCE, and then use our own homegrown setup for private, and they just have to fit under the Kubernetes federation plane on top. So we're actually running 1.4 today, and we try to upgrade every half version — so the next version is 1.4.5, which we're gonna plan after this, and so on.
We would like to get to self-hosting; we're not there yet, by no means. We're literally utilizing the upgrade scripts that are in the repository — I think we just adjusted a few things in them to utilize them the way we wanted. That felt a little safer; it gave us the warm fuzzies a little bit, I guess, as we were doing it — but it's basically the upgrade script.
A lot of the data tier — Elasticsearch is actually among the easier of those — but we run them as containers. Oh, I'm so sorry — the question was how we run Elasticsearch, whether it's containers. So we run them as standalone containers. We've done a lot of clever hacking to get all of our data-tier instances up and running. Elasticsearch was one of the easier ones to federate; MongoDB was the hardest, hands-down, absolutely — I mean, I'm getting flashbacks right now just thinking about it, as a matter of fact.
We had actually been talking to the MongoDB folks at the time, and we had told them that we had done that, and they were kind of really scared — like, you can't do that! Oh, we did it, and we've been running it for like a year, and it's fine. So it seemed like we were one of the first to do that — at least that they'd heard of; they hadn't seen anybody running replica sets in that manner.
The question was about PCI: since we're PCI compliant, what kind of measures did we take to secure everything? So — we didn't do anything to harden the container, in any way, shape, or form. Actually, to begin with, because we had PCI in mind, we were creating our own base images. This was basically unikernels before, you know, unikernel had a name; we were trying to create that, and we were kind of building almost our own distro of exactly what we needed.
A
We
went
from
the
application
and
built
it
down
from
there
and
took
the
things
that
we
needed
if
it
was
whatever
we
need
to
forego
or
nodejs
or
whatever
the
case
may
be.
We
built
the
base
image
from
that
again
for
that
security,
to
harden
it
as
much
as
possible.
So
we
know
we
don't
have
SSH,
because
we
don't
need
it.
We
don't
have
USB
drivers.
We
don't
have
any
of
this
other
fluff
in
the
OS
that
we
don't
necessarily
need.
So we tried to harden it that way, but that was becoming more of a pain for us, and then with the fact that we were using Quay and it was giving us the security scans, we were like, well, maybe we're not doing this right — and we went back to just using a base Debian image. However, now, looking back at it, maybe it was better for us to actually build our base images from scratch the way we were, to make it a little bit more secure — yes, even though we'd be losing the security scans.
In that sense, what we actually did was make sure that the microservices we created were all decoupled, in the sense that the pods running within the data center did not process any data as far as credit cards go; everything was offloaded to our PCI zone, which was on the West Coast, as I showed in the diagram. So that's kind of what we did.
We don't have default service accounts that have access to everything; we create a new one for each service, and if it needs access to a particular service, it will get that access — otherwise, we don't share anything like that. And we're actually going to a different model right now, where all our passwords and everything are in ConfigMaps. However, we also wanted to be able to put the ConfigMaps within our repositories, so we can version control them as well — but then we'd have passwords in there.
So what we're gonna do is actually create a ConfigMap that's specific to passwords only — those go in their own repo — and then the application-specific ConfigMaps are pretty much wide open, since they don't have anything security-sensitive that would be an issue if you put them in a repository.
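That split — security-sensitive keys in their own ConfigMap and repo, everything else wide open and version-controlled — comes down to partitioning each service's settings (a Python sketch; the key names are invented for illustration):

```python
def split_config(config, secret_keys):
    """Split one service's settings into an open ConfigMap and a
    passwords-only one, mirroring the two-repo layout described above.

    Returns (open_settings, secret_settings); 'secret_keys' marks which
    keys belong in the restricted passwords repo.
    """
    open_cfg = {k: v for k, v in config.items() if k not in secret_keys}
    secret_cfg = {k: v for k, v in config.items() if k in secret_keys}
    return open_cfg, secret_cfg
```

Each returned dict would become the `data` of its own ConfigMap, committed to its respective repository.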
Josh: Just briefly to follow on — and I'm sure it's kind of a given — we have not actually been audited yet, so it's gonna be kind of an open question as to what happens. You know, right now it's kind of a question of following best practice. So yeah, definitely segment all your account stuff. Of course, we get the freebie of not having to deal with credit card data, so that definitely does help. But, you know, they're going to want to know — so I think that'll be an open question for next year.
Sheriff: Yes, we do have MySQL — the question was, do we have any relational databases. Yes — and everything is on it. That was what we were trying to do: we were trying to make it so that literally everything runs within Kubernetes, so that we don't have to worry about anything, really — and that was why Cloud Foundry actually didn't work out for us, because it couldn't do that. So— oh, sorry…