Description
Migrating microservices applications to Kubernetes, especially ones that involve things like stateful services and load balancers, can be a daunting task. In this webinar, we talk about our experience of migrating a microservices-based platform to Kubernetes. The migration journey brought to the fore the complexities of K8s and led the team to explore ways to simplify and streamline efforts. This resulted in an app-centric approach that abstracts K8s, accelerates migration, and makes self-service delivery possible for application teams.
A
Welcome to today's CNCF webinar, Simplifying App Migration to Kubernetes with an App-Centric Abstraction. I'm Chris Short, Principal Technical Marketing Manager at Red Hat and also a Cloud Native Ambassador, and I'll be moderating today's webinar. A few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, sorry, but you do have a Q&A box. We really want you to use this Q&A box; as the webinar is progressing, feel free to drop in questions there.
A
We will get to as many as we can at the end, and I'll be curating them, and we have another moderator on the line as well, so your questions will more than likely get answered today. It's really cool that you're here today for this webinar. This is an official CNCF webinar. As such, it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, be respectful of all of your fellow participants and presenters.
B
We'll share some technical insights that we thought might be interesting or useful, and also how we went about simplifying the whole thing. All right, so the application itself was an enterprise platform. This platform is a local development platform: users sign up in order to build out applications in an easier way; that's what the platform let them do. As you can see, this is a picture of the kind of services that were there, and it had service discovery, a load balancer, stateful services and so on.
B
So they wanted to move to Kubernetes because a lot of their customers were on different clouds, and they wanted this platform to be set up for them on the cloud of their choice, and so Kubernetes became a natural choice for this, because you can set up your application and deploy it in exactly the same way everywhere. And there were a couple of other reasons: they would get user demand with some level of volatility.
B
So sometimes you would have more users using the platform than at other times, and again, Kubernetes turned out to be a natural choice for that. Then they also wanted to make things a bit more economical, and finally, the declarative approach and the immutability of containers were particularly attractive to make everything much more reliable than it was before. So the pre-Kubernetes scenario looked something like this.
B
So yeah, we had these VMs, and the DevOps folks would set up the application stack requirements, Java or Tomcat and other stuff like that, configure all of that, and then also handle things like security patching of those stacks, etc. And then obviously the deployment was very simple: you would mainly deploy the WAR file. This is a Java-based application, so it would mostly be WAR or JAR files, and that would be the deployment scenario.
B
With Kubernetes, this was all about to change, and it is obviously easier to talk about in hindsight. But essentially, all of these things, the stuff that the ops folks on the team did and the stuff that the pure developers on the team did, all of that had to sort of get combined and packaged into an image, and then all of this would go through as a single deployment.
B
With the immutability of containers coming in, any small change that would be needed, whether it's a change in the application or a change in its dependencies or whatever it was, would mean that you rebuild and redeploy. You never update anything in place, and this also meant that the frequency of deployments would go up. A lot of containers would live for a very short time, and there would be a change in how troubleshooting would happen.
B
So this entire shift brought about three new challenges: the whole workflow had to change; there are a lot of new concepts and terminology that everybody involved in the delivery has to become familiar with (I'm sure a lot of you are familiar with the fact that there's new stuff to be learned here); and again, troubleshooting in ops is very different, as we're going to see.
B
So what we want to do here is to get into a bit of detail on the kind of things that we had to deal with. Again, we're not wanting to present best practices or anything like that in a very comprehensive manner; that's already available to us elsewhere. What we wanted to talk about is the journey that we went through and some of the learnings we got through that journey, and hopefully that's useful, at least to some of you who want to migrate your applications to Kubernetes.
B
So a lot of applications, microservices-based applications in particular, or even other applications, might depend on service discovery mechanisms from the past, and some of those mechanisms may not work very well, or even if they do work, we'll see why it might make sense to move to Kubernetes-native discovery. We'll cover how we went about trying to do that without making changes to the code first, and then later did change the code, so we're going to talk about the points you see here.
B
So there were several reasons to want to move to Kubernetes-native discovery. One obvious point would be that we wouldn't need to maintain or run Consul anymore; Kubernetes itself would do the job. And then registration is automatic: with Consul we would need to write some post-startup scripts and do stuff like that, but in Kubernetes you don't need to do registration, because this works through Kubernetes DNS.
B
So if you query for a service name in Kubernetes, once you deploy your service, you would be able to get back the service IP by simply making a DNS call, just like that, without any extra effort or extra code or scripts on your part. And finally, there's what happens if you have a lot of replicas for your service; I assume most of you would be familiar with pods and replicas.
B
If your service has many pods as replicas, then Kubernetes service discovery will return you a service IP, which is a single IP, and then behind the scenes, when you hit the service IP, it would automatically round-robin the traffic across all of the pods behind it, to the different pod IPs, which is again very useful.
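As a rough illustration of that, here is a minimal Service manifest; the service name, namespace, and ports are assumptions for the example, not taken from the platform discussed in the talk. Once it is deployed, a lookup of `login-service.demo.svc.cluster.local` (or just `login-service` from inside the same namespace) returns the ClusterIP, and kube-proxy spreads connections across the matching pods.

```yaml
# Hypothetical Service for illustration: exposes pods labeled app=login-service
# on a single, stable ClusterIP that the cluster DNS resolves by name.
apiVersion: v1
kind: Service
metadata:
  name: login-service        # resolvable as login-service.demo.svc.cluster.local
  namespace: demo
spec:
  selector:
    app: login-service       # traffic is load-balanced across pods with this label
  ports:
    - port: 80               # port clients use on the service IP
      targetPort: 8080       # port the container actually listens on
```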
B
So how did we go about that? That's plain Kubernetes stuff, but initially, coming from Consul, we wanted to not change everything in one shot. In this application there were 15 or so microservices, and all of those had the Consul library as part of the code; the code would talk to Consul through that library and all of that. So we would have to go in and rip out that code from all 15 services.
B
So the idea was that they didn't want to do that until they were sure that Kubernetes would work. So initially we simply modified the entries that went into Consul, so that the code would continue to query Consul, and Consul would return just the Kubernetes service name instead of the final IP. From that point the service would make HTTP calls using the Kubernetes service name, which would simply resolve through the Kubernetes DNS.
B
From there we came to the second attempt, which was to avoid the two hops. Since by that point we had shown that it would work reasonably well, we got rid of Consul completely and then simply depended on Kubernetes DNS to get the service IP. There are some interesting caveats to know when you do this. One interesting thing is that when you try to resolve the service name in Kubernetes, you will get a service IP.
B
Even if there's not a single pod that's healthy, you'll still get the service IP, so you wouldn't know until you actually make the request to that service. And then there were issues we faced with connection pooling, which can happen because you get a service IP if you have multiple replicas behind it, and so you don't get to the actual pod IP.
B
The service IP is like a proxy to the pods behind it, so you can't really set up a connection pool, which is very useful if you want to do that for a database use case, and again you're limited to the round-robin of the traffic to the pods behind. So then we did a different thing so that we could get to only the healthy pods.
B
What we did then was to change things around: instead of depending on the Kubernetes DNS to get the service IP, we would query the Kubernetes API to get the pod IPs. So if you had three pod replicas for your service, it would return the three pod IPs, and we would then have to cache those IPs and periodically invalidate and refresh the cache, etc. But now Kubernetes would return only those pods which were healthy, and we could do things like pooling, etc.
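One way to picture what the API hands back when you look up the endpoints for a service is the sketch below; the service name, namespace, and addresses are made up for illustration. Only pods passing their readiness checks appear under `addresses`, which is what lets a client-side cache hold only healthy pod IPs.

```yaml
# Illustrative Endpoints object for a hypothetical service named login-service.
# Reading it (e.g. GET /api/v1/namespaces/demo/endpoints/login-service) yields ready pod IPs.
apiVersion: v1
kind: Endpoints
metadata:
  name: login-service
  namespace: demo
subsets:
  - addresses:               # pods that are ready; safe to connect or pool against
      - ip: 10.244.1.12
      - ip: 10.244.2.7
    notReadyAddresses:       # pods that exist but are failing readiness checks
      - ip: 10.244.3.4
    ports:
      - port: 8080
```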
B
After this whole migration was done, in recent months some other options have come up that we've come across. You might want to use something like the Ribbon library if you're using Spring, or there could be similar libraries in other languages, which will help you query the Kubernetes API for pod IPs in your code. And then Consul, and even others like maybe Eureka, etc., the service discovery mechanisms, now have libraries or ways to set up a sync between, say, Consul and Kubernetes service discovery.
B
Alright, so I'll move to the second thing. Of course, the six things that we're going to talk about here are not comprehensive; there are a lot more things that you would need to do to migrate your application. But these were kind of important or insightful things for us, and so we thought to cover these. In this one I'm going to talk a little bit about what we did and what we tried to do for volumes and handling data, etc.
B
So again, a quick recap; I'm sure many of you are familiar with these concepts. There are persistent volumes, PVs in short, and, in very rough terms, I can say a PV represents a physical volume. So, for example, it would represent an EBS volume if you're using EBS as your storage. And then there are the PVCs, or persistent volume claims, where you kind of specify which pod this particular PV needs to get attached to.
B
You might want to deploy pods as what is called a Deployment, or as what is called a StatefulSet, but in this case, with PVs, you want to use a StatefulSet, because the StatefulSet will ensure that whenever the pod is scheduled or rescheduled, the volume attachments are properly maintained. So that's a quick recap. All right, so some of the things we did in the course of the migration.
B
Wherever we had some EBS volumes, either with data, or where we brought back a volume from a snapshot or something like that: at that point snapshot support in Kubernetes was still alpha (I think it's still beta right now), so we would create a volume from the snapshot out of band and then create a PV to sort of represent that volume, and then do all of these things that I talked about.
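A minimal sketch of that pattern, assuming an EBS volume created outside Kubernetes (the volume ID, sizes, and names here are placeholders): a PV that points at the existing volume, and a PVC that binds to it so a pod can mount it.

```yaml
# Hypothetical PV wrapping a pre-existing EBS volume (in-tree EBS plugin, as used at the time).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reports-data-pv
spec:
  capacity:
    storage: 50Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:
    volumeID: vol-0abc123de456789f0   # placeholder: the EBS volume restored from the snapshot
    fsType: ext4
---
# Claim that binds to the PV above; the pod references this claim by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reports-data-pvc
  namespace: demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
  volumeName: reports-data-pv         # pin the claim to that specific PV
```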
B
So this was how we brought back data that was existing elsewhere, in a volume or in a snapshot. Today, I believe you can also specify your snapshot ID directly in your PVs or PVCs and let Kubernetes handle it; that is now supported by several providers, but again it's a beta feature, so you might want to use it with caution. Then there's dynamic provisioning, which basically means that instead of creating a persistent volume yourself up front, the volume gets provisioned for you.
B
And if we wanted to scale that service, then we would create that service with what is called a claim template inside the StatefulSet, and this makes sure that whenever a new replica pod is created, a persistent volume is automatically created and attached to that new replica. It also makes things easier because you just handle the StatefulSet YAML.
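Here is a trimmed-down sketch of that, with assumed names, storage class, and sizes: a StatefulSet whose `volumeClaimTemplates` section causes a fresh PVC (and, with dynamic provisioning, a fresh volume) to be created for each replica.

```yaml
# Hypothetical StatefulSet: each replica gets its own PVC stamped out from the template below.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: reports-db
  namespace: demo
spec:
  serviceName: reports-db
  replicas: 3
  selector:
    matchLabels:
      app: reports-db
  template:
    metadata:
      labels:
        app: reports-db
    spec:
      containers:
        - name: db
          image: postgres:11          # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # one PVC per replica: data-reports-db-0, data-reports-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2         # assumed dynamic-provisioning storage class
        resources:
          requests:
            storage: 20Gi
```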
B
But anyway, that's where we used this kind of thing. Some of the considerations: we would sometimes get breakages here because, as you can see, there's a lot of referencing that needs to be proper, and you don't want to take a chance with your data volumes and stuff. So you need to make sure that you refer to the right things in the right YAMLs or YAML sections.
B
If you're using a claim template, that adds to it: you would want to query the claim template to figure out what claim got created for you by that template, and then apply a resize to it. So yeah, a bit of a technicality, but that was again something that tripped us up and that we had to deal with.
B
Now, at some point after we did this migration, we got topology-aware provisioning in Kubernetes; last I checked, that was still beta and supported on the major public cloud providers. So this might not be a problem anymore, but if you're running some kind of a zoned setup in your own data center, then you will likely still face this problem.
B
What we did to solve this: we don't want the pods to come back in a different zone. When a pod dies, Kubernetes will try to reschedule it automatically, and we don't want it to come back in a different zone from its volume. So you would want to use labels for your zones and set up what are called node affinities, and then set up the node affinity for your pod so that the pod would always get rescheduled onto the same zone where you have your volume. So we solved it with node affinity.
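As a rough sketch, assuming a node label of `topology.kubernetes.io/zone` (the label key and zone value here are illustrative; older clusters used `failure-domain.beta.kubernetes.io/zone`), the pod template can pin scheduling to the zone that holds the volume:

```yaml
# Hypothetical pod-spec fragment: require scheduling onto nodes in the volume's zone.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a          # placeholder: the zone where the EBS volume lives
```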
All right, moving on. We had our own load balancer, and in Kubernetes we needed a way to do that: how do we configure it, and what sort of abstraction does Kubernetes provide? Let's talk a little bit about that. In the pre-Kubernetes scenario we had Consul, and then we would have some code which would watch Consul and register all the nodes into the load balancer.
B
In Kubernetes, you would use something like an ingress controller. The ingress controller sort of controls how the whole thing works, and then you have ingresses, which are basically sets of rules, and these rules specify which context path should go to which service. So if you get traffic for a specific context path, like /login, then it should go to the login service. That's roughly it in plain English, but obviously you need to write the YAML for it.
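For example, a single rule of that kind might look like the sketch below; the host, path, and service names are assumptions for illustration, and the `networking.k8s.io/v1` shape shown here is the current API (at the time of the talk the older `extensions/v1beta1` form was common).

```yaml
# Hypothetical Ingress: route /login on platform.example.com to the login-service backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: login-ingress
  namespace: demo
spec:
  rules:
    - host: platform.example.com
      http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login-service
                port:
                  number: 80
```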
B
So you would want to do this: you create an ingress and deploy that ingress into Kubernetes, and then your ingress controller is watching for new ingresses that you submit, and those particular rules would then get activated. Again, this is standard Kubernetes stuff. A couple of things that we did: initially we aggregated all of our context paths, pointing to different services, in a single ingress.
B
So, multiple rules: we created a single ingress, and then the ingress controller would pick up that ingress and apply all the routes, basically all the context paths. One of the problems with that is that you can only set a single set of timeouts, or things like headers and so on, for all of it. Things like headers or timeouts would be set at the controller level, per ingress.
B
So if you have all the rules, all the context paths, in a single ingress, then all of them get the same configuration for things like size limits or timeouts. What you actually want to do, then, is create a separate ingress for each of your services. So that was one interesting thing we went through.
B
And then the other thing is that you would want to do SSL configuration for some of the ingresses, or some of the hosts that you have. For that you will need to create a Kubernetes Secret, which is the secret store in Kubernetes, push your certificate chains and stuff like that into the Secret, and then refer to that inside your ingress rule. Again, we spent a fair bit of time trying to get that right.
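A minimal sketch of that wiring, with placeholder host and certificate data: a TLS Secret holding the certificate and key, and an ingress that references it for the host.

```yaml
# Hypothetical TLS Secret: cert and key are base64-encoded PEM data (placeholders here).
apiVersion: v1
kind: Secret
metadata:
  name: platform-tls
  namespace: demo
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
---
# Ingress referencing the Secret so the controller terminates TLS for this host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: login-ingress-tls
  namespace: demo
spec:
  tls:
    - hosts:
        - platform.example.com
      secretName: platform-tls
  rules:
    - host: platform.example.com
      http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login-service
                port:
                  number: 80
```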
B
Some more considerations about ingress. The context path routing and a couple of things around it, like setting up different host names for routes, etc., those context-path-related things are abstracted, but a lot of other configuration is provider-specific. One thing I missed mentioning, which was on the earlier slide, concerns the ingress controllers themselves.
B
They are not native in the sense that there are different types of ingress controllers from different providers. For example, you have an NGINX ingress controller, or a Traefik ingress controller, and so on, and whatever configuration goes into the ingress controller is sort of provider-specific. For example, if you use an NGINX ingress controller, you would need to use ConfigMaps and ingress annotations to configure it; if you used, say, a Traefik ingress controller, you might use service annotations to configure it, etcetera.
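For instance, with the NGINX ingress controller, per-ingress timeouts and body-size limits of the kind mentioned earlier are typically set through annotations like the ones below (the values are arbitrary examples, and the annotation keys are specific to that controller):

```yaml
# Hypothetical NGINX-ingress-specific tuning via annotations on an individual Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: reports-ingress
  namespace: demo
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"   # seconds before an upstream read times out
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"      # max request body size for this ingress
spec:
  rules:
    - host: platform.example.com
      http:
        paths:
          - path: /reports
            pathType: Prefix
            backend:
              service:
                name: reports-service
                port:
                  number: 80
```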
B
So, things like that: every ingress controller type has provider-specific configurations, and I guess that's just the way it is; it's probably evolving, and it's something we have to deal with. Another thing was that the regex that is supported in an ingress is a little different from the one that is directly supported by some of the providers like NGINX, etc. And one thing to definitely do is to set your ingress controller to watch for ingresses only within the relevant namespaces.
B
Let's say you've got a staging namespace and a production namespace, and you have an ingress controller that you want to restrict to watching the staging namespace for ingresses. You don't want a production ingress to be grabbed by that staging ingress controller, so that's one thing to do. Also, one way of using namespaces in Kubernetes is to run a different environment in each namespace; that's how we did it.
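With the NGINX ingress controller, for example, that restriction is commonly done by passing the controller a `--watch-namespace` argument in its Deployment (sketch below with assumed names and image tag); other controllers have their own equivalents.

```yaml
# Hypothetical fragment of the ingress controller Deployment's container spec:
# limit this controller instance to ingresses in the staging namespace only.
containers:
  - name: nginx-ingress-controller
    image: k8s.gcr.io/ingress-nginx/controller:v0.41.2   # placeholder image/tag
    args:
      - /nginx-ingress-controller
      - --watch-namespace=staging
```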
Moving on: you've got configuration properties and templates, and for that there are ConfigMaps in Kubernetes.
B
So basically, the application is getting packaged into your container, and the props are going separately into ConfigMaps in Kubernetes, and then you can configure those to get injected as environment variables or get mounted as a file inside your pod, so that the application can continue to read them the way it used to read them before.
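A small sketch of both injection styles, with made-up names and keys: one container pulls a ConfigMap key in as an environment variable, and the same ConfigMap is also mounted as a file.

```yaml
# Hypothetical ConfigMap holding application properties.
apiVersion: v1
kind: ConfigMap
metadata:
  name: login-service-props
  namespace: demo
data:
  LOG_LEVEL: "info"
  application.properties: |
    server.port=8080
    feature.flag.x=true
---
# Pod-spec fragment consuming it both ways.
containers:
  - name: login-service
    image: registry.example.com/login-service:1.0   # placeholder image
    env:
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:
            name: login-service-props
            key: LOG_LEVEL
    volumeMounts:
      - name: props
        mountPath: /config                           # application.properties appears here
volumes:
  - name: props
    configMap:
      name: login-service-props
```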
B
Some things to be aware of: since you're going to be deploying these properties, or changing the values in your ConfigMap, potentially independently of your image changes, you would want to make sure that you revision those ConfigMaps in some way. You maintain revisions yourself somewhere, either with some tagging or some other mechanism, so you know when which value was deployed, so that if you ever want to roll back some of those configuration value changes you'll be able to do that. Another important thing: if you're using environment variables for properties and you change a value in a ConfigMap, those changes won't be reflected until you restart your pod.
B
So, for example, for a given service you might write six or seven different YAML files, and that sort of gets grouped nicely with Helm charts, but once you deploy this, that grouping is lost and it sort of becomes just scattered, independent resources inside Kubernetes. We found Helm charts to be useful for a lot of off-the-shelf services, and less so for the kind of custom services that we had in that particular application.
B
All right. The fifth thing I want to cover is the manifests themselves, the YAML files. It's very important to know what goes into the Dockerfile and what goes into the YAMLs. For example, you might have ports in your Dockerfile, but that's not going to make any difference; that won't get honored unless you put those ports in your Kubernetes YAML.
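In other words, an `EXPOSE` line in the Dockerfile is essentially documentation; what Kubernetes actually routes to is what the manifests declare, roughly as in this assumed-name sketch:

```yaml
# Hypothetical Deployment pod-spec fragment plus Service: the port wiring lives here,
# not in the Dockerfile's EXPOSE instruction.
containers:
  - name: login-service
    image: registry.example.com/login-service:1.0   # placeholder image
    ports:
      - containerPort: 8080                          # port the process listens on
---
apiVersion: v1
kind: Service
metadata:
  name: login-service
  namespace: demo
spec:
  selector:
    app: login-service
  ports:
    - port: 80
      targetPort: 8080                               # must match the containerPort above
```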
B
So you need to know what goes where; especially if you're coming from the container world, that becomes important. It's also important to know what types of Kubernetes configuration go into which kind of Kubernetes resource: what kinds of things can you put inside a StatefulSet versus a Deployment versus a ReplicaSet versus a Job, and so on? That's important. And then everything that you would ever deploy, whether it's a property configuration or some sort of definition, whatever it is, all of it is now YAML.
B
So it's very important to understand Kubernetes terminology and concepts, which can get quite daunting very quickly, and then you also need to bind those things together. You've got all these six or seven YAMLs, with different resources configured inside them, and you would use labels to bind all those different things together so they work correctly for a given service.
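The binding is done with matching labels and selectors; here is a stripped-down sketch, with an assumed `app: login-service` label used consistently across the Deployment's selector, the pod template, and the Service:

```yaml
# Hypothetical example of label-based wiring across resources for one service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: login-service
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: login-service        # the Deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: login-service      # the pod template must carry the same label
    spec:
      containers:
        - name: login-service
          image: registry.example.com/login-service:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: login-service
  namespace: demo
spec:
  selector:
    app: login-service          # the Service finds the same pods via the same label
  ports:
    - port: 80
      targetPort: 8080
```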
All right, so some challenges with troubleshooting.
B
Typically, in order to get logs, you would go figure out your Deployment, get the pods of that Deployment, then get the containers in each pod, and then go get the logs of each of those containers, and do that for all the replicas that you have. That gets really tedious and difficult at some point, so you definitely want to consider log aggregation. You would want to use sidecar agents to collect the logs and send them off somewhere.
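One common shape for that, sketched here with placeholder names and images: the application writes to a shared emptyDir volume, and a sidecar container in the same pod picks the files up and ships them to whatever aggregation backend you run.

```yaml
# Hypothetical pod-spec fragment: app container plus a log-shipping sidecar sharing a volume.
containers:
  - name: login-service
    image: registry.example.com/login-service:1.0   # placeholder app image
    volumeMounts:
      - name: app-logs
        mountPath: /var/log/app                      # the app writes its log files here
  - name: log-shipper
    image: fluent/fluent-bit:1.6                     # placeholder log-collector image
    volumeMounts:
      - name: app-logs
        mountPath: /var/log/app
        readOnly: true                               # sidecar only reads and forwards the logs
volumes:
  - name: app-logs
    emptyDir: {}
```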
B
With all of those kinds of things that we dealt with to do the migration, what we found was that there are a lot of complexities that many people in the team would need to understand, and that is definitely time-consuming. There was a lot of repetitive effort, because we had to create a lot of different YAMLs for different services, and again do that for other applications: subsequently we started to migrate other applications, and we found similar challenges in the first couple of migrations.
B
There's some friction because everybody is working off the same set of YAMLs; there are some inputs coming in from developers and some inputs from DevOps, and, as we saw at the very beginning, there is sort of a process overlap that has happened, and then the debugging challenges would cause delivery cycles to slow down.
B
So one of the things we did was to create an abstraction and a tool, so that developers or application teams could simply specify the needs of their service in terminology that they already understand, without having to learn something new, and then automatically generate all the YAML files and the Dockerfiles needed. This would provide a standard way of doing things, so different service teams wouldn't go off and start reinventing the wheel in different ways.
B
The goal was to provide a self-service way for the teams to quickly deliver or deploy. So what is the abstraction that we created? There is terminology that application teams would intuitively understand; people would be familiar with what a volume or a health check or a port is, and things like that.
B
But in Kubernetes there's a whole lot of things to know. Obviously your service or application may not need all of these things, but it's important to figure out which of these things you should know and how those different things fit together, and hence we felt the need for abstracting a lot of the stuff. How did we do that?
B
We started to create app-centric keywords (I'll show you some examples on the next slide) that are very easy for teams to just put in, and intuitive to understand when you read them, and then started writing a tool that would infer what needs to be done. So, for example, say you specify that your service needs a data path, a particular path where it stores data that needs to be persisted.
B
The tool would then look at that and infer that it needs to generate a StatefulSet YAML, a PVC template, and so on and so forth. All these things would be translated into Kubernetes-speak, which means generating all the relevant kinds and resources and binding them all correctly using the right labels, and then also trying to provide a way to troubleshoot in plain English, I guess. So this is the kind of abstraction that we started to build from whatever we learned through that migration and also other migrations.
B
These are some sample snippets of what we called hspec. We made the schema for the hspec available on GitHub, and you can check it out. So if you want a volume for your service, you pretty much just say that, something like that, and then essentially let it create all the persistent volume claims and StatefulSets and so on. Let's say you want to set up autoscaling for your service; you'll just say, I need one for min, one for max, or whatever.
B
A few months ago we decided to make this available to the community, so we made it open source, and it's available at the GitHub URL that you see on the screen right now. So yeah, that's how we did the deployment, and essentially we've started building out the abstraction for troubleshooting.
B
So if kubectl returns an error message like CrashLoopBackOff, we try to run through a flowchart inside the tool automatically, and that would check various things and then try to tell you where the problem might be. It might tell you, hey, there's a problem in the start commands, like your CMD or other stuff, or it'll tell you, hey, one of your health checks is failing, or it's an incorrect health check, and so on.
B
We focused mostly on the automation and the abstraction required, which is where you talk to Kubernetes and where a lot of the complexity lives. But obviously, to complete the migration, we also built sort of a layer on top of this automation, with which we do things like integrate with your CI or Jenkins, do container image scanning and other DevSecOps stuff, do change tracking, and all of that.
B
Some of our observations, especially after we employed some of the solutions that we built, some of the abstractions, etc. There are some things which we were able to measure, like the savings on some of the effort. It's hard to measure, but we saved a lot of lines of YAML, for example, and update times definitely went down between the VM world and Kubernetes.
B
So there were some things which we could measure, and then there were some soft findings: for example, the learning curve went really low, so we would be able to bring somebody new onto the team and they would very easily be able to deal with deployments within a very short span of time, things like that. So that's pretty much it about our experience.
A
Sure, thank you very much for the wonderful presentation today. So there is a question in the Q&A, but I think it's a broader question that we could probably both help answer to an extent. The question is: with pod process namespace sharing, which is new in K8s 1.17, is it possible to decouple tightly coupled legacy apps that communicate through IPC, named pipes, shared memory and so forth, and migrate them as separate containers within a pod? So do you want to take a stab at that, or do you want to talk about this together?
A
Yeah, Tamil, this is a very Kubernetes-specific question, but I can help you with it a little bit. If you want to run multiple containers in the same pod, that's fine; you can do that natively, right out of the box. That's basically what sidecars are: they're just another container in the same pod, sharing that same environment. How you communicate within that pod is entirely up to you at that point; you've got the pod network there, and everything's self-contained in that pod. So however you want to communicate is fine.
A
I'm not sure about the part of the question on namespace sharing and memory sharing; I'm not sure where that comes in, but feel free to contact me afterwards: Chris Short on Twitter, and Chris at Chris Short, that is my email, and we can talk about that further if you want. Another question has come in: assuming that you will be able to ingest existing YAML files and create an hspec for use with HyScale, what about pure Docker Swarm deployments? Would you be able to convert those to K8s environment constructs? Thanks.
B
Yeah, and also about the ingestion: at this point in time, since we started to build this abstraction mainly to help with moving workloads into Kubernetes, it started off with the assumption that there are no existing Kubernetes YAMLs, because that's how it went.
B
We don't have that right now, but we can then make it possible for you to just simply use the hspec, continue deployments with the hspec, and let it manage the YAMLs and the lifecycles, whatever YAML changes need to happen. And about Docker Swarm, yeah, again there's no conversion there for Kubernetes YAMLs or for Docker Swarm, but it's fairly simple to come away from Docker Swarm, because you'd be intuitively familiar with a lot of the constructs.
A
Going from Docker Swarm YAMLs to Kubernetes YAML is possible; there are toolsets out there, I've used them. But if your Swarm configuration is very long, or has a lot of configuration options that are very Docker-specific, you might have to do your own mucking around. But there's a way to go from Swarm to K8s, and then ingesting from K8s into HyScale, like that, is the interesting point.
B
Yeah, there are some things around creating additional specs, for things like Jobs, for example; that is something that we've been asked about, and again, we're fairly early, new in the community. Like I said, it's just a couple of months now, and in between we had all the holidays in December.
B
So we'd certainly like to hear more from you guys; please feel free to file some issues on GitHub, and we'll also make some efforts to put up the roadmap on GitHub soon, so that you can see what's coming up. The other thing we want to strengthen is the troubleshooting stuff; we definitely want to strengthen troubleshooting. We find that incredibly useful and we hope you will as well.
C
So basically, in this journey of automating and making the whole journey to Kubernetes simpler, what we are doing with HyScale is: we started with the most popular workloads, basically traditional web workloads and custom application workloads. We've now taken this journey to other kinds of workloads as well: we are looking at data workloads, we are looking at new forms of serverless and other kinds of workloads, so we are increasing the types of workloads that HyScale can automate.
C
Additionally, as Kubernetes is progressing in its journey, we stay more or less in line with Kubernetes, so we make a lot of improvements in HyScale with the learnings that we have from Kubernetes. And there are two versions of HyScale: there's the open-source version and there is an enterprise version. For folks who are looking at more at-scale deployments, there are a lot of capabilities in the enterprise version that we are adding, around monitoring and troubleshooting and just the overall operational simplicity of Kubernetes deployments.
B
Yeah, I guess that really depends on the application itself. Usually what we do is that at the beginning of the migration, we do an assessment of the application itself, even before we start: what kind of an application do you have, what are the types of services, do they have legacy components, what kind of stateful services are included, etc., etc. And then typically one of the things that invariably comes up is, hey, is this good for Kubernetes at all?
B
Should we do Kubernetes at all for this kind of an application? That assessment usually helps us decide the timeline. There are some really small things that some teams have done, like, hey, I have these three services, can I just migrate them? In roughly a week to two weeks they've got those services running inside Kubernetes, because it's, hey, I just write an hspec and deploy.
B
So that's very simple. But if you have a large application which is in production with a large number of users, then we'll go through a much, much bigger cycle, and the first big one that I talked about at the beginning of the presentation today, that took us several months to do that migration.
B
That's how you would do it. And one interesting point here: if you're doing sidecars with HyScale, right now we're just deploying those as plain containers inside the pod, but with 1.18 we're definitely going to take advantage of the sidecar container kind, and we hope that comes out; it was supposed to come out in 1.17, but once that's out, HyScale will automatically generate a sidecar kind.