Description
OpenShift Commons Gathering @ Kubecon/NA San Diego November 18 2019
A
We're going to just motor right on. There are some really great synergies between Red Hat and IBM, and you're about to hear some of them and get some great demos as well. Brian is going to be with us; he's going to front-load this talk, and then he's going to disappear because he's got a call, and then he's going to come back, absolutely.
B
Steve will probably need the podium mic for tapping and doing everything at the same time. I am the far less intelligent and less talented portion of this. You know, I am the one here to just sit and tap-dance and talk about the technology. Steve is going to handle the show; I'm the tell. So, speaking of the tell, we are here to talk about the idea of service meshes and where they're going, kind of the "beyond" piece of everything from here. So the big question, you know, of course, for most people is:
B
What we actually have here is a number of control plane services, like Pilot, Mixer, Citadel, and Galley, which are all there to handle discrete components of the way that Istio actually operates, and then another component called istio-proxy, which is actually derived from the CNCF project Envoy. So with this, we collocate a number of containers together within a pod within Kubernetes, and then that collocated sidecar for Envoy actually executes much of the functionality of Istio in concert with the control plane itself.
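As a concrete sketch of how that sidecar gets collocated: in upstream Istio you typically label a namespace so the injection webhook adds the Envoy container to every pod scheduled there; OpenShift Service Mesh instead opts pods in with a per-pod annotation. The namespace name below is a hypothetical placeholder.

```yaml
# Minimal sketch: enabling automatic sidecar injection in upstream Istio.
# "demo" is a placeholder namespace; OpenShift Service Mesh uses the pod
# annotation sidecar.istio.io/inject: "true" instead of this label.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled  # mutating webhook injects the istio-proxy (Envoy) container
```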
B
So we're going to be talking about a number of major areas of functionality, the first of which is the idea of the connection between services. Historically, with a service mesh, part of the purpose was to glue the intercommunications between services together, to handle a lot of the discovery and, you know, the rerouting of traffic, and with Istio we are able to achieve that same goal. The connectivity between services is actually provided by the various discovery services that are available.
B
You know, there's been a bunch of work behind the scenes on abstracting the different discovery services into something called incremental xDS, but these are all used to make sure that all of the glue between your various service components can be scheduled together and consistently, so that you don't need to go, "What's the connection, or what's the hostname for my database, or how do I connect to Redis?" or things like that.
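To make that concrete: inside the mesh, a client never hard-codes a hostname; it just dials the Kubernetes Service name, and the discovery machinery (Pilot pushing endpoints to each Envoy over xDS) does the rest. A minimal sketch, assuming a Redis backend whose pods carry the label `app: redis`:

```yaml
# Sketch: with a Service like this in place, application code simply
# connects to "redis:6379"; the endpoints are pushed to every sidecar
# over xDS, so no hostnames are baked into the app.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis        # assumed label on the Redis pods
  ports:
    - port: 6379
      targetPort: 6379
```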
B
The biggest piece for my side to talk about, though, is the control aspect. Istio's primary function is the control of how traffic is flowing between components. One example is the canary deployment: you've got, you know, that small team of individuals working in Boston, and then everybody else, and you want to make sure that the new version of the service that you're going to deploy works. But you want to test it on a small group of people before you just shotgun-blast roll it out.
B
So here we take Dan Walsh and everybody else in Boston, you know, we make them run the new version of our service and make sure that everything looks good before we roll it out.
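A minimal sketch of what that looks like as Istio configuration: a VirtualService that matches the test group on a request header and sends only them to the new version. The service name, subset names, and header value are all hypothetical; the subsets would be defined in a matching DestinationRule like the one shown a little further on.

```yaml
# Sketch: route a named test group (matched on a header) to the new
# version; everyone else stays on the current version.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews             # hypothetical service name
spec:
  hosts:
    - reviews
  http:
    - match:
        - headers:
            end-user:
              exact: dwalsh  # hypothetical header value for the Boston test group
      route:
        - destination:
            host: reviews
            subset: v2       # the canary
    - route:
        - destination:
            host: reviews
            subset: v1       # everybody else
```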
We can take this a step further, though, and actually use more of an A/B deployment model where, once we have established through the use of our canary that everything looks good.
B
We can treat this kind of like an oven knob: we can dial the amount of traffic back and forth between the new version of our service and the previous version, which allows us, as we gain a higher degree of confidence that our new service actually operates the way that we expect it to, to roll it out to more and more users. As an aside, this is part of how we did the actual deployments of software within Container Linux as well.
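The "oven knob" is literally a pair of route weights. A minimal sketch, again with hypothetical service and subset names; shifting traffic is just editing the weights and re-applying the resource:

```yaml
# Sketch: subsets pin each route to a version label, and the weights are
# the knob: edit them and re-apply to shift traffic gradually.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10   # dial up as confidence in the new version grows
```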
B
We don't necessarily want to just endlessly flow traffic to our services, because they similarly could fail as well, so circuit breakers allow us to make decisions about the ways that our services should fail, and to do this independent of the actual code. It also allows us to improve response times, where we can say, you know, in this case we've got a call chain of service A to B to C.
B
If we were to have some kind of failure between service B and service C, or there's some kind of increased latency, we can make a decision about whether or not to just drop that request. Potentially it's not actually a critical path for the rest of our services, so it's better to have that service fail and keep everything running smoothly, to keep our customers happy, rather than having one slow node pull everything down.
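In Istio this lives in a DestinationRule's traffic policy: Envoy's outlier detection ejects misbehaving endpoints so one slow or failing instance of service C stops dragging the chain down. A minimal sketch with hypothetical names and thresholds, using the Istio 1.4-era v1alpha3 field names:

```yaml
# Sketch: circuit-break service C so a failing endpoint is ejected from
# the load-balancing pool instead of slowing the whole A -> B -> C chain.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-c          # hypothetical name for the C service
spec:
  host: service-c
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5   # eject after 5 consecutive 5xx responses
      interval: 10s          # how often endpoints are scanned
      baseEjectionTime: 30s  # minimum time an endpoint stays ejected
      maxEjectionPercent: 50 # never eject more than half the pool
```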
B
For the folks who have ever worked with HPC systems and had to deal with MPI problems, this will sound familiar. But next up: timeouts and retries. Similarly, rather than having to implement timeout and retry logic within our applications, now we can do that at the actual proxy level; we can make decisions about, you know, should we retry.
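Pushed into the proxy, that looks like per-route timeout and retry policy on a VirtualService. A minimal sketch, with a hypothetical service name and illustrative values:

```yaml
# Sketch: fail fast and retry transient errors at the sidecar, with no
# timeout/retry logic in the application code itself.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b           # hypothetical service name
spec:
  hosts:
    - service-b
  http:
    - route:
        - destination:
            host: service-b
      timeout: 2s           # overall budget for the request
      retries:
        attempts: 3
        perTryTimeout: 500ms
        retryOn: 5xx,connect-failure  # only retry likely-transient failures
```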
B
This, coupled with the circuit breaking and the timeouts and retries, allows us to do a lot more with rate limiting as well. So through the rate limiting we can make deterministic decisions: do we care about the number of concurrent requests? Do we only want a service to see a maximum number of inbound connections? We can make decisions there.
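The concurrent-request and max-connection caps mentioned here map onto Envoy's connection-pool settings in a DestinationRule; mesh-wide request-rate limits went through Mixer policy in this era. A minimal sketch with hypothetical numbers:

```yaml
# Sketch: cap how much load a service will accept before the proxy starts
# shedding requests, independent of the application code.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # maximum concurrent TCP connections
      http:
        http1MaxPendingRequests: 10  # queued requests beyond this are rejected
        maxRequestsPerConnection: 1
```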
B
We
can
also
use
these
then
to
kind
of
take
and
flip
that
idea
on
its
head
now,
instead
of
just
kind
of
making
decisions
about
what
happens,
when
things
fail,
we
can
introduce
failures
intentionally.
You
know
Mike
Tyson,
Mike,
Tyson
famously
said
everybody
has
a
plan
until
they
get
punched
in
the
mouth.
So
this
is
the
opportunity
to
allow
you
to
actually
in
a
very
safe
space.
B
have that sparring, where you test what will happen when things fail, and then know that your operations teams have that runbook established and know exactly what they're going to do. So beyond just introducing time-based problems, we can actually introduce protocol-specific errors as well: you know, what will happen when there's a 400 error versus a 500 error?
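Both kinds of punch, latency and protocol errors, are expressed as HTTP fault injection on a VirtualService. A minimal sketch with hypothetical percentages and a hypothetical target service:

```yaml
# Sketch: intentionally delay 10% of requests and abort 5% with an HTTP
# 500, so teams can spar with failure before it happens for real.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-c
spec:
  hosts:
    - service-c
  http:
    - fault:
        delay:
          percentage:
            value: 10.0
          fixedDelay: 5s    # time-based problem: injected latency
        abort:
          percentage:
            value: 5.0
          httpStatus: 500   # protocol-specific error: server-side failure
      route:
        - destination:
            host: service-c
```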
B
You
know,
will
it
attempt
to
do
the
retry,
because
it's
a
transient
error
or
does
your
application
just
always
think
that
you
know
any
type
of
error
is
permanent
and
that
it
shouldn't
have
to
so.
This
allows
us
to
kind
of
really
make
sure
that
everything
operates.
The
way
that
we
would
expect
it
to
the
last
area
of
functionality
to
talk
about
is
observability
so
for
observability,
with
sto
and
specifically
in
the
context
of
OpenShift
service
mesh.
B
But before we get to the metrics scraping, the distributed tracing piece actually refers to making a request across a call chain, graphing the latency between all of those, and actually even capturing header information for the requests in a sampled manner. And I'm intentional about saying "sampled manner," because I've talked to customers who go, "Great, this is basically like a pcap; I'm now just going to record all of my traffic," and that means that you now need infinite storage to capture all of those infinite requests.
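The sampling rate is a mesh-level knob rather than application code. As one hedged example of where that knob lived in 1.4-era installs (the exact path has moved between releases, so treat this Helm-values-style snippet as illustrative only):

```yaml
# Sketch: keep tracing on but sample only a small fraction of requests,
# so trace storage stays bounded. traceSampling is a percentage.
tracing:
  enabled: true
pilot:
  traceSampling: 1.0   # trace 1% of requests; 100.0 would be "record everything"
```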
B
So again, you should be sampling things and then taking a look at the overall patterns to make decisions about how your applications are operating. By capturing this information, though, and having the data in Prometheus available and the data in Jaeger available, there's a net-new project that Red Hat created called Kiali, whose purpose is to visualize the total running state of an application.
B
You know, and here we have the standard Istio Bookinfo service, with the graph of all of the call chains, with multiple different versions of the services involved. And for anybody who's interested in talking more about Kiali, we actually have the PM for Kiali, Alim Altran, here, who will raise his hand. So if you want to go bug him a bit and ask questions, you'll be able to as well. But this gets into: why OpenShift Service Mesh?
B
You know, why specifically? And this is that we have taken and glued a bunch of these components together, the ones that we think any operator or any administrator is going to need to have a well-running service mesh, and coordinated this through the Operator Framework. So while, yes, you could just take the upstream Helm charts and slap them onto a cluster, the value of OpenShift is providing the automated updates, the management through CRDs, the overall lifecycle of all of these components, and the gluing.
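For a sense of what "management through CRDs" looks like in OpenShift Service Mesh, here is a minimal, hedged sketch of the kind of custom resource the operator consumes; the field layout follows the Maistra v1 API of that era, and the values are illustrative:

```yaml
# Sketch: one custom resource describes the whole control plane; the
# operator installs, upgrades, and reconciles the glued-in pieces
# (Istio, Jaeger, Kiali, Prometheus, Grafana) from it.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true   # Jaeger
    kiali:
      enabled: true   # topology visualization
    grafana:
      enabled: true
```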
B
You know, this is now a new layer. It's a little bit of a different model than what some of our users are used to, but this is included with OpenShift Container Platform version 4 and above, so there are not any additional components that you need to license or anything in order to use this. So at this point, I want to switch things up a bit.
B
I
have
the
pleasure
of
being
able
to
stand
on
the
stage
with
one
of
the
leaders
of
open-source
development
and
specifically
cloud
native
development
within
IBM
he's
been
focused
on
sto
most
recently,
but
mr.
Steve
dake
is
going
to
come
up
here
and
talk
to
you
about
sto,
Multi
cluster
and
where
this
is
going
in
the
future,
both
kind
of
with
things
you'll
see
now
and
talking
about
the
overall
direction
in
a
really
high
level.
So,
thank
you
much.
C
Okay, so folks can hear me. I'm an Environments working group lead for Istio; I'm also a maintainer. I've been working on this field for about, well, I've been working on Istio for about two and a half years. My main responsibilities inside of Istio are installation, automated operations, and multi-cluster. Installation and operations are well-understood problems; multi-cluster isn't, and this talk is about what's beyond service mesh.
C
So the purpose of this section is Istio multi-cluster and what's beyond service mesh in the future. I'm going to talk a little bit about multi-cluster. I'm not even going to try to pronounce this manifold, but this is how we think of multi-cluster today in Istio: there are six dimensions. I'll get more into the six dimensions later, but this is how you would visualize this, how physicists or mathematicians would visualize this. If you study this diagram in detail, I can guarantee you your mind will be boggled. Mine is.
C
I try not to study it too much. It's just a cool diagram, an interesting way of representing six dimensions on a 2D display. So for our modeling of Istio multi-cluster, we've really got six different vectors, and we call them A1 through A6. A1 is networking. A2 is clusters, so these are the Kubernetes clusters we may have. We've got control planes, which is what we refer to as A3. We've got identities and trusts.
C
Another way of thinking of identities and trusts is policies: security policies, for example. We've got meshes; that's the concept of combining multiple meshes together, so one service mesh with another service mesh that is independent; they operate completely independently and securely. And then there's tenancy: this is the idea that there are multiple tenants available in your service mesh. Now, Red Hat OpenShift Service Mesh, I think that's the name of the product, does support multi-tenancy already, so they've got one of these.
C
My display went out. They have one of these multi-cluster vectors handled already, so we're good here. I think it just came back. So, the six-vector model: I spoke a little bit about the six-vector model. The problem with the six-vector model is, whenever you have a vector, it goes from 0 to infinity. So one of the things we're trying to do in the future is figure out how to make it not go to infinity, and when you have six of these things going to infinity, that's a big problem to solve.
C
The idea of compactification is that you can press things down into smaller, compressible chunks that you can then model. Now, what we want to model with this work, you can read the slides for what we think are the best practices, but really what we want to model is how Istio would be deployed across the world: be it across the United States, across APAC, across EMEA, in different clusters, with different tenants, from one description language. So that's what we're after.
C
With these six dimensions, we want to compress this into one description language. So this is cool: I'm going to show a demo of how one very small part of this works, which is networking, with security, with TLS, so multiple networks. I'm going to demonstrate, and we'll give it a go. Hopefully it works, and I'm certain it will; maybe we'll find out. I can get my display in there, right.
C
Try that. So, true story: I had this demo working. I spent about 40 hours getting it working the first time, and it was working great, and I was so tired at the end of that, I didn't file a bug for a defect. And then, guess what, I've spent like the last 40 hours getting the demo working again, going through the same thing, because I forgot what the problem was. So, kids, all you engineers: always file a defect. Very important.
C
So I'll just talk a bit about it. What this does is deploy the first section, cl1, cl2, cl3; that selects a cluster we want to run on. So cluster 1 is in the US, cluster 2 is in Europe, cluster 3 is in APAC. These are running on IBM Cloud, which is regular Kubernetes, not OpenShift. We deploy a certain set of services for this demo; there are 11 microservices in this demo.
C
We
deploy
three
in
one
cloud
three
in
another
and
then
five
and
another
on
these
micro
services,
all
communicate
over
the
Internet,
with
security
with
the
new
TLS
to
connect
with
each
other.
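As a hedged sketch of the mesh-wide mutual TLS that secures that cross-Internet traffic, using the pre-1.5 Istio authentication API that was current at the time of this talk:

```yaml
# Sketch: require mTLS for every workload in the mesh, and tell client
# sidecars to originate Istio mutual TLS, so cross-cluster traffic over
# the Internet is encrypted and mutually authenticated.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
    - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```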
The first part of the script deploys the manifests, the services, and the deployments. So that's what the first part does. The second part deploys something called a service entry. A service entry is a way of specifying a service that is external; this is very detailed, but a service that is external to the mesh in which we are operating.
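A minimal, hypothetical sketch of such a service entry for one of the remote hipster shop services; in the gateway-based multi-cluster pattern, the endpoint address would be the remote cluster's istio-ingressgateway, reached on Istio's mTLS auto-passthrough port. All names and addresses below are placeholders.

```yaml
# Sketch: make a service running in another cluster's mesh addressable
# from this mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: paymentservice-remote
spec:
  hosts:
    - paymentservice.hipster.global  # hypothetical cross-cluster DNS name
  location: MESH_INTERNAL            # still part of the (extended) mesh
  ports:
    - number: 50051
      name: grpc
      protocol: GRPC
  resolution: STATIC
  endpoints:
    - address: 203.0.113.10          # placeholder: remote istio-ingressgateway IP
      ports:
        grpc: 15443                  # mTLS auto-passthrough port on the remote gateway
```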
C
By the way, this is the hipster shop demo; Google has kindly put this demo available for free on the internet under the Apache 2 license. Assuming my internet is working, this will start going. The way this hipster shop demo works is it's like a shopping cart system, and I don't think I've ever... oh yeah, there we go, it's starting. It's just slow; a lot of people are using the Internet.
C
Getting like a denial of service going on here. Okay, so we're getting the, I think we're getting APAC here. Finally, excuse me, we're getting APAC. Normally, if you were to play this from your laptop, it would literally take about 40 seconds to get all the services deployed and started. That's how long it takes me at home or in my office.
C
So the next step is we'll deploy the SEs for each of the regions, so we're getting one for each of the zones, in each of the clusters in a different region. So right now we're deploying the SEs for the US, and then we will deploy the SEs for EMEA. I'm bad at guessing how long this is going to take.
C
There you go; that's exactly how Istio works. And since we have three clusters, we need to deploy two sets of SEs, one for one cluster, one for the other, and that allows us to route to those other clusters. So that's one of the key important factors of how this works. All right, actually, you know, that's a good point: maybe I should take questions while this thing is deploying, if there are any in the audience. Yes: SE, it's what we call a service entry.
C
It's
much
like
an
in
point
point
slice.
So
within
sto
we
just
approved
I
just
approved,
actually
the
integration
of
endpoints
slice
in
disco,
so
we're
gonna,
replace
SC
with
an
with
endpoints
slicing
or
endpoint
slices,
which
is
a
new
kubernetes
feature,
although
we
can't
do
it
quickly
because
it
just
came
out
so
we're
very
close.
I
feel
like
it's
coming.
I
feel
it
yeah.
C
What's really cool, though, is I'm really impressed, because the Internet is clearly bad. No offense, Stan, you've done a great job putting together an event, but Kubernetes and kubectl are actually deploying correctly. It's awesome; it's really cool. So, okay, we're done. Let me go to cluster two, and I'm going to get the services on cluster two, and the reason I'm doing this is there's a load balancer that's created by this demo and by the software.
C
Brian made this all happen; it was all Brian. So, real quick, I want to show some of the different features of the hipster shop, because I think it's a really cool demo; Google did a great job on it. I'm not a hipster, but I'm really into hi-fi, so this vintage record player is looking kind of cool. I'm digging that; I'm going to click on that. Let me see what it is. What do we got here?
C
Wow, it's a Thorens. So this is probably like a five or ten thousand dollar record player new, and it still works, so I'll buy that and probably, like, sell it on eBay or something. Mind you, no one's guaranteed it to work like that. I'll add it to my cart, and these microservices are all communicating with each other across the internet, the 11 different microservices. I'm going to browse more products. Now I've got one item in my cart; we see that up here.
C
In the view cart, it's a little slow because we're going across regions; normally you may not set an application up in this way. A typewriter! Maybe my typing could use some improvement. So, the typewriter "looks good in your living room." It doesn't say anything about whether it works or not, so I don't, it just looks good, I don't know. What's cool about it is you have to really press the keys hard. So let me, let me add that to my cart. I'll just add it; it's not that much, $67.
C
It
has
nothing
to
the
roller.
So
I'm
gonna
place
my
order
and
then
all
this
stuff
gets
filled
in
automatically
as
it
would
in
a
normal
form,
and
then
your
order
is
complete.
So
I
paid
a
one
hundred
forty
seven
dollars
for
nothing
there,
because
I'm
not
gonna,
get
shipped
anything
because
I
see
down
here
this
this
website
is
hosted
for
demo
purposes.
Only
so
thank
you.
I
appreciate
your
time.
Brian
appreciates
your
time.
C
So yeah, the question was: how are these things wired up, how are they connected together? Really good question. So here's the single-cluster hipster shop anatomy. This is how hipster shop comes from upstream; this is, in fact, a cut-and-paste of a diagram from upstream. You see the front end is what we connected to the Internet; it's where everything comes in from, and these are all the different services that the front end communicates with. So what I've done in my kind of hacking of this manifest, this was one manifest.
C
What I've done for this demo is put things into three different clusters in multiple regions, and the clusters are just in one zone. So, for example, in North America I've got the email service, the payment service, the shipping service, and so on. Now, these are all protected by TLS. So we've got the US, we've got EMEA, we've got APAC; the diagram is a little more messy, so like, if we do an A/B, that's what we've got deployed.
C
That's
what
it
looks
like
on
a
single
cloud,
so
you
have
to
be
really
intelligent
about
how
you
structure
your
applications.
There
are
a
ton
of
use
cases
for
this
work,
a
ton
of
use
cases,
but
you
do
have
to
be
careful
about
how
you
structure
your
your
applications
and
I
want
to
I
want
to
make
something
clear
to
thank
you
for
the
question.
It's
a
great
question.
I
want
it
I
want
to
make
something
clear
that
it's
very
we're
in
barely
very
early
stages,
and
we
spent
two
years
figuring
out
these
six
dimensions.
C
So
this
is
like
one
of
the
dimensions
like
networking.
That's
one
of
the
advantages
we've
solved
so
far.
We've
got
solutions
for
everything
else,
but
we
haven't
done
this
integrated
them
into
one
solution
and
other
service
meshes
are
kind
of
following
us
in
this
way,
issues
leading
in
Multi,
multi
class
or
multi
cloud.
Whatever
you
want
to
call
it.
Networking
across
clouds
is
your
ox
for
that
and
I'm
really
proud
of
the
team.
That's
put
this
work
together.
Are
there
any
more
questions?
Oh.
C
Yeah, oh, that's a great question. So today we already support that; we supported that in 0.9 or 0.8. It's called mesh expansion. You can connect Istio to a virtual machine; that works today. Now in 1.4, which is the latest version of Istio, we released new functionality to add services to the mesh without a bunch of hassle. So you say istioctl add-to-mesh, you give it the name of the service, and you're off to the races. That's all there is to it now; in the past there was a lot more hassle.