From YouTube: OpenShift Commons Briefing #142 Istio 1.0 Release Update with Brian 'redbeard' Harrington
Description
Istio, an open source tool to connect and manage microservices, has become a category-leading service mesh (essentially a configurable infrastructure layer for microservices) for Kubernetes. This week, Istio celebrated the milestone of the general availability of Istio 1.0.
Istio provides a method of integrating services like load balancing, mutual service-to-service authentication, transport-layer encryption, and application telemetry while requiring minimal (and in many cases no) changes to the code of individual services.
In this briefing, Istio product manager Brian 'redbeard' Harrington gives a great introduction and overview along with a 1.0 release update.
A: Hi, and welcome to another OpenShift Commons briefing. Today we have redbeard, aka Brian Harrington, the product manager here at Red Hat for Istio, and he's going to give us an introduction to Istio 1.0. Happy birthday to Istio 1.0! Great to have you. I'm gonna let Brian do all the talking here, and we'll have chat: you can ask questions in the chat, and if it's pertinent I will interrupt Brian, but otherwise we'll save them all for the end for some live Q&A, okay?
B: You know, boring in this sense is actually the thing that we are looking for. Boring is exciting, because it means that we have a lot of testing going on in the project. In the 1.0, the focus has really been on testing the upgradeability. There was a lot of work done within the community on doing upgrades from the 0.8 release to the 1.0 release; for about a week before the actual release they had a big push where they were getting...
B: That's it. Well, kind of. I mean, most of these changes were actually happening under the hood. That is to say, in the process of doing that, they said: if we're going to say that this release is 1.0, we really need to make sure that when folks go to actually use it, when they have their first experience, that 1.0 means that they have that degree of trust.
B: That's why there's been a lot of uptick recently in the Fortio project, which is doing the scaling and load testing, and that is where all of the work for Red Hat really kicks off. One of the pieces that's been exciting to me is Kiali.
B: Kiali is a component that gives you a visualization of the entire application mesh that you're working on. It gives you a nice graph so that you can see how all the different components connect together, and this is one of the aspects that, in terms of Red Hat, is something that we decided to lead on. We saw that there was a need here.
B: That's both testing of the upgrade workflow, as the community has been doing, and testing of the actual packaging mechanism, ensuring that when you are rolling out these components it is seamless. We are also having a big focus on the actual user experience. The hallmark of the upstream's 1.0, the thing that they were focusing on with that, was that they wanted to make sure that things were production-ready in the core feature set.
B: We are also going to be focusing on more of an operator-based deployment model in the future here, which allows us to make the changes that we feel are more specific to what Red Hat customers explicitly want. We've had folks inside of Red Hat who have been focusing on different areas, like removing the need for running privileged containers for the actual sidecar injection process.
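For readers following along at home, the usual way to opt a namespace into automatic sidecar injection (once the injection webhook is installed) is a namespace label; a minimal sketch, with the namespace name `demo` purely illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                   # illustrative namespace name
  labels:
    istio-injection: enabled   # pods created here get the Envoy sidecar added automatically
```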
B: That is the thing that we're here to talk about today, beyond just playing with it. This is where I have talked to a lot of folks and understood that they hear "service mesh", they look at the basic bullet points of Istio, and then they go: but what does that mean for me, and how do I get the things that I actually want? What does this actually mean? What is a service mesh? That's why we're going to take a second and talk about what's under the hood.
B: We've been running through this idea of: we've built monoliths for a long time, we need to stop building monoliths, how do we do that? And the thing that we are hoping to see with Istio here is giving developers the ability to break down applications into smaller services and decentralize them. That's the whole idea of containerization, and there's a good reason for that: it makes it easier, hypothetically, to manage these services when you can build and deploy and scale them separately.
B: It means that you can separate the concerns into different teams and really allow them the ability to work at the cadence that is most optimal for them, and not have as many hard interdependencies. That's the basics of distributed systems theory, and while in this slide we talk about microservices, what we're really talking about is building distributed systems. Distributed systems are not necessarily a new idea; this kind of goes far back.
B: It goes back well before even, you know, Signaling System 7 within old Bell Labs hardware, but ultimately it's the idea of having a control plane and a data plane. The nice thing here is that we have a lot of paradigms that folks have gotten used to with this. If you look at Kubernetes: Kubernetes is this distributed system where you have both the control plane, which is the actual Kubernetes API, and the data plane.
B: The data plane is the nodes themselves that are executing containers, and one of the hallmarks of any distributed system is that a failure of the control plane should not mean a failure of the data plane. So in the case of Kubernetes, if that control plane goes down, if the API servers go down, none of your applications go down; those applications are just going to stay scheduled on the host. It's just that you can't execute a state change on them. And so we move to more of this distributed architecture.
B: Fortunately, folks inside Red Hat put together this nice slide that I like, kind of summarizing some of the fallacies of distributed computing, and this helps in understanding why Istio is important. I'm sure some of you have seen this before, but as developers and administrators we tend to forget about these a lot of the time, and it's important to note that when you're moving to building cloud-native applications, these are things we cannot be ignorant of. We have to build all of this sort of resilience into our services.
B: Werner Vogels, the CTO of Amazon, famously said "everything fails all the time", and if you're not actually keeping that in mind and building things with that mentality, it can definitely come back to bite you. So how are we going to actually deal with all of this complexity in the future? Well, fortunately, containers simplified this. This is something that I know everybody on this call is hopefully aware of; if you're not, please join the club.
B: Between Red Hat and CoreOS, we've been really pushing the idea of containers for quite some time. It's nothing new: anybody who used to run Solaris and had experience with zones, or used to run AIX machines and had LPARs, you're familiar with a container. So it's not like this is some newfangled fancy technology; we just really had a lot of focus on the deployment experience.
B: A company a few years ago had a large focus around making it as easy as possible to track and pull down those containers to your individual services, so that you didn't need to deal with or care about that in the way that you had to with LXC or systemd-nspawn. So this gets you to that build-once, deploy-anywhere sort of idea.
B: As long as you're running on a base operating system like RHEL, it actually doesn't matter whether it's bare metal, public cloud, etc. And this then gets you into the challenge of the actual configuration management, or the configuration of your individual applications, and as you start getting lots and lots of services, they need to be able to find each other. So in the case of configuration, you have these config servers that folks started building, and then you had the service discovery components that started being introduced as libraries.
B: And this is still, you know, why you hired them, and it means that for the next month, hopefully less, possibly more, they now have to learn how to use all of these libraries and how your processes actually work. So what about the state of the applications that you have already got, or, even worse, these six different applications that you want to find some way to make work together, but there's no clearly defined path here? One of my favorite pieces of software that serves purposes like this is this component called oauth2-proxy.
B: At CoreOS we used to use it all the time. It's really just this thing that you can use to inject headers, which then allows you to do basic OAuth-style fronting of an application, so that for the backend application you can just kind of punt on having to worry about how you actually incorporate all of the actual authentication services, because you push that off to this other microservice, which solves that problem for you.
B: The thing that we're trying to do is address all of the complexity of all of this infrastructure, and the automation of that container deployment is where OpenShift comes in, that's where Kubernetes comes in. And this gets you to where you build all of these cloud-native apps with OpenShift, and you kind of utilize it and everything else. But let's hold on here a second: y'all joined this call to hear about Istio.
B: We talked about what the transition from 0.8 to 1.0 means, and, you know, I said that I was going to explain more of what this service mesh stuff is for the folks who weren't familiar with it, but all that I've been doing is talking about the way that things work. Well, in reality I was talking about the way that things were, but really we're talking about the evolution of what this means across all of these microservices, as we started with the platform years ago.
B: You want to be in a situation where the platform is responsible for providing those services, if that is the goal of the platform. And so in the idea of the service mesh architecture, we have this idea of having proxies that get injected into the individual pods, and then, again from distributed systems theory, the idea of a control plane, which is what Istio actually is. Istio, with the exception of Jaeger, really is all of those boxes at the top.
B: For the folks who have been around a bit longer and have mangled some of these sorts of things under the hood, you can kind of think of Istio as operating more as a socket-driven model, or more like an stunnel-type component. Istio, or Envoy in this case, is actually mediating both the ingress and egress traffic to the pod. So it means that your application only ever needs to know about communicating with this one other component, which will always be in the same network namespace.
B: So it's very, very predictable to make sure that it is sending and receiving traffic in that way, and by having Envoy in there, with Istio acting as that mediator of the traffic, it also allows you to have more visibility into the individual response times and to know what the exact state of communication between different services is. This allows you to configure individual timeouts and retries, and they all become transparent to the actual services.
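As a rough illustration of what those transparent timeouts and retries look like in Istio 1.0's v1alpha3 routing API (the `reviews` service name is just an example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # example service
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 2s            # give up on the call after 2 seconds overall
    retries:
      attempts: 3          # retry failed requests up to three times
      perTryTimeout: 500ms # each attempt gets at most 500ms
```

The application never sees any of this; Envoy enforces it on the wire.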
B: You can actually specify these through policy, which will allow you to protect against either huge amounts of traffic, like denial-of-service attacks, or, you know, traffic potentially just beyond what the services are able to accommodate, and that's just one component of being able to deliver additional security to the services. But, as we were saying, without Istio you end up in a situation where the TLS communications are all kind of coupled in the source code, and that's really pretty suboptimal. It's easy to look at that and see: oh, well...
B: If we have Envoy, which is actually communicating between all of these services, then yes, the communication within that first pod, from service A to Envoy, that may be traffic that's in the clear. But it is also never in the clear outside of its network namespace, and then, because Envoy is mediating that traffic, it can handle the issuance, or the request and rotation, of the individual certs that make it able to communicate with all of those other actual pods, and it is doing that with TLS-based communications.
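In Istio 1.0 this mesh-wide mutual TLS posture is expressed with an authentication policy plus a destination rule; a minimal sketch (the MeshPolicy/DestinationRule pairing follows the 1.0-era docs, and the wildcard host is an assumption about mesh-wide scope):

```yaml
# Require all services in the mesh to accept only mutual-TLS traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
# Tell client-side Envoys to originate Istio mutual TLS for all calls.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
spec:
  host: "*.local"          # assumed mesh-wide scope
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```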
B: It also means that you can actually control how those services communicate. You can make it so that it's more of a model of client-based access than just strictly "is component A signed by this general CA? Yes, then move on from there." This gets you to the exciting capability here of the concept of chaos engineering. In the past...
B: ...if you wanted to do that sort of thing, you had your continuous deployment mechanism, like Spinnaker, and then you had these other components like the entire Netflix chaos army: from Chaos Kong, where you can just destroy an entire VPC, all the way down to Chaos Monkey kind of nuking individual services. And that is useful insomuch as, yes, you're actually doing some of the things that you should be doing.
B: You're testing the resilience of your services and making sure that everything works the way that you would expect it to. But it's even better when you can just constantly be doing that and not have to start spinning up more and more infrastructure just to be able to do that test. Using these mechanisms, you can actually inject delays that are transparent to the service. And why would you want to inject delays? Well, I mean, obviously: testing. That will just become a mantra that you'll start to embrace over time...
B: ...testing, testing, testing. But for anyone who's read the Google SRE book, they have a great section talking about the idea of service level objectives, not just service level availability. In a service level objective, you want to make sure less that things are always up; it may be a consideration that you want your service to be more consistent. They realized this with the internal service Chubby at Google.
B: For anybody who's not aware, Chubby is analogous to etcd. When they allowed developers to just talk to Chubby as fast as possible, developers, obviously, talked to Chubby as fast as possible. When, on the other hand, they set the expectation of saying: no, your latency will always be in this exact window, a request will never take less than five milliseconds, but it will also never take more than, you know, 100 milliseconds...
B: ...it meant that even when things were going slower than that five-millisecond mark, it was really acceptable for users, because it got to the point where they had a more predictable outcome. And I'm not saying that with Istio you want to just always inject delays on all of your requests. Though, as an aside, Dan Kaminsky, the gentleman who discovered the famous DNS bug back in 2008, had a really awesome idea...
B: ...for kind of not necessarily hiding your services, but making it harder to profile them. It's based on the ideas of, I believe, a physicist named Boltzmann: a Boltzmann filter, where you intentionally inject small amounts of latency and randomized delay into your services, because it means that it becomes harder to profile their location and how they're actually interacting inside of a larger system. And these are the kind of advanced ideas that just come as a part of the overall package.
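Istio's fault injection doesn't randomize delays the way a Boltzmann-style filter would, but the delay primitive redbeard is describing looks roughly like this in the v1alpha3 API (the `ratings` service name is illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings          # example service
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 10      # delay one request in ten...
        fixedDelay: 5s   # ...by a fixed five seconds, invisibly to the app
    route:
    - destination:
        host: ratings
```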
B: So then we need to keep moving on, though. We have these ideas of being able to inject protocol-specific errors; that's the nice thing about Envoy, it actually can act as a layer four and a layer seven proxy, so it can do protocol-aware functions. But we also get into dynamic routing. So without Istio, we have these service discovery mechanisms and these kind of custom routing libraries which try to do more of that intelligent load balancing and routing.
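The protocol-specific error injection he mentions is the companion `abort` fault; a sketch, again with an illustrative service name:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings          # example service
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        percent: 5       # fail one request in twenty...
        httpStatus: 503  # ...with an HTTP 503, without touching the app
    route:
    - destination:
        host: ratings
```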
B: These are going to be things similar to other Red Hat products that you may have used, but with Istio we get to the state of having all of that intelligence be in the platform, so that you can start to say: okay, if users come through and we annotate them with a header that says that they are located in Boston, we can actually do more globally aware mechanisms of making sure that certain users go to certain versions of the service.
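A sketch of that header-driven routing; the `x-user-region` header and the `b-service` name are hypothetical, and the subsets assume pods labeled `version: v1` and `version: v2`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: b-service
spec:
  hosts:
  - b-service
  http:
  - match:
    - headers:
        x-user-region:   # hypothetical header annotated at the edge
          exact: boston
    route:
    - destination:
        host: b-service
        subset: v2       # Boston users land on the new version
  - route:
    - destination:
        host: b-service
        subset: v1       # everyone else stays on v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: b-service
spec:
  host: b-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```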
B: So here we're using everybody in Boston as the guinea pigs for testing out v2 of our internal B service, and we know that those users in Boston are super advanced; they're willing to kind of bleed a little bit in order to serve the greater good for everybody. And so this gets us to where we can actually have more incremental deployments and incremental changes. It also allows us to kind of split the traffic off.
B: When you are doing that, beyond just kind of doing those A/B deployments, beyond doing quantiles of traffic and, like we were saying, kind of shadow rollouts just for those folks in Boston, we can also do mirroring of the actual traffic, so that you can say: okay, we know that everything on service B in version one is looking good, but we want to actually fork a copy of that traffic to version two of the service and actually see: is version two ready for primetime?
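Traffic mirroring in the same API looks roughly like this (reusing the hypothetical `b-service` subsets from above); the mirrored copies are fire-and-forget, so v2's responses never reach real users:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: b-service
spec:
  hosts:
  - b-service
  http:
  - route:
    - destination:
        host: b-service
        subset: v1       # live traffic is still served by v1
    mirror:
      host: b-service
      subset: v2         # a copy of every request is forked to v2; its responses are discarded
```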
B: Is it actually standing up to the load that we expect it to? Is it getting everything done? Are we ready to roll that out and start to bleed all the traffic across? And then you can start to see. Okay, you start here and go: we've deployed both versions of the service, we have a mirror of all of our traffic going to version two. Are we getting errors? Cool, we're not getting errors. Now we start rolling it out just for the Boston employees.
B: Are the Boston employees happy with it? Is it getting done what they need to do? It is? Great. Now we can start to roll that out to half of all of our users. Is everything still looking good? It is. And then finally you finish the migration of that traffic. And Istio is giving you the knobs and the control mechanisms to execute that policy of making sure that your application is actually working the way that you need it to.
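That staged migration is just a weighted route; a sketch of the fifty-fifty step he describes, with the same hypothetical subsets:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: b-service
spec:
  hosts:
  - b-service
  http:
  - route:
    - destination:
        host: b-service
        subset: v1
      weight: 50         # half of the users stay on v1
    - destination:
        host: b-service
        subset: v2
      weight: 50         # half move to v2; shift toward 0/100 as confidence grows
```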
B: Now, this gets outside of any of the stuff that is directly affected by the 1.0, but one of the things that we are presenting as a part of the entire service mesh solution is distributed tracing. As you have that entire spiderweb of services, you need to begin to be able to see how the services are interacting with each other.
B: In the past, if you had all of the other infrastructure set up, you might incorporate one of these tracing libraries into your service, and then set up a tracing server to push all of the traces to, and then try to watch them and profile it that way. But what if you didn't build that application? What if it's some commercial off-the-shelf piece of software that you don't have the source code to?
B: Well, in the case of deploying these sorts of systems with Istio, we can actually begin to see: okay, communicating from service A to service B we had a 210-millisecond response, and between service B and service C there were 720 milliseconds. Oh, well, yesterday that was only, you know, 100 milliseconds. So what's going on? Did we deploy a new version of service C? Is there some kind of header handling between service B and service C that got mucked up? What's going on with this?
B: It's giving folks the ability to actually deploy systems and deploy services in the way that they need to, and make them most efficient, so that they can actually achieve all of the goals that they have. Which are, ultimately... you know, none of us are actually in the software business, after all.
A: Well, that was awesome. That was much more than I actually thought you were gonna do today. Usually when I ask someone to come and do a briefing on, like, a new release, I get the white slide with the bullet points of what's in this update, and this has been really thorough and great, and I'm hoping that some of you might have questions.
C: Like, put the question in words, or just go?
A: Just ask the question, go for it.
C: Okay, so I was just curious about Istio. I'm just curious about what protocols it speaks. So services are speaking HTTP to each other, and then under that, TLS is making sure that it's secure, but a person's HTTP service doesn't have to think about the fact that the TLS is there. Is that how this works?
B: I will answer that question, but I have to say: inside CoreOS we always had this joke that there's a "redbeard explanation", which is, you know, I go on these wild tangents, but I always get us back to the beginning. So one of the things that I mentioned was that Istio is acting as a layer four and layer seven load balancer. So in that way Istio can just operate as a simple, you know, TCP, UDP, etc. load balancer, completely agnostic of the underlying protocol.
B: It can also act as an HTTP proxy, and that's honestly the way that I expect a lot of our users to utilize it. Inside, with all of its intercommunication between their kind of components themselves, it's actually operating as gRPC, if I remember correctly.
B: So kind of, if you think about it: there may be an inbound HTTP request; inside of Envoy it's operating as gRPC, and then it may egress as HTTP. Independent of that, though, the way that that software I talked about, stunnel, actually works is that it would set up, you can think about it as, like, a TLS- or SSL-based VPN, where your VPN doesn't really care what the underlying protocol in between is.
B: It's just more like: hey, I have the inbound side of the pipe over here, and then I know that my traffic is encrypted, and on the outbound side it comes out. So, to answer your question directly: it's working in this case where Envoy will take and actually encapsulate that protocol, whatever it is, inside of a TLS connection between the two different services, and then, when it egresses out the other side of that proxy tunnel, it re-emerges de-encapsulated in its original form.
B: There are both: there's the rules engine inside of Istio, or you can use istioctl to kind of manipulate some of that. If I remember correctly, you can also utilize the labels mechanism within Kubernetes to pull additional properties, and in, like, the HTTP case, where you actually have deeper knowledge of the protocol itself, you can end up injecting headers into that request that you can then take action on. There's actually a super awesome paper that Google wrote, or I guess published, a few years ago.
B: The internal engineers wrote it; I have been tracking it for a long time, and at first glance Istio seems completely orthogonal to it. But the paper itself is called Maglev.
B: ...that then tells the entire system to either fork out copies of that packet to the tracing system, or to just kind of send back telemetry at each tracing waypoint. And now that Istio is released, this paper that I was reading years ago just makes absolute sense. It's like having these two lenses that over time are slowly turning in opposite directions, and then the polarization aligns and the laser gets through, and everything makes sense.
B: Okay, so we talked about that. Oh yeah, and somebody else mentioned config maps as well: "The cost of adding the Istio control plane and Envoy: is there a price?" Okay, yes. So, as with anything, there is a cost. When you are kind of jumping around doing IPCs within, you know, a single, we'll say a single kernel, the amount of latency that you are going to have is probably measured in nanoseconds to the high-end microseconds.
B: One nice thing is that, with the Fast Data Project, which is FD.io, Fast Data is working on standardizing a set of APIs so that you can do what's called TCP offloading. That's also described in the Maglev paper: they mentioned using it with one specific mechanism from a vendor called Solarflare and another mechanism from Intel called DPDK, the Data Plane Development Kit. Fast Data is standardizing all of those APIs so that you can begin to say...
B: So, Ryan asked a couple of questions: "Are the rate limiting and circuit breaking limited to the local instance of pods, or to all the pods of that service? Meaning, if my service B can only take 300 a minute, do I need to configure 100 a minute over three instances?" So, you can actually configure it at the actual service level, and that's actually a good point. The difference between, like, gRPC and HTTP is within the implementations themselves.
B: It's actually a little bit more nuanced than that, because in some cases you can say: well, for this application we measure the health in different ways. Maybe we are saying that it can look at a total number of requests over some period of time; possibly you start shifting the traffic based on whether it is using too much memory or whether it's getting starved of millicores, you know, in the Kubernetes sense.
B: So in the case of gRPC, because I was reading through the issue yesterday on the Istio issue tracker, they actually end up with a situation where they don't have as rich telemetry for gRPC connections. So they were suggesting at the moment, as a triage until they can get it more generally fixed, dialing down the number of total connections for a particular service, so not to a pod per se.
B: But to say: okay, we need to lower it down overall so that it spreads the load a bit more evenly, so that you can actually achieve that mechanism like you were saying. So, I hate to frame it this way, but the answer to question one is: it depends. It's largely based on what the protocol of your service is and what types of health checks it's able to do to see the state of that.
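For reference, the service-level circuit-breaking knobs he's alluding to live in a DestinationRule's traffic policy; a minimal sketch with made-up limits and a hypothetical service name:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: b-service        # hypothetical service
spec:
  host: b-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100            # cap connections to the service as a whole
      http:
        http1MaxPendingRequests: 50    # queue depth before requests are rejected
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 5   # eject an individual pod after five straight errors...
      interval: 30s          # ...checked every 30 seconds...
      baseEjectionTime: 60s  # ...for at least a minute
```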
B: And then: "With Istio, are distributed tracing patterns good to use with a hundred percent sampling? Are there rules we can turn on in the control plane to dynamically turn it off during failures and high response times?" You know, I must cop to ignorance on that; that level of detail, when it comes to the tracing, is outside of the wheelhouse of what I've been able to kind of get a grip on at the moment. But the rad thing is, that's the sort of thing that I add to my list of things to do, because you're asking about it...
B: Well, so that's going to be... you know, config maps are more just a mechanism of Kubernetes, so that's going to lead to: can Kubernetes do that? And yes, you can. I mean, that's getting into the difference between create semantics versus patch semantics, and, if I'm understanding your question correctly, you would want to kind of batch those changes together as, like, a transaction to patch multiple things. And Istio kind of stands on the shoulders of Kubernetes in that sense, rather than, frankly, executing that itself.
B: I don't think so, though I am quite possibly wrong. You know, it would not be the first time I've been wrong, and it will definitely not be the last, but I don't believe they can.
A: An even deeper dive briefing coming soon in our future to answer some of those questions. Someone is asking a beginner question; maybe that's a good way to do this. So: if Istio becomes the load balancer for ingress, do we then replace HAProxy with it in the future, and where will the Istio control plane run?
B: You know, one thing that I guess I should say is: with the transition work, which was fuzzy, from the 0.8-ish earlier versions into 1.0, Istio did have a concept that mutated a little bit, moving into what's called a gateway. And so they implement this idea of gateways, which are kind of their own specific API spec within the actual structures that are getting laid out.
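A sketch of the Gateway object he's describing, assuming the stock `istio-ingressgateway` deployment and illustrative hostname and cert paths:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway      # illustrative name
spec:
  selector:
    istio: ingressgateway   # bind to the default ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt  # example cert paths
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "example.com"         # illustrative external hostname
```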
B: And the thing that I've been interested in, in talking about the ideas of that Maglev paper, is utilizing the idea of Kubernetes cluster IPs. Which are... it's closer to, like, a VRRP style of moving an IP address around, though it's really... no, it's less that. It's kind of like having an IP with a /32 subnet mask that listens on every single node, so if you route traffic to that node it will answer and then kind of respond on it.
B: And so, if you did that, you could use your network router to kind of get the traffic into the cluster as the load balancer, and then from there Istio would be able to kind of take that traffic and then move it around the cluster. So it's a thing where you end up kind of combining some of the primitives of Kubernetes and OpenShift with the capabilities of Istio to solve those problems.
A: I think that might be all we have time for today. And, redbeard, that was an amazing presentation, so I'll try and get it uploaded to the Red Hat OpenShift YouTube channel later today, probably with very few edits, and then we'll post it on blog.openshift.com with some of the links. If you send me your slide deck and the links to the Fast Data and the Maglev paper and the books and all the references here, I'll try and get them all into the blog post too, so that we can have those.