Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands, from 18-21 April 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A: Hi everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto. I'm a CNCF Ambassador, as well as leading marketing at VSHN, and I will be one of your hosts today. We have Sarir here with us hosting as well. Hi Sarir! There we go. He's going to be handling the Q&A for us today.
A: They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday, and sometimes on a few other days of the week as well, to watch live. This week we have Flynn here with us to talk about circuit breakers and dynamic request routing with Linkerd 2.13. Very excited for this. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct.
C: Thank you, happy to be here as always. Let's turn the screen share on, if you could, and we'll go from there. There we go. Yes, we are talking about circuit breaking and dynamic routing, which I really should have pitched as Circuit Breaking and Dynamic Routing: The Good, the Bad, and the Ugly. There's a lot of good, there's some stuff that might be bad, and there's definitely some stuff that's still kind of ugly, so we'll be breaking all that down as we go.
C: A lot of this will be a demo; you can follow along if you like. The workshop source is at that URL, or you can just scan the QR code there. I will be doing this with a Civo cluster. It works fine with a k3d cluster; it doesn't really matter, just choose what you want, or just follow along with the demo and don't mess with anything on your end. Either way, if you do want to run this on your own cluster, make sure you're using an empty cluster.
C
The
setup
assumes
that
we
can
do
whatever
we
want
with
it.
Please
do
not
run
this
against
your
production,
cluster,
very
bad
things
could
happen,
and
that
would
definitely
involve
breaking
things
all
right.
With
that
in
mind,
the
agenda
we're
going
to
talk
briefly
about
circuit,
breaking
and
dynamic
request,
routing
we're
going
to
go
and
do
with
a
demo
of
dynamical
quest
routing,
then
of
circuit
breaking
then
we'll
come
back
to
the
slides
for
a
second
to
talk
about
gotchas
and
then
we'll
do
a
little
bit
more
in
terms
of
debugging
things.
C
If
we
have
time,
everybody
cross
your
fingers
on
that
one
right:
okay,
Dynamic,
request
routing:
this
is
the
thing
that
originated
kind
of
with
Linker
D
2.12,
and
earlier
we've
been
able
to
do
this
for
a
while
using
traffic
splits
from
SMI
that
was
kind
of
limited
to
very
coarse
grained
routing,
where
you
could
say:
okay,
one
percent
of
the
traffic
to
Foo
I
want
you
to
peel
it
off
and
send
it
to
a
new
thing.
C: You cannot route on bodies, because that would be weird, but this permits you to do a bunch of really interesting things, like progressive delivery anywhere in the call graph, instead of right at the edge where the ingress can control it. Likewise, you can do A/B testing deep in the call graph, or per-user canaries, or whatever. There are a lot of really interesting things you can do with this.
C
These
are
configured
with
the
Gateway
API
HTTP
route
resource,
so
no
longer
with
the
SMI
traffic
split
part
of
this
is
that
linkready
is
actively
participating
in
the
gamma
initiative
to
bring
the
Gateway
API
into
the
service
the
world
of
service
meshes.
So
this
is
kind
of
a
new
capability.
Here
you
use
the
parent
ref
of
the
HTTP
route
to
bind
the
route
to
a
service
with
gamma.
C
This
is
a
good
opportunity
to
talk
really
briefly
about
the
Gateway,
API
and
Gamma.
If
you're
not
familiar
with
these
things,
Gateway
API
got
started
a
few
years
ago,
2020,
broadly
speaking,
it
got
started
because
a
bunch
of
people
looked
at
the
perpetually
in
beta
forever
Ingress
resource.
That
was
not
really
as
expressive
as
we
would
like,
and
we're
trying
to
figure
out.
How
do
we
do
better
than
this?
So
Gateway
API
itself
is
at
version
0.7
at
this
point,
hoping
to
reach
1.0
this
year,
another
one
to
cross
your
fingers
there.
C
Last
year,
an
initiative
got
started
within
gamma
to
figure
out
sorry,
an
initiative
get
started
within
the
Gateway
API
to
figure
out
how
to
use
the
Gateway
API
to
configure
meshes
as
well
as
Ingress
controllers.
Simply
because
you
know
I
mean
HTTP.
Routing,
for
example,
is
a
thing
that
applies
pretty
well
at
both
in
both
of
those
areas.
C: GAMMA actually does stand for something. I really should remember what it stands for off the top of my head, especially because I became one of the co-leads this week, but I do not, and I'm terribly sorry about that. I'll look it up before the next one of these. Linkerd the project, and I personally, have been very active in GAMMA, because it's pretty interesting to us. We started to use it with 2.12, when we picked up the HTTPRoute CRD.
C: We like the Gateway API for this because it is quite powerful, it's quite flexible, it's on a good path to probably already be in your cluster, and, maybe most importantly, somebody else can maintain all these CRDs, as opposed to just us.
C
There
are
also
things
that
we're
actively
working
on.
There
are
a
lot
of
things
you
cannot
yet
do
within
the
Gateway.
Api
retries
comes
up
immediately
as
a
thing
where
you
just
cannot
do
that.
Likewise,
conformance
is
kind
of
a
big
deal.
The
Gateway
API
defines
a
standard
set
of
conformance
tests
so
that,
if
you
want
to
say
you
support
Gateway
API,
you
need
to
pass
the
tests.
C
Originally,
the
conformance
test
required
that
you
be
an
Ingress
controller
to
pass
them
and
linkerty
obviously,
is
not
so
we
could
not
do
that
as
a
direct
result
of
that
we
are
actually
still
using
HTTP
route,
for
example,
in
the
policy.linkerty.io
API
Group,
rather
than
the
official
Gateway
API
API
Group
conformance
is
an
area.
That's
changing
very
quickly.
In
particular,
there's
been
a
lot
of
good
work
around
conformance
profiles,
so
you
can
talk
about
Gateway,
API
conformance
for
a
mesh,
as
opposed
to
for
an
Ingress
controller.
So
keep
an
eye
on
this.
C: The idea behind circuit breaking is that if you have a failing workload endpoint, it probably does not make sense to keep hammering the failing endpoint with yet more traffic and make it fail harder. So when we see a failure, or some group of failures, the idea is that the circuit breaker will open, so that traffic to that endpoint stops. After a little while you can try another request, see if things are working, and if they're working again, then you close the circuit breaker and let requests go through again.
C
The
implementation
in
2.13
is
a
little
bit
limited.
Like
I
said.
This
is
a
a
very,
very
new
thing,
so
2.13
can
only
open
the
breaker
when
it
sees
too
many
consecutive
failures
to
a
given
workload.
Endpoint
and
failure
here
means
an
HTTP
5
series
response.
C: You'll also note, if you look at the docs, that all of the annotations currently have "failure accrual" in their names, which is the internal name for the implementation of circuit breakers. Honestly, partly that's there just to remind everybody that, yeah, this is going to be changing; the annotations are not the way we're going to be doing this forever.
C
Okay,
some
examples
really
quickly
about
circuit
breaking
there
and
we
will
be
demoing
this
to
break
the
circuit.
After
four
consecutive
request
failures,
you
say:
hey
the
failure.
Accrual
mode
is
consecutive,
which
is
the
only
one
that's
currently
supported,
and
the
consecutive
Max
failures
is
four.
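C: As a sketch, the annotations he's describing are set on the destination Service; the annotation names below follow the Linkerd 2.13 docs, but double-check the current docs before relying on them, and the Service itself is a hypothetical stand-in from this demo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: face            # hypothetical destination Service from the demo
  namespace: faces
  annotations:
    # Enable consecutive-failures accrual (the only 2.13 mode)...
    balancer.linkerd.io/failure-accrual: "consecutive"
    # ...and open the breaker after four consecutive failures.
    balancer.linkerd.io/failure-accrual-consecutive-max-failures: "4"
spec:
  selector:
    service: face
  ports:
    - port: 80          # route traffic to the Service port, not 8080
      targetPort: 8080
```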
C
If
you
then
set
the
consecutive,
the
failure,
accrual
consecutive
minimum
penalty
to
30
seconds
as
soon
as
the
breaker
opens,
it
will
stay
open
for
at
least
30
seconds,
then
it'll
retry,
and
if
that
retry
fails,
the
delay
will
get
longer
and
longer
up
to
the
consecutive
Max
penalty,
which
here
I've
shown
how
to
set
it
to
two
minutes
and
again
we
will
be
showing
all
this
in
the
workshop.
So
don't
feel,
like
you
have
to
remember
all
of
this
right
off
the
top
of
your
head.
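C: The open/retry/penalty behavior he's describing can be sketched as a tiny state machine. This is an illustrative model, not Linkerd's actual code; the defaults mirror the four-failure threshold, 30-second minimum penalty, and two-minute maximum penalty from the example.

```python
class ConsecutiveFailureBreaker:
    """Sketch of 'consecutive failures' accrual with exponential backoff."""

    def __init__(self, max_failures=4, min_penalty=30.0, max_penalty=120.0):
        self.max_failures = max_failures
        self.min_penalty = min_penalty    # seconds the breaker first stays open
        self.max_penalty = max_penalty    # backoff cap, in seconds
        self.consecutive = 0
        self.penalty = 0.0                # 0.0 means the breaker is closed

    def record(self, ok):
        """Record the result of one request (or probe) to this endpoint."""
        if ok:
            # Any success closes the breaker and resets the failure count.
            self.consecutive = 0
            self.penalty = 0.0
            return
        self.consecutive += 1
        if self.consecutive >= self.max_failures:
            if self.penalty == 0.0:
                # First trip: open for at least min_penalty seconds.
                self.penalty = self.min_penalty
            else:
                # Failed probe: double the penalty, clamped to max_penalty.
                self.penalty = min(self.penalty * 2, self.max_penalty)

    @property
    def open(self):
        return self.penalty > 0.0
```

With the defaults, three failures leave the breaker closed, the fourth opens it for 30 seconds, and each failed probe doubles the delay until it is clamped at two minutes; a single success closes it again.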
C: Okay, I'm going to describe the architecture of the demo itself, and then we will go in and look at the demo. What we're doing here is we start with a cluster. Outside of the cluster, we're going to use the web browser as the GUI. It's getting this single-page web app called the Faces GUI, which will then render, and call the face service repeatedly.
C
The
face
service
will
then
compose
the
two
together
and
hand
them
back
to
the
GUI.
We
also
in
this
particular
cluster.
We
have
a
smiley
2
service,
which
returns
a
hard-eyed
smiley.
Now
we
have
a
color
2
service
which
Returns
the
color
blue
and
at
the
beginning
of
this
demo,
nothing
is
talking
to
either
one
of
those.
C
So
if
you
see
blue
heart,
eyed
Smileys,
something
is
going
wrong
if
you
are
familiar
with
the
faces
demo
elsewhere
in
many
demos,
we
deliberately
so
start
this
off
with
the
faces
demo
being
horribly
broken
so
that
we
do
not
often
see
green
background
smiley
faces
in
this
particular
case,
we're
going
to
start
it
off
with
the
faces
demo
not
being
broken.
C
So,
let's
see
what
happens
and
as
Annie
mentioned
breaking
things
is
always
a
possibility
all
right.
Well,
the
first
thing
that's
going
to
happen
here
is
that
I
am
going
to
quit
this
and
you
get
to
see
a
little
bit
of
behind
the
scenes
stuff,
because
I
was
testing
this
right
beforehand,
I
disabled,
a
bunch
of
it
there
you
go
folks,
that's
how
you
know
we're
doing
it
live
so
the
tool
I'm
using
here
dimash
is
also
open
source.
You
can
find
that
on
GitHub,
if
you
like,
ask
questions.
C: All right. I should point out, that is not a different web browser. That's the same one I was just showing you, just scaled down and fit into the corner there, so we can see the text and the web browser at the same time. All right. As I mentioned before, that green color in the background comes from our color workload.
C: The color2 service, or color2 workload (wow, I cannot talk today), color2 gives the blue color instead of the green color. We're now going to try to shift traffic over to color2 with an HTTPRoute resource. This is the HTTPRoute resource we're going to use. The name isn't all that relevant; the namespace is relevant, though: we need this HTTPRoute to live in the namespace with the workloads it's trying to modify, in this case.
C
An
important
Point
here
is
that
you'll
see
these
port
numbers
scattered
through
those
port
numbers
must
be
the
one
in
the
service
record
that
the
service
is
is
dealing
with.
In
this
actual
workload,
the
workload
is
listening
on
port
8080.
Don't
use
that
Port,
it
won't
work,
use
the
one
in
the
service
which
is
port
80..
C
There
we
go.
I
shall
leave
it
as
an
exercise
for
the
reader
to
determine
what
10
of
a
4x4
grid
is.
But
you
know
close
enough
now:
let's
take
a
quick
look
back
at
the
demo
architecture,
just
to
follow
exactly
what's
going
on
here.
What
we're
doing
here
is
that
normally
traffic
coming
through
is
going
to
hit
Emissary
Ingress,
who
will
hand
it
off
to
the
face
service,
which
we'll
hand
it
off
to
the
smiley
service.
What
we're
going
to
do,
though,
is
hit
the
wrong
button
immediately.
That's
pretty
awesome,
yeah
all
right.
C
C: If we do that, we should now be seeing 50% of the traffic rather than 10%. So now roughly half of our cells get a blue background instead of a green background. We can also, once we decide that we're really happy with this (I don't know, maybe everybody likes blue instead of green), switch everything so that all of the traffic goes to color2.
C: We could do this just by deleting that backendRef, but instead I'm going to do it by changing the weight to zero, simply because that can often be easier to do in a patch than deleting. If you've ever tried to delete a stanza within a resource using kubectl patch, for example, it's really annoying, but changing a weight is really easy. So we're going to show that that actually works. Here I apply this one, and we should see no green backgrounds at all once this takes effect. There we go.
C: Another thing we can do is A/B testing, and for this one we will end up using two browsers rather than just one. We have one browser, which is our normal browser that we've been looking at so far, that is not sending anything fancy. If you look at it, you can see that it says "user unknown" up here, which is telling us that this browser is not doing anything strange with any headers.
C: So if I come over here, we can apply this HTTPRoute to switch between smiley and smiley2, depending on the header. What's going on here is the same thing: this has to be in the faces namespace. We're going to act on traffic coming to the smiley service on port 80, and then, if that traffic matches a header with a name of x-faces-user and an exact value of testuser, all of it will be routed to the pods behind the smiley2 service.
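C: A header-based A/B route of the kind he describes could be sketched like this, with the same caveat about the exact apiVersion; the header name and value are the ones from the demo:

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: smiley-a-b
  namespace: faces
spec:
  parentRefs:
    - name: smiley        # act on traffic addressed to the smiley Service
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - headers:
            - name: x-faces-user
              type: Exact
              value: testuser
      backendRefs:
        - name: smiley2   # test users get the heart-eyed smileys
          port: 80
    - backendRefs:
        - name: smiley    # everyone else keeps the normal smileys
          port: 80
```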
C: So if I apply this one, we should immediately see the bottom browser, the one that's using ModHeader to send x-faces-user: testuser, switch to heart-eyed smileys rather than normal smileys. And what this has shown us is a thing that I'm not going to fix. If I come over... I have to figure out the right way to show you all this now.
C: Once we get to the point where we're done with the A/B test, and we decide that everybody really, really likes the heart-eyed smileys, then this time I will just delete the stanza. There's no real point in leaving it, you know, trying to leave the header match in but giving it a weight of 100 and then giving the other one zero. We can just delete it.
C: Hola, Chile! All right, now let's show off some circuit breaking stuff. The first thing we're going to do here is switch the UI so that, in addition to seeing the matrix of faces, I'm also going to click the "show pods" button, which brings up this other display that shows me which faces pod is giving me what kind of result. This just makes it a lot easier to see when circuit breakers open and close. I should also point out that this display of the pods is not really relying on any service mesh magic; I just arranged it so that the face service hands back its ID in a header, so that the browser can show it to us. All right. Again, same browser.
C: So now we have a bunch of face2 pods that are returning a bunch of errors. All good. So now we're going to go through and annotate the face service to enable the circuit breaker, to hopefully get rid of the pink-background faces. As we showed before, we're going to activate the consecutive-failures mode, we're going to tell it that it needs 30 consecutive failures before it does anything, and the minimum penalty will be 10 seconds. So once I do this...
C: If you look down over here, you should see these numbers. Those numbers are the count of failures for that particular pod. So once I apply this, nothing should happen until they go up by another 30 or so, and then we should see the breaker open, and those pink faces should just disappear.
C: Yeah, all right, come on... there we go. So now you can see a couple of things that are interesting. One of them is that we only have blue, heart-eyed smileys, and the other one is that the face2 pods have almost vanished from the pod set, but every so often we see one of the pink faces come back. What's going on here is that Linkerd is allowing a request through to see whether that face2 pod has recovered, and it doesn't do anything artificial; it just allows through an actual application request.
C: So one of the interesting things with circuit breakers is that once they open, you will actually still occasionally see a real failure come all the way back through to the client. This is another thing that, I don't know, might change in the future. It's fairly difficult, though, with circuit breaking, to both probe the service to make sure it's happy and not let any actual application requests through. Okay, so that's it for the demo.
C: Yeah, okay, sorry about that, I got a little bit too far from the mic. So, yeah: if you have a ServiceProfile that defines a route, it will take precedence over an HTTPRoute with conflicting routes, and it'll take precedence over any circuit breaker for a workload that that ServiceProfile uses.
C
The
reason
for
this
is
that,
if
we
did
it
the
other
way
around,
where
HTTP
routes
took
precedence,
we
were
able
to
come
up
with
lots
of
very
surprising
behaviors
on
upgrades
that
would
have
badly
confused
people
and
possibly
broken
their
clusters
when
they
went
from
2.12
to
2.13,
and
that
did
not
seem
like
a
good
idea.
C: If you are doing things with this that are confusing when you try them, some rules of thumb for debugging: make sure you don't have any ServiceProfiles if you're trying to use HTTPRoutes. That's the biggest one. There's another one along these lines, though, where you're trying to use HTTPRoutes, they don't work, and then you realize: oh hey, I actually have a ServiceProfile here.
C
Here
then,
when
you
delete
the
service
profile,
you
may
actually
need
to
restart
pods
for
the
thing
you're
trying
to
apply
the
HTTP
route
to
or
the
thing
that
you
had
service
profiles
attached
to.
The
reason
for
this
is
that
the
Linker
D
proxy
kind
of
has
to
decide
if
it's
going
to
be
a
2.12
proxy
or
a
2.13
proxy.
C
As
far
as
this
is
concerned,
and
in
most
of
the
cases
that
I've
seen
it
has
managed
the
switch
correctly
when
you
delete
and
reapply
resources
and
such
occasionally,
you
know
that
that
could
be
a
little
problematic.
C: Another thing that's worth noting is that there is a new linkerd diagnostics policy command that can help with this stuff. So let's go back for a moment to our demo, to look at that linkerd diagnostics policy command.
C: It starts by, for example, confirming that there is an HTTPRoute associated with this. You'll also notice that it says HTTP/1, because, you know, Linkerd treats HTTP/1 and HTTP/2 very similarly. It can go through and tell you about backends, which are all smiley2 at the moment; there's only one backend in here.
C
It
can
go
through
and
talk
to
you
about.
Oh
look.
C
On
internet
work,
okay,
basically,
there's
a
lot
of
information
here
you
should
get
used
to
using
things
like
JQ
or
you
know
whatever
to
go
through
and
look
at
things
like
this,
but
there's
also
a
lot
of
very
useful
information
in
here
where
it
can
tell
you
a
lot
about
what
it's
actually
talking
to
and
what
it's
willing
to
do.
C
I
hope
that
was,
you
know,
audible.
So
what
we're
going
to
do
here
is
we're
going
to
restore
one
of
the
color
routes,
we're
going
to
switch
back
to
the
50
50
color
Canary.
That
I
did
earlier
and
just
in
case
you're
curious.
What
that
looks
like
with
both
the
circuit
breaker
and
this
I
actually
turned
off
the
browser.
C
And
now
we
can
check
the
Linker
D
Diagnostics
policy
for
service
color,
Port
80
in
the
faces
namespace
we
can
see
okay,
great
the
color
Canary
is
there.
But
now,
if
we
come
through,
we
see
multiple
back
ends.
Buried
in
here,
I
have
to
actually
scroll
back
and
forth
because
it's
over
both,
but
we
can
see
it
say:
hey
yeah,
I'm,
splitting
traffic
between
these
two
things
and
it
confirms
a
50
weight
on
this
back
end
and
a
50
weight
on
this
back
end.
C: So if we look at traffic to the face deployment, we see 100% success at 5.9 requests per second, which is, you know, pretty reasonable. We also see that the latency is absurd, but that's not the point of this demo; it is, in fact, configured to be absurd. We can also go through and ask linkerd viz to show us pods in the faces namespace that match service=face, which is exactly the same selector that the face Service is using.
C
So
this
will
show
us
all
of
our
pods,
whether
they're
the
good
face
pods
or
the
broken
face
two
pods,
and
if
we
do
that,
then
we
can
go
through
and
see
that.
Oh
look:
these
two
are
only
taking
a
tiny
amount
of
traffic
because
the
circuit
breaker
is
kicked
in
and
these
two
are
taking
almost
all
the
traffic
a
couple
of
things
that
are
interesting
to
notice
here.
One
of
them
is
really
what's
up
with
this
almost
100
success
rate.
C
Well,
the
answer
is
that
the
there
are
a
couple
of
answers,
but
the
biggest
one
is
just
that:
it's
not
feeding
it
very
much
traffic,
and
so
it
goes
through
and
miscomputes
that
also
well.
No
actually
link
Rd
will
consider
a
four
series
response,
a
success
for
this
graph,
but
everything
coming
back
here
is
a
5
series.
The
other
one
that's
interesting
is
that
the
failures
are
much
lower
latency
than
the
successes.
C: linkerd viz stat computes things over time, and it can take a little bit to catch up: if you go through and change something, it's going to take a little while for the numbers to reflect it. So if we were to go through and play with things, then you might need to wait a second or two to see that going on. ... I feel like my screen share just decided that it was going to break, so, Annie, I might need to get you to swap back and forth there.
C: Can you drop the share and then put it back on for a second?
A: Yeah, while we're waiting for that, I want to remind everyone of the Q&A. If you have any questions for Flynn, you can leave them in the chat, and we will get to them after we're done with the main content.
C: And now, of course, my video is frozen while it's waiting for the share to reactivate. I'm used to worrying about the demo breaking; I'm not usually used to worrying about the presentation breaking, so this is really quite entertaining.
C: Okay, I had to cheat and go back to just directly sharing a browser window, instead of using OBS and all the fancy stuff, so if we want to go through and switch views, that might actually be a little slower now. All right. So this is another gotcha about dynamic request routing: you cannot sequence dynamic routing like this, where you've got one route that splits foo between foo and bar, and then another one that splits bar between bar and baz. That will not work.
C
If
you
send
traffic
to
Bar
it'll
do
the
right
thing,
but
if
you
send
traffic
to
Foo
it
will
never
follow
the
Bas
leg
at
all.
The
reason
for
this
is
that
HTTP
routes
have
to
distinguish
between
what
we've
taken
in
gamma
to
calling
the
front
end
of
service
at
the
back
end
of
a
service.
The
front
end
is
basically
the
DNS
name
and
a
cluster
IP.
The
back
end
is
the
collection
of
endpoints
that
provide
all
the
compute
for
the
service.
C
Routing
decisions
happen
at
the
front
end,
which
is
shown
in
blue,
but
once
the
decision
happens
it
goes
straight
to
an
end.
Point
goes
straight
to
the
back
end.
So
when
you
set
up
when
the
foo
route
decides,
oh
I
need
to
go
to
Bar
it's
going
to
go
directly
to
an
endpoint.
It
will
not
go
through
the
bar
front
end.
So
bar
never
gets
a
chance
to
make
another
routing
decision
effectively.
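C: A sketch of the pair of routes that will not compose, per the explanation above; foo, bar, and baz are hypothetical Services, and the apiVersion caveat from earlier applies:

```yaml
# Route 1: splits traffic addressed to foo between foo and bar. This works.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: foo-split
spec:
  parentRefs:
    - name: foo
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: foo
          port: 80
          weight: 50
        - name: bar
          port: 80
          weight: 50
---
# Route 2: splits traffic addressed to bar between bar and baz.
# Traffic sent directly to bar's front end follows this route, but
# traffic that Route 1 diverted to bar goes straight to bar's back-end
# endpoints, never re-enters bar's front end, and so never reaches baz.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: bar-split
spec:
  parentRefs:
    - name: bar
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: bar
          port: 80
          weight: 50
        - name: baz
          port: 80
          weight: 50
```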
E: Yeah, thanks so much. Okay, so let's start with the Q&A. Very nice stuff! What is next for the Linkerd project, and is there any sneak peek that you can give us today?
C: I don't have any sneak peeks in terms of demos. A lot of what we're working on right this second is really trying to get to a place where the new HTTPRoute world has feature parity with what we could do in 2.12, so that we can get rid of that gotcha about how they don't compose by just saying: yeah, they don't compose, it's okay, everything that you used to be able to do, you can still do in the brave new world. Like I mentioned... I think I mentioned earlier.
C: The first one of those is timeouts. Timeouts, according to the Gateway API, are GEP-1742. If anybody listening is not familiar with the GEP process: modifications to Kubernetes go through a thing called a Kubernetes Enhancement Proposal, or KEP; modifications to the Gateway API go through a Gateway API Enhancement Proposal, or GEP. One of them that got to an implementable point fairly recently dealt with including timeouts in HTTPRoutes, and this is something that a great many people have been after.
E: Okay, so can you share some more good resources, so that people can dive deep into the topic?
C: Yeah. Whoops, not that way, though; let's use this browser window for a moment.
C: All right, this is the main page for the Gateway API itself, so this is a good place to start learning about things that are coming up for the Gateway API and the things that you can start expecting to see, at least if you take a look in the reference.
B
Things,
let's
see,
there's
always
the
Liberty
GitHub
repo,
of
course,
where
we
have
obviously
everything
there
is
to
know
about
liquidy,
and
there
are
a
couple
of
places
where
I
will
put
back
the
slides.
You
mentioned
always
a
good
resource
next
month,
we'll
be
coming
back.
B
About
that
in
a
year
or
so,
and
so
it's
it's
fine,
there's
also
the
LinkedIn.
B: The Slack channels are great; they're a good place to go and get lots of information, but Slack is very difficult to search. That is intended to fix those two things in particular. What else? Those are the best resources.
C: And I believe that I'm going to put up, you know, how to find me on Slack, too. Any other questions that came in?
C: I also forgot that I had this slide. In terms of resources, there is a Fundamentals of the Service Mesh online course now, with a bunch of hands-on labs, and so that's a good way to get started with that as well.
C: So, yeah, cool. I was also going to mention you can go through and take a look at Buoyant Cloud in terms of managing things, and, yes, you can reach me as flynn@buoyant.io for email, or as @flynn on the Linkerd Slack. Man, it's like my English is breaking, my presentation is breaking, there's lots of breaking going on today, yeah.
E: Okay, yeah, that is awesome. Cool, okay. So if we are done... actually, I guess we're done with the demo. If you have any questions, feel free to add them, but as we are done, I guess we can end this session. So let's end the session. Thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. Thanks for joining us today, and we hope to see you again soon.