A
Hello, everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto, and I'm a CNCF ambassador as well as senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies.
A
So look forward to that. This week we have Jason Morgan here with us to talk about locking down your Kubernetes cluster with Linkerd, very exciting. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to chat or questions that would be in violation of that code of conduct.
B
Oh awesome, thank you so much. Hey folks, hello and welcome. Today I'm going to talk to you about how we're going to lock down a cluster with Linkerd. I'm going to show you how to set up mTLS, I'm going to show you how to restrict traffic so that only in-namespace traffic works, and then I'm going to show you how to use the new HTTPRoutes that come with Linkerd 2.12 and the Gateway API.
B
Actually, I'm going to show you how to use that to specify, by verb and by path, what can happen. And hello, people from Raleigh and India, and hopefully other folks; I'm in Washington, DC. It's great to meet y'all. Is it okay if I share some slides to start with?
B
All right. Actually, before I even do this, let me tell you who I am: I'm Jason Morgan. I am a technical evangelist for Buoyant, the company that makes the Linkerd project, and it's my job to tell folks how awesome Linkerd is and try to encourage you to use it. Today I'm going to be talking about locking down clusters, but before I do that I really have to say what authorization policy is and what I'm talking about, right? Well, the standard setup in Kubernetes...
B
The
standard
setup
in
kubernetes
is
to
kind
of
allow
traffic
to
and
from
any
pod.
The
standard
setup
in
link
rd
is
to
allow
traffic
to
and
from
any
pod.
But
with
the
caveat
that
when
you're
using
linker
d,
you
have
mtls
everywhere,
so
authorization
policy
refers
to.
How
do
we
restrict
who's
allowed
to
talk
to
whom
in
your
cluster
right-
and
you
know
it's
it's-
we
call
it
authorization
policy,
because
things
that
aren't
authorized
don't
get
to
don't
get
to
work.
B
So let's go to the next slide here for clarity. With Linkerd, we are using service-mesh-based policy to restrict traffic. What does that mean? Well, it means that we need Linkerd in the loop: we can only restrict traffic to pods that have the Linkerd proxy, because that proxy is how Linkerd does everything it does.
B
...using this verb and executing this path, and we can do it in a very fine-grained way. And just a quick heads-up, so we're clear on what authorization policy is versus network policy: at least when I think of it, I think of network policy as a firewall, and I think of application policy, or service mesh policy, as layer seven. So what layer are we operating at with Linkerd?
B
We use workload identity. In general, in Linkerd, every pod gets its identity based on the service account that you configure in Kubernetes, and we'll show you a little bit of that again in the demo. It automatically includes encryption, and it's enforced at the pod level: because we've got the proxy running beside your application, we enforce it right there at the individual pod.
B
Just as a note as we're going today: there's going to be a lot of stuff happening, and the more you can interrupt me, ask questions, and get clarification, the better this will go for everyone. So please feel free to interject whenever you can.
B
All right, so I'm hoping y'all can see my terminal. Hello from Nigeria! Oh, awesome. So I'm hoping y'all can see my terminal at the top. Here at the top I'm going to actually be putting in commands; at the bottom left and bottom right I'm just going to be showing you some watches.
B
The bottom left here is just all the pods that are running in my cluster, because we're going to do some stuff and I want you to see what's happening with our pods as we do it. The bottom right is going to be the current state of authorization policies: what is authorized in Linkerd for the books app namespace. So here's where we start: our books app pods exist, but none of them are part of the service mesh. That is, none of them have a proxy running beside them.
B
Yeah, great question. Let me go a little bit further and then I'm going to hop into that question. First, let's get this running; I want to talk about what I'm doing here. I get the deployment, and I'm going to add one line to each YAML manifest. That line is going to say `linkerd.io/inject: enabled`. It's just going to tell the Linkerd admission controller, the Linkerd webhook, to go ahead and modify these deployments.
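The one-line change Jason describes can be sketched like this. A minimal sketch: the deployment name, labels, and image below are placeholders for illustration, not taken from the demo; only the annotation is the point.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp            # hypothetical name for illustration
  namespace: booksapp
spec:
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      annotations:
        # the one added line: tells the Linkerd webhook to inject the proxy sidecar
        linkerd.io/inject: enabled
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:latest   # placeholder image
```

Because injection happens in the admission webhook, applying this triggers a rollout and the new pods come up with the proxy container added.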
B
So
let's
do
that
and
while
that's
going
I'll
answer
your
question
so
now
we
see
new
pods
are
getting
created
right
and
they've
got
they've
got
an
additional
proxy,
so
yeah,
istio
and
linkerity
are
both
now
cncf
projects
they're
both
service
meshes
and
linker
d,
and
they
do
some
really
similar
things
right,
the
the
main
difference
being
that
lingerie
well,
hopefully,
you'll
see
with
this
webinar
or
this
live
stream.
Lingerie
is
really
easy
to
use.
We
also.
B
Running this locally, we can see that I've got an app and it's working. That was pretty easy, right? And this is actually the start of our differences with other service meshes: to get up and running with Linkerd, you don't need to use any custom resource definitions, adding complexity or anything like that. Yeah, it's a great question.
B
Linker
d
doesn't
require
to
use
gateways
virtual
services.
Anything
like
that
right,
like
the
core
linker
d,
just
uses
kubernetes
services
and
an
annotation
to
set
up
mtls
and
and
we're
off
and
running
right.
So
with
no
with
no
custom
resources,
my
app
still
works.
I
can
get
some
good
statistics
about
it.
So
if
I
go
back
here
launch
my
dashboard.
B
All right, back to the actual work of locking this down. To start, what I've done is I've generated mTLS between all my connections. So even though everything in my cluster is allowed to talk to everything else right now, we can see, when we look at our policies in the bottom right...
B
We've
got
two
policies
called
default,
unauthenticated
one
for
the
main
route,
one
for
the
probe
right,
the
probe
just
being
that
that
health
check
and
we
can
see
how
many
we
can
see
that
nothing
is
unauthorized,
because
it
allows
everything
to
occur
all
the
time,
but
we're
gonna
we're
gonna
change
that
right
now.
B
We
can
look
at
our
deployment.
We
can
see
some
statistics
about
what's
going
on
right,
so
this
is
just
some
high
level
details
about
the
traffic
here
in
the
books,
app
name
space
going
beyond
that
right
now,
we're
going
to
start
we're
going
to
start
getting
into
custom
resources
and
we're
going
to
start
getting
into
advanced
configuration
for
linker
d.
B
So
the
first
thing
I
want
to
do
is
I
want
to
set
up
the
policy
inside
my
books,
app
namespace
that
says
your
default
behavior
should
be
to
deny
traffic.
We
don't
want
anything
to
work
unless
we
tell
it
to
work
all
right.
What's
notable
here,
let's
actually
go
change.
What
we're
showing.
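The default-deny behavior Jason sets here is normally configured with a namespace annotation. A sketch, assuming the demo's `booksapp` namespace; the exact annotation and accepted values can vary by Linkerd release, so check the policy docs for your version:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: booksapp
  annotations:
    # every proxy reads this at startup, so existing pods must be
    # restarted (e.g. kubectl rollout restart) before it takes effect
    config.linkerd.io/default-inbound-policy: deny
```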
B
What's
notable
is
even
though
I
turned
on
my
default
deny
policy,
I'm
still
seeing
traffic
flow
through
right.
The
reason
for
this
is
the
default
policy
for
a
given
proxy
is
set
at
startup
time
for
that
proxy
right.
So
this
is
a
really
important
caveat.
If
you're
watching,
when
you
set
default
policy,
you
have
to
restart
pods
in
order
for
that
policy
to
take
effect.
B
We
can
see
that
the
pods
are
still
running
yeah.
We
can
see
the
stats
in
the
deployment.
Sorry,
I'm
showing
you
that
already
and
now,
if
I
trigger
a
rollout
restart,
what
we're
gonna
see
is
we're
gonna,
see
all
the
apps
get
we're
gonna
see
all
the
all
the
pods
restart
and
we're
gonna
see
all
our
traffic
totally
fall
off
and
die.
B
I
just
wanna
show
you
something
really
quick.
If
we
look
at
our
pods
right,
one
thing
that
if
you
use
policy
in
liberty,
2.11
you're
going
to
expect
different
behavior
than
what
you
see
here
right.
So,
let's
let's
talk
about
this
really
quick,
so
we
see
all
of
our
pods
restart.
That
is
because
the
default
behavior
in
linguity
212
is
to
allow
health
checks
to
continue
to
exist
all
the
time
right
or
except
for
in
one
very
special
case
that
will
get
to
you
at
the
end.
B
So
it's
going
to
keep
your
your
readiness
and
liveness
probe
succeeding,
even
though
the
app
is
receiving
no
traffic
right.
One
of
the
one
of
the
unfortunate
consequences
of
this
is
we're
going
to
see
these
things
like
these
things
are
going
to
slowly
taper
off
to
zero
and
then
disappear,
and
all
of
our
all
of
our
stats
from
the
dashboard
for
this
app
are
gonna
die
right.
They're,
gonna,
they're
gonna
disappear.
B
Yeah, so it does if you aren't prepared. If you do this without creating the right policy first, you're going to take downtime. I'm doing this step by step in my example to show you how it works, but this can easily be a one-shot go where we apply everything immediately and it all works. When we first announced policy, my boss, you know, said loudly: this is the biggest foot-gun we've ever given Linkerd users.
B
This
is
a
great
way
to
create
an
outage
with
your
service
mesh.
If
you're
not
careful
about
what
you're
doing
that
being
said,
there
is
no
requirement
to
take
down
time
to
set
up
policy
right.
You
can
actually
do
it.
You
can
actually
do
it
all
in
advance
and
then
go
ahead
and
set
something
like
a
default
deny.
B
So
what
I'm
going
to
do,
I'm
going
to
start
allowing
some
traffic
back
right.
So
the
first
thing
I'm
going
to
do
is
I'm
going
to
create
a
I'm
going
to
create
a
server
resource
right.
So
we've
got
a
couple
resources
in
play
here.
I
think
we're
dealing
with
a
total
of
six
custom
resource
definitions
that
you
have
to
deal
with
in
liquor
d
right,
so
it's
more
than
more
than
none
but
but
less
than
15
right.
B
So
it's
not
it's
not
that
big
a
handle
to
do
and
again
you
don't
need
to
deal
with
this
unless
you're
trying
to
add
in
policy
so
we're
going
to
create
a
server
for
our
admin
port.
This
is
going
to
map
a
individual
port
in
our
application
to
an
object
that
has
policy
applied
to
it
right
I'll,
show
you
these
yamls
in
a
second
just
going
to
go
ahead
and
apply
them
and
get
that
started.
B
So
after
I
make
a
server
object,
I'm
going
to
create
a
I'm
going
to
create
a
policy
to
start
allowing
allowing
admin
traffic
so
allowing
linker
to
use
dashboard
to
start
understanding
what's
happening
so
now
that
I've
got
this,
we're
going
to
start
seeing
some
statistics
come
back
right.
It's
going
to
take
a
second,
but
we're
going
to
see.
Statistics
come
back
because
now
we've
authorized
linguity
to
ask
about
the
admin
port.
So
let's
show
you
what
we
did
here
so
first
I
created.
Let
me
make
this
a
little
smaller.
B
What it did was look for all pods in the namespace. Inside of its namespace, it was looking for any pod that matched any label, and it was looking for pods that had a port called linkerd-admin, and it told Linkerd that, hey, on that port the traffic is HTTP/2. So instead of having Linkerd try to detect what the traffic was for this port, we just told it explicitly, so that it can...
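A Server like the one described might look like this. A sketch, not the demo's exact manifest: the resource name `proxy-admin` is my invention, and the `policy.linkerd.io` API version shown here may differ between Linkerd releases.

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: proxy-admin        # hypothetical name
  namespace: booksapp
spec:
  podSelector:
    matchLabels: {}        # match any pod in the namespace
  port: linkerd-admin      # the named admin port on the injected proxy
  proxyProtocol: HTTP/2    # stated explicitly so protocol detection is skipped
```

Once a port is claimed by a Server, the namespace's default policy applies to it, and AuthorizationPolicy resources can target the Server by name.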
B
So the next thing we do is create a policy. What we're saying here is that for that server — or sorry, if you're in the namespace books app — we want you to accept meshed TLS connections, and specifically from our Linkerd Prometheus service account, which is the thing that collects the data, and then our tap service, which shows some cool metrics about what's going on in our app.
B
Oh
we're
not
broadcasting.
Okay,.
B
Okay, great. So that's what we did. We've now allowed... let's go back to our diagram real quick. We have now allowed...
B
The
viz
extension
to
talk
to
bookshap,
but
right
now
all
of
these
links
here
are
down,
so
they
are
all
being
denied
by
policy.
The
only
collection
from
our
viz
extension
from
this
lingerie
viz
namespace
is
connecting
in
and
specifically
only
the
prometheus
and
tap
service
counts
are
allowed
to
open
connections.
Here,
that's
why,
when
we
look
at
linker
d,
we
have
data,
but
there's
not
there's
not
much
going
on
here,
because
our
app's
pretty
empty.
So
let's
show
you
what's
next.
B
So
what
I'm
going
to
do
now
that
we've
now
we've
got
this
so
now
that
we've
got
you
know
our
initial
stats
working
we're
going
to
go
ahead
and
allow
some
in-app
traffic.
So
what
I
have
is
I
have
four
apps
right
and
four
just
think:
four
distinct
ports
right,
one
port
per
app
or
actually
three
ports,
perhaps
because
our
traffic
service
doesn't
actually
accept
any
traffic,
so
we're
gonna
tell
authors,
books
and
web
app.
We're
gonna
define
a
server
that
claims
the
actual
application
port
on
all
of
these
all
these
instances.
B
So
we
created
one
for
authors,
creating
one
for
books
and
we're
creating
one
for
our
web
app
they're,
all
pretty
identical,
so
I'm
just
going
to
show
you
one
of
them
and
then
after
we
create
those
we're
going
to
set
up
a
policy
that
allows
everything
in
our
namespace
to
talk,
write
our
only
service
accounts
within
our
namespace
to
talk
to
other
other
services.
So
yeah,
that's
done
right.
So,
first
off
what
you're
gonna
see
now
is
some
new
some
new
things
like
so
right
now,
the
unauthorized
column.
B
You
see
those
numbers
start
to
drop
and
you're
gonna
see
success
rates
and
the
requests
per
second
increase
for
actual
authorized
routes.
Same
thing.
On
the
left
hand,
side
we
can
see
that
our
app
is
talking
to
itself.
Of
course,
our
app
is
broken.
It's
a
demo
app,
it's
broken
on
purpose.
If
you
want
to
see
more
about
how
that
works,
I
can
send
a
link
to
a
talk.
I
did
on
debugging
applications
with
with
your
service
mesh,
so
we've
got
some
traffic
going.
So
let's
look
at
these
objects.
B
Well, you look for a pod that matches the app name authors and the project books app, and you're looking for a port called service — that's the name of the port that you define in your YAML manifest on your deployment. Then we give it the proxy protocol, which we don't have to set, but I like to set it so that we can skip protocol detection. Once we do that, we're going to actually allow some traffic.
B
So this is the part that I'm excited about. We say we want a policy called books-app-only — that's what it means — and we want it to exist in the namespace. So we're going to say: if you are a policy target in the books app namespace, we want you to use this books-app-accounts meshed TLS authentication. And this is the MeshTLSAuthentication object; this is another new custom resource definition.
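A sketch of that pair of resources. Assumptions on my side: the resource names, the service-account names (the booksapp demo typically uses one service account per component), and the API versions, which vary across Linkerd releases.

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: books-app-only
  namespace: booksapp
spec:
  targetRef:
    kind: Namespace              # target every Server in the namespace
    name: booksapp
  requiredAuthenticationRefs:
  - group: policy.linkerd.io
    kind: MeshTLSAuthentication
    name: books-app-accounts
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: books-app-accounts
  namespace: booksapp
spec:
  identityRefs:                  # mTLS identities derived from these service accounts
  - kind: ServiceAccount
    name: webapp                 # placeholder service-account names
  - kind: ServiceAccount
    name: authors
  - kind: ServiceAccount
    name: books
```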
B
We
had
servers
for
picking
ports,
we
have
authorizations
for
mapping
policy
to
servers
and
then
we
have.
We
have
two
types
of
authorization
objects.
We
have
mesh
tls
authorization
for
what
service
accounts
should
we
allow
to
talk
to
this
port,
and
then
we
have
network
authentication
for
what
ip
range
should
we
allow
to
talk
to
this
port
right
and
we'll
see
one
of
those
here
in
a
minute.
B
All
right,
so
these
apps
can
now
begin
talking
to
each
other
again,
so
we've
we've
restored
basic
traffic
right.
So
right
now,
in
what
20
minutes
we've
gone
from
nothing
to
mtls
between
every
single
app
inside
the
books,
app
namespace,
where
we
have
only
explicitly
authorized
connections
between
our
apps
are
allowed,
and
if
we
look
at
our
books
app
here.
B
It's going to be me, because why not? We can create a new author; that's cool. Add a book... shoot, I should have thought this out. So: "How to Linkerd", page count five; it's a short book, but a good one. Oh, it doesn't work. That's okay, we'll fix that in a minute. But we're able to create authors and do things within our application.
B
I
hope,
if
you're
watching
you're
slightly
psyched
about
this,
and
you
see
that
it's
not
a
huge
journey
right
like
going
back
to
that
object
right,
you
know
really.
What
did
we
do
here
right?
We
created.
We
created
a
policy
that
mapped
the
ports
on
my
or
a
server
that
mapped
the
ports
on
my
various
applications,
and
then
we
created
a
policy
that
said
if
you're
in
the
namespace,
you
can
accept
calls
from
any
any
app
in
the
namespace
any
tls
app
in
the
namespace.
So
nothing,
that's,
not
tls!
B
Yeah... would you be able to rephrase that? In general, Linkerd policy is not IP-based. That being said, we can specifically authorize calls from IP ranges. What we're doing with policy in Linkerd is validating the identity of a workload based on that mutual TLS that Linkerd bootstraps for our cluster.
B
So
we
have
the
identity
validated
for
each
workload,
and
we
can.
We
can
use
that
identity
from
the
server
side
of
the
conversation
to
decide.
Should
we
accept
this
request
so
hassam,
and
I
hope
I
said
your
name
right.
B
I
don't
understand
what
you
mean
by
stateful
policies
versus
stateless
right,
like
these
policies
will
survive
application
restarts
that's
what
you're
asking
right.
It's
just
something
that
you
store
on
the
kubernetes
api
and
that
linkery
or
the
linkerd
proxy
will
check
on
when
it
authorizes
a
given
request,
but
I'm
happy
to
I'm
happy
to
dive
in
more.
If
I
miss
oh,
I
clearly
misunderstood.
So
if
you
can
explain
it
to
me
in
a
different
way,
I'm
happy
to
dive
in
more
okay.
B
So
now
that
we
have
that
right,
let's
lock
it
down
even
more
right.
So
our
our
author
service,
if
we
decide
our
author's
service
hey,
this
is
a
really.
This
is
a
really
sensitive
service,
so
going
back
to
our
little
diagram,
we're
okay!
If
traffic
talks
the
web
and
web
talks,
the
authors
and
authors
talks
about
and
books
talks,
authors
all
that
stuff
we're
okay
right,
but
we
what
we
want
to
do
is
we
want
to
make
sure
that
if
you're
talking
to
authors,
only
certain
accounts
are
allowed
to
do
things.
B
So
we
don't
want
traffic
talking
to
authors
and
we
specifically
don't
want
it.
We
we
want
to
specify
who
can
do
what
on
what
port
so
we're
going
to
create
some
policies
that
use
http
routes
which
are
linked?
Oh
sorry,
let
me
step
back
linker
d212.
Another
big
part
of
what
we
did
is
we're
beginning
to
adopt
the
gateway
api
specification,
which
we're
really
excited
about.
B
There's
great
work
coming
out
of
the
gateway
api
group
and
we're
using
http
routes
to
to
actually
build
build
policy
for
linkedin,
and
as
we
look
at
what
we're
doing
next
with
lingerie,
we
want
to
continue
to
use
gateway
api
specifications
to
do
that.
B
So,
let's
go
back
to
our
demo,
so
let's,
let's
isolate
authors,
and
I
want
to
show
you
a
little
bit
about
you-
know
our
first
kind
of
edge
case.
So
when
we
set
up
what
what
we
saw
originally
when
we,
when
we
built
these
connections,
is
that
all
of
our
pods
stayed
ready
because
by
default
linker
d,
respected
or
set
a
default
exemption
for
liveness
checks
and
readiness
checks,
we
call
them
probes
right.
B
And you know what, I didn't do this right, so give me one sec. I just want to show you what we did here.
B
Let's
just
look
at
look
at
this
object,
so
this
is
the
thing
that
I
created
and
once
I
created
this
right,
we
saw
the
author's
service
go
from
ready
to
not
ready
or
one
of
our
pods.
Our
application
pod
became
unready
right.
What
happened
is
what
happened
is
we
when
we
create
an
hp
route,
it
overwrites
the
exemptions
that
we
make
for
health
checking
right.
So
let's
just
take
a
look
at
this
at
this
route,
real,
really
quick!
B
You can do stuff, but when we created this route, we didn't also create the exemptions for the probe, so we're going to have to fix that next. So let's create a probe exemption.
B
It creates a route that specifies that health check address — if you look at the YAML manifest for our app, /ping is our health check URL — and it creates a NetworkAuthentication policy, and it maps that NetworkAuthentication policy to that route. So what we're saying is: hey, listen, if you're not in the mesh, but you have any IP address that the cluster could possibly have, we're going to...
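The probe exemption described here can be sketched as three resources. Assumptions: the hypothetical Server name `authors-server`, all resource names, the `0.0.0.0/0` stand-in for "any IP the cluster could have" (in practice you'd use your cluster's pod/node CIDRs), and the API versions, which vary by Linkerd release.

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  name: authors-probe
  namespace: booksapp
spec:
  parentRefs:
  - group: policy.linkerd.io
    kind: Server
    name: authors-server   # the Server claiming the authors app port
  rules:
  - matches:
    - path:
        value: /ping       # the app's health-check URL
      method: GET
---
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  name: cluster-networks
  namespace: booksapp
spec:
  networks:
  - cidr: 0.0.0.0/0        # placeholder: restrict to real cluster CIDRs in practice
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-probe
  namespace: booksapp
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-probe    # unauthenticated probes allowed only on this route
  requiredAuthenticationRefs:
  - group: policy.linkerd.io
    kind: NetworkAuthentication
    name: cluster-networks
```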
B
Now
beyond
that,
if
we
go
look
at
an
author
now
right,
if
we
want
to
add
a
book,
you
know
new
book,
it
is
one
right,
we're
gonna,
get
we're
gonna,
get
a
failure
right,
because,
right
now
our
web
app
is
not
allowed
to
do
a
get.
So
we
only
authorized
a
a
get.
B
When we look at the modify route here, what we're going to see is: hey, listen, things that use this route are going to be able to do DELETEs, PUTs, and POSTs on these paths inside of our app. And when we create it here, we see that our modify route gets created. That being said, we still don't have...
B
As
soon
as
I'm
finished
here,
thank
you
so
much
for
sharing
that
sorry,
I
didn't
mean
to
speak
over
there
so
now
that
we've
created
the
route,
we're
gonna
actually
apply
a
policy
so
that
something
will
like
attach
that
route
to
our
server
and
set
some
rules
here,
so
we're
gonna
create
it.
Let
me
show
you
what
we
did
again.
B
Right
so
we
took
that
route
that
we
just
created
that
modify
route
route,
which
that's
kind
of
a
redundant
name,
but
whatever
we
said,
hey,
listen,
use,
restrict
yourself
to
whoever's
mentioned
here
in
aus,
authors
modify,
authentication
right
and
specifically,
what
we're
saying
is
we're
gonna
allow
the
web
app
to
do
this,
and
only
the
web
app,
so
books
isn't
allowed
to
use,
puts
or
or
deletes
or
posts
right
books
is
only
allowed
to
do
gets
make
sense.
B
Yeah,
what
we
did
here
over
the
last
35
minutes
right
just
to
clarify
we
took
our
app
that
was
working.
We
didn't
do
anything
to
to.
We
didn't
do
anything
to
you
know,
adapt
the
app
to
our
service
mesh
right.
So
the
core
belief
in
linkery
is
that
if
you
have
an
app
and
it
works
in
kubernetes,
you
should
be
able
to
add
it
to
linker
d
and
it
still
works
with
no
changes
right
and
try
getting
that
deal
with
any
other
service
mesh.
B
On
top
of
that,
you
get
mtls
and
statistics
about
what's
going
on,
then
we
showed
you
not
just
how
to
add
mtls,
but
to
take
that
namespace
and
say
bam.
Absolutely
nothing
that
isn't
explicitly
authorized
will
be
allowed.
That's
what
we
did
with
that
that
changed
the
the
proxy
behavior
to
default,
to
denying
something
unless
it's
authorized,
then
we
went
in
and
we
first
we
took
our
apps
and
we
said:
okay
for
the
linkedin
admin
port,
that
is
the
proxy
admin
port
across
this
whole
environment.
B
We're
going
to
allow
connections
to
the
proxy
admin
port
from
the
linker
d
from
the
linker
d,
visualization
dashboard
right.
So
that
we
could
get,
you
know
our
fancy,
our
fancy
metrics
about.
What's
going
on
right
in
our
fancy
details
about
the
environment,
so
we
could
do
things
like
tap
the
live
traffic
and
see
who's
talking
to
what
right
in
in
our
cluster
and
what
the
you
know,
what
the
performances
of
these,
these
various
components
right
or
go!
Look
at
books
there
we
go
and
books
isn't
receiving
a
lot
of
calls.
B
Right: if your identity matches anything in the books app namespace, we're going to allow you to do application traffic. So right there we had a little box around our namespace that protected us from anything that wasn't in the namespace. Even Linkerd now, if it tries to connect to the web server on one of these pods, is going to get denied, because we didn't authorize it.
B
Only books and web talk to authors at all, and then we restricted how to make changes to our authors database. Now, it's an easy app, but it's not as easy to do this across a bigger environment. Like anything: test, go in steps. I forget who asked earlier... somebody asked if we had to take an outage. Oh yeah, I think it was Grav.
B
Yeah, so Martin, great question. We actually care a lot about the performance of Linkerd. To go broader: Linkerd's intent is to be the easiest to use, the fastest, and the most secure service mesh on the market.
B
I
believe
those
things
are
true
not
only
because
I'm
paid
to
say
that
right,
like
I
actually
believe
but
y'all
you
can
test
for
yourself
and
see,
see
what
you
feel
we
did
some
benchmarking
of.
We
did
some
benchmarking
of
linkedin
versus
another
popular
service
mesh,
and
we
have
some
statistics
about
what
exactly?
B
What
exactly
is
the
memory
memory,
cpu
and
latency
footprint
of
linkedin
when
it
compares
to
something
else?
So
I'll
give
you
a
sense,
but
you're,
really
only
gonna
see
you're
really
only
going
to
see
what
it
changes
when
you
actually
deal
with
your
real
app
because,
like
no
app,
no
two
apps
are
the
same
like
we're
doing
a
benchmarking
harness
made
by
the
folks
at
invoke
it's
a
generic
service
match
benchmarking
thing,
a
set
of
numbers,
but
real
application
data.
A
Yeah, so let's hope so, Martin. If you want to know something more, of course, ask more and then we'll get the answer.
B
All
right,
if
you
liked
any
of
this,
feel
free
to
come,
do
an
in-depth
search
and
production
workshop.
While
it's
not
that
complicated,
it's
really
get
a
sense
of
all
the
pitfalls
you
might
run
into
before
you
run
into
them
happening
the
day
before
kubecon
I'll,
be
there
there'll
be
other
people.
You
can
say
hi
hat.
If
you
haven't
seen
our
house
they're,
pretty
cool,
so
lots
that
you
should
join.
On
top
of
that.
B
If
you
haven't
seen
it
we're
doing
a
well,
a
bunch
of
folks
are
doing
a
little
conference
right
before
con
is
like
a
warm
up.
It's
free!
It's
online
check.
It
out
called
cubash
hope
to
hope
to
see
you
there,
I'm
sorry.
Last
but
not
least,
if
you
have
thoughts
on
this,
you
want
to
say
hi
you
want
to
be
like.
Oh
jason,
istio
is
the
best
come
join
me
on
the
linker
d
and
tell
me
what
you
think
and
why
and
I'd
love
to
love.
A
Perfect. The hat seems really cool; I think everyone should be lining up to get one of those. But yeah, we have the links to the benchmarking as well as the Slack now added to the comments, so everyone can hop on over there to check those out. But don't leave quite yet, because you still have your chance to ask your questions. If you have anything more for our speaker, ask away.
A
We still have a bit of time. Anything else that you want to add now, Jason, before we hopefully have a lot of questions coming in? Or let's see...
B
No, only the big one: I just want to send you the getting started guide if you haven't seen it. I promise you, you can get through it in 30 minutes — and 30 minutes is a long time to get through it; Linkerd is easy to use. I started out believing otherwise: when I came into Kubernetes, I was working with another vendor in another space, and I believed that service mesh was really complicated and really painful, and that while it was valuable, you had to be really good to use it.
A
Yeah. But to kick off the Q&A, a question from me, while we see if anyone else has any questions: do you have any sneak peeks at what's in the future for Linkerd? Is there anything exciting coming up?
B
Yeah, fantastic question, thanks for asking. So we're really excited about the Gateway API and what it allows us to do. Linkerd's philosophy has been to really limit the number of custom resource definitions we add to your cluster. The reason we do that is we believe that for every custom resource definition you add, you add some element of complexity to the environment. The Gateway API becoming, you know...
B
Potentially
part
of
core
kubernetes
gives
us
a
lot
of
really
powerful
tools
for
manipulating
traffic
for
doing
traffic,
splits
for
setting
up
things
like
retries
timeouts
header
based
routing,
egress
control,
all
sorts
of
great
tooling
in
a
standard,
kubernetes
native
way.
So
with
the
next
release,
so
linker
d212
just
came
out
not
that
long
ago.
I
don't
remember
exactly
but
came
out
in
august
and
it's
been
really
cool
and
it
gives
you
a
lot
a
lot
of
new
functionality,
we're
working
diligently
right
now
on
linker
d213,
because
we
want
to
do
a
small
release.
B
Next.
That
adds
in
something
that
a
lot
of
folks
have
been
asking
for
specifically
circuit,
breaking
and
header-based
routing
right.
So
we're
excited
to
see
what's
going
to
happen
there,
we
expect
to
get
that
out.
We
expect
to
get
that
out
this
year
and
yeah.
I
I'm
I'm
really
looking
forward
to
it
and
I
think
that
it
just
adds
more
more
power
to
an
already
pretty
powerful
tool.
A
Great, so final call for questions: if anyone is typing away right now, please push enter and send as soon as you can. But as always, if you realize later on, like, "oh, I should have asked that question," you can obviously ask in the Linkerd Slack, or you can also hop on over to the Cloud Native Live channel in the CNCF Slack and connect with the communities there, so you can find out from there as well.
A
You know, there's the little Cloud Native Live chat on the CNCF Slack; you have one over there. But I think probably the Linkerd Slack is the best for Linkerd-specific questions. So everyone has a lot of resources headed their way — perfect. But yeah, since there are no questions — at least I can't see any currently — and we already had a lot of questions throughout the session, we handled the Q&A during that, which was lovely. Do you have any final words, Jason, or anything?
B
I do, actually: if you liked this but you want a much longer, in-depth version, check out our Service Mesh Academy site. Next week, I think — next week or the week after — we're going to do a really deep webinar into policy in Linkerd, done by my colleague Flynn; he's awesome. It's going to be similar to this, but better and with more information, so check it out if this was good at all.
A
Perfect. We had Flynn a few weeks ago on Cloud Native Live as well, so people might be familiar with him from there. Yeah, and we had...
A
...a review as well, so a great session and everything. But yeah, let's start wrapping up. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a really good session about locking down your Kubernetes cluster with Linkerd. I really loved the audience interaction this time as well; thank you for all the questions. And as always, we bring you the latest cloud native code every Wednesday, so stay tuned: we have a lot of great content coming up in the coming weeks as well.