A: Hello, everyone. Welcome to Cloud Native Live, where we dive deep into the code behind cloud native. Hi, I'm Manny, a CNCF Ambassador, and I will be your host tonight. Every week we bring presenters on to showcase how to work with cloud native technology. They will demonstrate things, and they will answer all of your burning questions, so join us every Wednesday to watch live. In addition to this event, there's a lot more going on in the CNCF world. You can check those out on the various CNCF event pages: for example, KubeDay Japan, Cloud Native Security Day, and so much more is upcoming. But this week on Cloud Native Live, we have Flynn here to talk to us about zero-trust network policy with Linkerd. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat or questions that would be in violation of that code of conduct.
B: Zero trust: there's a lot to this topic, so we're a little heavier on the slides, as opposed to demo, this time around, and we'll see how it works. There's a link in the chat to the repo with all the demo code I'm going to be working with and all the stuff I'm going to be showing; I would encourage you to take a look at that. If you have questions, stick them in the chat and we'll get to them as soon as we can.
B: Okay, zero trust network security. Zero trust has actually been a major focus for Linkerd for some time. It's a little bit of a weird name; it should really be "zero assumptions about trust" instead of "zero trust", but that takes too long to say, so we just call it zero trust. It's all about being explicit about trust, as opposed to assuming things about trust. In addition to being a big focus for Linkerd, it's a big push in the industry in the United States.
B: The White House actually issued a mandate that all the federal agencies have to worry about this by the end of next year, so yeah, it's a little bit of a big deal. When you talk about distributed applications, like we do all the time in the cloud native world, some of the impacts that zero trust has are these. First, you don't get to assume that the traffic between your workloads is safe, or that what arrives is what was actually sent, so you need mechanisms for confidentiality, integrity, and authentication. You cannot assume that the network is trustworthy. You cannot assume that IP addresses, for example, won't change under you, which means that you really need to be thinking about identity in terms of workloads, not the network. And you can't assume that, because somebody was able to do a given thing half an hour ago, it's still okay for them to do it now. This is usually phrased as "check every access, every time", and in fact you have to do that everywhere as well.
B: There isn't really a perimeter that you can draw around anything in the cloud native world, so instead you just have to do checking all the way down the call graph: every access, every time. This is a very whirlwind overview; there's an article up on InfoQ that talks more about this, which I would encourage you all to read.
B: There's a lot more that could be said about this, but the core of it really is that you try to make zero assumptions about what is secure and what is not, and you check every access, every time, no matter where it's coming from or where it's going to. Linkerd actually has tools that help out with all of these things. On the confidentiality, integrity, and authentication front, that's why we're doing mutual TLS between all the workloads everywhere. We use workload identity: we take the Kubernetes ServiceAccount for a given workload and then bootstrap that into an X.509 certificate, and that's the way we handle workload identity. We're going to talk today about policy, which is how we handle authorization. And we put sidecars everywhere, partly so that we have things running in the places they need to be running in order to do the enforcement everywhere.
B: So let me take a moment here to point out: yes, absolutely do throw in questions as they come up, please.
B: We're going to continue on a little bit and see if questions show up. If we look at a Kubernetes cluster very simply, you might have a typical cluster where you've got an ingress talking to the web service, the web service talking to the foo and bar services, and then foo and bar also talking to each other.
B: When you throw Linkerd into the mix, we end up actually rerouting all that network traffic, so that instead of going directly from the ingress to the web service, it instead goes from the ingress into the Linkerd proxy, over to another Linkerd proxy, and into the web service. This means that the proxies are in a great position to mediate all the network communication and enforce access control and all of the stuff that we just talked about. There's also a control plane in Linkerd.
B: The important thing to remember at the core of all of this policy discussion is that the point of policy mechanisms is that they give you the ability to say no. In particular, this is a way that we can give Linkerd the ability to reject a request that has not been explicitly authorized by a policy. It's also worth pointing out, as we go through this discussion: authentication (authn) is about knowing who an entity is; authorization (authz) is about knowing how far we trust that entity.
B: Linkerd has two separate policy mechanisms that work together. The first one, which has been part of Linkerd for a long time, is workload-based authorization policy, where you get to say that this workload is allowed to talk to this other workload, and if you don't say that, the workloads don't get to talk to each other once you turn this on. There's also route-based authorization policy, where you can say that a given workload is authorized to make a particular request of another workload, but not, you know, just broad things.
A: So yeah, and then there's a question from the audience, from Hugo: does Linkerd use PSP in any way?
B: PSP expands to too many things in my head for me to be sure which one Hugo is talking about, so how about clarifying that, and then we'll go forward from there.
B: Yeah, okay, important note for the rest of this presentation: "workload-based authorization policy" and "route-based authorization policy" take too long to say, so we tend to talk about workload policy, or route policy, or route-based policy, things like that. They're all the same thing. Pod security policies, excellent. Those generally operate at a lower level in the Kubernetes cluster than Linkerd is worried about, so it would be the sort of thing where we're talking about things that the pod security policies already allow, and Linkerd allows you to be more refined than that.
B: Does that make sense? If it doesn't, let me know. All right, so here's an example of workload-based policy. If we throw in a policy that says that only the web workload gets to talk to the bar workload, then this communication between foo and bar will be denied as soon as the request shows up, once the policy is in effect.
B: Let's come back to that at the end of this talk and go from there. The short answer is that there's not a lot about egress in Linkerd right now, and that's an upcoming thing, but let's come back to that a little bit later.
B: Talking about route-based policy, which showed up in Linkerd 2.12, and I think I forgot to say that earlier, sorry. Route-based policy is more like "foo is only allowed to send bar a GET /foo request". So if you install that policy, then if foo does a GET /foo request it will be allowed, but if it does anything else it will be denied. Not tremendously complex as a concept, but this does get very complex in practice.
B: And yes, this does bring the total number of Linkerd CRDs to six, which we feel kind of bad about, because we're trying to have really few CRDs. The HTTPRoute CRD actually comes from the Gateway API; we're kind of moving Linkerd over towards the Gateway API, just to reduce the number of new things people have to learn. But yeah, like I said, this is a complex thing for Linkerd. Also worth pointing out, excuse me: only HTTPRoute is about route-based policy.
B: So, default policies. You set the cluster default policy when you install Linkerd, by setting proxy.defaultInboundPolicy. The default is wide open, which we'll talk about in a second. You can override the cluster default policy at the namespace level or at the workload level by using the default-inbound-policy annotation, which is a thing that gets a lot of people into trouble.
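As a rough sketch of both knobs (the booksapp namespace is just this demo's example; check your installed chart for the exact value names):

```shell
# Set the cluster-wide default inbound policy at install time:
linkerd install --set proxy.defaultInboundPolicy=all-unauthenticated \
  | kubectl apply -f -

# Override the default for a single namespace (the same annotation works
# on an individual workload as well):
kubectl annotate namespace booksapp \
  config.linkerd.io/default-inbound-policy=cluster-authenticated
```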
B: We will see this in the demo later, but this is a thing, and I am not going to lie: this has burned me personally more times than I can count while working through this demo. The valid default policies are: all-unauthenticated, which allows any traffic from anywhere to anywhere, and which is the default because it mirrors the Kubernetes default; cluster-unauthenticated, where the traffic has to come from an IP address in the cluster; all-authenticated, where the traffic has to have a valid Linkerd mTLS identity; cluster-authenticated, where it has to be from the cluster and have a valid mTLS identity; and deny, which allows nothing at all by default, denying everything and requiring you to put in exceptions for the traffic you want to allow. This last one is where you want to be.
B: If you're doing proper zero trust, it's deny everything unless you are explicitly allowing things. So, going back to the CRDs: the Server CRD talks about the pods that embody a particular service, a particular workload that we care about, and it also talks about the ports that they're bound on. The HTTPRoute talks about specific requests that a given Server might be expecting.
B: The AuthorizationPolicy ties everything together: it states a policy that can be associated either with a Server or with an HTTPRoute, to allow traffic through to that Server or via that HTTPRoute. And again, ServerAuthorization is deprecated in favor of AuthorizationPolicy, but ServerAuthorization does still work; you don't have to throw it away yet. Worth pointing out: if you're trying to do route-based stuff, you do have to do that with AuthorizationPolicies.
B: Okay, all right. Here's an example Server CRD. This is saying that within the emojivoto namespace we're going to have a Server that we're going to call voting-grpc, and it will comprise all of the pods that have the app: voting-svc label and are using the voting-grpc port. We are also telling it that this particular server is using the gRPC protocol. That proxyProtocol specification is often optional, but it allows you to skip protocol detection, which can speed things up and generally make things a little bit more comfortable for you.
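Reconstructed from that description, the Server on the slide looks roughly like this (it follows the standard emojivoto example, but treat the exact values as approximate):

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: voting-grpc
spec:
  # Which pods embody this workload...
  podSelector:
    matchLabels:
      app: voting-svc
  # ...and which port on those pods this Server describes.
  port: voting-grpc
  # Optional, but skips protocol detection.
  proxyProtocol: gRPC
```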
B: This means that as soon as you create a Server, you must also create an AuthorizationPolicy that you associate with that Server in order to start allowing traffic. I should modify this slide to say that you could also do it by associating an HTTPRoute and an AuthorizationPolicy, but we'll get to that in a bit.
B: This, like I said, tends to cause people a lot of trouble, because it can be a little surprising: you go through, you add a Server, and suddenly things break. So be aware of that. All right, on to the HTTPRoute CRD, which is new in 2.12.
B: We borrowed this from the Gateway API. It's in a different API group, though: it's in policy.linkerd.io, because Linkerd does not currently implement all of the Gateway API HTTPRoute, so we decided to just put it into its own API group to make that a little bit more clear. You associate it, via its parentRefs, with one or more Servers; in this one, we're associating it with the authors-server Server.
B: There are too many uses of "server", sorry about that. The HTTPRoute also specifies the particular requests that are okay to send to the Server with which the HTTPRoute is associated. So this HTTPRoute will allow GETs to /authors.json, or GETs to anything with the prefix /authors/, to the authors-server, and then Linkerd would look at the Server resource to figure out which pods and which ports the authors-server is.
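A sketch of an HTTPRoute along those lines (the resource names here are assumptions based on the talk):

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: HTTPRoute
metadata:
  name: authors-get-route
  namespace: booksapp
spec:
  # Attach this route to the authors-server Server.
  parentRefs:
    - name: authors-server
      kind: Server
      group: policy.linkerd.io
  rules:
    - matches:
        # GET /authors.json exactly...
        - path:
            value: "/authors.json"
            type: Exact
          method: GET
        # ...or GET anything under /authors/.
        - path:
            value: "/authors/"
            type: PathPrefix
          method: GET
```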
B: Again, much like with Servers, the moment you create an HTTPRoute, the default changes to deny, and in particular that means that when you create an HTTPRoute for a Server, the magic default rule that allows health probes to work goes away, so you'll need to be explicit about allowing the probes again. Typically you just do this by adding another HTTPRoute rule for the /healthz endpoint, or whatever your service needs for health checking.
B: Finally, we have the AuthorizationPolicy CRD. You can associate these with a Server or an HTTPRoute, and it tells Linkerd who is allowed to make these requests. This particular one, if we break it down, is associated with an HTTPRoute called authors-get-route, and it requires that you be using mesh TLS.
B: Sorry, it requires that the only entities allowed to make requests through this HTTPRoute are those that match the MeshTLSAuthentication resource named webapp-authors-authn. webapp-authors-authn, in this case, says: this is going to be in the booksapp namespace, and they must be using TLS with a particular identity.
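Put together, that pair of resources might look like this (field values are reconstructed from the talk, so treat the names as approximate):

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-get-policy
  namespace: booksapp
spec:
  # Associate the policy with the HTTPRoute rather than the whole Server.
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-get-route
  requiredAuthenticationRefs:
    - name: webapp-authors-authn
      kind: MeshTLSAuthentication
      group: policy.linkerd.io
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: webapp-authors-authn
  namespace: booksapp
spec:
  # Only callers holding the webapp ServiceAccount's mTLS identity match.
  identities:
    - "webapp.booksapp.serviceaccount.identity.linkerd.cluster.local"
```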
B: "webapp" by itself actually won't work; there's a much longer identity string that won't fit on this slide, so we will show that in more detail in the demo. But this is an example of setting up an AuthorizationPolicy to allow one particular workload to be the only one allowed to make a given request, because only one particular workload should have that webapp ServiceAccount. All right. You can also match multiple targets by specifying a namespace in the targetRef.
B: That kind of makes me nervous, because we're covering a bunch of material fairly quickly, and either it's going really well or it's going really poorly.
B: So if anybody in the chat wants to provide feedback on whether it's going well or poorly, I would really appreciate that. To put all this stuff together: when a request arrives on a meshed port that matches a Server, Linkerd is then going to chase down all the HTTPRoute, AuthorizationPolicy, and ServerAuthorization resources it can find that match that Server, and there must be a path through all of those resources for the traffic to be allowed.
B: Some examples. If you have a Server (and oh good, somebody says it's going well; I really appreciate that), if you have a Server and there's no AuthorizationPolicy for that Server, the request will be denied, because that Server flips everything over to the default deny; if Linkerd can't find an AuthorizationPolicy to provide an exception, we deny the request. If you have a request that matches an HTTPRoute for a Server, and there's no AuthorizationPolicy with that HTTPRoute, again: denied.
B: If you have a Server that matches an HTTPRoute, and there's an AuthorizationPolicy that links all these things up and provides for an identity match, then that will be allowed. This is the good case. (And another comment that it's going well. Thank you, I appreciate that.) Suppose, though, that you have a Server that matches an HTTPRoute; the HTTPRoute matches an AuthorizationPolicy that is for some identity that the request is not using; but the Server has an AuthorizationPolicy for the identity that the request is using. So the HTTPRoute's AuthorizationPolicy does not match, but the Server's AuthorizationPolicy does. That will be allowed, and the reason is that as long as we can find any path through everything, the traffic is allowed.
A: Yeah, there is a comment there about Linkerd: it would be nice to cover some comparison with Istio and Cilium, for example. Maybe they address different problems, but it would be nice to get your opinion here.
B: Well, I work for Buoyant, so it might not be all that suspenseful where my opinions are going to lie, but that's okay. Okay, so an important takeaway here is: yeah, this can be complex. I'm actually going to put more examples in the GitHub repository to, you know, try to explain some of this in more detail.
B: We could spend a lot of time walking through all of these examples of a place where traffic would be denied or a place where it would be allowed, and that'll probably be a little bit easier to do self-paced, so I'm going to add some more stuff in the repo for that. All right.
B: Don't forget that as soon as you create either a Server or an HTTPRoute, they flip the world over to default deny, no matter what the default would be without the Server or the HTTPRoute. In particular: you create a Server, you've got to create an AuthorizationPolicy; you create an HTTPRoute, you've got to create an AuthorizationPolicy. Also, generally speaking, if you create one HTTPRoute, you are usually going to have to create other HTTPRoutes to fully specify the exception set for your HTTP traffic. And gRPC is, of course, HTTP traffic.
B: So that's a thing to be aware of as you're working with this. There are places where it makes a lot of sense to mix workload auth, with Server AuthorizationPolicies, and route auth, with HTTPRoute AuthorizationPolicies.
B: I tend to think it's better to approach those things a little bit more separately, if that's a word. What we're going to do in the demo is show a way where we're using restrictive authorization policies and then opening up a little bit, and that ends up working pretty well. But if you're using permissive workload-based authorization policies, it can be really confusing if you then try to mess with HTTPRoutes afterwards.
B: Okay, we have talked about things getting rejected. The nature of rejection for Linkerd depends on what Linkerd believes the protocol is: gRPC will get a PERMISSION_DENIED gRPC status; HTTP will otherwise just get a 403; and if Linkerd doesn't know that it's one of these protocols, it will just refuse the TCP connection. Also very important: if you change your policies and restrict traffic that was previously allowed, there are several things to watch for.
B: Number one, pay attention to health probes. If you do not have authorization that allows health probes to work, Kubernetes will not allow your pods to become ready. In Linkerd 2.12, as long as you have not set up HTTPRoutes and you're using workload auth, health probes are just going to work; but as soon as you set up an HTTPRoute, you need to explicitly authorize probes to the Server for which you set it up. Number two, as we said before, default policies are static: they get set at startup.
B: If you change the default, you must restart the pod. There's a single exception to this: you can use linkerd upgrade to dynamically change the cluster-wide default, because linkerd upgrade will restart a bunch of things for you. And of course, all of the CRDs are read dynamically, so if you're using non-default permissions, you can change those on the fly as often as you want.
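A sketch of both cases (again using this demo's booksapp namespace as the example):

```shell
# The one dynamic path for defaults: linkerd upgrade regenerates the
# control-plane manifests with the new cluster-wide default.
linkerd upgrade --set proxy.defaultInboundPolicy=deny | kubectl apply -f -

# A namespace- or workload-level default set via annotation is only read
# at proxy startup, so the affected pods must be restarted to pick it up:
kubectl rollout restart deployment -n booksapp
```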
Number three: if you have a Server that tries to reference a port that is not in the spec of the pods the Server matches, that port will get ignored, and this can easily result in your Server being ignored as well. You'll be able to see some feedback about this, I believe, but it's a thing to be aware of. Okay, last chance to ask questions before we get into the demo and find out what breaks and what doesn't.
B: Cross your fingers, and we'll see what happens. All right.
B: I am using a single-node k3d cluster named "policy" running on my laptop, as usual, with Linkerd. It doesn't really matter what kind of cluster you're using for this; I'm just using k3d because it's a convenient way to not worry so much about my network freaking out or something like that.
B: We have a bunch of stuff installed already. We've got booksapp for our demo, we've got Emissary as an ingress, and we have, of course, Linkerd Viz. I'm not going to walk through actually setting up the cluster in this particular session; that would take a little bit too much time, but this is the way we did it: the Linkerd install is straight out of the quick start for 2.12.
B: You must install Grafana separately before you set up Linkerd Viz, because of changes in some of the licensing around this stuff. Booksapp, again, is straight out of the booksapp quick start, with the exception that I stole my colleague Jason Morgan's ServiceProfiles for booksapp as well, just because they let Viz do some nicer stuff. We're not really going to use the ServiceProfiles for this demo in particular, but they're really nice to have when you're trying to figure out whether things are working.
B: It works. The only thing I'm doing differently in this one is that I'm forcing Emissary down to one replica, because it runs more easily in k3d.
B: And there's, you know, the configuration. The biggest reason I'm using Emissary right now is that it permits me to use hostnames for talking to various services, rather than relying on kubectl port-forward, and I don't like kubectl port-forward. So yeah, okay: everything is set up, and we can start by taking a quick look at the booksapp in the browser. We can reload the page, and it works.
B: It also lets you do things like editing this author. I don't actually know what P.D. James stands for, so I'll make him, or her, "Police Department James". I don't actually... yeah, I should really go look that up. We can also come over here and see that Linkerd Viz has good information about all this stuff. If anybody has seen a previous demo of the booksapp, you will know that it doesn't actually work very well out of the box; we're not going to worry about that.
B: Now, okay. So I just looked at Linkerd Viz in the browser; we can use Linkerd Viz, of course, from the command line, and you'll see pretty much the same info that we just saw in the browser. We can also use linkerd viz top, rather than viz stat, to show the most common calls going into booksapp, and yeah: there's traffic happening and it seems to be working fine. All right, now we will break everything.
B: So if we look at linkerd viz stat, this looks kind of suspiciously like it is still working. If we look at it in the browser, we can see that it is still working, which is not what we expected. Anybody have any guesses for why this is still working?
A: There's usually a bit of a delay in the answers, but we can see if someone types in.
B: There are a couple of places where this presentation has questions like this. It's looking like the delay on the chat system might be long enough... hey, there we go: Hugo got it right, congratulations! Is there a way we can get contact information for Hugo?
B: Hugo, if you're on the Linkerd Slack, ping me and I'll get you a Linkerd hat, because you got the right answer. Yeah, we changed the default policy and we didn't restart the pods, which was one of the things we listed earlier as a gotcha that causes lots of trouble. So we're going to go through and restart everything in booksapp right now, and this is one of the frustrating bits, because there's really no way to speed this up. Usually everything happens really, really fast, except that this is the point where I realized that I forgot to scale down the web app to one replica instead of three. Oh well.
A: Yeah, no worries; we have two questions in the queue for the end, but other than that, go ahead.
B: Come on, come on, web app... hey, there we go. All right, so we've got that going. At this point things should not work, and if we reload the books app, we see that it does not work. I'm not going to bother going into the JavaScript console to check, but it will be, you know, a permission error. You can also see that Linkerd Viz is no longer showing us the success rate for those things.
B: That's because Linkerd Viz itself doesn't get to talk to anything right now. So we can start by allowing things, and we're going to start by allowing Linkerd Viz and Prometheus, so that we can get the dashboard working again. First, we're going to create a Server. This is the one that we showed earlier in the slides, matching everything in the booksapp namespace that is on the linkerd-admin port, and we're going to go ahead and tell it: yeah.
B: Yeah, this will probably work while I'm going through and talking about the authorization policy. This AuthorizationPolicy, I'll tell you in advance, is not going to reference the Server we just created by name, so anybody who thinks they know why we need the Server, let me know. Here's the AuthorizationPolicy: we're going to target this one at the booksapp namespace, and we're going to use the viz-apps MeshTLSAuthentication, which we create down here. It says: if you're using the Prometheus service account or the Tap service account...
B: These are the two that Viz and Prometheus use, so then this will be allowed. As an example: you remember I mentioned that this needs a very long, fully qualified name that wouldn't fit on my slide? Yeah, there's an example. Pretty much everything after the word "prometheus" is going to be the same all over the place.
B: Actually, sorry, I got that wrong. This is prometheus, the Prometheus service account, in the linkerd-viz namespace, and serviceaccount.identity.linkerd.cluster.local is the part that's going to be the same all the time. Also worth noting: the MeshTLSAuthentication and the AuthorizationPolicy live in the booksapp namespace, but we are explicitly allowing identities in the linkerd-viz namespace here, and that's fine; allowing identities to make requests across namespaces is really important. And I'm not seeing anybody answer about why we need the Server.
B: The answer is that this targetRef, saying "I want you to associate this auth policy with the whole booksapp namespace", actually means "with all of the Servers and HTTPRoutes in the booksapp namespace". If there are no Servers or HTTPRoutes in the booksapp namespace, this AuthorizationPolicy won't do anything.
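As a sketch, the namespace-targeted policy and the MeshTLSAuthentication described here would look something like this (resource names are assumptions based on the talk):

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: viz-policy
  namespace: booksapp
spec:
  # Targeting the namespace means "every Server and HTTPRoute in booksapp".
  targetRef:
    kind: Namespace
    name: booksapp
  requiredAuthenticationRefs:
    - name: viz-apps
      kind: MeshTLSAuthentication
      group: policy.linkerd.io
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: viz-apps
  namespace: booksapp
spec:
  # Everything after the service-account and namespace parts follows the
  # same serviceaccount.identity.linkerd.cluster.local pattern.
  identities:
    - "prometheus.linkerd-viz.serviceaccount.identity.linkerd.cluster.local"
    - "tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local"
```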
B: Okay, so we're going to apply these, and at this point we should now be able to see stuff. I should have done this earlier, when we wouldn't have been able to see anything at all. Note that all of these are getting 403s... I've lost my mouse; there we go. So all of these are getting 403 responses, because these are all HTTP requests that are being denied.
B: Now, if I reload it, over time the topology graph will come back, if we wait a little while longer. I don't think I'm actually going to wait a little while longer, though, so...
B: If we scroll on down, we see pretty much exactly the same thing for books and for the web app; really nothing too profound about any of those. In this case, though, we are using pod selectors; we're not just saying "every pod that has something listening on the service port", which would be a little too broad for this, and we're going to continue that with an AuthorizationPolicy. In this case, it is going to allow any identity within the booksapp namespace.
B: Oh, I should have asked this question earlier. I think, in the interest of having time for questions at the end, I'm going to just answer this one. We don't have a Server for the traffic generator, because the traffic generator doesn't take any inbound traffic; all it's doing is producing stuff going outward. Nobody talks to it.
B: So let's go ahead and apply these Servers and apply our AuthorizationPolicy. At this point we should see actual traffic showing up and succeeding and all that. So far so good, and if we switch back over to the browser, I think we will see real things here. Yeah, so far so good. The topology graph doesn't actually show anything except who has spoken to whom, so we had to reload a couple of times to get this books-to-authors link.
B: Again, jump in with questions if anybody has them. This one... yeah, we should actually try the books app from the browser; I meant to do that last time. If we reload this, it does not work.
B: So the first thing we're going to do here is create an HTTPRoute that basically says that any traffic going to the webapp Server that matches the root path will be allowed, and then we do an AuthorizationPolicy that's only for the ingress. Here we're binding it to that HTTPRoute, and we're saying this has to come from our ingress: it has to come from Emissary-ingress, in the emissary namespace.
B: We are again missing something. The thing we're missing is that we just broke probes, because we created an HTTPRoute going to the webapp Server, and that means that 2.12 will no longer allow health probes to the web app. So we're going to allow those with another route and another AuthorizationPolicy.
B: The web app's health check is using the /ping endpoint, so we're going to create a route that will allow GETs to /ping. We're not going to use a MeshTLSAuthentication here, because the kubelet's probes don't come from within Linkerd; they generally come from the kube-system namespace, and that's not part of our mesh. So instead we're creating a NetworkAuthentication to allow traffic from any network whatsoever, whether it's IPv4 or IPv6, and we will allow anything with that NetworkAuthentication through our webapp probe HTTPRoute.
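A sketch of that trio of resources (the names and the Server reference are assumptions based on the talk):

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: HTTPRoute
metadata:
  name: webapp-probe-route
  namespace: booksapp
spec:
  parentRefs:
    - name: webapp-server
      kind: Server
      group: policy.linkerd.io
  rules:
    - matches:
        # The web app's health check endpoint.
        - path:
            value: "/ping"
            type: Exact
          method: GET
---
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  name: allow-any-network
  namespace: booksapp
spec:
  # Kubelet probes come from outside the mesh, so match by network
  # instead of mTLS identity: any IPv4 or IPv6 source.
  networks:
    - cidr: 0.0.0.0/0
    - cidr: ::/0
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: webapp-probe-policy
  namespace: booksapp
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: webapp-probe-route
  requiredAuthenticationRefs:
    - name: allow-any-network
      kind: NetworkAuthentication
      group: policy.linkerd.io
```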
B: And look: we can. That's nice. An interesting thing here is that, as we click around in this, you know, like if I click on P.D. James here... oh, you'll notice that this is back to "PD" and not "Police Department". That's because we restarted the deployment, and the books app doesn't use persistent storage. If I come up here, you'll notice that this is /authors/9, which is not, in fact, the root path; that's something else. So this is actually working a little bit too well.
B: If we take another look at that HTTPRoute and we look at the path, we didn't actually specify how to do path matching, and it turns out that if you don't specify, the default is a prefix match. Since every path has a prefix of the root, we're actually allowing literally any HTTP GET request from the ingress, which is probably not something we ought to do. So let's make that an exact match instead: we shall change it to have a type of Exact. That's all we need to do.
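The fix is a one-line change to the route's path match. As a fragment:

```yaml
# An unspecified path match type defaults to PathPrefix, and every path
# has "/" as a prefix, so the rule matched everything. Pinning it down:
rules:
  - matches:
      - path:
          value: "/"
          type: Exact
```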
B: So before I completely lose my train of thought, I'm going to point out that if we look at the web browser, you can see that editing the title of the book there failed, which is what we expect, because that's not a GET. Now, OPA, SPIFFE, and SPIRE: the short answer there is "not yet". Identity in Linkerd has always been based on the ServiceAccount, and we are pretty actively looking at how to go through and hook in some of these other things.
B: But for right now, we basically rely on the ServiceAccount to be the root of identity, and then we use mTLS and our own policy stuff on top of it. Those are very, very interesting use cases, though, and we would be very interested in talking to people who have a particular need for OPA, or are just interested in it, or whatever. Which ingress controller am I using? Emissary-ingress, because I'm very familiar with it. Do the HTTPRoute and auth manifests...
B
they work with the ingress's configuration. That's one of the interesting things about ingresses. Actually — give me one moment and let me page back to a graphic in the slideshow.
B
Okay. So if we look back at this, the interesting thing about the Ingress versus the mesh is that the Ingress only gets to affect traffic coming in from here. Basically, the Ingress can only make decisions about things that transit the Ingress. The Ingress cannot make any decisions about traffic between foo and bar, for example. It can't make any decisions about traffic between web and bar, because it's simply not there.
B
So one of the really useful things about the combination of an Ingress controller and a service mesh is that the Ingress gives you the ability to control things at the edge of your cluster in ways that your developers might find simple, while the service mesh gives you the ability to make decisions deeper in the call graph. So I wouldn't say that the stuff we're talking about here replaces the ingress's configurations and policies; they tend to work together, if that makes sense. Hopefully.
A
There was another question, but we can take that again, because we have about 10 minutes left. So if you have anything to run in the demo still, we should probably get that done and then get to the questions.
B
Yeah, let's do that. Okay! So actually there was a fairly quick question about Linkerd and Gatekeeper. I don't really expect it to, and that would be another great thing to bring up on the Slack. Okay, right.
B
So: we just went through and demonstrated that we can view everything, and we demonstrated that editing doesn't work. If we come back in here, we're going to go through and allow editing the authors, and we're going to do this slightly differently: instead of changing our previous HTTPRoute, I'm just going to add another one, because you don't have to use only one HTTPRoute here. The major difference is that we're using a regular-expression match, so that we can be a little bit fancier about what an edit looks like.
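Adding a second route rather than editing the first might look roughly like this — the route name, parent Server, regex pattern, and method are all hypothetical; the demo repo has the real manifest:

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  namespace: booksapp
  name: webapp-edit-authors
spec:
  parentRefs:
  - group: policy.linkerd.io
    kind: Server
    name: webapp
  rules:
  - matches:
    - path:
        type: RegularExpression
        value: /authors/\d+/edit   # hypothetical pattern for "edit an author"
      method: POST
```

Because multiple HTTPRoutes can attach to the same parent, this rule simply adds to what the earlier route allows.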
B
So in this case we're saying: show me all of the authorization policies that are affecting the webapp deployment in the booksapp namespace. We get to see the defaults, we get to see all the stuff about probes, and we can see our four different authorization things that are affecting various traffic going to the web app itself.
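The kind of query being described is, roughly, the viz extension's `authz` command — the invocation below is sketched from memory, so check `linkerd viz authz --help` for the exact flags:

```shell
# List the authorization policies evaluated for traffic to the webapp
# deployment in the booksapp namespace. Only servers/routes that have
# actually seen traffic will appear in the output.
linkerd viz authz -n booksapp deploy/webapp
```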
B
This can be a very helpful thing for trying to understand what's going on when things don't go the way you expect them to. A very important point here is that you're not going to see anything in this display that has not actually had an attempt to pass traffic through it. So if you're trying to look at this before anybody has tried making requests, then you might see less than you expect.
B
All right — things we did not do that would be great to do on your own (this is why we have the repo): obviously, we didn't allow editing books as well; we just did the authors. Also, when we allowed the authors and books services to talk to each other, and had the web app talk to the authors and books services,
B
we just opened those up with workload-based policy. So at this particular moment the web app can do anything at all to the books app or the authors app, and the books app and the authors app are just kind of trusting that the Ingress is going to protect them. There's no need for that.
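The workload-based policy being described would look something along these lines — the names and the service-account mapping are assumptions; the booksapp manifests in the repo are the reference:

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: booksapp
  name: webapp-id
spec:
  identityRefs:
  - kind: ServiceAccount
    name: webapp          # identity rooted in the webapp service account
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: booksapp
  name: authors-allow-webapp
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: authors         # the whole authors workload, not a single route
  requiredAuthenticationRefs:
  - group: policy.linkerd.io
    kind: MeshTLSAuthentication
    name: webapp-id
```

Because the `targetRef` is a Server rather than an HTTPRoute, anything presenting the webapp identity is allowed to do anything to the authors workload — which is exactly the gap route-based policy would close.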
B
You could go through and do route-based things as well in the mesh to protect the services further. But that, I believe, is it for the demo. One quick point: Buoyant, the creators of Linkerd, do the Service Mesh Academy every month, where we do further deep dives into things like this. Those are workshops rather than just, you know, discussions. The one that we're doing on — I don't know why this says November 11-17.
B
On November 17th, I believe it is — I will find out in just a moment — on November 17th, yes, November 17th, we will be doing a deep dive into Linkerd and init containers and CNI plugins, and it'll be a delight. And of course it's also worth noting that we offer Buoyant Cloud, to do fully managed Linkerd on any cluster. And yeah, that is all I have. So if there are any further questions, now would be a great time.
A
We can go to them, while someone might still be typing away, too.
B
Okay, what were our two questions there? Egress, and Istio and Cilium, yeah? Let's talk about egress first, because it's simpler. Linkerd doesn't do a lot with egress right now. It's another area of active discussion. We would be delighted to talk to people who have particular use cases or strong opinions about how it should work and what they think would be a really,
B
you know, Linkerd-ish way to do egress. It's an interesting problem. A lot of what Linkerd has focused on in the past is policy for traffic coming into a given workload, and a lot of what we're worrying about right now is traffic going out of a given workload. So egress very much fits into that, and we are very interested in figuring out how to make that work
B
well for people. Okay — Istio, Cilium, things like that. Full disclosure: I work for Buoyant, and Buoyant makes Linkerd. I work for Buoyant because I think Linkerd is a really good way to solve a bunch of these problems.
B
Cilium is interesting to me, mostly because I tend to believe pretty strongly that doing IP-based identity in a world like Kubernetes, where you don't actually control the network —
B
that worries me a lot. I know that the Cilium folks put a lot of effort into trying to make that safe, and I know some of the people at Cilium, and I think they're really sharp. So yeah, it leaves me in this interesting place where I have enormous respect for their engineers, and the fundamental concept that they're working with still kind of scares me when you think about zero trust.
B
So it's an interesting situation to be in. Istio is a little bit more interesting, mostly because — well, okay, I'm going to not talk much about Ambient Mesh, their new thing here, because there's a lot of stuff that we still need to dig into there. Under the hood, Istio and Linkerd actually do a bunch of things in fairly similar ways when you start talking about identity. I just find that Linkerd tends to be way, way simpler, operationally speaking, than Istio. And especially when you're dealing with security, operational simplicity is a security feature
B
and operational complexity is not. So I tend to opt towards the simplest thing that can get the work done, as opposed to the most complex thing that might be able to do things down the road that I don't need right now. And yeah, if there's other follow-up on that, then I'm happy to see it here or, you know, hit me up on Slack.
B
Speaking of Slack — sorry, let me paste that as well. Here's the link to our Slack.
A
That'll give you a good ending note today — continue the discussion there, or in the, yeah —
B
GitHub repo. Can we paste the GitHub link?
A
B
Let's just — let's just cheat, then, shall we.
A
Yeah, that works — and okay, links again as well.
A
And we are wrapping up soon — yeah, perfect. Well, this is going to be a good way for everyone to wrap up: the links.
B
There we go. With that, yeah — that should be simple there. And yeah, Hugo —
B
hit me up on the Linkerd Slack — you answered the question, so we should get you a Linkerd hat. I hope we have some Linkerd hats left, but yeah, ping me on Slack and we'll sort it out. The — oh, there we go, that got posted as well. Again, thank you. What was the other thing I was gonna say? Right.
A
Perfect. Thank you everyone so much — I think that was a good wrap-up for today. As always, thank you everyone for joining the latest episode of Cloud Native Live. It's —
B
A
Great to have — yeah, it was great to have Flynn here talking about zero-trust network policy with Linkerd, and we also really loved the interaction from the audience: great questions and great answers as well, so I like that it goes back and forth. As always, we bring you the latest cloud native code every Wednesday, and in the coming weeks we have more great sessions coming up, so stay tuned for those. Thanks for joining us today, and see you next week.