From YouTube: Cloud Native Live: Emissary and Linkerd - How to integrate your Service Mesh with K8s Ingress
A
Hosting today's show. Every Wednesday we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. This week we have Jason and Daniel here to talk to us about Linkerd and Emissary. Before we get to that, just a quick reminder that this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat, or ask any questions, that would be in violation of that code of conduct. Basically, just be nice to each other. So hi, Jason and Daniel, hello! Would you like to introduce yourselves?
C
Oh, okay, sounds good. Well, hey everyone! My name's Jason. As you can see, there's my Twitter handle, if you're looking for uninteresting comments on Twitter. I do technical evangelism for Linkerd, so I talk to folks about the open source project, why it's great, and why you should use it.
B
Hey everyone, Daniel Bryant, Director of DevRel at Ambassador Labs. You may know us formerly as Datawire; we rebranded earlier in the year. My background is Java development, and I moved through to solution architecture and did a bit of operations. I was always the build person, the classic thing, right? And I fell in love with Kubernetes when it came out. So, much like Jason, I spend my days these days chatting about the goodness of the cloud native tech scene and helping folks learn, because sometimes it is quite complicated.
A
Great, all right. So let's start with the basics, which is a little bit about Linkerd and a little bit about Emissary.
B
Sure thing. So Emissary is a CNCF incubation project; we got accepted a few months ago now, so we're in the process of moving everything across. You can check out the emissary-ingress repo on GitHub, with a lot more awesome stuff there and links to getting started, which we'll run through in just a moment. But if you are looking for an Envoy-powered ingress, I recommend Emissary-ingress right there.
B
Others do exist, I should say, both in the CNCF and the general ecosystem, but it's fundamentally a way to get user traffic to your backend services. So whatever you're doing, you always need to get that user traffic in, whether it's literally browser traffic, or mobile app traffic, or maybe curl, that kind of stuff.
B
You need to get that traffic through to the backend services, and an API gateway, or ingress, is somewhat of an overloaded term, but we often handle a lot of cross-cutting concerns, or non-functional requirements as some folks call them, at the ingress, at the gateway. So things like TLS, transport level security, things like rate limiting, things like auth: all that goodness kind of centralized at a single point. It's separation of concerns, because then your backend apps do not have to worry about all those good things.
B
So that's why you want to look at an API gateway and ingress: to handle what you traditionally call north-south traffic. When we used to draw our network diagrams in vertical space, the user was at the top, with traffic going north-south at the edge of your data center. And that nicely leads into what the service mesh is, because that deals with east-west traffic. Over to Jason.
C
Yeah, I muted myself right when I was ready to start talking there. So yeah, great segue. Linkerd is a service mesh, for those that haven't heard the term before, or at least have heard the term and maybe don't have a great sense of it.
C
A service mesh is essentially what you get when you add a number of proxies, little load balancers, in between every service in your environment. So in Kubernetes we use what we call a sidecar model to put a little proxy beside every one of your applications inside your cluster, and then those connections between the proxies make up a mesh of services, where the proxy can handle things like encrypting...
C
...the traffic between services, or introducing failures, or adding metrics. So generally, think of the features of a service mesh as being related to security, reliability, or observability: making it so that more of your calls succeed, and giving you better insight into your calls. If you've got 20 different apps in five different languages, you don't have to have everybody put in metrics instrumentation.
C
Instead, the proxy collects standard metrics about everything and then feeds that up into the control plane for that mesh. When you're talking about a service mesh, we talk about the control plane and the data plane: something for humans or computers to interface with the mesh, and then the actual layer that carries your data. So Linkerd is the original service mesh, or at least we say it's the original service mesh. I work for Buoyant, the folks that make Linkerd.
C
It has recently graduated from the CNCF, which means it's a CNCF project and has met the criteria for graduation. That happened, I think, two or three weeks ago, which is big news for us. (Congratulations!) Oh, thank you! Yeah, we're really excited. And then Singh asked in the chat if this talk is suitable for beginners, and I would say absolutely: what we're going to do is get started with Emissary and then get started with Linkerd.
A
Yeah, so we talked about some functional or non-functional features that both products deliver, and the differentiation between them is that one, which is Emissary, is mostly concerned with traffic that is coming from the outside world into your cluster, and the other one is Linkerd, which is mostly concerned with what happens within the cluster after the traffic is there. Right? Absolutely.
A
All right, good. So how easy is it to make the two work together?
A
All right, yeah, let's get to it.
A
Screen share?
B
Yeah, I think it was my bad, some browser issue there. Awesome. So just to recap, if you do want to pop along: like we mentioned, there's the getting started page for Emissary-ingress, and we've also got the GitHub repo, so you can pop along to emissary-ingress, scroll down, and all your getting started links are there as well. So it's a good resource. We'll share all of these links in the CNCF Slack channel later on as well.
B
So if you do miss them, don't worry, we'll share them. And then pop along to linkerd.io and see Jason's face on the front page here. He's famous, right? Linkerd famous and all. It's a nice jumping-off point to land on the Linkerd page here, and Jason also shared earlier the Linkerd 101, a kind of service mesh intro and how to get started. I've learned a lot from the Buoyant folks, from William and so forth, over the years.
B
The first time I heard service mesh being talked about was on the Buoyant website, I think, pretty much on the Linkerd website. So these are great references if you want to get started. I often talk about building mental models, where you have to understand the tech at a fundamental level before you really get the full value, and these blog posts are a great way to do that. Awesome. So if we go to, actually, this one here.
B
This
is
the
emissary
github
repo
scroll
down
to
the
getting
started.
You
can
jump
into
our
tutorials
here
getting
started,
I'm
going
to
install
with
helm.
I
think
this
is
the
easiest
way
to
get
started.
I'm
going
to
use
helm,
3
the
latest
version,
and
it's
the
easiest
jump
to
go
towards
sort
of
production-like
environment.
You
can
use
yaml,
you
can
use.
B
We've
got
a
like
a
cli
tool,
but
again,
if
you're
running
in
production,
you're,
probably
gonna
be
using
something
like
helm,
you're
actually,
probably
gonna,
be
using
helm
right,
so
we're
gonna
use
helm
to
get
started
with
ms
right.
So
if
I
click
install
with
helm,
I
have
in
my
browser
window
just
to
show
nothing
up
my
sleeves
here.
A
blank
kubernetes
cluster,
courtesy
of
siva
jason,
connected
us
up
the
this
morning,
so
you
can
see
here.
B
B
So I've installed Helm locally on my Mac, and I have added the datawire repo. We used to be called Datawire before Ambassador Labs, and that's why we've still kept the datawire branding here. And I will literally run this command here, helm install ambassador. I'm going to change it slightly, just to make it easy for some things we've got later on. I'll go to my cheat sheet over here.
B
I
wanted
just
to
put
a
name
space
in
there.
So
I
do
helm.
Install
ambassador
me
for
master,
looks
good.
You
do
the
enable
aes
false.
Oh
I've
left
my
config
file
open
my
bad
and
do
the
enable
aes
false
on
the
command
line,
options
for
helm
or
you
can
do
it
via
rammel
values.yaml
and
that
just
installs
the
open
source
emissary,
so
you've
got
a
commercial
offering
where,
like
open
core,
adds
more
value
on
top
but
emissary
the
open
source
project.
You
just
set
that
flag.
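A minimal sketch of the install flow Daniel runs here, assuming the chart repo and flag names from the Emissary/Ambassador docs of this era (your repo URL, release name, and namespace may differ):

```shell
# Add the chart repo (the project kept the old Datawire branding)
helm repo add datawire https://www.getambassador.io
helm repo update

# Install open source Emissary into its own namespace;
# enableAES=false skips the commercial Ambassador Edge Stack components
kubectl create namespace ambassador
helm install ambassador datawire/ambassador \
  --namespace ambassador \
  --set enableAES=false
```

The same enableAES value can live in a values.yaml passed with -f instead of --set, which is what Daniel alludes to.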
C
Let me break in for a second, Daniel. We've got a bunch of links in the chat, so the things that you see Daniel going through, or that you'll see me going through when we get to it, are all going to be covered in the links that have been shared.
B
Super, the install has all gone through. We're getting some warnings there because of different versions of Kubernetes and things being deprecated; I know that some of those are raised as issues that we're tracking as well. Depending on what version of Kubernetes cluster you're installing into, you may see different warning messages there. But that looks good. If I do k get service...
B
All
now
see
the
bottom
here,
we've
got
our
ambassador
admin
and
ambassador
also
up
and
running
great
stuff.
If
I
now
follow
the
instructions,
I
can
create
a
mapping
here,
we'll
install
first,
the
quote
of
the
moment
service
as
our
demo
service
I'll
copy.
That
text
pop
my
browser,
I'll
just
clear.
The
screen
to
move
everything
up
to
the
top
to
coupon
apply
that
link
that
should
install
deployment
and
service.
For
the
quote
of
the
moment,.
B
Yeah
great
point
great
point:
so
a
mapping
is
a
custom
resource.
We've
created
a
custom
resource
that
maps
a
uri
or
path
into
a
back-end
service,
so
I
haven't
actually
I'll
spin
it
up
in
just
a
second
I'm
literally
now.
I've
just
installed
my
service
and
my
deployment.
Hopefully
folks
are
roughly
familiar
with
that
in
kubernetes
land
we're
spinning
up
a
container
within
a
pod
with
a
deployment
within
a
service
and
then
to
your
question.
It's
all
you
can
see
here.
Here's
what
the
mapping
looks
like.
B
We
have
created
a
customer
source
as
part
of
the
helm,
install
we
define
what
a
mapping
is
and
the
mapping
is
quite
a
rich
construct.
You
can
start
super
simple,
which
we've
done
here.
If
I
bump
up
the
resolution,
we've
literally
said
create
a
mapping,
call
it
quote:
backend
and
prefix
slash
backend.
If
you
hit
the
ip
address
of
our
ambassador
service,
slash
backend,
you
will
be
routed
to
the
quote
service
running
on
port
80
by
default.
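The Mapping Daniel describes looks roughly like this, as in the Emissary getting-started guide (the apiVersion varies by release, so treat this as a sketch):

```shell
kubectl apply -f - <<'EOF'
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
spec:
  # Requests hitting <ambassador-ip>/backend/ ...
  prefix: /backend/
  # ...are routed to the quote Service, port 80 by default
  service: quote
EOF
```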
B
Yeah,
we
all
great
question:
we
also
support
ingress
and
so
the
it's
very
similar.
To
be
honest,
you
see
a
lot
of
the
other
ingresses
doing
their
own
sort
of
thing,
or
some
folks
use
annotations
to
define
the
the
routing
the
mapping
and
some
folks
have
created
customer
resources
like
we
have
so
we
when
we
created
him
well,
ambassador,
now
every
ingress
it
was
like
three
or
so
years
ago,
and
the
customer
resources
weren't
a
thing.
B
We
also
went
back
and
supported
the
ingress
spec
when
it
became
more
solid
as
well.
Folks
do
want
to
know
more
about
this.
I'm
happy
to
like
chat
sort
of
probably
a
bit
sort
of
separate
from
what
we're
talking
about
today,
but
there
is
quite
a
storied
history
of
kubernetes
ingress
and
because
it's
not
simple,
it's
the
honest
answer
like
getting
traffic
from
the
user
to
the
back
end
is
not
simple
and
the
like.
B
Good question, yeah. I see this day in, day out, but I forget that this is a new concept for a lot of folks, so it's a great question from Atai. Awesome, so we have installed our service, looks good, and now I'll just copy that... actually, I think I've already got that set up. If I... oh, where's my terminal gone? There we go, my laptop's struggling today. If I just bring up ll, you can see I've actually got the quote-backend.yaml already.
B
If
I
guess
that-
and
you
can
see
here
exactly
what
we
had
on
that
on
the
interwebs
right,
so
I've
just
saved
that
locally.
Let's
get
rid
of
that.
If
I
now
do
okay
apply,
file
quote
backend
looks
good,
we've
got
a
mapping
and
because
it's
a
custom
resource
I
can
do
k
get
mapping
like
that.
That's
I
think
it's
quite
cool
when
you
you
know,
regardless
of
the
project.
If
it's
got
a
customer
source,
it's
kind
of
kubernetes
native
right,
you
can
get
extra
info.
B
I
could
describe,
for
example,
the
mapping
and
looking
for
more
info,
so
super
useful.
You
know
that's
why
I
think,
following
along
with
the
sort
of
kubernetes
native
way
and
jason
will
touch
more
on
this
later
on,
you
know
both
emissary
ingress
and
link
d,
really
embrace
the
kubernetes
resource
model,
the
kubernetes
way
of
doing
things,
and
it
makes
our
lives
as
developers
and
operators
that
much
easier,
because
it
kind
of
follows
the
principle
of
least
surprise.
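Because Mapping is a CRD, the standard kubectl verbs Daniel mentions work on it unchanged, which is the "principle of least surprise" point; a quick sketch (resource names assume the quote-backend mapping above):

```shell
# List Mapping custom resources like any built-in kind
kubectl get mappings

# Drill into one for routing details and recent events
kubectl describe mapping quote-backend
```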
A
There was a question about how mappings are impacted by using network policy.
B
Yes. The mappings are sort of more fundamental: you're literally mapping a path, or some other details, onto a backend service. If you want to layer on additional security, you can do some of this via service meshes, so Jason can answer some more of that as well, and you've always got to bear in mind other policies you've got in place. But if you're learning this, I would start with a kind of blank cluster, like we're doing here, and layer up your learning: get your routes...
B
First,
on
get
your
mappings
very
basic
front-end.
You
know
user
to
back-end
service
layer
in
your
service
mesh,
see
all
the
value
you
get
there
and
then
start
looking
at
things
like
a
lot
of
folks
who's
like
calico
right,
one
of
the
other
different
lower
level
constructs
of
networking,
oper,
open
policy
agent,
super
popular
you
can
layer
all
those
things
on
to
add
extra
security,
add
extra
protection
and
they
are
great
for
production
use
cases.
But
if
you're
learning
my
advice
is
start
small
and
layer
it
on
top.
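As an illustration of the "layer it on later" advice: a NetworkPolicy sits underneath the mappings, at the network layer, restricting which pods may connect at all. This hypothetical policy (the app label and the namespace label, which recent Kubernetes versions set automatically, are assumptions) would allow only the ambassador namespace to reach the quote pods:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quote-from-ingress-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: quote               # hypothetical label on the quote pods
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ambassador
EOF
```

The mapping itself is unaffected; a policy like this simply decides whether the routed connection is allowed to land.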
C
And
just
add
a
tiny
bit
there
right
so
network
policy.
Is
that,
like
it's
really
like
layer,
3
really
gets
like
firewall
and
components
like
that.
So
when
I
say
layer,
I'm
talking
about
the
the
osi
model
right
in
like
the
different
layers
of
your
network
stack
when
you
think
of
an
ingress,
or
in
this
case
emissary
and
its
mappings,
that's
all
really
like
layer,
7
stuff,
so
up
at
the
up
the
application
side.
C
B
Yeah, good job, Jason. And if you are using a cluster within, say, your commercial environment, your company, you may bump into exactly what Jason said there, where by default certain network policies do disallow things. So that's totally worth checking. If you can start with something local, maybe even kind or minikube, that can remove all that challenge for you. Awesome, let's go and grab the IP of your Emissary-ingress. So I've literally copied this, popped back into my terminal, and I'll just clear the screen again to make it a bit more obvious. Oh, I've missed my namespace; put the namespace in there, -n ambassador, like so. And hopefully, if I echo this... this is famous, right?
B
If
I
just
echo
for
you
folks
watching
what's
going
on
there,
we
can
see,
we've
got
an
ip
address
and
if
again,
if
I
was
just
to
do,
k
get
service
and
just
do
all
and
you
can
actually
see
our
ambassador
pod
within
the
ambassador
namespace,
that's
the
service
right
within
the
best
name.
Space
has
got
the
external
ip
we've
set
it
up
as
a
load,
balancer
type
service.
B
That's
why
this
is
a
little
cheat
sheet
just
for
very
quickly
getting
the
ip
address
of
of
your
ambassador
instance
awesome,
let's
pop
back
to
the
web
page.
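The cheat sheet Daniel mentions is typically a one-liner like this (service and namespace names assume the Helm install shown earlier):

```shell
# Grab the LoadBalancer IP of the ambassador Service
AMBASSADOR_IP=$(kubectl get service ambassador \
  --namespace ambassador \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$AMBASSADOR_IP"

# Then exercise the mapping through the edge
curl "http://$AMBASSADOR_IP/backend/"
```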
B
Our
lead
ambassador,
emma
ingress
engineer,
it
comes
with
some
very
witty
quotes,
hopefully
there.
So
that
is
pretty
much
there's
some
other
stuff
you
can
follow
below.
If
you
want
to
go
through,
I've
talked
about
k,
get
mappings.
If
you
do
want
to
set
up
tls
there's
a
nice
page,
we
can
share
the
link
here.
Using
search
manager
and
the
ambassador
edge
stack.
Does
a
lot
of
this
stuff
automatically,
but
emissary
is
the
open
source
project
best
to
use
cert
manager.
B
I've
got
one
pre-baked
which
I
can
share
later
on,
if
you
like,
but
if
you
do
want
tls
termination
at
the
edge
with
a
free,
tls,
cert
loving,
let's
encrypt,
loving
the
acme
protocol,
and
this
guides
you
through
using
helm
and
the
jet
stack
cert
manager
and
gets
you
all
installed
and
all
set
up
there.
I
won't
do
that
now
I'll
move
on
to
linker
d
install,
but
but
I
always
advise
tls
if
you're
in
prod,
for
example,
I'll
pause
it
just
anything.
C
Just for folks that are watching: what we've got now is traffic from the internet to our cluster. So that's our north-south. We've gone to the front door of the cluster, which is Emissary, and our mapping tells our traffic where to go. And now we're going to add Linkerd.
B
And as you said, Jason... you know my background's Java, but I did a lot of Go and Ruby, and I remember, when I worked on my first microservices projects, I had to re-implement all the things Jason just mentioned in language-specific libraries. If I wanted observability in Java, I needed a Java library; for the Ruby services, a Ruby library; and a Go library. Linkerd, by abstracting some of that into the proxy, means that as a Java developer, as a Ruby developer, as a Go developer, I don't need to worry about those individual libraries.
B
Now
and
more
importantly,
I
don't
need
to
maintain
them,
because
linkerd
maintains
that
for
us.
So
I
remember
when
I
first
bumped
into
linkery.
I
was
like
this
is
awesome,
absolutely
as
a
polyglot
type
programmer
right
cool,
so
I've
now
fired
up
the
link
d
2.10
getting
started
guide
again,
we'll
share
the
links
in
the
channel
I
have
already
installed.
I
know
my
cubecat
version
is
good
to
go
nice
to
check
that
in
the
docs.
B
I've
already
installed
the
latest
link
d
cli,
just
because
downloading
on
the
interwebs
can
be
a
bit
dodgy
when
we're
doing
live.
Demos,
so
I'm
all
set
there
I'll
actually
now
start
from
the
link
d
version,
so
I'll
copy
that
back
to
my
terminal,
I'll
clear
the
screen
again
just
to
make
it
a
bit
easier
to
read.
I
just
pop
in
link
a
d
version
good
to
go
right,
2.10
client
version
excellent.
B
I
will
be
shopping
and
changing
here,
but
I'll
go
back
to
I'll,
see
preflight
checks.
Of
course,.
C
B
I
love
running
the
checks
as
well
like
because
it
just
it
just
looks
so
good
right,
and
so
I'm
a
big
fan
of
that
luca
gcli
is
super
easy,
so
right
I'll,
just
paste
in
now
the
the
install
command.
Oh,
I
also
did
not
copy
paste.
B
Well
then,
if
I
scroll
down
in
the
background,
we'll
then
run
our
checks
again,
once
linkedin
is
installed
again,
you
get
that
nice
visualization
that
nice
feedback.
So
I
did
a
demo
with
thomas
rampelberg
for
kubecon
or
eu.
I
think
a
year
or
two
ago
where
we
did
multi-cluster
link
d
and
the
checks
which
is
fantastic.
B
There,
we're
not
going
to
dive
into
multi-cluster
much
today,
but
the
linkadi
checking
commands
like
they
can
seem
sort
of
trivial
at
times,
because
you
see
lots
of
green
ticks,
but
when
stuff's
going
wrong,
they're
super
useful,
so
I'll
run
that
again,
it's
really
nice
to
check
all
the
pods
are
up
and
running.
When
you
do
a
multi-cluster,
can
you
talk
to
each
other?
Can
the
connections
do
the
connections
work?
So
do
not
underestimate
the
value
of
these
check
commands.
They
are
super
useful.
C
Just a little bit of talking while we're seeing this go through. Somewhere in there is the command linkerd install, piped over to kubectl apply, so that is generating all the YAML that you're going to use for the install. We've also got examples doing it with a GitOps flow, or installing via Helm.
C
...really easily. And then it'll just generate standard YAML that you can save and share out. I guess that's really the big thing with the install command: if you don't pipe it over to kubectl apply, it's just going to output a bunch of YAML right to your terminal that you could save off somewhere.
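The flow the two of them describe, put together as a sketch (2.10-era CLI commands; linkerd check is the green-tick output Daniel praised):

```shell
# Verify the CLI and that the cluster is ready for Linkerd
linkerd version
linkerd check --pre

# `linkerd install` only renders YAML; piping it to kubectl
# is what actually applies the control plane
linkerd install | kubectl apply -f -

# Or keep the manifest for a GitOps flow instead of applying directly
linkerd install > linkerd-control-plane.yaml

# Re-run the checks once the control plane pods are up
linkerd check
```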
B
Loving the GitOps flow, Jason. When we're in prod we typically would do that, because it's easier to manage, easier to upgrade, and so forth, but I love the CLI for getting started. Plus one on the YAML install, though. The status checks look good, and just to Jason's point, if we just do k get service for all namespaces, you can see there's a lot more stuff installed now: we've got our ambassador, and we've got our quote service in the default namespace.
A
So while this is running, we can maybe address one question here about whether Linkerd can provide functionality for non-HTTP applications, and actually there's a good opportunity to clarify what kind of proxy Linkerd is using.
C
So
it
is
a
great
question.
Thank
you
very
much.
So
linker
d
is
actually
I
don't
know
if
it's
unique
among
the
service
meshes
but
they're
they're,
most
service
meshes
use
envoy
as
that
sidecar
proxy
right,
linker
d
does
not.
Lingerie
has
a
custom.
A
custom
built
proxy
called
the
linker
d2
proxy,
it's
written
in
rust,
and
we
can.
I
can
talk
about
that
in
way
more
detail
than
anyone
here
probably
wants
right
now,
but
it
is
it's.
It's
very
fast
and
it's
very
it's
very
simple.
So
for
non-http
traffic
we
can
get.
C
We
can
get
metrics
for
tcp
connections,
so
I
can
probably
show
some
of
that
when
we
do
the
demo,
although
may
have
to
take
that
offline,
but
we
can
show
when,
when
you're,
when
you
are
a
non-http
request
or
non-grpc
request,
where
we
can
get
get
some
details
around,
that
the
bulk
tcp
stream,
but
you're
not
going
to
get
like
request
level
information
right,
because
it's
just
it's
just
a
bulk
connection,
going
back
to
that
osi
layer
thing
right.
If
it's
layer
four,
we
can
tell
you
that
it
connected
and
how
much
data
is
going.
C
But
that's
really,
that's
really.
All
we're
gonna
see
without
understanding
how
to
read
the
underlying
protocol,
so
you
generally
get
it
for
you
generally
get
interesting
information
for
http
and
grpc
traffic.
B
Very
nice,
very
nice,
mr
sheamus
plugs.
Well,
I
did
a
podcast
with
oliver
from
buoyant.
My
colleague,
where's
rice
from
infoq
did
another
podcast
with
william,
I
think
as
well,
and
so,
if
you
do
want
to
know
more
details
about
the
rust
proxy
and
like
why
the
why
buoyant
chose
rust
and
how
the
libraries
around
it
evolved.
I
learned
a
bunch
from
oliver
and
williams
so
check
out
this
podcast
on
infoqueue.
If
you
do
want
to
dive
in
because
I
think
it's
just
super
interesting
to
know
about
the
tech,
all
right.
B
But
yeah
nice.
Let
me
stop
sharing
my
screen
a
bit
find
out
and
restream
voila
all
right,
so
we're
sharing
our
sivo
cluster
just
by
the
koopa
config
files,
so
jason
we're
jumping
in
following
my
footsteps.
C
All
right,
let
me
know
when
we
can
see
my
terminal.
Let
me
see
all
right
so
right
now
right.
I
should
probably
bring
that
back
so
right
now,
we've
got
a
bunch
of
stuff
running
right
in
the
environment,
come
on
a
sec.
I've
got
a
little
laser
pointer
that
I
try
and
show
off
every
chance
I
get
so
we
have.
We
have
a
bunch
of
things
going
on,
so
we've
got
linker
d,
the
lingerie
pods
right.
So
these
are
the
components
that
make
up
the
control
plane
for
our
service
mesh.
C
We
also
have
the
liberty
vist
components
which
are
which
are
the
the
dashboard
right,
which
I
I
can
show
you
all
in
a
minute
brian.
This
is
this.
Is
the
dashboard
is
a
nice
way
to
visualize?
That's
why
we
use
the
word
bits
what's
happening
inside
your
class
there
right
but
you'll
note.
All
of
them
have
like
when
we
see
ready.
C
We
see
two
of
two
right,
so
that's
that's
two
par
or
two
containers
per
pod
and
the
reason
there
are
two
is
because
there
is
both
the
the
app
that
does
whatever
thing
it's
supposed
to
do,
then,
the
linker
d
proxy
sitting
beside
it
right
so
that
we
can,
we
can
have
it
in
the
mesh.
So
our
ambassador
pods
we've
got
our
three
ambassador
pods
in
that
deployment.
None
of
them
are
in
the
mesh
same
thing
quote
of
the
day.
C
Right
is
not
meshed.
So
if
I
pop
out,
I
didn't
think
I
was
going
to
need
this,
but
if
I
pop
pop
out
another
window
and
put
my
dashboard,
I've
got
one
right
here.
So
lingerie
no
give
me
just
one
sec:
export,
yeah
liberty,
viz,
dashboard.
Sorry,
that's
so
small!
Let
me
make
that
a
little
bit
bigger
if
I
pop
open
a
dashboard
here
right
I'll,
be
able
to
see
into
my
into
my
cluster,
but
I'm
not
actually
going
to
see
anything
anything
interesting
right
cause.
C
It's
just
like
there's
very
little
in
the
mesh
beyond
lingerie
itself
right,
but
we're
gonna,
we're
gonna
fix
that.
So
we're
gonna
we're
gonna
inject,
both
our
application
and
and
ambassador.
So
let's
do
that.
So,
let's
start
off
with
with
quote
of
the
day,
so
let's
do
k,
get
deploy,
dash
and
default
right.
It
was
in
the
default
namespace
right
we've
got,
we've
got
quote
right,
so
we'll
just
specify
it
we'll
we'll
output
it
as
yaml
right.
C
So
we're
going
to
put
the
the
deployment
details
as
yaml,
so
I
can
use
the
linker
dcli
to
add
the
proxy
to
the
cluster
and
we'll
all
all
we're
doing
right
when
we
do
it
is
adding
an
annotation
to
the
pod
spec
that
says,
linkery
inject
enabled
so
add
the
linguity
proxy
to
this,
and
then
linkery
will
do
the
rest
object.
Yes,.
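The inject step for the demo app, as a sketch: the CLI just rewrites the manifest to carry the linkerd.io/inject annotation, and kubectl sends it back to the API server, which rolls the pods with the proxy added.

```shell
# Fetch the deployment, annotate it for injection, apply it back
kubectl get deploy quote --namespace default --output yaml \
  | linkerd inject - \
  | kubectl apply -f -
```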
C
All right, so what we're doing here, this is the ingress. In general, when we inject a service inside our Kubernetes cluster, or a pod, or a deployment, we want to get traffic both incoming and outgoing from that pod, because that pod is generally going to be talking to other things inside the mesh. The Emissary ingress, or any ingress that you're using, doesn't actually...
C
Actually, from a service mesh perspective, we're never going to care about traffic coming into the ingress. We care about east-west traffic, so traffic between services in your cluster. That's a lot of words for: let's just skip inbound web traffic to this thing, because we don't really care about it, and then we're just going to go ahead and pass that right back to the Kubernetes API. So it's the same inject command with a little bit of extra flavor.
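For the ingress itself, the same pipeline carries that "extra flavor": a flag telling the proxy not to intercept inbound edge traffic, which the mesh doesn't care about. A sketch, assuming the ingress listens on 80 and 443 (check your own Service's ports):

```shell
# Mesh the ingress, but skip inbound 80/443 so north-south traffic
# bypasses the proxy; east-west calls out of ambassador stay meshed
kubectl get deploy ambassador --namespace ambassador --output yaml \
  | linkerd inject --skip-inbound-ports 80,443 - \
  | kubectl apply -f -
```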
C
Cool. It gave me a warning because we created this with Helm, not with a kubectl apply, but it's totally fine. So now we can do k get pods -n ambassador, and we see that there are new pods spinning up for ambassador, now with two of two: we have the normal ingress plus a Linkerd proxy. And in a minute, if we keep refreshing this...
C
We're
going
to
start
to
see
traffic
coming
from
ambassador
through
to
through
to
our
through,
to
our
quote
of
the
quote
of
the
moment
for
the
day,
give
one
sec.
Let's
force
this
guy,
to
refresh
a
little
bit
faster.
C
So,
let's
see
how
long
this
is
going
to
take
give
me
just
one
sec
can't
get
pods
and
ambassador
all
right.
Well,
we've
got
this
pre-baked.
How
I'll
actually
is
now
a
good
time
to
swap
over
clusters.
B
C
So on this cluster we did the same thing, and we can do k get pods -n ambassador. On this cluster we're actually using the Ambassador Edge Stack, because I like it; it's got some features that I use an awful lot. But we see that the ambassador ingress, and in this case I only have the one pod, is injected, and now we can actually see some traffic. Now, what we've done: you saw the mappings that Daniel showed earlier.
C
We
can
actually
get
our
mappings
right
and
you
know
the
thing
I
love.
I
love
about
crds
or
custom
resource
definitions
is
like
all
my
native
kubernetes
tooling,
like
continues
to
work
the
way
I
expect
right.
So
I
just
tap.
I
don't
know
mappings.getambassador.io
right.
I
just
start
typing
map
hit
tab.
It
completes
for
me.
I
want
to
look
in
all
namespaces,
it's
the
standard
cli
that
I'm
used
to
and
I've
got
a
bunch
of
stuff
going
on
like
I,
I
made
it
easy
on
myself.
Instead,
I
have
to
do
that.
C
Linker
d,
viz
dashboard
actually
just
hit,
and
actually
anyone
who
feels
like
it
can
just
hit
this
dashboard.sibo.59io
and
you'll
see
this
there
we
go.
I
knew
that
the
link
somewhere
you'll
see
this
right.
So
we've
got
you
know
we
can
see
ambassador
itself
right
who
it's
so
we
can
see
the
deployment
we
can
see
what
what
ponds
that
deployment
is
talking
to
right.
C
We can see the total number of requests per second heading through, our response time, and every endpoint that it's going to. So we're hitting the quote service and it's responding really quickly. So, going and looking at the same thing that Daniel just showed us, or the same thing that Daniel just installed...
C
...we can see that from Ambassador we're getting a GET method to the root of this path. I changed it from /backend just to the root, so it would be a little bit easier to look at, and we're entirely successful. We could tap live traffic if we wanted to, so let's just see what's coming in. And this isn't stuff that's instrumented in quote; we didn't have to put in a metrics library.
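The live-traffic views Jason is clicking through have CLI equivalents in the viz extension; roughly:

```shell
# Golden metrics (success rate, RPS, latency) per deployment,
# collected by the proxies with no app instrumentation
linkerd viz stat deploy --namespace default

# Stream live requests flowing through the ambassador deployment
linkerd viz tap deploy/ambassador --namespace ambassador
```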
C
We
don't
have
to
do
anything
anything
special
right,
we're
just
getting
this
data
because
the
proxy's
there
in
our
space
we
also
haven't
created
like
while
we
do,
need
a
mapping,
a
custom
resource
for
the
mapping
right
inside
linker
d.
Everything
else
is
just
oh
great,
I'm
getting
an
error,
but
we're
gonna
close
that
right
inside
inside
linker
d
right
because
it
just
works
with
kubernetes
native
services
and
kubernetes
constructs.
C
I haven't created a virtual gateway or service or anything special. I'm using Kubernetes services, and I'm using a standard mapping, or an ingress if I want to use an ingress via Emissary, although I find the mapping really easy to use. So I use the mappings, and all our stuff just continues to work the way we expect, but marginally better, and that's the integration point. So we were digging...
C
...I get the high success rates, and where do I have apps that have a problem? All done, all integrated with that ingress, with no special configuration. So I think that's pretty cool, and that's the bulk of what we really wanted to show today.
B
I love that as a key takeaway, Jason, because that's something we discussed: this is really easy, it just works. But again, that's the power of standardization, typically at the CNCF, with all the great work going on here. If you follow the Kubernetes resource model, and I know you perhaps want to talk about SMI, the Service Mesh Interface, and all these good things...
B
If
you
follow
the
standards
like
it
kind
of
just
works,
or
it
should
just
work
and
mostly
does
and
you're,
also
not
locked
into
certain
things
as
well.
If
you
embrace,
you
know,
that
is
one
argument
for
some
folks
using
the
ingress
rather
than
the
mapping
custom
resource,
because
our
cust,
our
mapping
customer
source,
is
not
directly
interchangeable
with,
say
you
know
another
ingress,
for
example,
but
in
reality
like
what's
the
chance
of
swapping
out
ingresses,
I
remember
back
in
my
java
days.
B
I always wrote defensive code around databases being swapped out, and in my 20-year Java career I think I swapped out one database — Postgres for MySQL — and that was for a completely custom reason. So look at your abstractions. But again, I'm with Jason — I'm obviously biased, but for me the Mapping resource is super simple, whereas the Ingress stuff tends to be more complicated: powerful, but more complicated. I'm a big fan of minimal code: the less code I write, the less config I write, the less stuff I maintain.
C
Yeah, and to expand on that: with Ingress, people think the implementations are interchangeable, but they're not, right? There are bits in every ingress that are going to be specific to the one that you're using. So they tried to — I know the networking group has worked on IngressRoute and expanding that — or was it the Gateway API spec? Yes, the Gateway API, which I believe Emissary fully supports, or at least is planning to, yeah.
C
So any of that is changing anyway, right? So I don't think there's any concern about using either a little bit. Well, here, let me just show a Mapping. I've got like five mappings in this document here, so let's just do that.
C
Let me close this out. So here's the quote Mapping. If you're familiar with Ingress, this isn't crazy complicated, right? We give it a name; the prefix that we're using, so what path are you going to hit on the API; what host name do I want to respond to; and then what Service am I going to route to. And that's the extent of it — it's a pretty simple and straightforward thing. Here's a complex one!
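A minimal Mapping of the kind described might look like this — a sketch, assuming the `quote` demo Service from the Emissary quickstart; the names here are illustrative, not necessarily what was shown on screen:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"        # which Host header(s) to respond to
  prefix: /backend/    # the path to match on the API
  service: quote       # the Kubernetes Service to route to
```

Apply it with `kubectl apply -f` and Emissary picks it up without a restart.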
C
So here we see the — let me make that a bit bigger. Here's the one for our dashboard. The Linkerd dashboard uses WebSockets, so we have to get a bit more complicated. I'll never forget: I was working with a different ingress at one point, trying to get WebSockets to work over it, and the pain and suffering I went through, going through the docs trying to get the config right for that particular component, was pretty high.
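For WebSocket traffic, the Mapping gains an `allow_upgrade` stanza. A sketch, assuming the Linkerd Viz dashboard's `web` Service — the service name, namespace, and port here are assumptions based on a default `linkerd viz` install, and the real dashboard also checks the Host header, so extra configuration such as `host_rewrite` may be needed:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: linkerd-dashboard
spec:
  hostname: "*"
  prefix: /
  service: web.linkerd-viz:8084  # assumed Linkerd Viz dashboard Service
  allow_upgrade:
  - websocket                    # permit the HTTP Upgrade to WebSocket
```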
A
Well, that's really awesome, I have to say. And if anyone has any questions, you can type them in chat. In the meantime, what other things could we do with either Emissary and the Mapping, or Linkerd?
C
Yeah, so all sorts of stuff. So on this one, every one of these URLs has a valid — y'all can hit them up in the chat, right — they've got a valid HTTPS server that I didn't do anything for beyond, you know, defining a Host. So this was actually with Edge Stack: I just define a Host.
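That "just define a Host" step is roughly this — a sketch with placeholder hostname and email; the built-in ACME integration shown here is an Edge Stack feature (plain Emissary-ingress would pair a Host with something like cert-manager instead):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: demo-host
spec:
  hostname: demo.example.com    # placeholder DNS name
  acmeProvider:
    email: admin@example.com    # placeholder contact for Let's Encrypt
```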
C
Please don't spam me, folks. I just put in the host name, and then it's going to go through and auto-generate certificates for me, which is really, really super handy. And then the other thing that I love — what I really love about this integration — is this: for us,
C
it can sometimes be a challenge with an ingress to integrate directly, because we have to say: hey, should it do some non-standard behavior? Here's how we override it so that it works well with the mesh — especially when we start getting into SMI constructs. SMI is the Service Mesh Interface, which allows you to do more complex things with traffic than —
A
Just a quick clarification for those who don't know: SMI is a different CNCF project, right, that abstracts over service meshes, and Linkerd is one of them. Just to clarify for those who don't know. Yeah.
C
Yeah, sorry about that, thanks for adding that detail. So they rely on this kind of — we call it an apex service — inside Kubernetes, so that you can shift traffic around. Well, the service mesh really needs to handle that, right, and it does it with intelligent routing.
C
Now, with Emissary, because it defaults to routing to the cluster IP, we can handle things like multi-cluster or complex rollouts with no special configuration. But if, for a particular service, you want to do sticky sessions, or you want to route to a particular pod based on some criteria, you can do all that in Emissary and still have Linkerd carry the default behavior for the rest of your traffic, with no special configuration beyond what you do in Linkerd.
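The sticky-session case can be sketched as a Mapping that opts a single route into endpoint routing and cookie-based affinity. This is a sketch: the route and cookie names are made up, and switching a route to the endpoint resolver changes how it interacts with the mesh, so test the combination:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: sticky-backend
spec:
  hostname: "*"
  prefix: /sticky/
  service: myapp              # hypothetical Service name
  resolver: endpoint          # route to pod IPs instead of the cluster IP
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-session   # affinity cookie managed by Emissary
      ttl: 60s
```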
B
Just to get a bit more concrete, I guess — we said we love all the service meshes, right, but the native sort of integration is much simpler when everything dials into Kubernetes best practice. A lot of the service meshes will use endpoint resolvers — Endpoints in Kubernetes — as opposed to looking at the Services and getting the metadata, the IP addresses, and there are good reasons for doing that.
B
But it also adds a lot of complexity on top, and I've felt the pain, Jason. I won't mention any service mesh names, but there were a few moments of "why is it doing that?", and it's just because the responsibilities overlapped, right — the north-south and the east-west, the ingress and the service mesh. If I anthropomorphized it, it was almost like an argument between the two things: that's my responsibility.
B
Now it's my responsibility — whereas there's a clearer separation of concerns with Emissary and Linkerd, because we literally hand off at the abstraction points that are native to Kubernetes, the Services in this case. And if folks haven't bumped into things like Endpoints, and maybe haven't gone deeper into Pods, it's worth a little google — the Kubernetes docs are fantastic — just understanding how Endpoints and EndpointSlices work, particularly when you go to more multi-cluster stuff. I learned a bunch from Thomas when I was learning about Linkerd multi-cluster, you know.
B
Encryption, yeah, for sure. And in some ways, one thing I often say to folks I'm chatting with in the community is: when an ingress is doing its job well — and I think this really applies to service mesh too — you actually don't notice too much about it. So doing demos for all these things is really hard, because it should just work. But you always want to think about things like security; that's a big one, 100% what you said there. So, transport-level security: Jason showed you Edge Stack with integrated —
B
Let's Encrypt, ACME protocol support. You can use cert-manager with Emissary-ingress — I've got a demo up and running of that. So you definitely want to secure the transport layer, the TLS, and that's super easy to do. The next thing you typically want to do is integrate authentication. We've got some demos on the Emissary-ingress site of using a very simple authentication service that we've written, I think in Go or Node.
B
I think it's in Node, and it uses basic auth — actually the Express framework in Node and basic auth — and it's a really simple way to just do authentication at the edge, because Emissary exposes ext_authz; it's almost like an API, a standard Envoy-type interface. So you can plug in anything that implements that external auth API. We've got some commercial offerings in that space, there are open source offerings, there's stuff on the interwebs — be careful.
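Wiring an external auth service into Emissary looks roughly like this — a sketch; `example-auth:3000` is a hypothetical Service running something like the Node/Express basic-auth demo mentioned above:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: "example-auth:3000"  # hypothetical auth backend Service
  path_prefix: "/extauth"            # path Emissary calls on that backend
  allowed_request_headers:
  - "x-example-session"              # example header forwarded to the auth service
```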
B
What you choose matters, because authentication is super important, right. Just bear that in mind: if you're pulling stuff down off GitHub and you think it's doing auth for any ingress, double-check it, because if the auth is compromised, it's game over — really quite tricky there. But we do expose the standard APIs for authentication. We also expose the Envoy rate limit API in Emissary-ingress too, so rate limiting — which is sort of closely related to security, because obviously you want to secure your transport.
B
You want to authenticate and authorize the human coming in, but then you might want to stop things like denial of service, and people accidentally abusing your service. Maybe you've got a freemium product and an app just runs away and starts calling the back end a lot, and it degrades the overall experience.
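Hooking a rate limit backend in follows the same pattern as auth — a sketch; the Service name is hypothetical, and you still need a backend implementing Envoy's rate limit gRPC protocol, plus labels on your Mappings to say which requests get limited:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: "ratelimit-backend:5000"  # hypothetical Envoy rate-limit implementation
  protocol_version: v3               # Envoy RLS protocol version
```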
B
But those are the most common things — transport-level security, authentication, and rate limiting, I would say — and then hooking up observability is closely related to that: hooking up to Prometheus, which Jason's talked about in the service mesh context too. And then often, if you're doing things like distributed tracing, you want to start the traces at the edge too, so we integrate with Zipkin and Jaeger and a bunch of other things there. So observability is often thought about quite a bit too.
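Starting traces at the edge is a one-resource change — a sketch, assuming a Zipkin-compatible collector (Jaeger can ingest this format too) reachable at a hypothetical `zipkin:9411`:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
spec:
  service: "zipkin:9411"  # hypothetical Zipkin-compatible collector
  driver: zipkin
```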
A
Awesome. We have a question about doing traffic splitting with Emissary — maybe it can also be applied to Linkerd — but generally, can you guys address this?
B
It's like our online free learning thing we're doing over the summer, and he broke down how to use Emissary and Edge Stack with Argo Rollouts, and then I followed on with Argo CD afterwards as well — how to do canary releasing with all that tech. And you can do it manually, just by changing the canary weighting on different Mappings. So you have, like, a stable Mapping and a canary Mapping, and you just change the weights manually and do a kubectl apply. But Argo is amazing — Argo CD, Argo Rollouts.
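The manual stable/canary weighting described above can be sketched as two Mappings on the same prefix — everything here (names, the 10% weight) is illustrative:

```yaml
# Stable route: receives whatever traffic the weighted Mappings don't claim.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: myapp-stable
spec:
  hostname: "*"
  prefix: /myapp/
  service: myapp-stable
---
# Canary route: 'weight' sends this percentage of /myapp/ traffic here.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: myapp-canary
spec:
  hostname: "*"
  prefix: /myapp/
  service: myapp-canary
  weight: 10
```

Editing `weight` and re-running `kubectl apply` is the manual loop that Argo Rollouts automates.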
B
The whole Argo series of projects is amazing. If you're looking to do canary releases, I suggest having a look at those.
C
And then just to add on — you might have seen it, but it depends on what you mean by a traffic split. There's a TrafficSplit object from the SMI spec, right, which is implemented by Linkerd, and when you're connecting these two it just works. So you might have caught it in the dashboard, but there's actually an Argo Rollouts install, and podinfo is using an Argo rollout — so the routing to potinfo.sivo.59s.io.
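The SMI TrafficSplit that Linkerd acts on looks roughly like this — a sketch with hypothetical podinfo Service names; `service` is the apex Service that callers (including Emissary) address, and Linkerd shifts traffic between the backends:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: podinfo-split
spec:
  service: podinfo            # apex Service that callers target
  backends:
  - service: podinfo-stable   # hypothetical backing Services
    weight: 90
  - service: podinfo-canary
    weight: 10
```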
C
In the Summer of K8s week after that, we're gonna show traffic splitting with Argo Rollouts, Linkerd, and Emissary all kind of rolled into one. But yeah, that's kind of the way you do it. So there are a ton of options, and again, highlighting that because both projects have a clearly defined set of boundaries and try to do one thing really well, they work together super well, and there's no special configuration to do.
A
B
One thing I'll shout out is: do get involved in the community. Both of these are CNCF projects — Linkerd is graduated, we're at incubation stage. The community thrives on folks like yourself watching the stream. Jump on the GitHub repos, have a look at the issues. Docs are super, super important — Jason and I were just tripping over a couple of doc issues I'm going to come fix later on — but it's so hard to keep some of these projects up to date.
C
Yeah, and to add on: slack.linkerd.io is the Linkerd-specific Slack. If you want to talk to maintainers or get involved, we'd love to hear from you there. I hang out in both the Linkerd Slack and the Datawire Slack; I find them both really helpful.
B
They do a good job. So if you go to a8r.io — this is Ambassador, basically — a8r.io/slack, you can find our Slack there as well. We've got Telepresence, which is another CNCF tool which we help steward, so you can chat to us there. I hang out in all the Slacks — the CNCF one, the Buoyant ones — yeah, you can find us there.
A
Great. And as you can see on screen, just a reminder that KubeCon North America is upcoming — registration is open for the in-person and virtual events — so we hope to see you either there or on screen.
A
I hope to see everyone here again next week — we're here every Wednesday. So thank you guys again. Thank you.