From YouTube: Contour Office Hours - August 20, 2020
Topic: How to debug Contour/Envoy when things go wrong?
A: Okay, yes, welcome everyone. So today is Thursday, August 20th, day three of KubeCon, for those of you following along with KubeCon. This is Contour office hours. As we said at the last community meeting, we all kind of spoke about needing some extra time, so we'll do an hour here of the official Contour office hours, and then we can switch to the KubeCon office hours, which is at 2pm Eastern. I'm not sure what that is in Central European Summer Time, so, yeah.
A: So you all wanted to talk about how to deploy Contour locally, or how to debug Contour locally, right? That was it.
B: Yeah, I was thinking, you know, we could talk about it, and I'd be happy to share my screen. And since you're recording this, maybe have the format where any new developers that want to come in, or people who want to learn more about Contour, could reference this. But I was thinking, you know...
B: Maybe I could show code. I also cloned a project called kustomize as well, and I can show what we're doing there, kind of the limited progress that we've made up to this point.
B: If that's the correct way to do local dev and test for Contour, and then, you know, the overall architecture, just so that people that are new to it can onboard and understand it the right way, like where the project's going, etc., etc.
B: All right, sounds good. Can you guys see my screen? Okay, yeah?

A: I see your screen, with the terminal in front of it. Yeah, awesome.
B: So I've got Contour open in GoLand here. I thought this would be a good way just to quickly see the overall folder structure, and then this project here, where we can launch it. So, anyways, yeah.
B: So I don't really know where to start. I think with Contour, if we go into Contour, I think we can do like a make integration, right? So, okay, maybe for people that are just starting, since this is being recorded, right.
B: So essentially there are two projects. What I did was I forked the Contour project into my own repository, and then I cloned that, so that we could create branches, push those branches to my repo, and then create PRs: the standard open-source GitHub process there. And then there's this kustomize project here, and Steve, is that your project? I couldn't remember who sent me that.
B: So if you look at the Makefile, you can see that there's an integration step in the Makefile that will build the cluster and then essentially get Contour up and running, right? So I was going to maybe demo this real quick. And is this too small? Should I increase the size of this?
A: Maybe just a little bit would be great, yeah, your font size there.

B: I think I should just do it in here, so I can just kind of have the same screen there.
B: I'll do that. Unfortunately, and if you could back me up on that, if you guys want to look that up real quick. But anyways, yeah, so maybe, I think with kind it's, what is it, kind list clusters, I think? Or just maybe kind list, or kind...
A: So there are some pieces of that you probably could use if you want to build a kind cluster just for local dev, but the goal of that isn't really to develop against. It was, yeah, a way to make writing tests easier locally, I guess, the integration tests, yeah.
B: That makes sense then, okay, yeah. So then maybe that wasn't the right way to go about it. So I guess, yeah: is kustomize the right way to do that local development, then? Is that a project that is typically used when people are developing on Contour, or...
B: Yeah, so let's say that you're trying to implement some sort of, like, HTTP modifier, right? Like the add/remove header policies, where I'm trying to add or remove an HTTP header. What I would want to do as a Contour developer is say: okay, I'm going to work on this feature. I think I've got it implemented.
B: I want to build and deploy locally, and then, you know, run an HTTP GET across the proxy, and then see, on the other end, if that header was actually added or removed, right?
A: Okay, yeah, so that's a normal, I call it, you know, dev workflow. So again, this can all work. You could, like you've done, clone Contour, make some changes, then build an image, push that somewhere, and then re-pull that image down in your cluster and run it. That would totally work. There's a way to save steps on that, though.
A
So
it
shouldn't
be
a
huge
deal,
but
again
just
other
options,
but
still
an
even
faster
way
would
be
to
actually
run
envoy
in
the
cluster,
but
run
contour
locally
on
your
machine,
that's
kind
of
how
I
developed
so
there's
actually
an
example
file
of
that.
So,
if
you
go
into
the.
A: Yeah, so the idea there is: this workflow with running Contour locally is, you would again clone the repo, make some changes, then run Contour locally on your Mac or your Linux machine or whatever you're running. I've actually done this in WSL on Windows as well, so if someone's a Windows user, that does work; there are a couple of different things to do with that, but we should document this better too, I think. This is a great place to do that as well, yeah.
A: So if you go up into the repo, there's an examples folder, and that's kind of where there are some different things, and there's the...
A: Yeah, maybe that's a good way to see it. So there's the contour folder there, if you want to pin that open real quick. That directory has basically all of the deployment manifests that we use, and this is what we use to actually generate the quick start. So, down a little further, there's a render folder, or directory.
A: That's basically it: that contour.yaml file there is all of the things in the contour directory, just appended together. They're basically the same thing, and there's a make task for that. It's make generate, which will take all of the changes in that examples/contour directory and put them into the contour.yaml. And what we do as developers, or as maintainers, is: the quick start basically points to that contour.yaml file when it gets published out to the repo.
A: So again, say you wanted to change something, like if you wanted to add something to this deployment, say an environment variable or something: you would go into this contour folder, make the change there, and then you'd run make generate, and that would basically rerun the script to then make contour.yaml. If you forget, there's a CI check that we have to actually look for that, so it'll yell at you.
A: If you forget that. So it's no big deal, but that's where those things come from.
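The edit-then-regenerate loop described here can be sketched as follows. The paths are from the contour repo's examples directory; the concatenation below only illustrates what `make generate` effectively produces (a single rendered quick-start file), it is not the project's real generation script, and the two sample manifests are trimmed-down stand-ins.

```shell
# Sketch of the loop: edit a manifest under examples/contour/,
# then regenerate the combined examples/render/contour.yaml.
mkdir -p examples/contour examples/render

# Stand-ins for two of the per-resource manifests in examples/contour/:
cat > examples/contour/01-contour-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: contour
  namespace: projectcontour
EOF

cat > examples/contour/02-service-envoy.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
EOF

# `make generate` effectively appends everything in examples/contour/
# into the single rendered file, separated by YAML document markers:
out=examples/render/contour.yaml
: > "$out"
for f in examples/contour/*.yaml; do
  printf -- "---\n" >> "$out"
  cat "$f" >> "$out"
done

echo "rendered $(grep -c '^---$' "$out") documents into $out"
```

In the real repo you would just run `make generate` after editing, and the CI check mentioned above catches it if you forget.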
A: Yeah, and then, again, we can stay on task, but there are two ways I develop. One is locally, so I'll use a kind cluster locally and do all my local dev there. And sometimes I just need, like, a real load balancer, or a real set of certificates, that sort of thing, so I'll use, I have, a cluster in AWS, as well as one in GKE on Google.
A
I
I
pushed
so
I
don't
ever
use
customizing
any
kind
of
stuff.
I
just
go
edit
that
file
and
then
save
it
and
apply
it,
but
you
could
totally
do
that
with
customize.
So
that's
an
easy
way
to
do
that
cool
anyway,
so
we're
talking
about
losing
kind
right,
so
I
just
will
give
you
that
backdrop
is
how
those
things
come
to
come
together.
I
guess,
while
we're
hearing
examples,
we
can
just
walk
through
those.
A: The other bits in there are example workloads (let me point to my screen like I can touch it: example workloads). There's a bunch of examples in there, and this is another place that we should probably expand. The goal here was to have just real working things that folks could use. So if you wanted to, you know, move from Ingress to HTTPProxy, there are some examples there that show you, like, here's an Ingress resource, and then here's the same thing as an HTTPProxy, just to help.
A: You know, show some examples that way. We need to add some other things there, I think, which would be helpful for folks. Like, gRPC comes up all the time: folks want to have a gRPC service, and we don't have a great example of that, so we should do that. But those are just examples of things that folks can reference.
A: So if you wanted to know how we did, like, inclusion, or set up a TCP proxy, that sort of thing, those are examples that you could just kubectl apply.
A: And the other bits down there, the Grafana and Prometheus ones: there's a guide on how you can pull all the metrics out of Envoy and Contour. So if you want to see things like, you know, your 200 responses and your 500 responses...
A: You know, how many requests per second you're pushing through Envoy, that sort of thing. We have example Grafana dashboards that exist, that show you some generic information that you'd probably have to take and customize for your own use, but there's a guide on the projectcontour.io site that walks through how to actually deploy Prometheus and Grafana. Again, they're just there as examples.
A: You may have your own infrastructure for all these bits, but if you follow along, at least you can see how those all work: deployment information, that's what those two do. And then kind is what I was getting to, so there's the kind directory there. So this is an example. I think I broke this last time; I should fix it. I did, but it will still work. So this is how I develop.
A: It also does the same thing for 443, if you're going to run TLS.
A: So what that means is: after you run this, it'll create a kind cluster, and because kind runs in Docker, it'll essentially give you a Docker mapping from your local machine's port 80 to the container's port 80. And because by default we deploy Envoy with host networking, or host ports, on port 80, that essentially means that once you've run this, localhost:80 on your machine will map to Envoy running in the cluster.
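A kind config along the lines being described might look like the following. This is a sketch, not the exact file from the contour repo: the `extraPortMappings` stanza is what gives you the host-to-node-container mapping, and the port numbers are illustrative (adjust if 80/443 are already in use locally).

```shell
# Write a kind cluster config that maps host ports onto the single
# node container, where Envoy's hostPort will listen.
cat > kind-expose-port.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

# Then create the cluster (requires kind; commented so the sketch
# runs standalone):
# kind create cluster --config kind-expose-port.yaml
echo "wrote $(grep -c 'hostPort' kind-expose-port.yaml) port mappings"
```

With that in place, traffic to localhost:80 on the workstation lands on whatever binds hostPort 80 inside the kind node, which is Envoy once it's deployed.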
A: So again, if you remember, if you're not familiar: Contour is the xDS server for Envoy. xDS is the protocol we use to program Envoy, so it's a gRPC connection between Envoy and Contour. So there's (again, we can walk through this if you want to see it) a bootstrap config that we give Envoy, and it's essentially enough information to give it a connection back to Contour. From that point on, Contour then generates all this configuration down to Envoy.
B: Yeah, and it's things that I've seen in the chat, you know, about xDS. And, you know, Envoy, which I guess is via TCP, via some sort of, like, protobuf messages that you pass to Envoy. Is that how all the communication happens? Is that what xDS does, or are there two different layers there?
A: Yeah, there are all kinds of protobufs that we end up making, and we can look through the code for that. But we pull in, Envoy has, a go-control-plane project, which basically has all of the Envoy protobufs as Go files. So in Contour, essentially the output of what Contour does is a whole bunch of objects that match Envoy's: listeners, clusters, endpoints, secrets, that sort of thing. And then all that gets realized and streamed down to Envoy over that gRPC connection.
A: So if you actually go back to your examples: right there, it says 03-envoy.yaml. If you pop that one open, up a little more, yeah, that one. So in here there should be an xDS server parameter that we pass. Actually, let's look at the bootstrap first. So, right here, yeah: there's an init container in the Envoy daemon set. Again, Envoy runs as a daemon set as the default.
A: You could change it if you'd like, but that's just how we do it. The init container runs before all the other containers start up. So essentially, there's a command in Contour called bootstrap, and what it does is generate some JSON, which is what Envoy will read in as it starts up, and it's enough to tell it where things are. So you can see here we pass in the xDS address, which is "contour": in this example, "contour" matches the contour service in that file.
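As a sketch, the init container stanza being discussed looks roughly like this. It is simplified from the example daemon set, the image tag is a placeholder (the session was running off the repo's main/master branch), and the flag spellings should be checked against `contour bootstrap --help` for your version.

```shell
# Simplified sketch of the envoy daemon set's init container,
# which runs `contour bootstrap` before Envoy starts.
cat > envoy-initcontainer.yaml <<'EOF'
initContainers:
- name: envoy-initconfig
  image: ghcr.io/projectcontour/contour:main   # placeholder tag
  command: ["contour"]
  args:
  - bootstrap
  # Where the generated bootstrap JSON is written for Envoy to read:
  - /config/envoy.json
  # "contour" resolves to the contour Service inside the cluster:
  - --xds-address=contour
  - --xds-port=8001
  volumeMounts:
  - name: envoy-config
    mountPath: /config
EOF
echo "sketch written"
```

The shared `/config` volume is how the Envoy container then picks up the generated envoy.json, as described next.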
A: So then, what that does is: after the init container runs, it'll write that envoy.json file to /config, and then there's a shared volume which the Envoy image (I'm sorry, I said the Contour image) maps in. Shoot down a little bit further, if you scroll down a little bit, or maybe it's up, whichever way. It should map in and read that file, and that's what Envoy uses to find Contour, basically.
A: No, you're fine. So this is the file. What I would do, if I'm going to run this locally, is change it in this file. We can just do it here, and then you could modify your kustomize stuff later, if you'd like. So go up to that bootstrap.
A: Yep, and then here: the xDS address is going to be the IP of your local machine, and "contour" won't work, because that resolves inside the kind cluster. So you either need, I think there's a Docker loopback address you can use, or I just use my internal IP. Whether you want to share that or not is up to you, I don't mind.
B: Yeah, so that would be like a 192 address space.
A: We can leave 8001 the same. And then what I do is, I just delete the Envoy certs, so the CA file, cert file, and key file, because when I develop locally I don't much care about that, and it's a little trickier: you'd have to pick the certs out of the Kubernetes cluster and put them locally on your machine somewhere, or do the reverse. So I just ignore it, but we'll have to change one thing for that, so, yeah. So at this point, now, Contour will start.
B: Yeah, I have kind installed, and I don't think I have a cluster running. I mean, I could get one up and running real quick. I was trying to see, I can't remember what state my cluster was in, you know. So I think it was listing my clusters and then just deleting one to recreate it.
A: Yeah, Steve posted it, I think it's kind get clusters. Oh, thank you, thank you, Steve. So we can just trash that, and then we'll pass in that config file, yeah. Let's do the mapping.
A: So if you create one, you'll have to, if you just want to go to that directory in contour, for whatever that file is: there's that example file you could use.
A: Yeah, the name doesn't matter. You basically just need to say kind create cluster and pass in --config.
B: So it essentially runs this kind cluster, okay, gotcha. So kind-expose-port, all right, all right, I gotcha, I got you. Okay, so --config equals, and, sorry, exactly, kind-expose-port, right, and that's gonna...
A: That's going to create just a one-node cluster. It used to be two, and then I changed it last time trying to get our integration tests to run better, but anyway, it'll still work. Oops, let's delete the cluster.
B: And kind get clusters? Okay, I'll totally start from scratch, because I messed up. All right, here we've got, okay, so I'm just passing --config with this kind-expose-port file, and that's the cluster that we need. Okay, cool.
A: So the only gotcha here is, yeah: if you have 443 or 80 already exposed. You could take out 443 if it's already used up on your machine. I don't know what you could have...
A: Just comment out the 443, maybe. It looks like it was only 443 that was the issue, right?
B: Mine conflicts on 443. Okay, yeah, no, because we don't really care about 443 here.
A: Yeah, so let me, let's see: we want container port 80 still, to match Envoy, yeah, and then host port 8888, yeah, that'll work there, I think. Yeah, awesome. Thank you.
A: Okay, cool, okay, cool! So now, if you real quick do a docker ps, what we should see is: you should have, I guess, one node, and there's probably the stuff you've already got running in there with its own port maps, it looks like. Not to peer through your containers, but yeah: see how port 8888 maps to port 80 in the container now? So now, if you curled, you know, localhost:8888, it would map to Envoy running in your kind cluster.
A: It's not going to do anything, because there's nothing running yet; we haven't deployed anything into the cluster. Yeah, your cluster's still empty, yep. Okay, but that's the idea of how we're going to do this. So if you go ahead, you can just kubectl apply. Again, this is how I do it; it doesn't matter how you do it, because this is where kustomize or Helm or something like that would shine, since it would do some of this configuration for you. But I just do it by hand.
A: I would just go ahead and kubectl apply the examples folder in the contour repo. Okay.
B: So, okay, kubectl, all right. I see, this is going to be in the examples. Okay, so I'm just trying to wrap my head a little bit around this. I created just a default kind cluster, and I passed this as a config, which essentially, so now, I really don't understand what this did right here. I see that the output talks about the control plane here, I thought it said something about a control plane, but anyways, yeah.
B
I
really
don't
understand
kind
of
what
what
this
config
did
and
what
changes
are
different,
as
opposed
to
like
just
creating
a
regular
like
what
did
this
config
file
do
to
our
current
cluster.
A: Yeah, so you can create multi-node clusters with kind. By default, when you say kind create cluster, you'll get a single-node cluster, and it doesn't map any ports to your machine, so it's just a generic cluster. What this is doing is actually extending that and saying: right, I still want a one-node cluster, but I want to map ports from my local machine into that cluster.
A: Otherwise you'd have to, like, port-forward into it first to get to it. Okay, yeah. Now, there is an example out there; I can show you, I saw this a while ago. I think you had the kind docs up. If you want to pop over there real quick, just for fun, out in your browser, there's the kind documentation, yeah, on the left there.
A: If you go over to, there should be an Ingress page, on the home, yeah, right in the middle of that first bullet, under, yeah. So this shows you how to, there's an option for Contour there. What this does is it lets you deploy basically a multi-node cluster, so say you needed more capacity in your cluster than just one node.
A: This walks you through basically how to apply multiple machines. But because we're binding to host ports, you can only do that once, right? So think about it: if you had four nodes in your cluster running in kind, that equates to four different containers running in Docker. You can't bind all four to port 80, right, you'd get an error. So what this config does is basically patch the daemon set to make only one of them run on a node, I guess.
A: But what this does is it lets you have more capacity in your cluster if you needed it, like, say you had to run a whole bunch of stuff on your machine for whatever test you're trying to run. It basically just lets you have multiple nodes; that's all this is doing. I mean, I think at the top they show you how to make the config file do multiple nodes, I guess, up here, yeah. See that create cluster, scroll down just a little bit, right here.
A: You'll see this one: see, it has a control plane, and it adds some more configuration files. It is only one node; it used to be multi-node, but you could add more nodes here if you wanted to. So see, it says role: control-plane. You can actually create HA clusters in kind; you could add more worker nodes to it.
A: And I think you could apply the same with minikube; minikube has a way to expose ports too. Even if you wanted to run remotely, like, I've done this in AWS, where I had a cluster running in AWS and I used one of those tunneling software things. Yeah, like Alex Ellis has inlets, where it'll tunnel a port, and you could use that sort of thing as well.
A: I'm breaking across subjects now, but all we want to do is have Envoy running in some cluster somewhere, and then have a way for Envoy to find us back here on our machine. That's what we need to do. So, yeah, cool. So now you have a kind cluster running, so we need to go deploy Contour and Envoy.
B: Absolutely, yeah. I think this has been useful from the very beginning, to be able to get some of those questions out of the way and frame things. So, okay, so we've got a kind cluster that we started with just this nice little config file that was provided in the repo here, and we're exposed on port 8888, which maps to 80. But this doesn't necessarily map to any sort of service or pod.
B: It's just essentially saying bind to, well, I guess, yeah, so it says container port, right? So I guess, oh, it's going to the control plane node in the cluster, right. So it's essentially saying, well, okay, so I guess I don't really understand maybe how some of the internal routing works, but what I do understand is that our kind cluster now has port 8888 exposed.
A: Where's the, what's in the deployment, I'm sorry, go to 03-envoy. It's not the service, it's the last one, yeah. So in here, wherever ports 80 and 443 are defined, you should see it say hostPort.
A: There we go. So see how this is the default daemon set for Envoy: it's going to map to host ports 80 and 443. So essentially, because you've mapped Docker to port 80 in the cluster, Envoy is now going to bind to port 80 on that node and then expose it that way. You could do this another way too: in that Envoy service, you could create a static node port, you know, say, 3005.
B: So this is essentially like any other pod, right? Where this 03-envoy is our daemon set, you know, essentially a deployment descriptor, right, and this is just the service. This is just like any other sort of Kubernetes pod, service, or daemon set at that point, right?
A: Yeah, so the Envoy daemon set has two containers. Well, three, technically, I guess. The first container that gets run is the init container, and that's what creates that bootstrap configuration, right; that's that one. And these all use master because we're on the main branch (you might not have updated; we changed it all to main, I believe), so we're on the root of Contour's repo. This isn't a versioned release.
A: Normally these would all be versioned. So the init container does the bootstrapping, and then the other container in there is Envoy, which we just saw, right; that actually runs Envoy. And then the third piece in there is this thing called the shutdown manager, which is a whole other thing we could talk about, but its job is to help you if you need to change Envoy, like, say, you roll out a new version, or you need to take a node out of service in your cluster.
A: Essentially, there's traffic routing through that Envoy, presumably, running lots of traffic, but you want a way to take that node out of service so it doesn't get new traffic, while also letting it complete all the in-flight requests, all the open connections that are still on there. So the job of that shutdown manager is to gracefully shut down Envoy: to stop it accepting connections and then allow a wait time for it to actually drain all the connections out.
B: All right, awesome. Okay, so now I just need to do a quick kubectl apply on examples. And I guess, are we wanting to apply one by one, or do we just want to apply the render?
A: We'll do the one for the contour folder, because we changed it, we took out the certs, gotcha, and we put your IP in that one. So let's just do that one.
A: Yeah, just do that one. And then we can delete, just, yeah, you can do that one. Again, this is how I do it; you could do better with kustomize, I think. Essentially, what we want is everything but Contour, right? This is going to deploy Contour too, so if you want, you could kubectl delete the contour deployment, or, what I do, I just scale the deployment to zero. But it's not really in play, because we've told Envoy to look for your local machine now.
A: ...that we wanted to use, right, yeah. So if you go and do kubectl get pods in the namespace projectcontour, that's the default that it gets deployed to, you should see we have two instances of Contour. Again, that's the default, because we make two replicas of that. There's the certgen job, which ran, but again, it's not needed, because we took out the certs.
A: So if Envoy doesn't connect to Contour, it's not really healthy yet, right, because it hasn't gotten all its configuration. So it's actually not healthy, and this is good, too: in a perfect world, this won't get traffic from anything external, if this was a production cluster. It's waiting because you don't have Contour running locally on your machine; that's why it's sort of hanging, and if we tailed the logs, you'd see it looking for that.
A: Now, I guess, in here there are the two Contour pods, and that certgen job; they're just not needed now, they're not in play, but you don't really need to remove them, if that's confusing. But this is where kustomize could shine, right: all you really want to deploy is the CRDs, the namespace, the Envoy pods, that sort of stuff. But for me, I just deploy it all and ignore the rest. I guess that makes sense, because, whatever, you know. Is this clear?
B: Oh yeah, yeah, no, absolutely. I think once I get Contour running locally, because one thing, from my understanding, right, is that if I run Contour locally, we run the xDS gRPC on, what was it, port 8001 or something like that? No, it is 8001, right. And we're not exposing that in the cluster, right? So, like, the cluster, so we've exposed port 8888.
A: So, yeah, so the make install will actually, you know, do a go install; it'll build Contour and put it in your local Go bin, and then we'll run a contour serve. contour serve is the command used to actually serve the xDS server. So you've seen contour bootstrap; we saw the contour envoy shutdown-manager, which is a different command. This one is actually what's used to serve up the, this is the meat and potatoes of Contour, and what we're giving it is basically, by default...
A
I
think
you'll
look
at
your
home,
coop
config.
I
think
I
just
passed
it
because
I
had
some
of
the
clusters
running,
but
we're
telling
contour
how
to
configure
some
stuff.
So
we're
saying:
hey
contour
the
kubernetes
cluster
that
connect
you.
You
have
to
give
it
a
coop
config
because
it
doesn't
have
you're
not
running
inside
the
cluster
and
then
we're
telling
it
hey.
The
xds
address
is,
is
you
know
zero
zero,
zero,
zero?
A: You could customize that if you wanted to. And the important bit here is we have, excuse me, --insecure, because by default, Contour won't run without certificates unless you force it to say: hey, go run insecure. Again, this is just to be, you know, secure first. So we took the certs out of the Envoy daemon set running in your kind cluster.
A: So now we're just going to basically tell Contour: hey, ignore certs, we're just going to run without them. And then the other two pieces (and all this stuff maps up to the, I'm sorry, the Contour deployment in the examples there) are that we're going to configure the Envoy ports. By default, Envoy will expose its listeners on ports 8080 and 8443.
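Putting the flags just walked through together, the local invocation might look like the following. The flags shown here are ones I believe `contour serve` accepts, but double-check against `contour serve --help` for your version; the kubeconfig path is just the usual default location.

```shell
# Sketch of the local `contour serve` command for this workflow:
cat > run-contour-local.sh <<'EOF'
#!/bin/sh
contour serve \
  --insecure \
  --kubeconfig="$HOME/.kube/config" \
  --xds-address=0.0.0.0 \
  --xds-port=8001 \
  --envoy-service-http-port=80 \
  --envoy-service-https-port=443
EOF
chmod +x run-contour-local.sh
echo "command written"
```

The two --envoy-service-*-port flags override the 8080/8443 defaults mentioned above so that the generated listeners match the example manifests and the kind port mapping.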
A: You still want to keep that 80, though. Oh, I'm sorry, yeah, no, that's cool. So basically, what this is going to do, and we can look at that, is: Contour is going to configure two listeners, depending on what you have set up. So when you create some sort of Ingress object, Contour will configure Envoy to have an HTTP listener, right. That's how it's going to accept...
A: You know, insecure traffic. As soon as you create some sort of resource that has a cert attached to it, which means TLS, it'll then spin up the TLS listener, and this is configuring what ports those should use. So, you know, if you had a cloud environment where you had some sort of external load balancer, 80 and 443 don't matter as much, because it could be 8080 or 8443, that sort of thing.
But
if
you
don't
set
these,
I
guess
it'll
use
the
defaults
which
won't
map
to
our
examples
that
we
have
set
up
here
cool.
So
if
you
go
ahead
and
hit
enter
on
that
this,
hopefully,
if
everything
works
on
your
machine,
it'll
install,
oh,
what's.
B: That's a bit of a leap. I think I had some issues with Go; I may just have to update, but I don't know, I think we may have fixed that. I was having some issues with, like, Go and GOPATH and getting those...
A: ...thinking it didn't. So, do you have, can you just echo your $GOPATH, or do you have that set up? Yeah, .go, so then it should be in .go/bin, I think. Oh, okay, so then...
A: Yeah, there it is, cool. So now you can just do that contour serve command, and just take out the make install. And then, yeah, I'll have to explain.
A: I think it's, yeah: you need to add that bin to your PATH, and then that'll work the same. Cool, so now, if you hit enter, this will actually go and run Contour. So now you're running Contour locally on your machine, and then here, right there, those two messages: that's Envoy connecting to your instance of Contour.
B: Gotcha, okay. So maybe if I just do, like, a get now. What do I want to get here? Do you want to get the pods?
A: Yeah, pods, yeah. So again, if you want to follow what I do, just for fun: if you do a kubectl apply, if you're in the contour repo, yeah, you can do apply, and there's, if you go, site/examples/proxydemo, I think, yeah.
A: So there's a, you can just apply the whole thing. There's a blog post that I wrote that walks through how to do includes and stuff; it kind of introduces HTTPProxy. Again, I just use this because it gives you a bunch of things real quick, just to play around with. So if you just apply this, it will create a whole bunch of things. So go ahead and do a kubectl get proxy -A (space, dash, capital A, yeah).
A: That's proxies in all namespaces, is what that's doing. So you can see this deployed some sample workloads on your cluster, but I guess the important bit I want to get to is: see how the FQDN is local.projectcontour.io?
A: What we've set up is that DNS name points back to localhost. Basically, because, you know, Contour is an L7 load balancer, you need to pass it a name; it's difficult to route against IP addresses. So by default now, at this point, if you curl local.projectcontour.io:8888, you should get a response.

B: Oh, okay, but is it going to resolve that?
A
Yeah, so because there's no fqdn matching localhost. Now, you could change the proxy config to be localhost for the fqdn, and then that would work. Okay, but then you can use whatever you want, absolutely. You know, it's just... that's just a simple thing, without having to mess around with your hosts file on your local machine or set up some of those like no-ip url shortener things.
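(To make that suggestion concrete: a minimal HTTPProxy with the fqdn swapped to plain localhost, so a `curl localhost:<port>` matches a virtual host. The name and backend service here are hypothetical, not from the demo.)

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: local-demo
spec:
  virtualhost:
    fqdn: localhost        # match requests whose Host header is "localhost"
  routes:
    - services:
        - name: my-app     # hypothetical backend service in the same namespace
          port: 80
```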
B
Now, it doesn't have, like, a restart listener, does it? So, like, if I make a change to the file it doesn't... so I'd have to kill it and restart it manually, essentially.
A
But so then what happens is, envoy will reconnect, right, and then it'll download its new config, so it should just update properly. So the idea here is, this shortens your workflow, right? So you can now, like, make a change to contour, do that command there, and then boom, you can test it again really quickly without having to build the image, push the image, pull it, kill the pod... do that whole loop. Okay, it's just kind of faster, yeah.
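(So the inner loop being described is roughly the following; a sketch, with the same assumed flags as above.)

```console
# Stop the locally running contour with Ctrl-C, then:
$ make install                      # rebuild the binary into $GOPATH/bin
$ contour serve --insecure --kubeconfig=$HOME/.kube/config
# envoy in the cluster reconnects on its own and pulls the fresh config --
# no image build/push, no pod restart.
```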
B
Yeah, no, that's... so I mean, we're up and running at this point right now. Now, usually, to test http... like, do you use httpbin? So would I deploy httpbin, like, in my kind cluster, by using kubectl, with a service and a pod, you know, some sort of deployment... deploy something like httpbin, and then I'd be able to route and proxy and create an HTTPProxy and all that stuff in my kind cluster? Is that kind of how the...
A
Yeah, yeah, yeah. At this point you can do it. If you've got a full cluster running, you can have any kind of routing you'd like. It's just, you just need to have a way for your local mac to find that kind cluster now, which is, you know, over localhost 888. You have to have a dns name map to contour... I'm sorry, to envoy, in that cluster.
B
Yeah, and that's kind of actually what confuses me a little bit, right. So when I curled here, right... so this resolves to, like... if I, like, you know, just say, if I were to just, like, ping this or whatever, right, so it resolves to my local.
A
Yep, yep, yep, yep, perfect. Okay, yeah, and that was to make... so, to make our examples kind of work easier. So that is, if you follow along with all these examples we just did, they should work really easily on your local machine without having you to set up... you know, I used to have folks set up, or modify, their local /etc/hosts file. Yes.
B
I mean, this is a really cool project, but, you know, I think a couple things might stick out in my mind, and maybe I'm just bringing up some good points so that I can make sure you guys can wrap up by two, right, to get to your kubecon stuff. But, like, that grpc... I'm still not totally sure on, like, how that relationship between envoy and contour works, right. So I stood up the kind cluster, I passed that config, which gave us an exposed port to the... to the kind cluster's control plane.
B
Then we did an apply, which essentially deployed envoy and contour from the examples folder here, right. So I've got this contour folder; I then applied this, and it applied all of these manifests to the kind cluster. But we're essentially ignoring this, and, you know, this, right? I'm still not too sure how my local contour does the grpc with envoy on 8001.
A
Yeah, so here... so this is the bootstrap. This is what envoy reads the first time. So when envoy starts up, it reads in this config file, the /config/envoy.json, and what we told it was: hey, your xds address, meaning where contour is located, is that ip address, that 192.168.1.187. So envoy was going to look to that ip address and that port for its xds server, which is running over grpc.
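(The bootstrap file being described looks roughly like this. This is a heavily trimmed sketch in the shape of the envoy v2 bootstrap of that era, with the LAN ip from the demo; treat the field names as approximate and consult the envoy bootstrap reference for your version.)

```json
{
  "dynamic_resources": {
    "lds_config": {
      "api_config_source": {
        "api_type": "GRPC",
        "grpc_services": [{ "envoy_grpc": { "cluster_name": "contour" } }]
      }
    }
  },
  "static_resources": {
    "clusters": [
      {
        "name": "contour",
        "type": "STRICT_DNS",
        "http2_protocol_options": {},
        "hosts": [
          { "socket_address": { "address": "192.168.1.187", "port_value": 8001 } }
        ]
      }
    ]
  }
}
```

The only static piece is the "contour" cluster: enough for envoy to dial the xds server; everything else streams down dynamically.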
A
So yeah, so envoy is really, really, really dumb. It has no configuration short of what's in that bootstrap config. All the logic is in contour, so we have to give envoy just enough information, basically, to find contour. Once it has that, then contour will stream down the rest of the configuration over that grpc connection. But this is kind of the minimum information that envoy needs to get running.
B
Okay, awesome, yeah. I think I've got enough to chew on, steve. I did want to give an opportunity for anybody else... we've got a few other folks, you know, to talk. I really appreciate this, and, you know, didn't want to take up the whole hour, but I hope this is beneficial for, you know... for, you know...
A
Yeah, so this works totally fine, and this can work anywhere, too. So, if you had... you know, I've had clusters running, you know, on a vsphere instance behind my house here, where I did the same thing. It just doesn't matter, I guess, how envoy's running; as long as it's running in a kubernetes cluster for you to test things, that's the important bit. So you've got to have, basically, you know, envoy finding contour. So... so, is envoy in a...
B
Like, when you deploy this on kubernetes, using the examples that are on, like, the website and stuff, does envoy pretty much have the same configuration here? You just have this envoy.json... and where is that envoy.json, by the way? Is it somewhere in here? Does it get, like, volume-mounted or something, the envoy...
A
Yeah, so the init container generates it, so that bootstrap command here will actually go and build that out. So in contour we have a command that will generate that json file, okay, and then it gets written to that /config directory, which is just a shared directory inside that pod. Gosh.
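(That generation step is contour's bootstrap subcommand; roughly, the init container in the example manifests runs something like the following. The exact flag values here are a sketch of that setup.)

```console
# Write an envoy bootstrap file pointing at contour's xds server,
# into the emptyDir volume shared with the envoy container
$ contour bootstrap /config/envoy.json --xds-address=contour --xds-port=8001
```

For the local-dev setup discussed earlier, the same command is run with the xds address pointed at your laptop instead of the in-cluster contour service.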
B
Okay, one thing, too, that's a little confusing for me... again, there's probably a lot of things confusing... is, like, how envoy, how the envoy service in the kind cluster, knows anything about this ip address, right? Like, if it's running inside of its own, I guess, kind of bridge network, you know, in the cluster, how is it able to connect to my local environment? You know, is it on the same, you know, network, essentially, that it can even communicate with contour?
A
This is docker... I think... I'm not sure if there's... okay, I think you can create networks in docker, but I think, out of the box, if they can't find the ip locally, it's just going to go out to your local machine and look for that, and that's why I set the xds address like that. I've done this before, where I've run envoy in, like, my aws cluster and I've run contour locally on my machine, and there are some tools out there like ngrok, and, I guess, the inlets one from alex ellis.
A
Some of those tools will basically map an application running locally on your machine through an external resource, right. So basically it makes your local machine exposed to the internet. So in that scenario, this ip address could be, you know, a domain that points to my local machine, but coming from the cloud, somewhere else in the world, you know what I mean? Which is really scary in a way, but it's totally possible. The idea is, you just need to connect them together, however that works. That's... that's the goal, I guess, yeah.
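(As a concrete example of that pattern with ngrok; the forwarded host and port below are placeholders that ngrok assigns at runtime.)

```console
# Expose the locally running contour xds port to the internet
$ ngrok tcp 8001
# ngrok prints a forwarding address, e.g. tcp://0.tcp.ngrok.io:12345

# In the envoy bootstrap, use that forwarded address as the xds endpoint
# instead of a LAN ip, e.g.:
#   "address": "0.tcp.ngrok.io", "port_value": 12345
```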
B
Awesome, awesome. Well, thank you, steve. I know we're wrapping up, and, you know, I'll definitely hit you guys up on slack, because I'm kind of working through this... hold on, sorry, it's the phone.
B
Sorry, yeah... no worries, no worries, yeah. But yeah, yeah. So... but I really appreciate that.
A
Yeah, no worries. I hope this helps; hit us up on slack, yeah. So I think we're out of time for this hour, so we're gonna switch now to the cncf, or the kubecon, office hours. I'm not sure, steve, do you have a link for that or anything?
A
If not, yeah, I'll just end this here, and then we can... I think it's in the expo hall.
A
No worries, yeah. Thanks, thanks, chad. So we'll go to that one next. If you want something... any other questions, happy to chat more about all this stuff. Okay, cool.