From YouTube: Kind-ly Running Contour
Description
This demo by Steve Sloka accompanies the blog post found at
https://projectcontour.io/kindly-running-contour
So if you want to learn more about kind itself, you can come out to the kind repo — that's kubernetes-sigs/kind — where you can go participate in the community. There's a great Slack channel if you have questions, need some support, or just want to chat about features you'd like to add. I really encourage you to go out and check that out.
There's also a lot of documentation out at the kind docs web page, which explains how kind is built and some of the design principles behind why certain things were built certain ways. I'd encourage you to read through that as well if you're new to kind.
But today's plan is this: first we'll deploy kind to your machine, then we'll create a cluster with kind. Once we have that cluster running, we'll go deploy Contour, and once Contour is running we'll deploy a sample application and then test the whole thing out end to end.
So again, if you're not familiar, we're going to start out by installing kind. Here I have some examples of how you can do that. This is basically curling down a binary of a known version, and this example is based on a Mac.
The exact binary may change based on your architecture, but the steps are the same: go grab the binary, make it executable, and then move it somewhere on your path. I'm putting it in my local bin; this may change based on how your environment is set up.
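The steps above can be sketched like this, assuming the kind v0.4.0 release mentioned later in the demo and a `~/bin` directory on your path (both are assumptions; adjust the version, platform, and destination for your setup):

```shell
# Download the kind binary (darwin-amd64 here; use linux-amd64 on Linux)
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.4.0/kind-darwin-amd64
# Make it executable
chmod +x kind
# Move it somewhere on your PATH
mv kind ~/bin/kind
```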
Once you've done that, you can run the kind command, and there are some things in here that you can do: for example, you can build out node images.
You can delete clusters and you can create clusters. The simplest command, once you've got kind installed, is `kind create cluster`, and what that will do is go spin up a one-node cluster on your machine.
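That simplest case looks like this (each "node" runs as a Docker container on your machine):

```shell
# Spin up a single-node cluster; the cluster name defaults to "kind"
kind create cluster
# List the clusters kind knows about
kind get clusters
```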
For this demonstration, though, we want to do something different. What we're going to do is create a cluster but pass in a config file, and what this config file does is let you override some of the command-line parameters.
So what we're going to say is: let's create a two-node cluster, where one node is the control plane and the other node is the worker, and the worker node has two ports that it's going to expose.
One is port 80 and the other is port 443, and this is how we're going to do some of our testing with Contour, because what we want to do is map Envoy — which is the data path component for Contour — to port 80 on our nodes and then pass traffic through there.
That way we can simulate a real-world cluster here on our machine. So let's go ahead and do that: you would run `kind create cluster` and then pass in the config.
Now, if you want to follow along, there's a similar tutorial out here in the Contour repo, under the kind example.
Looking at the config: this first entry is the worker node, and the bottom one here is the control-plane node. You'll see in the worker node we've exposed port 80, so port 80 on my host maps into port 80 in the container.
This is really cool, and again, this is new in the 0.4 release of kind, so on anything older than that you'll have to upgrade to use this feature.
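The two-node config described above can be sketched like this. The `v1alpha3` apiVersion matches the kind v0.4 config schema, and the file name is arbitrary:

```shell
# Write a two-node kind config: a worker that maps ports 80/443
# to the host, plus a control-plane node
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
- role: control-plane
EOF
# Create the cluster from that config
kind create cluster --config kind-config.yaml
```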
Next, let's go ahead and export our kubeconfig. After you run `kind create cluster` you'll get this command prompted to you, and running it will set the context in your kubeconfig to your new cluster.
So now I can run `kubectl get nodes` and I have my two nodes, and if I run `kubectl get pods` you'll see I have nothing on here yet. Okay, so let's go ahead and deploy Contour next.
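On kind v0.4 the prompted command uses `kind get kubeconfig-path` (later kind releases changed this, so treat it as a sketch for that era):

```shell
# Point kubectl at the new cluster's kubeconfig
export KUBECONFIG="$(kind get kubeconfig-path --name=kind)"
# Should list the control-plane and worker nodes
kubectl get nodes
# Nothing deployed in the default namespace yet
kubectl get pods
```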
What we'll do is deploy Contour in the split model. There are a bunch of different models you can use to deploy Contour; the split model runs Contour as a Deployment and Envoy as a DaemonSet, and Envoy uses host networking, which means Envoy will bind its ports directly to the node. That's fantastic, because we now have our node mapping ports 80 and 443 to our host. So we can go ahead and apply this.
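A rough sketch of applying the split model, assuming the `examples/ds-hostnet-split` directory and the `heptio-contour` namespace from the Contour releases of that era (both are assumptions; later releases moved and renamed these, so check the repo you're working from):

```shell
# Grab the Contour repo so we can apply the example manifests
git clone https://github.com/projectcontour/contour.git
# Apply the split-model manifests: Contour Deployment + Envoy DaemonSet
kubectl apply -f contour/examples/ds-hostnet-split/
# Watch the pods come up (namespace name is era-specific)
kubectl get pods -n heptio-contour -w
```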
So let's take a look and watch these spin up. We're just downloading the images; once the pods are running and pass all their health checks, we'll go ahead and deploy the workload next. We're waiting on Envoy — so again, Envoy is running as a DaemonSet here and Contour is running as a Deployment.
And Envoy is mapping port 80 to port 80 on the host. Okay, fantastic, so that's up and running. Now what we can do is deploy our sample application, which is this one: we're using kuard, the "Kubernetes Up and Running" demo application, just to demonstrate this. And again, this is an example that lives in the Contour repo, so you can follow along with that as well.
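Deploying the sample might look like this; the manifest path is a hypothetical stand-in for wherever the kuard example lives in your checkout of the Contour repo:

```shell
# Apply the kuard example workload (path is an assumption -- locate
# the kuard manifest in your copy of the contour repo)
kubectl apply -f contour/examples/example-workload/kuard-ingressroute.yaml
# Expect three kuard replicas
kubectl get pods -l app=kuard
```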
So now, if I run `kubectl get pods`, we have three replicas of this kuard deployment here. Cool, now it's up and running, so in theory I'd actually want to curl it, but the problem is that I don't have a way to get to my application yet.
Part of this deployment created an IngressRoute. If I run `kubectl get ingressroute`, you can see I have my fully qualified domain name, kuard.local, and this is a valid IngressRoute. Now, here I'm demonstrating IngressRoute; this would work the same with Ingress if you're interested, but IngressRoute is just what I picked. So again, we need to find a way to map to this.
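For reference, the IngressRoute behind `kubectl get ingressroute` roughly looks like the following sketch. The `contour.heptio.com/v1beta1` API group is what IngressRoute used in that era; the service name and port are assumptions:

```shell
# A minimal IngressRoute routing kuard.local to the kuard service
cat <<EOF | kubectl apply -f -
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: kuard
spec:
  virtualhost:
    fqdn: kuard.local
  routes:
  - match: /
    services:
    - name: kuard
      port: 80
EOF
```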
Now, normally, in the real world, you would spin up a load balancer. Say you're on AWS: you'd spin up an ELB or some sort of load balancer from wherever you're deploying your cluster, and you would map your DNS records to that load balancer, and that load balancer would then map to all the Envoys in the cluster, spreading the traffic out across them.
But here I don't have a load balancer because I'm running locally. What I can do instead is fake this with my /etc/hosts file. If I edit /etc/hosts, I can add kuard.local and say it's at 127.0.0.1, which means any request to that name is going to map to localhost.
The name will resolve to localhost, and this works because my kind cluster is essentially running on localhost: we've mapped port 80 on localhost to port 80 in the container — the node running our Envoy. So now we have a full end-to-end solution.
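The /etc/hosts change is a one-line append (it needs root, and `sudo tee` is just one common way to do it):

```shell
# Point kuard.local at the loopback address so browser and curl
# requests land on the port kind mapped to the host
echo "127.0.0.1 kuard.local" | sudo tee -a /etc/hosts
```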
So, let's hop out of here. Our application is running — we tested that — so now we should just be able to go to our address. We'll go to http://kuard.local, and there we go: there's our application running.
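The same end-to-end check from the command line: the second form sets the Host header explicitly, which reaches the IngressRoute even without the /etc/hosts entry:

```shell
# Hit the app through the kind port mapping and Envoy
curl -i http://kuard.local/
# Equivalent check without relying on name resolution
curl -i -H "Host: kuard.local" http://127.0.0.1/
```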
Now, I haven't deployed TLS or anything, which is why we're seeing this "not secure" warning, but we can verify really quickly that this is actually the right cluster. This kuard pod is kind of neat: it shows a bunch of different things about the server environment, and you can manage liveness probes and readiness probes and test out how certain things work with your deployments. But what I want to show you is the pod name, here at the top.
So if I go ahead and run `kubectl get pods`, what we see is that this one ends in rbm2l, and I have that one here: rbm2l. If I refresh this page, I'll get a new one ending in jmvbc, and I have jmvbc there too. There you go — so you can see the mapping between the application that's running and my kind cluster, the same one you'd have if you're running this as well. Cool, that's all I have to talk about.