From YouTube: Contour Code Intro & Overview with Steve Sloka
Description
Steve Sloka, core Contour maintainer, does a high-level overview of the Contour architecture, and answers questions from community members on Contour and its code base.
B: Yes, I think that sounds great. What I envision is that architectural view of Contour, maybe spending some time going through the API. It seems like the HTTPProxy API is the API going forward, at least until the Service APIs become stable, and then, as you mentioned, walking through the code.

A: Okay.
A: This request can route to this service or this endpoint within the cluster. I'll skip some of these things here. So here's how Contour is architected, and again this should generically match pretty much any ingress controller out there: you have some traffic coming in, and the Internet is the canonical example, but this could be anything, including on-prem, meaning non-internet traffic. It comes through and hits some sort of load balancer, which directs traffic to a set of Envoys. Envoy is the data path component for Contour.
A: You configure Envoy over xDS, and we implement that: Contour is the xDS server. It spins up a gRPC server, which Envoy connects to, and Contour's job is to look at the cluster and watch for things like Ingress resources, our CRD HTTPProxy, secrets, endpoints, and services. When any of those things change in the cluster, Contour will build a new configuration and pass that configuration to Envoy. That's the flow of how Envoy gets its configuration from Contour.
A: I'm not sure where folks are with Envoy, but the idea here is that on top we have all the different things I just mentioned that we watch for: services, secrets, endpoints, and so on. On the bottom here, these are all the Envoy-isms. Envoy has an idea of clusters, secrets, endpoints, routes, and listeners, and you can see here that we map these Kubernetes objects into Envoy objects.
C: Maybe it's just me, and don't get too far into the weeds on this, I'm just curious: is the graph structure somehow conducive to mapping the data structures between Kubernetes and Envoy? I'm just not entirely familiar with the Envoy data model. I wondered why you called out the DAG; it seemed like maybe it was important in terms of the impedance mismatch between the two data models.
A: No, not conceptually. I mean, they're different: different structs, different data structures that define what the parameters are on each side. The DAG that's there is more for our proxy CRD, because of its hierarchical nature. So let me back up to Ingress today. Anyone can own an Ingress resource, right? I can have a fully qualified domain name and you can as well.
A: Nothing stops us from repeating the same domain and path, and if two resources insert the same Ingress object with the same domain and path, then you've got a collision there. So the idea of our HTTPProxy, which was previously IngressRoute, is that you can delegate permissions down, and this is similar to some of the new models in the Service APIs coming out.
A: Basically, you can say, hey, I own... it's kind of modeled after DNS. So I own stevesloka.com, and then I can delegate permissions down to different namespaces. I can say this namespace has /blog and that namespace has /it. And then, if someone else in the cluster tries to replicate that same domain and path, Contour will see that and throw it out. It enforces that, and it enables that, by building out this DAG in memory.
A: So we have some examples here, and we call them examples because there are a number of ways you can deploy things on Kubernetes. This is the way we think you might want to do it, but again it depends on your network topology and such. In here, what we have is a bootstrap, and that's this init container here.
A: This one here. When this Envoy spins up, it doesn't know where Contour is. Envoy has this bootstrap config, its initial config that you can pass to it, so that init container goes ahead and builds that bootstrap config. It has a couple of things in it, some static listeners, but the big thing it has is the address of Contour, and you'll see that here; let me get my windows right here.
A: There's a shared data volume we use to get that config in there, but that part's not important; it just writes JSON. That JSON then describes where Contour is, so whenever those Envoys start up, here in this diagram, they'll go look for Contour and ask Contour for their initial configuration.
A: Hey, give me all the listeners, the routes, and so on. Whenever Contour sees things change, Contour will signal back to Envoy and say, hey, your resources have changed, and Envoy will get all those updates and messages. But they're not polling; it's a gRPC connection, a rich connection that can communicate back and forth.
A: Cool, so that's the generic overview of how ingress works. This slide talks about how we have Grafana dashboards and Prometheus metrics: we expose all the metrics from Envoy via Prometheus, and Contour itself has some metrics as well that we expose. Again, we can demo that if that's interesting.
C: Is it fair to say, then, that the Envoy deployment is effectively user-managed? Contour itself is not actively managing an Envoy Deployment of any kind, or a DaemonSet itself; it's up to the user, the cluster administrator or whoever, to install Envoy to use in conjunction with Contour, and there's no automation around the management of Envoy, correct?
A: Yeah. What we found is that Envoy scales very well; it consumes lots of threads and lots of network. We did some load testing, and I'd max out the data pipe, or max out the threads and CPU on a server, before I maxed out the instance of Envoy. So if you needed more capacity, running more Envoys on a single node doesn't help so much. You know, say I have a whole bunch of traffic.
A: If I go run ten instances on each node, it's not going to help you as much as adding more nodes to your cluster, I think, and adding more network bandwidth that way. So that's why we went with this model of having Envoy as a DaemonSet, but there's no requirement; you could run Envoy as a Deployment.
A: Another good example of that is this. Because Envoy and Contour are split apart, that communication between them used to be unencrypted, before we solved it. That isn't a big deal until you start adding TLS certs to your Envoys; then anyone could basically intercept that traffic, possibly from inside the cluster, and pick apart your secrets. So we now secure that, and that's what these certs here are; you can see they get referenced by default out of the box.
A: We give you this Job, and the Job will go ahead and create generic certs for you. This is another one of those commands off of contour: contour certgen. It'll generate self-signed certs for you, but there's no requirement that you use this Job. All Contour wants, or recommends, is having certificates. You can also turn off the encryption and say, hey, I want to run insecure, and it'll work fine, but we thought folks might want to have their own.
A: Sure, and they're in there. I guess I can link it out in Slack; I've done this talk a bunch of times in terms of what HTTPProxy is and how it fits and functions. If you want more depth, maybe we make that a different session; we'll do a quick one here, because I want to get into the code.
A: The big thing we wanted to solve, and I can make this bigger, was with Ingress v1beta1, before all the v2 stuff came out: this idea of multi-tenant clusters. The idea is to have multiple users in the cluster self-manage their own ingress resources, but do that in a safe manner, and we do that through this idea of delegation of routing.
A: That's where that DAG came into play: we're able to delegate path and header combinations to other namespaces within the cluster. We also wanted to define things without annotations. Today we've got this annotation sprawl: service weights, load-balancing strategies, 301 redirects from insecure to secure connections; it all has to be an annotation today, just because the Ingress spec is not well-defined enough to have all those different pieces and parts, so we wanted a better way to do that. We also wanted a way to have multiple upstreams.
A: So when you define an ingress resource, I might want to proxy to, say, three different services in the cluster and have it load-balanced across them. Maybe I want to do a blue-green deployment with that, or a canary deployment, but today with Ingress you can't do that very well. Another thing that came up from users is this idea of TLS certificate delegation, which we can talk about as well: being able to separate where your secrets actually live in the cluster from where you can actually use them.
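That separation is expressed as its own resource. A minimal sketch of what it can look like, where the names and namespaces are invented and the field layout follows Contour's TLSCertificateDelegation CRD as I understand it:

```yaml
# Hypothetical example: a secrets team owns the wildcard cert in one
# namespace and delegates its use to application namespaces.
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: wildcard-cert
  namespace: secrets-team        # where the Secret actually lives
spec:
  delegations:
    - secretName: wildcard-tls   # the Secret being shared
      targetNamespaces:          # namespaces allowed to reference it
        - team-a
        - team-b
```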
A: Those were the highlights of why we went to this HTTPProxy thing, and here's again that example we talked about before. I have a team-a namespace and a team-b namespace. They both define /blog for the same host, projectcontour.io, and they reference different services in the cluster. What happens when this gets processed? They have the same host with the same path; it's undefined.
A: So here I have this root proxy, and it owns projectcontour.io; it has this fully qualified domain name. And someone else here, called blog, is a child of the root. The root says: hey, I'm going to give authority for projectcontour.io/blog to this different namespace, to this team. Its authority is given from the root; the root passes off that authority and gives that permission.
A: If someone else comes along and says, hey, I own /blog for the same domain, serve this traffic, Contour will throw it out, because it doesn't have authority for that path in that domain name. We do this with this idea of an include, and we can show that in better detail in a second, but include is the way you actually pass off that authority. You include these different permission sets, these different proxies, and that gives them the authority to use those different paths.
A: So whoever owns it, and it's not always the root, but I always show the root: in this example, the root, where the fully qualified domain name is defined, passes that off with an include. It'll say, hey, include this proxy here, this blog; that's the name of the proxy in that namespace, and then they get the path /blog, and so on from there.
A: With Contour, we went back and forth on whether we should have one CRD, two CRDs, or three CRDs, and we landed on just having one, because the simplicity of having one outweighed the complexity of having multiple. So there's confusion sometimes, like you said: what is a root proxy and what is a child proxy?
A: Yeah, it's one of those things that's tricky to explain, but once you get it, it's not too bad. And by no means do you have to use this; we still support regular Ingress as well. If you just want v1beta1 Ingress, and soon the Service API stuff, then Contour will support that, no problem at all. I don't want to leave the thought that you have to use our CRD; you don't have to.
A: So there are a bunch of different commands that it has. This is the main entry point of contour here, and there are some ways to get configuration into Contour itself: contour takes some command-line arguments, and it also takes a configuration file. We're starting to push things off to config files, because we got kind of overwhelmed with flags, and that seemed like a good way to do it.
A: There are a couple of different commands. One is contour serve, and that's the main one here; this is the serve context. serve is what actually starts Contour up, running as the gRPC server. When you run the contour serve command, which is what happens in the Deployment of Contour, that command is going to start a connection to Kubernetes and start a bunch of watches: watches on services, endpoints, secrets.
A: All those different things; that's the meat and potatoes. It's going to spin up that gRPC server to serve configuration back to Envoy; that's what serve is for. Then we have bootstrap, which we touched on briefly before. bootstrap runs as an init container next to Envoy; its job is to produce a small configuration file for the Envoy living in that DaemonSet, or wherever that's deployed. This gets run as an init container, and it shoves that configuration down to Envoy.
A: Here we go, let's look at our examples. Here is the init container. What it does is mount this envoy-config volume at /config, and envoy-config here is just an emptyDir. Because it's an init container, it has to run first, so this one starts first and generates that config. Actually, we can do it here.
A: On my machine, I forgot, we can run the command locally. It generates that configuration, and it gets written to /config. Once we write that to /config, the Envoy pod, which is this set of containers here, also mounts that same emptyDir volume and references it. So here's where it mounts in that same empty directory, and here is where it gets passed: envoy.json is the config file that the init container wrote.
C: So you have just one image with multiple commands in it?

A: Yep, yep, one image with multiple commands. Okay, and here, actually, let me just run it. If I do contour, there are a lot of commands; the help output is probably a better way to look at it. So here's bootstrap: I could say contour bootstrap, but I've got to pass it an argument.
A: I don't have it in my history; it's a bad demo. Let's see what I've got to give it: the address and such. So we can set the xDS address to something, and then the xDS port.
A: There we go, boom, cool. So this is what gets set up in the bootstrap job: we write out a generic cluster, and that cluster is the xDS server, which is Contour. Envoy has this idea of clusters, and a cluster is kind of like a Service in Kubernetes: a cluster has a set of endpoints, which match Endpoints in Kubernetes.
A: We create a cluster called contour, and we give it an endpoint which matches that address we gave it. I gave it this IP address, but this could be the Service name in Kubernetes as well. So this is the endpoint that the cluster is backed by, and then down here, somewhere, we tell it to use this cluster named contour for its xDS server. Here we go, the Envoy gRPC settings; this is telling it, hey:
A: Go use the cluster called contour, and that cluster then uses those endpoints to go find it. This is what tells Envoy where its xDS server is in the cluster, for LDS and then CDS. And this right here is hiding the admin interface. There's an admin web page in Envoy which is actually pretty dangerous; you can kill Envoy from there, restart it, and do crazy things. So this is just making it listen only on localhost.
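Roughly, then, the bootstrap file contains a static cluster pointing at Contour, dynamic resources wired to that cluster, and the admin interface pinned to localhost. This is an abbreviated, illustrative sketch, not the exact output of contour bootstrap; addresses, ports, and field spellings are assumptions:

```yaml
static_resources:
  clusters:
    - name: contour                  # the xDS server Envoy will dial
      type: STRICT_DNS
      http2_protocol_options: {}     # xDS is served over gRPC
      hosts:
        - socket_address: { address: contour, port_value: 8001 }
dynamic_resources:
  lds_config:                        # listeners come from Contour (LDS)
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc: { cluster_name: contour }
  cds_config:                        # clusters come from Contour (CDS)
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc: { cluster_name: contour }
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9001 }  # localhost only
```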
A: Here we go, stats. We create a generic listener for stats; this is creating the listener for it, and then there's an implementation for that service somewhere. I thought we might have moved that out of the code, and I'm curious why this is here now, but we'll figure that out. Does this make sense?
A: This is all we pass, and there are ways folks have customized this; I wouldn't recommend it at first. All the other things are dynamic: all the other listeners and clusters and things come dynamically through the xDS interface. When Contour first started out, we used to actually poll for xDS changes, but now we use gRPC for that.
A: That's the bootstrap command. What else do we have? There's certgen, which helps you generate certs if you'd like to have it generate them for you; that's what that Job was running. Again, we give you that generic Job you can use to generate certs if you'd like; it's not needed or required, but it helps bootstrap things if you don't mind where the certs come from. Let's dig further into the command; there are a bunch of CLI commands.
A: One of the things that's interesting to see is what you're sending to Envoy: what you're sending for clusters and for endpoints and things. When you're debugging, this can be useful in certain cases; it's probably not as useful when getting started, but it's a way to see what clusters you're sending down to Envoy and such. So I can use the CLI for that.
C: So you showed how Envoy gets bootstrapped, how that works on the Contour side, and how you would deploy them, and now they're up and talking. Now I'm curious to see what the actual processing model looks like: how is the internal state being built up and then served to Envoy, etc.?
C: Sorry, what does this function do, this doServe? What is it serving? Maybe I missed something about the communications model with Envoy: are you establishing a gRPC connection up front, and now Envoy is polling you occasionally, or do we push out changes? Like, how do we get into this function?
A: Good question. So doServe: I'm here in cmd/contour, and doServe starts up the serve command inside the code. The first thing contour serve does is go and create some connections to Kubernetes, using client-go. We've recently switched to using the dynamic client for our CRDs, but we're still using the generic informers and such for all the built-in Kubernetes objects: secrets, services, endpoints, all those different things.
A: We can skip through a couple of things, but this is just setting up all those different watches, setting up the Prometheus endpoint for Contour, because it exposes some Prometheus stats as well, and the event handlers, which are probably the meat and potatoes of how we process events from client-go into Contour itself.
A: We can dig into that in a second, but this sets up all the different components it needs, in terms of its loggers and its metrics, and the holdoff delay, which we can talk about in a second, and then the builder component, which we'll also talk about. This is actually what builds up that DAG, that directed acyclic graph, in memory, and this is where we process the different types. Contour will process v1beta1 Ingress objects, and soon, when v1 comes out, we'll process those.
A: If you want Ingress, we can process it; we can process IngressRoutes; we can process HTTPProxies; and soon we'll do the Service APIs. The piece that helps us convert all those different types is this thing called the builder. The builder takes these different specific types and writes them to a generic type internal to Contour.
A: I should draw a diagram of this, but basically, once you get all those objects written in Contour's internal speak, from that point on it's all the same logic. We convert all of that into Contour data structures, and once Contour has it, we can just pass it off to Envoy. So the only hard part, when you implement, say, the new Service API spec, is that conversion.
A: Here are all the informers; they get wired up. This here is because they moved the Ingress group from extensions/v1beta1 to networking.k8s.io. So this is that flag: if you have a newer cluster and you've turned off the old one, this switch keeps us from throwing a whole bunch of watch errors looking for v1beta1 Ingress. The rest is just boilerplate stuff. Cool, so that's how that works.
A: It's here, the gRPC bit. This is where the gRPC server gets spun up for Contour to serve that endpoint to Envoy; that's when that happens. And then this handles the SIGTERM: if it receives a SIGTERM from Kubernetes, this tells it to shut down, and that's it. Cool. So once you spin up all that stuff, we hook up all those events. I know this is super fast, but a couple of things happen.
A: So here's the thing, the xDS server. This is where you can see where you register for clusters, endpoints, listeners, routes, and secrets; these are all the different xDS interfaces that Contour is going to serve back to Envoy. This is the v2 xDS gRPC API, where all that happens, and then I think this other stuff is just managing the chatter between Envoy and Contour behind the scenes.
A: So when an event comes in, let me find the handler. We create the different informers; informers are client-go-isms, and they're going to go create watches on the API. When a service changes or an endpoint changes, we get the event back from client-go. All of those get registered to this event handler, and we can see that here in serve.go, right here, where we register this event handler.
A: Let's go to services here, that's an easy one. If I go into services, we basically add that informer and register that event handler, and when an event happens, we get a callback here on this event handler. Here we have OnAdd, OnUpdate, and OnDelete, and this is the same event handler across any of the objects; you can see that we're getting passed an interface, because it doesn't matter what type comes through.
A: This is the event handler run loop. The events come in through client-go and get pushed onto a cache; once there's a change, this is where the actual processing kicks off. This event handler runs this event handling loop, and you can see here what its job is: it says, all right, I got an update. Initially, what Contour did was: it received an event, then it goes and processes through and builds out that DAG.
A: It builds out a canonical model of what all the configuration should look like for Envoy, and once it has that, it passes it down to Envoy. Now, the problem we had was that every event would build a new version of that DAG. If you had a busy cluster, a service changing or an endpoint coming or going would cause a change, so we were constantly rebuilding this data over and over and over, and for the most part we were rebuilding it too often.
A: So what we added was this idea of a back-off timer: we wait so many seconds or milliseconds and let all the events come in, and then we go and process them at that time. To try to explain: Contour doesn't need to have the events happen in real time. It needs to know, hey, at this point in time, go create your configuration based on what it knows about.
A
So
it
has
a
cache
of
all
the
services
and
point
secrets
and
all
those
things
that
exist
and
when
it
does,
when
it
builds
that
dag
it
just
builds
it
based
off
of
what
it
knows
in
those
caches.
Does
that
make
sense?
So
the
idea
here
is
that
I
I
can
build
a
cat.
I
can
build
a
a
configuration
and
I
can
wait
so
many
seconds
right
and
a
whole
bunch
of
events
can
come
through.
C: You can actually break yourself?

A: You can. Say you have a running Envoy with all its configuration and then restart Contour. Contour is going to start, and if it builds configuration slowly off of what it sees, it could delete configuration existing in Envoy, not on purpose, but because it didn't have all the data yet. So if you had, say, ten different routes, and Contour at that point had only processed five, then it's going to tell Envoy to kill the others.
A: It would kill the five that don't exist in its view, because it didn't know about the five that were there before. So the idea is to mitigate against that: let all those caches fill up first, and once you have it all, then say, all right, now go build the DAG, and then pass that configuration off to Envoy.
A: Okay, let's skip ahead past all this code here. Hopefully this is helpful. Contour has a bunch of caches; in this cache handler there's a listener cache, a route cache, a cluster cache, a secret cache. All of this gets filled up on update, and then what we do is go build.
A: This builder gets kicked off; the event handler says, hey, go build a DAG, and in here, this is basically the adapter, in a sense. This DAG builder says: go ahead and compute virtual hosts, go work on the Ingresses, then the IngressRoutes, and then the proxies. It's done in this order because we want proxies to trump anything configured as IngressRoutes, and IngressRoutes to trump anything with Ingresses. This is the meat and potatoes of where these go.
A: If I look at the compute-proxies step, it's going to look through all of the proxies that it knows about, and again that's based on what it has in its cache. It says, hey, go compute a proxy, and this is where some of the logic is: it's saying, hey, if the virtual host is nil, then we're going to mark it as orphaned, because no one has delegated to it. At this point, I guess, we've already processed all the routes by now.
A: I believe that's here in this one; no, not that one, the valid-proxy check has to come from here, I think. Yeah, so this is where we pick out all the routes, for all the fully qualified domains: this goes and looks for all the routes that exist and have a fully qualified domain, and processes the logic through from there. Once we have the routes processed, then we can go process all the children. It's going to process through and look for things like the way we can limit where IngressRoutes live.
A: It's going to look for TLS secrets; if you said, hey, this should serve a TLS endpoint, there's the logic to figure that out. Where this comes into play is after it does all this stuff: you'll see this addRoutes method. It's going to do a bunch of error checking and logic and whatnot, and then it's going to go ahead and compute those routes here.
A: Here it's going to pick apart and get the actual route, so /blog, /foo, /whatever. Once it has those, it calls this method, addRoutes, which just adds it to this vhost thing. This vhost object gets returned, and this route object is now part of the DAG. This is Contour speak: we're taking all those IngressRoutes, proxies, all the different types of things, and converting them into this internal structure that Contour manages.
A: Retry policies and stuff; we convert it all to the Contour speak, is what I'm saying. Once it's in this generic Contour speak, everything we process after this is all the same. We're mapping everything into here, and that's the job of the builder. You'll notice in here we have processIngressRoutes and so on; all the different types are in here.
A: So once we go and build the Service APIs support, we'll have a compute-service-APIs step or something similar, with all the logic in there for that; once it gets into the DAG, this thing will build it out. So these are all the pieces and parts that make up that DAG. Then it hits the next step: this builder.Build will call back to this updateDAG, in this event handler here, which is where we were waiting before on the holdoff, the back-off timer.
This
gets
called.
We
go
ahead
and
build
that
dag,
and
then
we
tell
the
cache
handler
hey.
You
changed
something's
different
now
and
then
this
will
go
and
updates
all
the
different
components.
So
at
this
point
and
go
back
here
after
this,
this
builder
dot
build
this
dag
here
contains
all
the
configuration
for
your
cluster
everything,
that's
processed,
everything
salad.
Everything
should
get
passed
down
to
envoy
is
in
this
dag
now
and
then
once
guys,
it's
not
humble
to
speak
yet,
but
it's
ready
to
go.
A: It's been processed and validated and everything, and the cache handler then goes and updates. Let's look at routes, maybe: updateRoutes will go and visit the routes, so we'll dig into that one, and this will go through and pick apart all the different pieces. We sort the routes, putting the longest first, so Envoy will route to the most specific match.
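That longest-first ordering can be sketched like this; the Route type is a simplified stand-in, not Contour's actual dag types:

```go
package main

import (
	"fmt"
	"sort"
)

// Route is a minimal stand-in for a route match with a path prefix.
type Route struct {
	Prefix string
}

// sortRoutes orders route matches longest-prefix-first, so that when routes
// are evaluated in order, the most specific path wins.
func sortRoutes(routes []Route) {
	sort.SliceStable(routes, func(i, j int) bool {
		return len(routes[i].Prefix) > len(routes[j].Prefix)
	})
}

func main() {
	routes := []Route{{"/"}, {"/blog/archive"}, {"/blog"}}
	sortRoutes(routes)
	fmt.Println(routes) // [{/blog/archive} {/blog} {/}]
}
```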
A: Then we build out what the Envoy speak is. For example, a route match is an Envoy xDS type, and it can have a path or a regex; if it has a regex, this projects it in there, and if it has headers, it passes headers. This is what takes the DAG information: you can see the input here is a dag route, and this will convert that DAG route into an Envoy route; we return back this Envoy-route-ism, this xDS route type.
A: And then that gets built up into a cache, and that's what the xDS server will serve back to Envoy. I feel like I'm confusing myself, so you all may be having trouble following along as well, but the goal here, I guess, is this: there's an envoy package and there's a contour package. The goal is to convert the information into this DAG, and then convert that information into Envoy-isms.
A: On to the updates. This method here is processing discovery requests from Envoy; you'll see there's a forever for loop here, and this receives requests from Envoy when it asks for things, but then we can also push things down to it.
A: We'd be sending everything, like we talked about, so there are some things here. We've had some folks with memory issues in high-volume clusters, where we consume lots of memory, and part of that was because we were sending data, the same things, over and over. When Envoy receives a change, it makes a new copy in memory.
A: That's how it can do things hot: it spins up a new set of configuration in memory, lets the old connections drain off, and once those drain off, it releases all that memory back. If you send lots of changes all at once, Envoy then spins up many, many copies of all the data, and we've had some memory issues with that.
A: Yeah, I think in that xDS protocol you can manage how you want that to work, the granularity of it. We could look at delta updates, the delta changes, and a lot of the xDS stuff around this, so I feel like I'm not answering your question all that well.
A: Yeah, there are lots of things around this xDS that can get tricky, and it's confusing at times how the data flows around. But this is where we're responding to discovery requests, here in this xDS code I'm thinking of; I forget where we map it up. I'm happy to go longer, but it's up to you all; we need to balance that. And here we register the xDS things in contour serve.
A: This is the cache handler; remember when we got the updates? In the cache handler, when it gets all the updates, these are the caches that get populated internally to Contour, and when those get populated, we pass that off to the xDS server, and that maps those caches to the xDS endpoints where Envoy looks for things.
B: I think this gets us going in the right direction, and it's probably a good idea to wrap things up so I get a chance to digest the information. I'd like to go back and dive into Envoy, because it has been a while since I've worked with Envoy, and then maybe we circle back together, and if we think there are a few topics that warrant a follow-on, we can discuss those details.

A: Okay, cool.