From YouTube: Contour Office Hours - Sept 3, 2020
A
Heard a quick one: we've got someone that's trying to join, and they're getting a request for a passcode, a Zoom passcode.
B
The passcode is in the link in the wiki, so you can just send them the wiki link there. You can also click here in Zoom, up to your left: you have a little meeting information button, and that gives you the entire invite link, so you can just copy and paste that to them. Okay, thank you very much, sort of thing.
D
Hey Steve, I got a quick question for you: that meeting that we had the other day on getting up and running, is that video on YouTube yet? I was checking, I think, a little while back, and I didn't see it up there.
C
Jonas was on vacation for some time, so he's just backed up with everything. So actually, yeah, he'll do it soon, sooner rather than later. Cool.
D
Hey, I'd love to learn a little bit more about your new release and some of the performance improvements. I think that'd be maybe an interesting thing to talk about.
C
We can do that for sure. Let me grab a window here.
C
Okay, so I can share the desktop, do the whole shebang. There we go. Yeah, so we did a release last night, or, depending on your time zone, yesterday morning. This one was primarily to fix some performance issues. So Matt Moore, he's one of the folks that works on Knative.
C
He was doing some load testing of... okay, so there's this thing called net-contour. So net-contour is the Knative integration: if you run Knative, you can actually run Contour as the ingress controller that does all the routing bits. So this is the repo for that. I think in the Knative docs there's a way that you can deploy this, but this is the basic repo for that. So Matt was doing some performance testing of Contour running in Knative and ran into some things where, on an older version,
C
I think he had like a two-minute delay, and then on the newer ones it was like five minutes or so, something much larger than it should have been. So he did some digging and figured out that some of the issues he had were tied to the status updates. Contour writes status back to the API server for Ingress objects as well as HTTPProxy objects.
C
So things like, you know, the load balancer address for Ingress, as well as just, is the resource configured properly or not. In there he found out that, basically, we were overwhelming the API server with updates and hitting some rate limiting. I think here in the notes Nick put a good description. So when we built the graph...
C
Yeah, we'd hit the API server rate limiting, right. So basically all the status updates would back up, and then they would get stored back in time, so over time that list would keep growing and growing and we'd have to push that through, and that was causing the issue. So to test it, to verify it,
C
Matt went and turned off the status updates and just ran it, and then I think the performance went down to seconds from minutes. So we narrowed it down to that being the bit, and then Matt went and put a PR in here. I think that's this one; maybe Nick did the PR that made the channel buffered here. That was the... I think there was another PR that Matt had filed.
C
I don't know where it is, I forget, but we can find it. But that was the TL;DR. I guess the big thing was that status updates were working fine; it was just a matter of when you had a large cluster with lots of proxies and lots of objects.
E
I think what Matt specifically found was that every time we went to do a status update, we were first getting the HTTPProxy from the API server, even if we weren't actually going to update it, so we were doing tons and tons of gets. So he changed it to use the informer cache to actually get the proxy resource, and to check and see if we're actually going to modify it before submitting the patch. That saved a whole bunch of calls out to the API server.
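The general pattern being described here (read from the local informer cache, compare, and only write when something would actually change) looks roughly like the sketch below. This is an illustrative controller-runtime example, not Contour's actual code; the Contour API import, field names, and the "desired" status value are assumptions for illustration.

```go
import (
	"context"

	contour_v1 "github.com/projectcontour/contour/apis/projectcontour/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateStatus writes status for one HTTPProxy, but only if it changed.
// Reads are assumed to be served from the informer cache, so the only call
// that reaches the API server is the final Patch.
func updateStatus(ctx context.Context, c client.Client, key client.ObjectKey, desired contour_v1.HTTPProxyStatus) error {
	var proxy contour_v1.HTTPProxy
	// Served from the cache; no round trip to the API server.
	if err := c.Get(ctx, key, &proxy); err != nil {
		return err
	}
	// Skip the write entirely if nothing would change.
	if equality.Semantic.DeepEqual(proxy.Status, desired) {
		return nil
	}
	updated := proxy.DeepCopy()
	updated.Status = desired
	// Only now do we talk to the API server.
	return c.Status().Patch(ctx, updated, client.MergeFrom(&proxy))
}
```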
C
Okay, yeah, you're right Steve, thanks. Yeah, here's the PR, so this is 2864, and this one was: replace the dynamic client get, which was uncached, with a read from the informer's lister cache.
D
Go ahead. So how would you define a large cluster? Would it be the number of HTTPProxy objects? How would you define that? Is it just the number of HTTPProxy objects, and is this kind of an exponential type issue, or is it kind of linear, in terms of, you know, if you get to 250 it'll take two seconds, and at 500 it'll take, you know, five seconds or whatever?
C
Yeah, we've had users, I've heard of users... I know when we wrote Gimbal, and this has been some time now, but with Gimbal we tested, I think, with like 5,000 services, and I forget how many endpoints we had. That's kind of getting big. So there's a couple of different things to think about, right.
C
So one of the things that Nick did before was... status updates were tied to the DAG rebuild, and I know this one sort of was as well, but the idea, the goal for status, was to pull it out from the DAG rebuild, so that status updates could happen in a different process, because we have status updates for proxies as well as Ingress objects, so they're in kind of two different places. And I know we're looking to add Conditions support as well; I believe Nick has a design and some PRs on that.
C
Y'all might know this, but I'll just do it, because it's nice to look at things: kubectl get proxies.
C
Yeah, so down here you can see that we have status. So right now we have, basically, is it valid, thumbs up or thumbs down, then a small little description, and then the load balancer, if you have one. This is actually running in my house, so there's no load balancer in front. We're still looking to add a set of Conditions to that, and Nick has a design doc here for Conditions, which outlines basically that this can become a set now, and this is mirroring the upstream Service APIs work.
C
So the idea here, instead of having basically "here's the thing that's wrong", is "here's a set of things that could be wrong", or just informational about what's going on with that resource. So that's going to get tied into all this as well. Anyway, back to your original question: that was status, and the issue we saw here was that writing that status back basically slowed down the client-go client in Contour, and then that made other updates happen slowly.
C
What makes a big cluster is, yeah, the more services there are, the more endpoints, and the more objects like Ingress objects, the busier it is. We've never really had an issue with the DAG rebuild time taking too long, but there is a back-off in there. So if you ever look at the logs of Contour, you'll see there'll be, like, this "last update outstanding" one; there's a back-off timer basically baked into there.
C
So we moved to a way where we kind of batch up a bunch. So every time we rebuild the DAG, it's based on the current state of what Contour knows about the cluster, right. So Contour doesn't process, like, "oh, the service got added, now let's go add that to the configuration and then send out that change". Contour says, "hey, here's all the services I know about, here's all the routes and all the objects I know about", builds a configuration off of that, and sends it out.
C
So it kind of burns up more CPU cycles in a sense, just because you're always recalculating from scratch, but it gets you out of that mode of having, you know, issues where you get out of sync with the reality of what the cluster is doing. So the DAG update here has a back-off to it: it'll wait a certain period of time, I think it's 100 milliseconds or something, and if it doesn't have any other changes in that time it rebuilds off of a certain set of objects.
C
So, yeah, that doesn't help answer your question either, I don't think. So scaling is... No, it's fine, yeah. I'm trying to talk through it because I don't have a good answer. I know it's come up before, like, what resources should I give Envoy and Contour in terms of memory and CPU, and a lot of it depends on lots of things. You know, I think before I did a proof test of things; this is a while ago now... resource limits, yeah.
C
This is on 1.12, so this has been a while, but this was the test that I sort of did. What did I throw at it? Yeah, so when we spun it up, Contour had 10 megs of memory and Envoy had 15, and then I threw 5,000 services at it.
C
Without any traffic, Envoy stayed the same and Contour used 46 megs of memory, and then I started throwing traffic at it. So with 10,000 services and 10,000 Ingress objects, Contour used, what, two gig? That can't be right... yeah, 3,600 or so.
C
That seems like too much, so we should do this again. And then Envoy went to 430 megs, which is, you know, good. And then I think we did...
C
This makes no sense: with zero traffic, this one had no services or Ingress objects, but we sent traffic at it. So we should redo this. I'm going to bet there were a bunch of different objects in there, but this ties into that as well. You know, nothing was crazy in terms of numbers, yeah. We used to have memory issues, and I think that got fixed with a couple of things in Contour, as well as an update in Envoy.
D
Yeah, there was an interesting link I saw, Steve. It was about... I've got it, I've got like a little gist where I keep a bunch of Kubernetes links. I'll send that over in the Contour channel, but it kind of talks about Kubernetes limits too, which may be interesting, you know, to take into account.
D
They kind of talk about how, you know, when you're trying to figure out, like, okay, what are the cluster limits, right, it's actually a complex question, and I think they make some kind of good points in this little article. So...
C
I know we did a lot of load testing of Gimbal. Gimbal's job was to make sure we didn't add extra latency to the request, so from the external load balancer to the backend, you know, service pod; in that case it could have been an OpenStack VM. We were trying to see whether we added too much latency to that request, and we found that we didn't. But where was I going with that? Oh, but we found that we would...
C
We would peg the network much faster than pegging the CPU or memory of the machines. So the network became, you know, congested and filled up way before we ran out of memory or CPU. So again, that's not the same thing, I get it, but it showed, I guess, that Envoy was being a very good proxy, as it should be. I would expect it to not use a whole lot of memory, yeah, but it kind of depends on what's going on. You know, we don't have...
C
I don't have many production clusters that I can see running Contour, you know, from where I sit as the open source person.
C
So I think I'm rambling now, but it all depends, right. So yeah, the more stuff you have, the more it's gonna have to chug through and process, but we haven't really had a whole lot of issues with that.
D
Oh yeah, no, absolutely, I just thought that was really cool. You know, I was trying to better understand, and it's kind of interesting, because for the past couple of weeks I've actually been doing a lot of, like, you know, Kubernetes automation, essentially integrating with the API, and I've been working with informers and, you know, watchers and the whole... I think, essentially, what you're saying is you guys went from polling, you know, essentially performing a bunch of get requests, to kind of event-driven, right, for updates and stuff.
D
So, you know, definitely, you know.
C
Yeah, I know, yeah. In the first version of Contour, Envoy would poll Contour for xDS updates, and that was before, like, the gRPC stuff existed, I think, right. And then Contour went and implemented the gRPC interface, and then we could stream that stuff down. So yeah, that was...
G
When the changes are then sent to Envoy, what is it that makes it work so that the new configuration that replaces the whole configuration is somehow taken into use in a consistent manner, like as a transaction, so that every resource will be in place and there aren't any hiccups while streaming these new configuration items?
C
Well, there could be some hiccups. I mean, that's actually another issue that Matt has raised with Knative, that he's seen. So let me see if I can find that one. So basically, what happens is, if I can type and talk at the same time, Knative had some issues where they swap out services quickly.
C
So they'll have, you know, service A configured, and it'll swap to service B quickly, and what can happen is that the change will hit Contour at some point and then Contour's got to stream that down to Envoy. So there could be, you know, a route configuration change, as well as a cluster configuration change, and then a set of endpoints.
C
Those three things need to hit Envoy all at the same time, to make sure that you don't get, you know, a down or a 500 or a bad request from a user. When that happens, like, super quickly, as in some of Matt's Knative tests, which are happening very, very quickly, he has had some issues where they broke, where he, you know, didn't get a successful response code back from Contour.
C
I believe Matt did a change in this to handle that. So there are two things we can do to address that in Contour. So, one: this is happening so quickly, I think he's finding it because it's a unit test, but I guess it can happen. Part two is that we could switch to a thing called ADS. So right now, Contour exposes a gRPC connection for every xDS resource. So I guess I could show you, I'll have a slide here, real quick. Let me go find this slide.
F
Okay, I'll try this back.
C
Maybe it's this one. Who else has, like, Google Drive issues of finding the right account? Here we go, yeah. This is a good picture; this is a good view of what Contour does, right. So down here at the bottom, these are all the different services that Envoy offers. There are some more, but this is what Contour implements. So you can think of Contour as a translator between Kubernetes and Envoy objects: any Service in Kubernetes gets translated to a cluster in Envoy down here.
C
That's called CDS. Secrets, or certificates, that we pull in from the cluster get mapped to SDS. Endpoints go to EDS, routes hit RDS (endpoints are essentially, they're actually cluster load assignments; I'm sorry, endpoints are cluster load assignments), and then Listeners map to LDS, which are listeners. So you can see here that Contour watches for all these objects at the top, builds that DAG in it, and then passes that down.
C
But there could be a time... and so every one of these services down here becomes a gRPC stream, or connection, between Contour and Envoy. So one, two, three, four, five. So now, if you have one Envoy with one Contour, you're going to have five gRPC connections to Contour today. There's a thing called ADS, and what that does is it proxies all these different xDS endpoints over one gRPC connection.
C
So then you can have, basically, if they're one to one, every instance of Envoy has one gRPC connection back to Contour, and that can get expensive the way it is today, right. So if you had 100 Envoys, you're going to have, what, 500 gRPC connections back to various numbers of Contours in the cluster. So then there are things you turn on, called ADS, or... no, what is it... ADS?
C
It's been a while. What's the thing... there's a validation when you do this. Here, we'll go to go-control-plane.
C
There's a flag you turn on that tells it to basically do what you're asking about. So it validates the objects that you're sending. So if you send it, say, a route related to a cluster that doesn't exist yet, go-control-plane will reject that information until you get that configuration updated properly. So then, when you're streaming that information down, you're only streaming, you know, a full manifest that's valid.
C
Yeah, yeah, so we're looking to move to this go-control-plane. So I have some PRs now that I'm pushing through, and we're going to actually add this in parallel. What we don't want to do is swap out the whole back-end xDS control plane and say "here you go, run this in production", because, I'm not scared of it, but it's just a big change. So we're gonna let this run in parallel, so we can start...
C
You know, turning this on in certain places to see how it works. So the difference, what go-control-plane does, is that it is transactional. You create these things called snapshots. There's probably an example here, yeah: so you create, like, a new snapshot cache and then you generate a snapshot, and that snapshot is sort of like "here's the listeners, here are the clusters, here's the routes", all the things that you care about, and then we'll send it over, and that gets passed over to Envoy.
C
I think it's that, and then there's just less churn, because every gRPC stream can be expensive in Contour, right, having that resource configured and running all the time. When we had memory issues before... this is actually in this talk here. Some of this... well, the memory issues we saw, like this one is about 70 gigs of memory, which is a lot, but some of that was just gRPC connections that were being maintained, because this user had a very large cluster that they were working with.
C
So that helps remove a lot of that churn, a lot of that overhead you might see with memory. And then... it is called ADS. So here in the code, when you create the snapshot cache, this false or true is whether you want to turn on ADS or not, and when you turn that on, it adds a consistency check to verify that the caches all have the right things in there.
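As a rough illustration of the snapshot model being described, a v2-era go-control-plane setup looks something like the sketch below. Treat it as an assumption-laden sketch rather than Contour's actual integration: the function is made up, the resource slices are placeholders, and argument lists (especially NewSnapshot) differ between go-control-plane releases.

```go
import (
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
	cachev2 "github.com/envoyproxy/go-control-plane/pkg/cache/v2"
)

// publish builds one snapshot (the whole desired config under one version)
// and applies it as a unit for a given Envoy node.
func publish(nodeID, version string,
	endpoints, clusters, routes, listeners, runtimes []types.Resource) error {

	// The first bool enables ADS: one gRPC stream carries every resource
	// type, and the cache adds a consistency check across them.
	snapshotCache := cachev2.NewSnapshotCache(true, cachev2.IDHash{}, nil)

	snapshot := cachev2.NewSnapshot(version, endpoints, clusters, routes, listeners, runtimes)
	if err := snapshot.Consistent(); err != nil {
		// e.g. a route referring to a cluster that is not in the snapshot.
		return err
	}
	return snapshotCache.SetSnapshot(nodeID, snapshot)
}
```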
C
So that should be coming very soon, this new swap-out to go-control-plane, and there'll be a flag. Again, it will default to the current Contour xDS server, but you'll be able to opt into it. And then, once we have that in, that'll help set us up for a way to move to version three of xDS, v3. Right now we're still on v2, and at the end of the year Envoy is going to deprecate v2, so it won't be supported any longer.
C
He's not here, so I can blame him, right. They're cool when you're reviewing, because it helps, but it also gets in the way sometimes. Plus my font's big.
C
I don't know where it is here. I can just show you in here.
C
Time... I don't know, Steve. I know on today's talk we talked about doing... I can show you this, if it matters or is at all interesting, how the DAG gets rebuilt and how we convert from events to xDS.
E
All right, yeah. So over the past couple of weeks, I'm sure some of you have heard me talk about this already, but I was working on putting together a guide and some sample configurations for using Gatekeeper alongside Contour.
E
For those of you who aren't familiar, Gatekeeper is related to the Open Policy Agent project; specifically, it's an admission controller for Kubernetes that uses Open Policy Agent and Rego to allow you to define kind of policies, or constraints, that you want to enforce as part of the Kubernetes API server processing, as resources are being created or updated. And so the idea here, and what got us thinking about this, was that, you know, currently, as a user of Contour, if you're defining HTTPProxies, Contour will do a bunch of validations on those and ensure that they're configured correctly.
E
But currently that doesn't happen until after the HTTPProxy resource has been created in the Kubernetes API and the DAG is actually processing. And so the way that you as a user would see if one of your resources is invalid is that eventually Contour would set a status on it that says this proxy is invalid, and it would give you at least one of the reasons why it was invalid. And so that's fine, but it means that as a user you don't get immediate feedback.
E
You know, whatever validations you want to apply there... so, you know, if we're using an admission controller, we can enforce all of those validations on the proxies, to ensure that required fields are populated and that there are no conflicts with other resources.
E
So anyway, yeah, I wrote up this guide, which is kind of a walkthrough for how to use Gatekeeper with Contour, and we've started to create some example configurations for that in the Contour repo. So I'll just kind of walk through some of this. I'm going to loosely follow the guide, but I may go off a little bit, and definitely, if anyone has questions as I go through it, feel free to jump in. So yeah, the first thing...
E
So I have a kind cluster set up, and I have Contour installed, but I don't have Gatekeeper installed yet. So the first thing I'm gonna do is install Gatekeeper. My keyboard's being super laggy here, so I'm actually gonna hold the laptop off the stand and just do this.
E
So yeah, the first thing I'm gonna do is deploy Gatekeeper, and getting an off-the-shelf Gatekeeper install is super simple. So if you look at this command here, this is basically pulling some YAML out of the Gatekeeper repo, so I'm just gonna run this.
E
And so this is going to set up a Gatekeeper namespace, so we've got the gatekeeper-system namespace now, and if we look at the pods that are in there, they should be up and running by now... yep. So you see, we've got a few different pods, or I guess three replicas of the controller manager for Gatekeeper, and then also an audit pod, which we'll get into in a little bit. So that's the first thing, and then what we're going to do next is apply some configurations to it.
E
So the first thing that we're going to do is... if you go into the Contour repo, under examples, we have a gatekeeper directory now that has all of the sample YAML that we're using, and the first thing we need to do is configure Gatekeeper's cache.
E
So, by default, you know, Gatekeeper operates as an admission controller, which means that, essentially, when the resources that you configure are getting passed to that admission controller, Gatekeeper will see the full spec; so it'll see the spec of that HTTPProxy. But if you want to write admission checks that look at other resources in the cluster, you need to configure Gatekeeper to have those in its cache. So in this case I want Gatekeeper to keep in its cache all HTTPProxy resources, and that'll
E
allow me to write admission checks that look not just at the resource that's being created, not just at the proxy that's being created, but actually look across all the other proxies, and so that allows me to do things like enforce unique FQDNs across all the proxies. So yeah, what I'm gonna do is, let's see, apply that config. That'll tell Gatekeeper to cache all those proxies, and then we're good to start going through this.
E
So we've sort of organized the Gatekeeper constraints into what we're calling validations and then policies. Validations are basically things that Contour always requires to be true. So, you know, an example of this is that every HTTPProxy has to have a unique FQDN, and that's something that Contour already validates internally and will mark your proxy invalid if it uses the same FQDN as another one. But we can implement that as a Gatekeeper constraint as well and get earlier feedback.
E
You know, another example of this would be that every proxy has to define at least one route, or one include, or a TCP proxy; so essentially it can't be empty. And so those things always have to be true for every Contour install, and they're already being validated internally in Contour. Separately, we have policies, which are things that a particular Contour administrator may want to enforce for the cluster, but they're not required for Contour to run correctly.
E
So examples of this may be that you don't want to allow any proxies to have timeout values of, you know, greater than two minutes, for example. So that's not something that Contour requires, but you, as the Contour administrator, may want to enforce that for your setup. So we'll look at the policies more in a minute, but I'm going to start just by looking at the validations.
E
All right, so yeah, this is what's called a constraint template, and it enforces that essentially every proxy that you define has to be non-empty, meaning it has a route or an include or a TCP proxy. And so, within Gatekeeper, you have constraint templates and then you have constraints.
E
So, you know, they sort of are what they sound like. The template is kind of generic: it basically defines the Rego that actually implements the admission control check, but it can have parameters, and then you can instantiate that template into constraints with different parameter values set. So actually, neither of my validations has any parameters, but we'll see some with parameters when we get into the policies.
E
So if you're not familiar with Rego, it takes a little bit, or at least it took me a little while, to kind of wrap my head around.
E
So yeah, I'm actually going to apply this whole directory. I'm going to have to do it twice, because when you define a constraint template, Gatekeeper automatically creates a custom resource definition to represent that constraint template, and then, when you define a constraint, it's essentially instantiating instances of those CRDs.
E
So now, if we look at... we can just say "get constrainttemplates", and we'll see that this non-empty constraint template that I was talking about, and another one that is called unique-fqdn, have both been defined. And then what we can do is... so each of these is represented as a CRD, so we can actually say get the httpproxy-non-empty CRDs, and there's one of those, which corresponds to this file here, and so this is just instantiating that template into an active admission control check.
E
So anyway, let's get into looking at an actual proxy. So we've now said that every proxy has to be non-empty and it has to have a unique FQDN. So the first thing I'll do is, in this file, I'm just going to comment out all these routes, right. So we now have a proxy here that defines a virtual host with an FQDN, but it doesn't have any routes or includes or TCP proxies, so this should be invalid.
E
So if we just apply this... so now you get an immediate error from the admission controller, from Gatekeeper, that says the proxy must define at least one route, include, or TCP proxy. And so, if we look in the cluster, we'll see that the proxy wasn't actually created; it was rejected by the admission controller.
E
So in the past, without Gatekeeper, you know, if you created a proxy like this, it would create successfully, but then it would eventually be marked invalid because it didn't define any of these things. But now you get more immediate feedback and you can go actually remediate your proxy. So yeah, so now I've added routes back to that proxy, I need to save it, and so now it creates just fine, since I've actually added some routes in there, and then so...
E
So if I apply this again, the first one didn't change, but when it goes to try and create the second one down here, which uses that same FQDN, we'll see that we get an error, and it basically says that it must have a unique FQDN.
E
Yep, so now, if I remediate that, then they both create just fine. So yeah, that kind of covers validations. So hopefully you can see how that's useful, and I think, you know, over time we want to add to the list of validations that we have here, so that essentially everything that Contour is currently checking internally within the DAG build, any of those that make sense to validate up front, gets validated as part of the Gatekeeper run.
E
So then the next thing to move on to is the policies, and these are, again, things that Contour doesn't require to be true, but that you, as a cluster administrator, may want to enforce for your cluster. So a good example of this, that we've heard before, is around enforcing timeout ranges: Contour lets you define various timeouts for your proxies, but some administrators may want to limit the allowable values for those timeouts.
E
So I'm not going to go into detail on what this Rego does, but the important thing is that this is the constraint template for timeout ranges, and it takes three different parameters. It takes a minimum and a maximum, so you can define either a minimum or a maximum, or both, for your timeouts.
E
So you could say, you know, I don't want any timeout to be greater than two minutes. And then it also takes a field as a parameter, and this basically is just a way of reusing this one template across all of the different timeout fields. So when you actually instantiate this template, you tell it which timeout field to validate as a parameter, and then you give it the range.
E
So, the timeout range: that defines the template, and then I'll do the idle-timeout-range constraint. So now we've said that all the idle timeouts have to be at most five minutes. So I'll go back into my proxies here, and I'm just going to work with the first one for now.
E
So what I'll do up here is, first, I'll define an invalid proxy: I'll put an idle timeout and I'll try to set it to six minutes. So let's see what happens here when I try to apply the proxy... all right, so we get a big error here, which is a little bit difficult to read.
E
But if you look at the top line here, we essentially got an error message saying that the idle timeout must be less than or equal to five minutes. And you'll note that in this case this proxy already existed in the cluster, and since I was reapplying it, it was trying to update it. So by default, Gatekeeper is configured to run on both creates and updates for proxies, so it'll prevent invalid configuration for either of those.
E
So that's pretty nice, and you know, if you omit the timeout policy entirely, it also goes through just fine. So it's only gonna enforce that if you actually specify a value there.
E
So let me set this back to five minutes real quick, just to demonstrate one more thing: the maximum value is inclusive, which is why it lets me set an idle timeout of five minutes. So let's imagine now that, as a Contour administrator, I actually want to change my policy. So previously I had a max idle timeout of five minutes, but I want to change it to one minute now.
E
So the challenge here is, you know, when you roll out a new policy like this, you don't necessarily want to break all of the configurations that already exist in the cluster; you don't want to break all your routing. And so Gatekeeper lets you do this. So if we apply this file, policies...
E
So it applies just fine, but we know that our proxy that's defined is actually technically violating this policy now. And so Gatekeeper has the concept of an audit, and what it does is, every minute by default, it looks at all of the resources that exist in the cluster and essentially reruns all of these constraints against those resources, and then it will report, in the custom resource, if there are any violations against those policies. So let's see, so if we do...
E
So it's going to run every minute and show an audit report in the status for that constraint. So you can see here it's actually already run since I made the changes. So it shows you the timestamp that it last ran, and then, at the bottom here, if you have any violations of the policy, it'll surface them here.
E
So in this case it tells you that there's an HTTPProxy called p1, in the default namespace, that violated this constraint, and so now, as an administrator, you can periodically check this and then, you know, go tell your app developers, or the folks who are responsible for defining the proxies, to go remediate their proxies; and once they update that value in the proxy to be within the valid range, this violation will go away from the audit report.
E
So yeah, I think that's pretty much all I wanted to demo. So, you know, we have a few other examples of policies that you might want to use within here, but certainly, if you're interested in using this, you should feel free to customize these or add new ones.
C
I don't think this is a done list either. I think there's also more we could add, you know, to the stuff you've set up, Steve, so in terms of, like,
C
all the things that you could potentially configure, so add more. You can comment... obviously, some of the values, like, excuse me, the timeouts, need to match what your specific use cases are, but by no means is this all. So this is intended to be a library of things that you all can leverage to then, you know, make your clusters more consistent, I guess.
H
So I'm pretty new to using Contour, and the other day we had an outage in production. It was very brief, but it was related to our Contour pods dying, and we lost all routing inside of our cluster, or something to that effect. Good ways to diagnose or reproduce these kinds of conditions would definitely be something that I'm interested in, but, like I said, I'm very new to all the Contour stuff at the moment.
H
Yeah, the pods were killed, and so we no longer had dynamic configuration for Envoy, and so Envoy stopped serving traffic for us for a brief period of time.
C
Because usually what should happen is Envoy should... it's kind of like Kubernetes, right: if you pull the API server out from under Kubernetes, it'll use kind of what it had last, so...
H
Yeah, that's what I would have assumed, but yeah, we experienced that outage. We haven't fully figured out why it happened yet, or what happened; the only thing we could diagnose at the time was that the Contour pods were dying.
C
Okay. I know there was one user that came up that said at one point they had, again, a very large cluster, this is back to your question earlier, and I think what happened was, when they spun up a new instance of Contour and Envoy would switch to it,
C
it didn't have all of the configuration yet from the cluster, because it was still churning through all of it, and what happened was that Envoy would have a working config that was valid, but then, when the new version of Contour spun up, it didn't have all of the information yet, right. So basically it would tell Envoy "hey, here's your new config", and it would trash a whole bunch of routing configuration out of Envoy until it caught back up again; then it would send it over. I know Contour...
C
We have a wait now, and we should probably add some feature tests for this; it would be hard to test, but there's a wait: basically, wait until all of the objects get cached by Contour, then process the DAG and move on, to try and avoid that situation. That's the only thing I could think of that would happen: possibly, if somehow what Contour knew about the cluster changed, and then it reprogrammed the Envoys incorrectly.
H
It was five minutes of torture, but we made it through.
A
Gotcha, cool. I had a question about the wait. Is it effectively that it doesn't bring up the xDS connection until it believes it has a complete set of the information, or...
A
How does it prevent Envoy from pulling invalid configuration?
C
I think, yeah... so, thank you. I think what we did... I could show you the code. I think what Contour should do is it shouldn't start the gRPC server, that Envoy uses to connect, until that happens. So that would be over here; it should be in serve.
C
This should wait for the cache, the internal client-go cache, to get all the resources out from Kubernetes first, before it does it. But I think this was a first step. I think we should also add a check that waits for the first DAG to get rebuilt, right. So it's not so much that having all the objects is great; Contour may not have processed them all through the DAG yet.
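The "wait for the caches before serving xDS" idea maps onto client-go's standard cache-sync helpers. A minimal sketch, assuming a stop channel, a slice of shared informers, and a gRPC server set up elsewhere (the function and wiring are illustrative, not Contour's actual startup code):

```go
import (
	"fmt"
	"net"

	"google.golang.org/grpc"
	"k8s.io/client-go/tools/cache"
)

// waitThenServe blocks until every informer cache has completed its initial
// list from the API server, and only then starts serving the xDS gRPC API.
func waitThenServe(stopCh <-chan struct{}, informers []cache.SharedIndexInformer, srv *grpc.Server, l net.Listener) error {
	synced := make([]cache.InformerSynced, 0, len(informers))
	for _, inf := range informers {
		go inf.Run(stopCh)
		synced = append(synced, inf.HasSynced)
	}
	// WaitForCacheSync polls HasSynced until every cache reports a sync.
	if !cache.WaitForCacheSync(stopCh, synced...) {
		return fmt.Errorf("timed out waiting for informer caches to sync")
	}
	// A further refinement, as discussed above, would be to also wait for
	// the first DAG build to finish before accepting Envoy connections.
	return srv.Serve(l)
}
```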
A
Yeah, I'd be interested in debugging and working through this, so yeah, I'll definitely take a look.
C
There should be an issue somewhere on this, I think, but if not, go ahead and create one and we can just merge them together. Go ahead, Joseph, were you gonna say something else?
A
Not much else, other than, if anyone is interested, for people not aware: I have contributed support for Contour in external-dns, which we've been using in production for a couple of months. So if anyone is not using that and is interested, there's a PR open that will hopefully get merged soon. Yeah, it makes working with Contour HTTPProxy objects a little bit nicer, because previously you had to use Ingress if you wanted it to work with external-dns.
C
This is at 1628, yeah. So once we get this in, then, yeah, you can just use your HTTPProxy to then manage external DNS. That's awesome. And you've been using it in production, you said, so that's...
A
Yeah, we've been using it for some time. It works well for us, so hopefully it'll be in soon for everyone else.
C
Very cool, that's fantastic. I think the next thing to do, then, is to update cert-manager to know about proxy objects; then we can ditch the...
C
Very cool. And it comes back to... I know some folks are working on... there's a Bitnami Helm chart folks are working on, in terms of deploying Contour. I think we're looking to move that into the Contour repo proper, just to have it, you know, live in our space. And there are some folks also looking at building an operator for Contour.
C
So if anyone's interested in that, there are the underpinnings here of starting that out, just to play around with and see how that might look, so you can, you know, have CRDs and stuff and build out all your configuration that way.
A
Yeah, is there an issue, or any documentation, on how to approach cert-manager support proper? So that's, like, when you create an HTTPProxy, have it automatically allocate, like, a Let's Encrypt certificate.
C
Yeah, I believe they moved... Last time I talked to James, James Munnelly on cert-manager, and this is probably a year or so ago, they were looking to switch how cert-manager used to do these conversion webhook things. So the idea was, yeah, you would register a callback, to say "hey, go implement the callback hook", and then, once it's there, it'll continue on. I believe that's in there now, in the repo, to do that.
C
So now we just need to wire all those bits up and figure out... I mean, assuming Contour would expose that extra endpoint that cert-manager could call back into. So we need...
A
To figure out what that looks like, the ACME request, right.
C
Yeah, yeah. I think basically cert-manager will say "hey, I need you to go serve this URL with these bits, with these validation tokens or whatever it needs; you know, go create a resource with this", and we can go generate a proxy object under the hood to then do this. I don't know if it should be in Contour or if it should be a separate tool; I don't know, because you'd have to give Contour write access to create these proxy objects, right.
C
But yeah, we should figure that out, because that'd be great, because then, once we do that... because we're chatting about that... so Steve Kriss, online, who just gave us the demo, did a bunch of work to split out how the builder types work. So in Contour we had this big file called builder.go, and what that did was, it understood what an Ingress object looked like, what an HTTPProxy object looked like, and, coming up, we're going to add the Service APIs work into Contour as well.
C
It was all kind of smashed together in one big giant Go file, so Steve did a lot of work there to help split those apart. So now we have these processors. So here are the Ingress bits, you can see that; so now the Ingress processor knows just about how to process an Ingress object.
C
So now, once we... what I guess our question was, was: should we support all these things concurrently? Like, should Contour be able to serve Ingress objects alongside, you know, proxy objects in the same cluster? Because there are some weird things that can happen when you have things overlap, and we've been reluctant to do that yet, just because we don't have all the other things set up in the external apps, like cert-manager, external-dns, all those sorts of things, if that makes sense. So...
C
Boom. Well, not me, that's just a habit there. I saw Steve and I went nuts. There we go, cool. Yeah, and then just, if you want to send that in... I was looking for that issue. Was it cert-manager? But then there was another one we talked about, the startup time thing, right. Startup, yeah, all right. We don't have to just be looking at GitHub issues. Anything else anyone wants to talk about? Or, like, we can demo something or chat about something.
C
We can do that. We did that in a couple of previous ones, but happy to do it again, happy to show whatever y'all want to see, yeah. Oh, you did add that to yours, Steve, cool, yeah. The second one was how we watch for events, how we process those events and convert that to Envoy xDS bits.
A
It would be cool if you could walk through the process where it builds up from the information from the informer to the DAG, yeah. Not necessarily in a lot of depth, but it would definitely help build a mental model.
C
Yeah, sure. Let me get off of the branch I'm on, because I don't know what I've done there; let's pull in latest, and then...
C
All right, cool, so now we're latest and greatest. Okay, so Contour... I have a picture here.
C
Maybe you all know this, so yeah. So Contour is an ingress controller, obviously, and we have to watch... so we watch Kubernetes for certain things, right. We look for them using client-go, so client-go is the tool we use to go talk to the API. So we watch for Services, we watch for Endpoints, we watch for Secrets.
C
We watch for Ingress objects and HTTPProxy objects. So when you deploy all that stuff... in, let's see, it's in cmd, in serve.go, so this is where things start. Oh, is this big enough? If I do this thing, it's like you can't see anything.
C
I'll just zoom in by hand. So, cmd/contour, there's serve, right. So there's a bunch of commands that make up Contour. There's "contour serve", which is what serves, you know, the xDS endpoints to Envoy. There is the certgen job, which the Job runs, so that's "contour certgen", which generates certs. There is the shutdown-manager, and the shutdown-manager's job is to help you reload or reinstantiate an instance of Envoy; it helps shut down Envoy cleanly, so it can drain connections.
C
This is kind of where all the stuff gets wired up, and then from there we can break apart into the other bits. So yeah, we create some informers, and we're using the dynamic client from client-go. Part of that was to incorporate the Service APIs work, because they didn't have a typed client, so we went to the dynamic one, so we can just pull in new objects on the fly without having to regenerate a client inside of Contour. Okay, cool.
C
So this is where the first bit starts, this informer list. So these default resources define the GroupVersionResources we're going to watch for. So here you can see there are HTTPProxies; oh yeah, there's the TLS certificate delegation stuff, which we should talk about, that doesn't get enough love for how kind of cool it is; here are Services; then here are Ingresses.
C
So those are what we call the default ones down here. Here's the Service APIs bit: so if you have Service APIs installed, which I doubt you do yet, because, well, Contour doesn't have support for it yet, but once we add support, then this will get spun up. This does it for Secrets.
C
So there are some times where we don't watch the whole entire cluster for secrets, and that's if you set the root-namespaces flag; then we'll only watch those namespaces for secrets, and that's for security, because we don't want to, you know, watch everything if we don't need to. Here are the endpoints bits, and then here are services again, which is interesting, that it's there as well; I need to figure out why that's there, but anyway. So this is where we create that list here, to inform on everything.
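For a rough mental model of "create informers for a list of GroupVersionResources with the dynamic client", here is a minimal client-go sketch. It is illustrative only: the GVR list is a placeholder (Contour's real list also covers HTTPProxy, TLSCertificateDelegation, and so on), and the handler is the event handler sketched a little further below.

```go
import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// startInformers wires one event handler to a dynamic informer per GVR.
func startInformers(cfg *rest.Config, stopCh <-chan struct{}, handler cache.ResourceEventHandler) error {
	dynClient, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(dynClient, 0)

	// Placeholder watch list; swap in whatever resources your controller cares about.
	gvrs := []schema.GroupVersionResource{
		{Group: "", Version: "v1", Resource: "services"},
		{Group: "", Version: "v1", Resource: "secrets"},
		{Group: "networking.k8s.io", Version: "v1", Resource: "ingresses"},
	}
	for _, gvr := range gvrs {
		factory.ForResource(gvr).Informer().AddEventHandler(handler)
	}
	factory.Start(stopCh)
	return nil
}
```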
C
So this kicks off the informer... pop this in here. So that's where we get that, and then what happens is we add these event handlers to it. So Contour manages its own internal cache of resources, right. So basically, when something changes from these informers, we'll get an event for that, and then we'll wire that through, and that should be this dynamic handler, which is here. So the event handler... this bit here is where the meat and potatoes of all of the event handling comes in.
C
So let's hop into this and see. Yeah, you'll see here what we have is this interface: add, update and delete. That's what the client-go lister is going to give us when things get added, updated or deleted. So this is implementing that interface here, and then down here...
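The add/update/delete interface being described corresponds to client-go's ResourceEventHandler (exact method signatures vary slightly across client-go versions). A minimal, illustrative handler that copies objects into a local cache and nudges a rebuild channel; the types and field names are made up, not Contour's:

```go
import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

// eventHandler keeps a local copy of objects keyed by namespace/name and
// signals that the DAG should eventually be rebuilt.
type eventHandler struct {
	mu      sync.Mutex
	objects map[string]interface{}
	rebuild chan struct{} // consumed by the debounced rebuild loop (sketched later)
}

func (e *eventHandler) upsert(obj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		return
	}
	e.mu.Lock()
	e.objects[key] = obj
	e.mu.Unlock()
	e.signal()
}

func (e *eventHandler) signal() {
	select {
	case e.rebuild <- struct{}{}:
	default: // a rebuild is already pending
	}
}

func (e *eventHandler) OnAdd(obj interface{})               { e.upsert(obj) }
func (e *eventHandler) OnUpdate(oldObj, newObj interface{}) { e.upsert(newObj) }
func (e *eventHandler) OnDelete(obj interface{}) {
	if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
		e.mu.Lock()
		delete(e.objects, key)
		e.mu.Unlock()
	}
	e.signal()
}
```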
C
There should be... yeah, that's the DAG rebuild, here we go. So here's the onUpdate event. So an update gets here, and this is actually, you know, inserting the object into our cache, or updating the object in the cache, or deleting it if it's removed, and that cache gets stored... it's in this builder bit, it's an internal
C
internal DAG; there should be a cache, yeah. So basically, on every add, update and delete, it ends up in this cache here. This is our version, Contour's version, of what Kubernetes looks like. Cool, we've moved that far. So now, at this point, you can assume that all of these different maps here are all filled up with all the objects that we're watching from Kubernetes, and they're updating in real time as things change, or as fast as we get them through client-go. So every...
C
And then there's... so back here in this handler, there's this bit here, this run... this is where the DAG rebuild happens, in this run bit here.
C
So this ends up in a big giant for loop, yeah. So here are all the things that can happen in this loop: basically, we're waiting for an event to come in, we're going to process an event, or that hold-off time we talked about before kicks off, and that tells us to go rebuild the DAG. Yeah, here's this hold-off delay that you can configure; I think it's like 100 milliseconds or something. I think Steve looked it up last time we had this question.
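The hold-off behaviour being described is essentially a debounce: collapse a burst of events into a single rebuild after a short quiet period. A minimal, self-contained sketch of that pattern; the 100 ms default is taken from the discussion above, and everything else is illustrative rather than Contour's actual loop:

```go
import "time"

// rebuildLoop arms (or re-arms) a short hold-off timer on every incoming
// event, and only runs the expensive rebuild once the timer fires.
func rebuildLoop(events <-chan struct{}, rebuild func(), stopCh <-chan struct{}) {
	const holdoff = 100 * time.Millisecond // hold-off delay discussed above

	var timer *time.Timer
	var pending <-chan time.Time

	for {
		select {
		case <-events:
			// An object changed; (re)start the hold-off window.
			if timer == nil {
				timer = time.NewTimer(holdoff)
			} else {
				if !timer.Stop() {
					select {
					case <-timer.C:
					default:
					}
				}
				timer.Reset(holdoff)
			}
			pending = timer.C
		case <-pending:
			// Quiet period elapsed: rebuild once for the whole burst.
			pending = nil
			rebuild()
		case <-stopCh:
			return
		}
	}
}
```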
C
It's still pretty fast, yeah. So this is a loop that... basically, all those events are coming through, and then this loop will determine when we're going to start the DAG rebuild. Where is it... yeah, rebuild DAG: so we call rebuildDAG, and this goes and says... so this is where we're going to take all those caches that we have locally now, and we're gonna go rebuild a new DAG in memory. So we call build, and this should hit the stuff that Steve just did, yeah.
D
Quick question, Steve, sorry. So, okay, so you're getting these events, right, and so, when you create an informer, you're receiving these events, which are essentially Kubernetes objects, right. You're, like, okay, a new HTTPProxy was created, right, so you're gonna do onAdd HTTPProxy, you're going to add that to the cache, right, and then this DAG that you're rebuilding, essentially... so two questions: is this DAG a representation of, like, the relationship between all... so, like, you may say, like, okay, I haven't...
C
Yeah, so what happens is... yeah, so after we build this thing out... so this is where we're actually going to build that DAG; after we build this, then the next step is to walk it. So we'll walk down that tree.
C
Then we pick off each type that comes out of that; so, like, a good example is, like, the v-host and stuff, so we'll walk down all the virtual hosts and then build out the Envoy configuration next.
D
So this is kind of like an AST, then, right, essentially, that converts from, like, Kubernetes objects to Envoy. Is that what the DAG essentially is, like an abstract syntax tree, where you're kind of, like, maintaining these Contour types and then converting them, essentially, to Envoy, I guess, xDS?
C
Yeah, yeah, so... we take the Kubernetes types and we create this intermediate type; they're all DAG types. So in here you'll have, like... let's go to one of the... there should be, like, a Cluster, yeah. So here's, like, a Service, here's a Cluster.
C
All these things mirror sort of how Kubernetes does it, and then everything after this point is all the same code, right. So the job of this builder bit here, these processors, is to convert that specific type that you're doing, maybe it's Ingress or HTTPProxy or whatever it is, into this intermediate language, like you're saying, yeah, and then everything afterwards is all the same. Gotcha.
C
It's rebuilt from scratch every time, yeah, okay. So if you say you process it once now, and then you wait five minutes and process it again, it's going to use all of the bits in that Kubernetes cache that we have and be rebuilding from scratch. So, okay, which is, again... I think it could be bad in a sense, because you're rebuilding it again from scratch, but it's good because there are no errors in logic, because you're not taking any diffs between what happened, you know, right, right.
G
So that means, let's say that in some of the watched namespaces a secret changed; that will automatically trigger a rebuild for everything.
C
Yep, a secret change, yeah. Now, the only thing that shouldn't... I guess now it does, it didn't before, but now it does... I think it didn't trigger a DAG rebuild if it was an endpoint change, and I think we've since changed that, so now endpoints are only represented by what's actually used in the DAG, which is kind of better. So before, Contour treated endpoints differently, and that was because endpoints update much faster than, say, services or secrets.
C
So now, I think James just did a PR, in looking at some other work he's doing for auth: now only the endpoints in Envoy should match things that are actually going to get referenced from Envoy, or from Contour. So basically, any service that you're sending traffic to should have a corresponding endpoint in Envoy; everything else shouldn't be programmed into there.
C
Here we go, run. So this is actually going to go build... compute proxies, so it's going to loop through all of the proxies that it knows about, so here: valid HTTPProxies. So builder.Source... this Source ties back to this local cache, right. So it's looking at the local cache of what's there; so, all the while this might be processing, that cache behind it is getting updated.
C
So again, this will loop through, and it'll pick apart all the different bits it needs to, you know. Here's where it figures out, like, hey, if you don't have a virtual host, we're going to mark it as orphaned; if you're using root HTTPProxies and it's not in the right namespace, we'll set the error there; if you don't specify a fully qualified domain name, we'll set that there. You know, all those different kinds of errors you've seen in the spec, or the status, come out of this thing here.
C
So this goes and figures out all of that logic, and it essentially creates that intermediate DAG-type object. I'm trying to see where an example of that would be, yeah.
C
Let me turn that back. So once we have that, we'll trigger this onChange event. So I can pause here if there are questions in the middle.
D
So, okay, so this DAG that you're generating, what are the relationships, in terms of, like, okay... so, you know, I guess, is it like a tree structure, where you have, like... what would your highest-level object be, and then what would the kind of child objects underneath that look like? Like, do you have, like, a top-level object, and then below that the services, and then below that the HTTPProxies, or can it change?
C
Yeah, I think it's based around virtual hosts. So let's... I'll show you here in a second. So let's move to the next one, and then we can see how we actually, like, use that DAG; maybe that'll help us answer that here in a second. So yeah, we'll build this DAG and then we'll call this onChange, and this onChange triggers a thing in all these different Contour packages here. So let's go to listener.
C
Knocked my water over again, spilled everything. So in here there should be an onChange... this one's a big one, yeah, okay. So here, so this gets called here. So, like I said, from the builder here, or from this one, this is the handler; the handler goes and rebuilds the DAG, and then it calls this onChange event, which everyone is subscribed to. So basically all the xDS types are subscribed to that: listeners, clusters, endpoints, services... I'm sorry, secrets, and routes.
C
Those all have the same event here, and what they get is the DAG. So they get passed the DAG, and what we do is... let me go and walk that. So this one here, visitListeners: this will build out, essentially, this will turn this into the xDS version of listeners that Envoy is going to actually consume.
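Conceptually, each xDS cache walks the freshly built DAG and picks out the vertices it cares about. A loose sketch of that visitor pattern, using made-up vertex types rather than Contour's real dag package:

```go
// Hypothetical DAG visitor: walk every vertex, collect the virtual hosts,
// and translate each one into an Envoy-facing representation.
// Vertex, VirtualHost, and the string output are illustrative only.
type Vertex interface {
	Visit(func(Vertex)) // call the function for each child vertex
}

type VirtualHost struct {
	Name     string
	children []Vertex
}

func (v *VirtualHost) Visit(f func(Vertex)) {
	for _, c := range v.children {
		f(c)
	}
}

// buildListeners walks the DAG roots and produces one entry per virtual host.
func buildListeners(roots []Vertex) []string {
	var out []string
	var walk func(Vertex)
	walk = func(v Vertex) {
		if vh, ok := v.(*VirtualHost); ok {
			// This is where the intermediate type would be converted
			// into the actual Envoy listener/route configuration.
			out = append(out, vh.Name)
		}
		v.Visit(walk)
	}
	for _, root := range roots {
		walk(root)
	}
	return out
}
```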
C
So we can follow that: we'll create a listener from here, and then we call visit, and then visit figures out, basically, whether we passed in an insecure virtual host or a secure one, meaning is this just plain HTTP, or is this a TLS version.
C
The metrics stuff, the logger, the timeouts and all that kind of stuff... so this bit, once this returns, then we recurse into it, so it'll visit itself. So, trying to answer your question, Chad: I think the top level is this virtual host type, and then it'll visit itself again and walk down that thing, so it'll keep calling itself as we loop through this.
C
Sort of, yeah. So the result of this visit... let's go back to where the visitors are called, yeah. So here, here's the onChange, right: the DAG got built, onChange got called, so we call this listener.Update. This updates the local cache with the supplied map. And the one thing we didn't talk about was how we map the xDS resources to the gRPC server that serves Envoy, so in...
C
I guess the short answer is it's these values here, these values and static values. So these v2.Listeners, these are the Envoy protos here. So you can see, I'm in this listener, come on, zoom out, this listener protobuf here, and this is the type that we actually convert them into, so then this can get converted to the actual protobuf to get sent over the wire. But way back here in serve, when we spin up the gRPC handler, there should be a resources...
C
Yeah, so then what we're saying is, when we create the server, we're passing in all those resources, and those resources have a type URL which identifies what they are, so it might be, you know, whether it's CDS, a listener, a cluster, a secret, that sort of thing. So this bit gets passed in to create the xDS server, so that, when this gets started up, it maps basically those types that we saw, which are these different caches, into each type.
D
Okay, okay. Because it's Envoy that hosts the gRPC server and... I'm sorry, it's Contour that hosts a gRPC server, and Envoy is configured to connect into Contour, at which point Contour is sending protobuf messages for the configuration, right? Yep, more or less.
C
Okay, so, Contents here, so this is part of the xDS protocol. So Contents implements everything, and then there's a Query as well, where, when we get the discovery request from Envoy, it can ask for specific things. So here it comes in as names, excuse me, and then we can return, you know, that smaller subset if it wants.
C
Oh, here's the type here we talked about. Yep, so this should hit the... yeah. So here, again, we're back in the go-control-plane here. So these are all the different types. So this would be the full type, so type.googleapis.com/envoy.api.v2.Listener, or RouteConfiguration, or, like, your endpoints: see how it says ClusterLoadAssignment? I mentioned that earlier; that's what it's really called behind the scenes. I always call it endpoints, but it's really ClusterLoadAssignment, yeah.
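For reference, the v2 type URLs being shown should be the following (listed from memory, worth double-checking against the go-control-plane source):

```go
package sketch

// xDS v2 resource type URLs as Envoy and go-control-plane name them.
// Note that "endpoints" are really ClusterLoadAssignment under the hood.
const (
	ClusterType  = "type.googleapis.com/envoy.api.v2.Cluster"
	EndpointType = "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"
	ListenerType = "type.googleapis.com/envoy.api.v2.Listener"
	RouteType    = "type.googleapis.com/envoy.api.v2.RouteConfiguration"
	SecretType   = "type.googleapis.com/envoy.api.v2.auth.Secret"
)
```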
C
So that's how the mapping between the xDS caches that Contour serves maps to that code. So essentially, in this contour package, you'll have listener, you'll have cluster, so again, in here we'll have the same type of thing. So this values map here is the source of what we send out to Envoy.
C
There should be a route one as well. Yep, here it is, so here's the route one, and again it has the same Contents method with the Query method, and you can see it's the same. Now, the only thing special is the listener, because you'll see we have that static values, and we use that to spin up, I think it's like the metrics endpoint; there's a couple of things that contour configures.
C
We used to do it in the bootstrap config, and we since took that out just to make that bootstrap config smaller. But basically, whatever gets passed into here, you can create listeners that are just always there, versus this one, the values map, which is dynamic.
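The Contents/Query shape that keeps coming up could be sketched like this (an approximation from memory, not the literal interface in Contour's source):

```go
package sketch

// Resource is roughly what each xDS cache (listeners, clusters, routes,
// endpoints, secrets) implements so the gRPC layer can serve it.
type Resource interface {
	// Contents returns every resource currently held in the cache; used
	// when Envoy asks for everything of this type.
	Contents() []interface{}

	// Query returns only the resources whose names Envoy listed in its
	// DiscoveryRequest, letting it fetch a smaller subset.
	Query(names []string) []interface{}

	// TypeURL identifies which xDS type this cache answers for, e.g.
	// "type.googleapis.com/envoy.api.v2.Listener".
	TypeURL() string
}
```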
A
C
So yeah, from Envoy's perspective, yeah, any instance of contour is valid to connect to and retrieve configuration. The leader election bit is only to limit one instance of contour writing state back to the API server.
A
If you have even just a single contour that has invalid state, it could potentially propagate to all of the Envoy proxies, or some subset of the Envoy proxies.
C
Yeah, whoever's connected to it, yeah, right, yeah. So if you had two... yeah, so what you're saying is, if you had two different instances of contour running, and say, you know, two Envoys connected to the first one and two to the second one, in theory, if somehow the second contour started producing invalid configuration, then half of your Envoy fleet would have bad configuration. Yeah, that's true.
D
Yeah, and maybe a little bit... yeah, so that was actually going to be one of my questions, Joe, so thank you. So, kind of maybe looking at it a little bit more: when we deploy Contour and Envoy, how do those connections get established? Right, so contour has some sort of, like, chatter between all the pods to elect a leader, and is that, like, a persistent leader, so that every time it's publishing config changes to Envoy?
D
Because I know you don't want, like, multiple configurations being pumped into contour, or into Envoy, like, you know, the same configurations being called multiple times, right? So, sure, how's all that handled?
C
Yeah, so, leader election, just to be clear, is only there to make sure that only one instance of contour writes state to the api server. So from an Envoy perspective, it never cares who's the leader and who's not the leader. Every contour that gets spun up processes the same thing we just showed. So say you had, you know, two contour pods running in your cluster: each pod is gonna have the same watch on the API, and it's gonna go through the same workflow we just went through.
C
You know, to get the information, process it, produce a dag, and send the information out to any Envoy that connects. Now, Envoy, when it connects... again, this is just how we've configured it by default.
C
It uses this service. Let me show you here.
C
Not that one; we want the one that equals contour, yeah. So what we do is, when you spin up Envoy, there's an init container... I'm sorry, yeah, Envoy, I'm in the Envoy pod. So this init container, what it does is, it generates bootstrap configuration, right? And this is the configuration that we feed to Envoy to basically go find contour in the cluster, and a couple other things. So this init container, and this is one of those other commands we talked about over here, so there was serve, there was, what's it called, shutdown-manager.
C
There was the cert generation, and there's a bootstrap one here as well. And so this bootstrap knows how to generate this JSON file, and essentially the big bits in here are the ones that you've got marked here. So this xds address is what you feed into Envoy to tell Envoy where its gRPC server is. So in this example here we're using contour, and contour is the service that we get deployed. So out here, in these examples, we have this sample service that contour gets deployed behind. So this name, then, is a DNS name.
C
So when this spins up, and as Envoy goes to connect, it'll ask Kubernetes, hey, you know, I'm trying to connect to contour over port 8001, and then Kubernetes will pick one of the two pods, or three pods, or however many replicas you have running, to let that Envoy connect to. At that point, I think Envoy will be a persistent connection; then it won't try this again, it'll connect to that one instance and keep it there until that contour stops serving.
C
If that contour would go down, it will reconnect to a new one. That makes sense. So you can have any number of contours running, any number of Envoys, and each Envoy will connect to whatever one it gets to first, based on, you know, round robin, I think, is what it still is in Kubernetes, yeah, yeah.
D
C
So, in theory, you know, if you had a whole bunch of contours running, each contour could somehow have a different set of config based on the time it got the events and processed them, that sort of thing. So it is potentially possible you could have that problem.
D
C
So if you have... let's go to shapes.
C
So let's have three of those, and then we'll say, when you start up there... we use client-go; there's built-in logic to do the leader election stuff, so it's not code that we wrote, it just comes baked into that. Let's make this one the leader, so this person is the leader. "Leader", just, you know, for example's sake. And then we'll make... I don't know, it's gonna be funny, but maybe I shouldn't be funny... we'll make this one...
C
So now, at this point, you know, each instance of contour is watching the api server and building its own configuration and everything up. So essentially, you know, each one here, let's do this one, there's a database here, I think, this one, right? So each one maintains its own separate xDS datastore, and that's what gets passed down to each Envoy. So, in theory, yeah, these could somehow get out of sync, making each one of these Envoys not all exactly the same, potentially.
D
So what's the role of the leader then? Like, for example, okay, so in front of those Envoys you're going to have your load balancer, right, and your load balancer is going to connect to these Envoys, and it's going to essentially distribute traffic to all of these Envoys here, right. So if your leader is the one that publishes, then the only Envoy that's getting updated information would be that second one, right?
H
C
D
To all the Envoys, okay, okay. What statuses go to the k8s API server then? Like, what status changes would go to...
C
Yeah, so things like... we can describe this. So like this right here, this is valid, a "valid proxy"; or, you know, if I edit that...
C
What was it... I come in here and I, say, change the service to be something that doesn't exist, and then I describe it again, and you'll see I get "it's invalid" now, because this service, root blah blah blah, is not found.
C
But again, I mean, realistically, technically, because each one's processing its own xDS cache, essentially based on the events that come out of Kubernetes, there is potential that one of these could be out of sync somehow, and your answer there, I guess, today, would be just to only run...
C
Again, it hasn't really come up or been an issue, but maybe it has and we just haven't seen it. Or, you know, I mean, you saw how fast things got updated, so it's never really been an issue, but yeah.
C
So I do know that there is, excuse me, there is a readiness probe. So 8001 is the default for the contour xDS server. So if that stops responding to probes from Kubernetes, that instance of contour will go unhealthy and then Envoy won't connect to it. Also, there's a liveness probe, and it's this healthz thing. I think I wrote this, like, forever ago, but I think it tries to do a query against the Kubernetes API, yeah, and what it does is...
C
It tries to hit... no, that's a different thing. It tries to do a query out to the Kubernetes API.
C
And if it can't, yeah, then it'll fail that liveness probe, and that should force contour to restart, yeah. It tries to get the version of the server you're connecting to, and if it can't, then it basically blows up: it'll fail that liveness probe, which causes Kubernetes to restart that instance of contour if, for whatever reason, it can't connect to the API server.
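A minimal sketch of that kind of liveness handler, assuming client-go is available (the handler path, port and wiring here are illustrative, not necessarily what Contour actually registers):

```go
package main

import (
	"net/http"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster credentials, the same way a pod normally talks to the
	// API server.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The liveness check simply asks the API server for its version; if
	// that call fails, report unhealthy so the kubelet restarts the pod.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if _, err := client.Discovery().ServerVersion(); err != nil {
			http.Error(w, err.Error(), http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok"))
	})

	_ = http.ListenAndServe(":8000", nil)
}
```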
D
Cool, yeah. So, like, with Linux clustering, right, like if you look at other things like Pacemaker: if there are services, like, for example, services that manage disks and persistent data that require consistent data, they implement a protocol called fencing, or STONITH, which is "shoot the other node in the head", which essentially kind of guarantees consistency in case you get something that acts up, right. So I'm just trying to kind of think of, like, you know...
E
The Contours and the Envoys are all eventually consistent, and in practice it seems to work pretty well, but you don't have that guarantee of consistency at any given point in time.
C
And I think a big reason why that hasn't been a problem is because, the way I mentioned, you know, the builder goes from scratch every time, right? So, based on that cache, I lost it, wherever it is, but that resource cache that we have, you know, it generates it from scratch every single time. So if for some reason an instance of contour did lose a message and didn't have all the information, it should catch up eventually, and then the next rebuild would have everything in it, you know, essentially.
D
C
Yeah, no, it can be a lot. I mean, there's a bunch of moving pieces and things in there, but at the end of the day it's just, you know: have a local cache of Kubernetes objects, process that stuff into that internal type, and then convert that into Envoy types.
C
So it'll get interesting when we move to the v3 stuff, the xDS v3, because we're going to explode out this internal bit. So that's all this envoy package here that should explode. So how does that... so how does...
D
C
Yeah, sure, yeah. So when we go out and build, here's an example: so here's the envoy package, and we'll look at, say, cluster, for fun. So in here we spin up, basically, see how it says v2.Cluster, and this v2 is representative of this. So we use the go-control-plane because they give us the golang protobuf types.
C
So generally, what Contour is going to have to do is just swap out basically all those different paths. So right now, contour pulls, you know, clusters from this v2 package; we have to pull them from the v3 package, and then also update whatever might change. So I'm not sure what's different; I don't think it's a whole lot different, but whatever is different in v3 we're going to have to then replicate here in contour.
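In practice that swap is mostly a matter of changing which go-control-plane packages the Envoy-facing code imports, roughly along these lines (a sketch; the exact package paths and any changed fields should be checked against the go-control-plane version in use):

```go
package sketch

import (
	// xDS v2: the generated types Contour used at the time of this discussion.
	api_v2 "github.com/envoyproxy/go-control-plane/envoy/api/v2"

	// xDS v3: the same resources live under new, split-out packages.
	cluster_v3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
)

// Cluster construction before and after the move, assuming the field names
// carry over (most do, but anything that changed in v3 has to be updated by
// hand in the envoy package).
func v2Cluster(name string) *api_v2.Cluster     { return &api_v2.Cluster{Name: name} }
func v3Cluster(name string) *cluster_v3.Cluster { return &cluster_v3.Cluster{Name: name} }
```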
D
And I'm sorry, one more question, Steve. So, yeah, in terms of, like, leader election, right, we mentioned, you know, obviously you can spin up as many replica sets as you need, and I would assume... do you guys suggest, like, running daemon sets?
D
You know, for contour, essentially to kind of put that processing on all the nodes. And does your leader election, like, does it require, like, a three-node kind of election? Like, is there a risk of, like, a split-brain scenario type of thing, or is there some sort of... now, what's the term I'm looking for... a consensus, you know, that requires, like, three or more, essentially?
C
Yeah, no, I think so... leader election is done with a config map today. So in client-go there's a... here's an example of the code. So I think what it does is everyone tries to write to a config map, and whoever gets that one first becomes the leader. So it uses Kubernetes as its generic store for its central spot.
C
You can, I think, run any number; you could run one or, you know, 100, it doesn't matter, because it's all based around using that one config map that it's doing today. So yeah, again, we didn't write it; it's this thing here, where it's all built into the client-go library, or the Kubernetes library, that does all that stuff for us. I don't know the ins and outs of it; I did the initial implementation of it, but this would be where I think to go...
C
...look, if you're curious about how it works. Yeah, it creates this resource lock here, and then it uses, I believe, a config map to be the resource to lock against; whoever gets that lock will then become the leader.
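A condensed sketch of that client-go machinery, assuming the ConfigMap-based resource lock that was current at the time (the namespace, lock name, identity and durations here are placeholders, not Contour's defaults):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The ConfigMap is the shared resource every contour instance races to
	// lock; whoever holds the lock is the leader and gets to write status.
	lock := &resourcelock.ConfigMapLock{
		ConfigMapMeta: metav1.ObjectMeta{Namespace: "projectcontour", Name: "leader-elect"},
		Client:        client.CoreV1(),
		LockConfig:    resourcelock.ResourceLockConfig{Identity: "contour-pod-name"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// only the leader runs the status writers
			},
			OnStoppedLeading: func() {
				// lost the lock; stop writing status back to the API server
			},
		},
	})
}
```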
E
Steve, I have a question. So you were around when, you know, a lot of the original code was written, and I'm curious what the original motivations were for having the dag representation, like having that intermediary representation, rather than just going straight from the Kubernetes APIs to Envoy types.
C
Yeah, I think the original motivation was when we introduced IngressRoute, and the goal there was... because you need to be able to understand, like, hey, how does...
C
E
I think that's a good example of why having the dag is useful, because without that, if you want to be able to work with v2 and v3 at the same time, then you would have to, like, rewrite every one of your processors to support both, whereas with the dag, it's just: go from the dag to either v2 or v3. So, okay.
C
Yeah, yeah, I know. So Dave Cheney did the initial work on a lot of this stuff. So the idea was the contour package would process through to figure out how this dag should get built, or how it should look, and then the envoy package was the Envoy proto representation of that stuff, yeah. So once you got past, you know, that builder stage, then contour shouldn't care where the data came from, whether it was an Ingress resource or an IngressRoute resource, or, you know, Service APIs or whatever, yeah. There...
E
When you have many outputs, so, like, you know, xDS v2, xDS v3, it makes a lot of sense there, because then you don't have to map every input to every output. But, you know, if you only had one output, if a single version of Envoy is the only output, then maybe it doesn't have as much benefit. So I was, yeah, kind of just wondering if there are any other insights as to why that pattern was chosen.
C
D
A
A tangential question, so, related to all of that: that was obviously how include support is built, so you can have child HTTP proxy resources.
A
I've seen it raised a few times, but not really explained: would it be antithetical to Contour's design to have parent references? So, instead of having to define a root with all of its children, you could define that a root allows children, right, that match in a certain namespace or something, and then have the children reference the parent. And the use case for this is cases where you want to generate a lot of resources that point to a parent.
C
So today you could have two different users create the same domain name and the same path in two different namespaces, and they would conflict. So the way we ended up doing that was with having that, like, delegation, or now it's an includes model, where from the root you work down, you know, from the top down, in terms of who has permissions to do what, sort of thing. And that was just because that's easy to then implement that security model, right?
C
So if you're a user on the side here, and your parent hasn't given you the right permission to use some sort of resource, then you, you know, you get thrown out. Now, what you're talking about, Joseph, is sort of the opposite. Folks want to do this more like the ingress model, where it's more self-service, where folks can just create a resource and then, you know, have it all kind of tied together. I think we just need to think about...
C
We just need to figure out how those things can... you know, understand what comes out of that model and how we can implement that, just so it's clear that, you know, if you do this, maybe we'll break it because there's conflicts, or, you know, x, y and z. We just need to think about it, because it comes up a lot. Folks really want to do that, have that self-service model; they want to have that root, you know, generated all the time, yeah. What I was more thinking is...
A
Not just the free-for-all model, where it's like, yeah, I'm running here, but more like the root defines which namespaces can be included, and, like, things like that. So it's basically just inverting some of the thing, rather than being directly...
H
A
Specifically, this child... I will allow child resources that match this policy, yeah, as a way to make it more secure than the free-for-all mode.
C
A
Where this was a little difficult was, I was implementing contour support for a system called Seldon. Seldon is a machine learning platform; it's a serving system, effectively, so it's somewhat like Knative and things like that. When you... my brain is not working... when you want to host a new machine learning model, it'll spin up the ingress resources to wire up everything to TensorFlow, et cetera, and part of the problem is the way that platforms are designed to be built.
A
G
Problem... my colleague is actually the one that is introducing this problem, or describing this problem, and in our case it's mostly because the problem is that we are doing products that are deployed on somebody else's Kubernetes environment, and we are not, like, maintaining that ourselves.
G
We do not have teams who would be, like, devops teams, who would be developing applications behind a single virtual host; instead, we do a product which is then deployed at customers of ours, and that product needs a certain level of modularity, so that whatever parts of that product are deployed should then appear behind that virtual host, and that would happen at deployment time, the decision of what is deployed.
G
We have options to choose certain parts of the product, and we basically put the need to generate this top-level HTTP proxy at customers, because only at that time do we know what goes behind that virtual host. And, yeah, so it isn't any more development-time effort, but it would be done somewhere else.
G
One thing that I wondered about this is validation and conflicts: what is then there? Because, for me, the potential conflicts can happen even when we have a cluster administrator who writes, let's say, two includes in a top-level HTTP proxy.
C
Namespace, so, yeah, I think, I mean, the HTTP proxy model, yeah, its goal there is to help you eliminate some of those issues, but I mean, conflicts can still come up. I mean, an administrator could create, you know, the same domain name twice, and then contour will find it and mark them both as invalid. And some of the work that Steve did with the gatekeeper stuff could be an interesting thing there, maybe, right? We talked about doing this, you know... should...
C
Could we somehow expose the dag, so Gatekeeper could look at that and understand: hey, there's already a route for this, or a user is trying to create some sort of object they don't have the right permission to create, and then not get to that point where we mark that as invalid and then, you know, break both?
G
C
But so far that doesn't change the issue of how you define that permission set, and that's what you're bringing up, Joseph, of, like, you know, how does a user say what they can do, you know, how do you create those rules? I guess, sort of, the part B is, you know, how can you implement the security of it, and then two is, how should we change the model so it's not the same, and it's just different?
G
I was just wondering... I think that it was written in the ticket that we just need to have a design document that explains the corner cases and whatever validation needs to be done for the merging, and validating that the merge will then be okay.
G
C
Yeah, I'm trying to make sure I understand, yeah. So I think we're talking about changing how HTTP proxy works today, so we can make it a little less restrictive in terms of how clients or users in different namespaces can create objects. It's come up a few times; there's been a couple different things. I know one user said they wanted to have, like, a first-come, first-served model: so I think, like, if you're there first and you get a path, then you own that path.
C
You know, I mean, which is kind of interesting, so there's some different things to think about.
C
Like, you can have a root delegate to a child and have the child then delegate further, right? So if I give a namespace "blog", for instance, that namespace, now that proxy object, now owns that blog, and they can take whatever they want out of that and delegate further if they need to, yeah. Which is interesting too, but it still requires you, from the top down, to define that kind of hierarchy.
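As a concrete illustration of that delegation, here is roughly what a root HTTPProxy handing the /blog prefix to a child in another namespace looks like, built with the projectcontour v1 Go types (the field names are from memory and the values are placeholders, so treat this as a sketch rather than copy-paste configuration):

```go
package sketch

import projcontour "github.com/projectcontour/contour/apis/projectcontour/v1"

// rootProxy owns the virtual host and decides which namespaces get which
// paths; the child below can only serve what the root has included.
var rootProxy = projcontour.HTTPProxySpec{
	VirtualHost: &projcontour.VirtualHost{Fqdn: "www.example.com"},
	Includes: []projcontour.Include{{
		Name:       "blog", // name of the child HTTPProxy
		Namespace:  "blog", // the child lives in the blog namespace
		Conditions: []projcontour.MatchCondition{{Prefix: "/blog"}},
	}},
}

// childProxy, in the blog namespace, defines routes only for what it was
// delegated; it has no virtual host of its own.
var childProxy = projcontour.HTTPProxySpec{
	Routes: []projcontour.Route{{
		Conditions: []projcontour.MatchCondition{{Prefix: "/blog"}},
		Services:   []projcontour.Service{{Name: "blog-svc", Port: 80}},
	}},
}
```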
F
C
G
Yeah, then I was wondering that, at least for ingress, because obviously ingress has this overlapping paths problem, so for ingress there is some algorithm already in contour that merges those paths, and thus this kind of decision of who wins: first come, of course, first gets the ball.
C
Yeah, so there are... we have to... let me go pull up... there's a bunch of issues that folks have already opened up and chatted about, and they have a bunch of things in there that would be good starting places to look through.
C
I think the only other thing to think about, too, is there's the Service APIs work that's going to be getting added to contour here very soon, so it may be a good middle ground as well, in terms of it has some underpinnings of what we've done in HTTPProxy but also doesn't have all the same things that ingress had. So it may be a good middle ground for folks to implement, if they're willing to switch to that, but that's obviously not in yet; it's not...
C
You know, a final spec just yet, but yeah. Let me go find those issues here. If you want to read through them, you can see some of the ideas that folks have asked about, and just pulling together the use cases would be a great first start, and then we can start addressing, you know, how do we make it a little less restrictive but still usable for users. Yeah, I'll do that, yeah. All right, so we're five minutes past. I'm happy to chat if folks have more questions. Otherwise...