From YouTube: Contour Office Hours - July 16, 2020
Description: No description was provided for this meeting.
A: No worries, sure. So, on GitHub here — Contour is a CNCF project now, and we're open: everything we do is in the open source. It's all open; there's nothing, you know, behind-the-scenes private. So the GitHub page here is kind of a good place to go. There are a couple of ways to interact with us and everyone else. GitHub is a good place, and the Kubernetes Slack — let me pull up my Slack.
A: Great, yeah — so Slack's a great place. Yes, and then here there should be some labels, like "good first issue" — yeah, "good first issue". We're trying to tag stuff — things that, if you're new and you want to contribute, you can do: documentation things, doc things; there are some bad messages, it looks like log messages...
A: ..."Contour should help sort the option flags" — just, you know. So these are good kinds of things we've tagged that might be good to start with; they're not too crazy. You don't need design docs, you don't need lots of discussion around them. Those would be good to start with, I guess. So what you can do is just go find one — say you want to do this "document RBAC policies" one.
A: You can just come in here and comment and say, hey, you know, I want to work on this or something, and we can assign it to you, and GitHub lets you go work on it. And then, I guess, there's a good spot on our site that talks about how to actually do a pull request — was that not a resource? I...
A: You know, once you get an issue together — the smaller the change, the better — then, you know, we can work with you also, obviously, to help get those changes through and stuff. But the big thing is to talk about what you want to do first. So if it's just documentation and stuff, it's probably not a big deal to propose what you want to do, but if you want to go change how something works or how something interacts, then it's always good to talk about it.
A: First — because what we don't want is to have you go spend a bunch of time building it, testing it, doing all these things, and then come back and have us say, okay, we should maybe take a different approach or something. So, just talking about it first on those sorts of things. You can't write too many tests — tests should describe how your feature should work, versus documentation explaining how it should work.
A: So those are important. And then this explains the labels and what all those different things mean. And then there was a spot — see, this stuff's all over the place — and here's, well, there's a contributing guide here, I think, yeah. So once you read through that, then this is the actual how-to-do-it. So if you actually want to go and do a PR on your local machine, you'll need Go, because Contour's written in Go. So you can go ahead and clone the repo.
A: You can do a build just using make, and everything's installed there. There are a couple of pieces that need Docker locally, so you might need Docker as well, but `make check` will run the tests. And then here's the flow to actually make the PR. So again: raise the issue if you need to do something, and then here's some docs on how you can actually structure the commit message so it's consistent.
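The local setup being described might look roughly like this (a sketch: the repo URL and make targets are the ones mentioned in the call, but check the contributing guide for the current flow):

```shell
# Clone the repo (Contour is written in Go, so you'll need a Go toolchain;
# some targets also need Docker locally).
git clone https://github.com/projectcontour/contour
cd contour

make          # build the contour binary
make check    # run the tests
```

From there, the flow is: raise an issue first, make your change on a branch, and open the PR with a commit message structured per the docs.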
A: So, okay, yeah — but anybody can, you know, contribute and stuff; there are no worries there. But, you know, in terms of — so this is a good spot. And then, if you want to see kind of the roadmap: if we go over here to the community — probably should pin that one — or no, it's here, Projects. So go back to the root, go to Projects, and then go to the project boards. This board here is to kind of show you the roadmap, in a way, of kind of...
A: ...what's going on. So if we scroll over to here, we should see, like, here are all the things that are in progress. Once we assign something to a user — to somebody — then it should hit this category here, and then you can kind of see what's getting worked on right now. And then Parking Lot one, two, and three are sort of the things that are prioritized to come up. Obviously this can change; this is sort of just some...
A: Yeah, sure thing. Thank you. Thank you — sure, yeah. The other cool spot is the design docs directory; those are interesting as well. So here in the root, under design, there are all the things we sort of — like, the main large features need a design doc: one, to describe what it's going to look like, and two, to describe to your future self, when you make a decision — you know, why did I do that? A design doc can help you explain that. So this is a great spot here.
A: I know a big one, if anyone wants to go take a look at it — and I had some comments — is this one on auth. So James started this design doc, and the goal is to add external auth to Contour, so we can configure, you know, an OIDC provider or something to do auth on the edge there, through Envoy. So that's what this design doc does.
A: You know, put in your two cents — that'd be great. Where's the — yeah, yep, so this is his doc here. It explains how that kind of looks.
A: Thank you — I think that's it, yeah. And as I said, Steve and I here are based in the US, and then the other folks on the engineering team are in Australia, so they'll come online here in a few hours. So, depending where you are in the world, we have good coverage, I guess, of time zones between the US and Australia.
B: So I'm actually joining from India. I accidentally — there was a notification, a bit like, for my issue, so I just clicked on the Contour GitHub and I saw this link, so I tried joining. So it was really nice to know you guys and get to know from you guys. Thank you. Yeah.
B: Yeah — maybe, like, in Go — I just created a sample Go controller, like a Kubernetes controller. So that's my level of experience in Kubernetes and Go. But I have done, like, a lot of deployments using Kubernetes — like multi-tenant applications; most of the development and the deployments I have done. But with respect to writing a new piece of code in Go or something, I'm just a beginner. So at that level, it gave me some new issues, like...
A: So I think everybody's got to start there, I...
A: Yeah — so that one requires you to bounce back. So this one I was looking at was just — because Contour configures Envoy, there are certain things in there, like timeout settings and just various things, that Contour sets that either match Envoy's defaults or change Envoy's defaults. I know one thing we set is the circuit breaker policy, right: as requests come in, if you have too many, then Envoy will throw the circuit, and then Envoy — it'll...
A: ...it'll do some things. We make that value really high, just because we've had issues with something else in the past, so we changed the default from what Envoy wants. But you'd have to find all those different pieces inside Contour, and then go find them in Envoy, and then match them all up. So that may not be a good first issue.
A: ...it might get a label like that, though — because you've got to go digging through all the different pieces and parts. This one I've always wanted to do, which I'd highlight: how to do — how to configure — an example of how to configure a gRPC client. So if you've spun up Contour and you want to connect your gRPC client through Contour, through Envoy, this would be a great — a great thing. Looks like someone started to work on it, but that's been since May.
A: This is a good one here. So in our examples — so we have this; I showed you this — in the root here there's this examples folder, and it has a bunch of examples of how certain things could work in Contour. This is where we kind of have what we call the example deployment. So it's how Contour is deployed; it deploys...
A: ...you know, service accounts and the namespaces in Kubernetes; it deploys the CRDs that we have you use; and then it deploys the actual Contour service, the Envoy service, the Contour deployment, and the Envoy DaemonSet, yeah. So I think James wrote this issue here, where we could apply these specific application labels that are generic for all Kubernetes deployments, right. So we could update these deployments here and add these different pieces, so that when you deploy it, you understand the version and the component — all those sorts of things, yeah.
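The labels being discussed sound like the Kubernetes recommended common labels; a hypothetical fragment for the example Contour deployment might look like this (values illustrative, not the actual manifest):

```yaml
# Sketch: Kubernetes "recommended labels" applied to a deployment so
# tooling can identify the app, its version, and its component.
metadata:
  labels:
    app.kubernetes.io/name: contour
    app.kubernetes.io/component: ingress-controller
    app.kubernetes.io/version: "1.6.1"        # illustrative version
    app.kubernetes.io/part-of: project-contour
    app.kubernetes.io/managed-by: kubectl
```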
A: It's a good thing to spin up and start with, and that's here in this examples folder. What you do is edit the examples folder, and then there's a make command called `make generate` that then generates this rendered file, which basically takes all of those different YAML files and smashes them together into one file. And what happens is, this becomes the source of the quickstart. So if you — if you go...
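The flow just described, from a checkout of the repo, would look roughly like this (a sketch; the rendered-file path and quickstart URL are assumptions based on the discussion):

```shell
# Edit the split manifests, then regenerate the combined "rendered" file.
# ...edit YAML under examples/contour/...
make generate   # smashes the example YAML files together into one file

# That rendered file is what backs the quickstart people apply with:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
```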
D: By the way, something that I've always been curious about: what do people's, like, target environments look like when they're testing their work on this? Because I know that you made a video at some point, and I think — I think you were using kind in your environment, like a target environment, Steve. Isn't that true? Yeah.
D
Yeah
and
I
also
picked
that
up
and
but
then
when,
when
doing
like
an
actual
actual
development,
builds
and
and
uploading
the
having
real,
quick
turnaround
when
editing,
building
and
deploying
Cooper
a
test
and
don't
know,
tips
and
tricks.
I
I
haven't
seen
seen
anywhere,
for
example,
I'm
I'm
myself,
I've
done
it's
on
the
die.
I
have
no
like
a
empty
or
like
a
container,
with
only
only
sleep
running
in
that
and
and
then
I
like
copy
the
the
contour
binary
into
that,
and
then
I
can
quickly
run
it
manually
and
other
quick
turnaround.
Okay,.
A: So what I do is — I have to make do with all these windows dancing around — in the Contour repo there's an example kind config, this one; this is what I use. So basically, when you spin up a kind cluster, you can map ports from the Kubernetes cluster to your host machine — so I map 80 and 443, just for fun. So what I'll do is go ahead and say `kind create cluster`, and then here I'll pass that config file, that kind config.
A: So what that does is — basically, because I'm mapping these ports to my host, now, when I deploy an app to my kind cluster, I can curl localhost:80 and it'll route. The other thing that we did was — a lot of the examples here, in the blogs and stuff, use this name, local.projectcontour.io, and we've configured that DNS name to point to localhost.
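A quick way to see what that convenience name does (a sketch; assumes `dig` is installed and you have network access, and that the name still resolves the way it's described here):

```shell
# local.projectcontour.io is a real public DNS name that is described
# as pointing at loopback, so demos can use a hostname without
# editing /etc/hosts.
dig +short local.projectcontour.io
# then, with the kind port mapping in place:
curl http://local.projectcontour.io/
```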
A: So once this spins up — I'll show you real quick; go, go, go. So what I'm going to do, to do this development, is have Contour run locally on my Mac, and then I'll have the Envoy in my kind cluster look back to my machine for Contour. That way I can basically spin up Contour locally, Envoy will connect, I can test my thing, and then I can, you know, rebuild Contour over and over. So what we'll do is go ahead and...
A: ...let's apply Contour — I'm just going to apply the examples; let me just look at it. And then what I do is go ahead and edit the DaemonSet, and in here there's that flag for the xds address. This xds address is what we create in this bootstrap config — this is where we're going to tell Envoy, hey, this is where your Contour is — and for me that's my local machine; I think it's 10.50.131.186 — I think that's what I am. Yeah, I'm .186.
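The loop being demonstrated — Envoy in kind fetching its config from a Contour built on the host — might be sketched like this (flag spellings, namespace, and the host IP are assumptions from the discussion, not verified against the current release):

```shell
# 1. Point the in-cluster Envoy back at your host: edit the Envoy
#    DaemonSet / bootstrap so its xds address is your machine's IP
#    (10.50.131.186 here is just the IP mentioned in the call).
kubectl -n projectcontour edit daemonset envoy

# 2. Build and run Contour locally, then iterate:
make               # rebuild the contour binary after each code change
./contour serve    # Envoy in the kind cluster connects back for config
# edit code -> make -> restart `contour serve` -> curl to re-test
```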
A: ...do a code change, build it locally, and then this will start Contour. Contour will spin up, and then the Envoy running in kind should look for this Contour running here and get its configuration — and there it goes; I think that's the Envoy connecting here. So let me go ahead and make a new window and look at our pods. There it goes, all right. We should scale...
A: ...scale the Contour replicas to zero. It doesn't matter, because we changed Envoy, but we'll get rid of those. So now this Envoy here is looking at my local machine for its config, right — it's getting all its configuration from there. So if I go here and I just apply this demo — the site demo is a generic thing, and in the blog I used that one — now, if I get the proxies, you'll see I have — we use local.projectcontour.io, so I...
A: ...can do the lookup — it does a lookup; I have nslookup on here, I do — and it's loopback, 127.0.0.1. So then, if I look at my Docker machine, it's running on port 80, mapped to my local machine here — see, port 80 to port 80. So now, if I do a curl to that, it'll respond, and I don't have to deal with any kind of DNS or anything.
A: Yeah — you think it is, yeah — you can do, like — I do have to sometimes. So out in my cloud environment I have — you know, I think it's EKS and AKS or something — where I've got real load balancers and real ports, those sorts of things. Sometimes I'll just make an image for that, and you can do — there's, I think it's REGISTRY...
A: Yeah, it's VERSION — so a VERSION, you know, something — oops — and then you can say `make push`. What that'll do is build an image at where I am now and then tag it with that tag — so it'll be, say, stevesloka/contour:something. Then you can go edit your DaemonSet — or your deployment — and change it to that image. But you have to do this push every time, which is similar to what you were doing, though. That's the fast one.
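Assuming the Makefile variables named in the call (REGISTRY, VERSION) work the way they sound, the push-based loop is roughly (registry, tag, and namespace here are illustrative):

```shell
# Build an image, tag it with your registry/version, and push it.
REGISTRY=docker.io/stevesloka VERSION=dev-test make push

# Then point the cluster at the new image (deployment or daemonset).
kubectl -n projectcontour set image deployment/contour \
    contour=docker.io/stevesloka/contour:dev-test
```

This trades the fast local `contour serve` loop for testing against a real cloud load balancer, at the cost of a push per iteration.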
A: This will basically let you create a tunnel back to your local machine through a public URL, so you can basically, like, expose the Contour running locally to the Envoy running out in your cluster. It's a little more insecure, because you're going to expose yourself to the world, but for the stuff I'm doing it's not a big deal. ngrok is one, and then there's another one called inlets, from Alex Ellis.
A: This was neat because — ngrok uses some public servers, and everyone can, you know — I'm not sure of the security of it — but inlets spins up basically a VM in an environment somewhere, and then that VM has a public address, and you use that to tunnel back to your machine. So you kind of own all the bits for it, which is kind of nice. But they both accomplish the same thing, and you could do the same thing with this.
A: There's an endpoint — and this is right; it's funny how, when you don't do Docker for a while, you forget there's an IP address on the Docker network that is your local machine, I think, which is consistent — it won't change if you, you know, got up and went to a coffee shop or something. I just end up using my Wi-Fi adapter's IP address, and it seems to work fine. Okay.
A: Sure. So this is the — when you create a kind cluster, you can pass a config. So with this config, we're going to have one control plane, we're going to have one worker, and then on the worker we're going to map some ports. We're going to say port 80 in the — in the container, or in the cluster — is going to map to the host's port 80 on my local machine, where I'm running kind, and it's going to listen on, again, 0.0.0.0. So when I spin this up, after I create this, if I do a docker...
A: ...ps — it's just hard now with the fonts — what we'll see here is: this is my node. You can forget the other one; that's for a different thing. But this one — it bridges, you know, 0.0.0.0:80 into 80 in the container. So basically localhost:80 is now mapped to 80 in my container, which is my Kubernetes cluster. And then, because we run Contour — or run Envoy — with host networking, with the host port — we go back to here, and voilà, yeah — so we run on, our default...
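The kind config being walked through would look something like this (a sketch; the file shipped in the Contour repo may differ in detail):

```yaml
# One control-plane node plus one worker; the worker forwards container
# ports 80/443 to the same ports on the host, listening on 0.0.0.0, so
# `curl localhost` reaches the in-cluster Envoy via its host port.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Used as `kind create cluster --config kind.config`.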
C: Then this is how, like — if you're using, you know, local.projectcontour.io and you curl it, it's actually just hitting localhost, but because of this kind port mapping, that gets routed into the Envoy pod inside your cluster that's using this host port, and then the traffic actually gets routed to wherever you have your HTTPProxy set up to point to, right?
A: Right, yeah, mm-hmm. Now, if you did need more — out on the kind website, I did add docs for that. So if you go — this is kind's generated doc site, and there's an ingress tab here. So this has a similar thing that's a little different, and what's different — it's actually kind of nice — is that these examples tag a node in your cluster to be an ingress node, basically. So what this mapping lets you do is — you basically have a similar config, but what we can do...
A: ...is, you can have more nodes in your cluster. With the example that I showed you here — not this one, the kind config — you can only expose one port to 80, right; I can't map multiple containers to port 80. So if you added a second worker to your kind cluster, it would fail, because you'd have duplicate ports. The example here lets you basically — you know, you could have a five-node cluster and only make one of the nodes an ingress node, in a sense. It's doing that through this — this configuration here.
A: You have to apply this back to it, and that will tell it, basically, to patch it with the node that you want to do it on, and then it makes one of the nodes basically an ingress node. But you can have more capacity, if you need it, in your cluster, depending on what you're trying to test. For the things I'm doing, one node is fine, but maybe you need more horsepower to do whatever you're going to do. Is that clear, I guess? It's a similar thing.
A: It's just going to tag a certain node with a — with a node label, and you can expose more if you had different ports or different things. Maybe you don't run on port 80 — so say you want to run on, say, 8080. You could change this, you know, to port 8080 in your cluster, and then, when you run it, you could change this Envoy service HTTP port — it's telling you what port you want Envoy's listener to be. So by default it's actually 8080.
D: I've also always used 80 and 443, so they're convenient to use with curl or HTTPie or something. But what I did differently in my environment is that I don't use the wildcard IP address as the listening address; instead I put something like 127.0.0.101, or something like that — a dedicated localhost IP address for the cluster. So then I can have several clusters in parallel, and they don't — they don't overlap with the port bindings. Yep.
A: Yep, absolutely, yep — so yeah, you can do everything, yeah. And I've done it too: in my house, the same exact config works. I have, like, a little Intel NUC — so, my Mac, you can't hear it, but it just screams; the fans are just, like, always on. I don't know why, but it's just always maxed out with heat. So sometimes I'll run, like, a little cluster on the NUC and do the same thing.
A
So
I'll
have
envoy
running
on
that
machine
and
then
my
local
host
will
have
contour
and
I'll
just
do
the
same
configuration
but
instead
of
being
localhost
it'll
be
the
IP
of
that
machine.
You
know
to
make
requests
to
zoom.
The
same
scenario
could
work.
If
you
had
a
different,
you
know
cluster
locally
or
somewhere
else,
where
you
can
have
them
talk
together.
D
A
A
A: An IP? Yes — I mean, yeah, let's search for that — yeah, yeah, anything works; it doesn't matter how you do it. Yeah, I mean, you can have — you know, I used to edit my local /etc/hosts file on my machine and add things into there. And then, I think, one time Jonas and I did a demo — it was a blog post, actually, on the projectcontour site; I think I wrote a blog post — I wanted to make it easier to use. Was it this one? This was actually a good article.
A: This is actually a year old — of how to do this — but I think we used this local.projectcontour.io thing... no, we didn't, never mind. Yes — and we did the /etc/hosts thing; that's how I used to do it. I know sometimes we did that, and then Jonas and I came up with the idea and we added the local.projectcontour.io name, so that helped.
A: Yeah, this is — this might be good. So Contour, like he said, is a deployment, and Envoy runs as a DaemonSet. So we run Envoy as a DaemonSet for a couple of reasons. One is that, again, by default we're using host ports, and you can only map one container to a host port on a specific node in your cluster, right — so, grabbing ports 80 and 443 on your host, you couldn't have two containers running, you know, in the same cluster mapping to the same host port.
A: That would be an error. So a DaemonSet always ensures that only one instance of that container runs on every node in the cluster. The second thing you get is that Envoy scales nicely against CPU threads. So if you, say, you know, needed to add performance to your cluster, adding, you know, two or more instances of Envoy on the same machine...
A: ...isn't, like, going to give you more performance — you're not going to gain a whole lot by doing that — so you'd be better off adding more nodes to your infrastructure to get more performance. So this ensures that you're going to give as many, you know, threads as you can to one instance of Envoy. And the third thing you kind of get from doing the DaemonSet...
A
That
idea,
because
we're
using
host
networking
we're
gonna
skip
the
the
the
CNI
overlay
hop
right,
so
we
use
node
ports
per
se,
and
traffic
came
in
here
from
out
here
in
this
world
and
hit
this
load
balancer
it's
gonna
route
to
a
node
port
and
from
there
queries
will
decide
what
pod
should
get
it.
So
maybe
this
node
one
here
will
get
it,
but
really.
No
three
here
is
gonna
heal
the
request,
so
the
request
would
come
in
hit
load
balancer
hit
the
first
node
and
then
route
east,
the
west.
A
To
this
other
node
hit
that
Envoy
and
then
go
off
to
the
pod
with
host
ports.
We
read
route
dress,
so
envoy
rots
directly
deposit
our
works,
so
you
can
figure
it
with
the
community
service,
but
that's
just
to
tell
envoy
basically
where
all
the
endpoints
exists.
So
once
it
once
ever,
Qwest
comes
in
hits
envoy,
that's
gonna
route,
dressy
to
the
pod
or
always
gonna
decide
which
one
should
go
next,
so
you're
gonna
skip
that
extra.
A
Hop
that
note
ports
will
give
you
by
going
directly
to
the
load,
balancer
and
then
dressing
to
an
insensitive
envoy
and
then
off
to
your
it's
your
application
directly.
So
that's
kind
of
a
white
tube,
but
really
it
doesn't
matter.
If
you
wanted
to,
you
could
run
envoy
as
a
deployment
get
rid
of
the
you
know
the
host
port
mapping
thing
and
it
would
work
just
as
well
so,
whatever
maps
to
what
you
environment,
what
you,
what
you
need!
You
know
yeah.
B
A
A: All you need is at least one instance of Contour running — you can have multiple. And here it says "leaders"; we can explain that a little bit too. So you can have as many instances of Contour as you'd like — you can have 100 of them. It has internal leader election, and what that does is — it uses a feature in client-go that just uses, I think it's ConfigMaps today, and what it does is...
A: ...basically, it picks one to be the leader, and that leader is the only one who can write state back to the API server. You see the arrow here — it goes this way. So Contour will write status for HTTPProxy resources, and for those Ingress resources it'll set the load balancer address that you've got configured up here. But you don't want to have all of them writing out that same — that same information, so the leader election defines one to be the leader, and the other ones are just active listeners.
A: ...and you have hot standbys running, sort of. So if, for whatever reason, the node that this instance of Contour is running on would go down for maintenance or whatever, Envoy will still serve its last known good configuration. So it's kind of like Kubernetes: if you take the API server away, everything will still work with the config it last got.
A: So if, for whatever reason, this leader would go — or this instance that Envoy was connected to would go away — Envoy will go reconnect to a different one. And if you had another one sitting there, hot, ready to go, it'll just, you know, reconnect to that and things will go on — so you'll just reduce the downtime if you've lost one of your instances of Contour. Okay, I guess it wouldn't be downtime — it would just be configuration downtime, if that makes sense. So yeah, yeah.
D: I have one random question that I've been thinking about at some point — I should have checked it myself, but I haven't, so I'll ask about it. I've been thinking that Contour is listening for secrets also — so will it actually get notified of every secret, and really cache every secret, not only the ones that are required from the HTTPProxy? I...
A: So yeah — and that's the same for — I know that for clusters — clusters, we used to apply changes for every cluster; we don't now. We only store — we really pass off to Envoy the clusters that actually get referenced from a route, whether it's an Ingress or a proxy resource. But secrets are the same kind of question, because there's some overhead in terms of parsing the secret to validate it and stuff. I think there was just an issue on that, because I was working on it right now.
A: One thing we do differently is endpoints — not every endpoint gets pushed to Envoy today, because we handle endpoints differently. So Contour has this idea — it builds this — so, I'm stuttering a little bit — when Contour goes to build configuration, right, it gets all the events from the API server and it stores all of those, and we have this local cache; it's called the Kubernetes cache. And what it does then is, on some sort of interval...
A: ...we go and we build this DAG — this directed acyclic graph — in memory, and that's how we do the proxy walk, to know, you know, whether this person has the right delegation or not, or inclusion, for a path or a set of headers or something. So we walk down that DAG and we build out that, you know, that graph in memory, and then, once we have that, we walk it and we build out the Envoy configuration. That takes a little bit of time, and it's not, you know, as real-time as an endpoint change would be.
A: ...the xDS streams — all of those go over this single gRPC connection, and that's one kind of benefit: it reduces the amount of overhead in terms of those gRPC streams all over the place. And then, two, is that there's an ordering built into xDS, so that if you reference a cluster that doesn't have endpoints yet, it won't get pushed to Envoy — meaning you won't have bad configuration in Envoy — but there are a couple of things that need to happen to get to that point.
A: Yeah — and there's logic there, too. So you can pass in — let me just go to it; I'll do it faster in here — so in `contour serve` you can pass the root namespaces, yeah. So basically you can restrict where the root proxies live to a set of namespaces. So when you do that, it also restricts where Contour watches for secrets. What's the second thing — so where is it — yeah: for every root namespace, it will only set a watcher for that namespace.
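A sketch of the flag being shown (the exact spelling is assumed from the discussion, and the namespace names are illustrative):

```shell
# Restrict where root HTTPProxies may live -- which also restricts
# which namespaces Contour sets secret watchers for.
contour serve --root-namespaces=projectcontour,edge-system
```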
A: If you set the root namespaces — so, basically, Contour will only have watches for the namespaces that you define, based on those root namespaces; any others it won't even look at, which can help, again, reduce the scope of what you want to do. I think if this is zero down here, then it doesn't get set somewhere — I know it...
A: ...does; it's in there right now. But yeah, it's only going to watch namespaces if you define those. Now, a thing that is kind of cool — I know folks have asked about not watching all namespaces: having Contour just watch a specific set, even for, like, you know, services and endpoints, those sorts of things. I think that use case was having, like, a multi-tenant cluster, where they wanted each tenant to have their own ingress controller, in a way. But we haven't ever implemented that yet; we've talked about how that would work.
A: So those are in — it's an internal DAG cache, yeah. So all these are cached; this is our cache — it's the big map of things. We've talked about moving to that part of client-go — the, the kube controller-runtime stuff; there are different components in there that we've talked about moving to, we just haven't yet. I think we just built our own — we built our own xDS implementation, our own caching and stuff, but we could totally switch it; it just was history, I guess. Fair.
A: Yeah, everything comes into here. So — in serve, if that's interesting for folks: serve is where everything starts. So down here there's a bunch of informer stuff, yeah — so we could use informers; this is how we basically inform on services. So right now we're doing everything with the dynamic client. So here we're going to say, hey, go grab the services schema — this scheme group-version thing — we pass that off, and then it tells client-go, hey, go watch for changes on that.
A: So this is the interface here: on add, update, or delete — see, all the different types of events hit the same add, update, delete, and that's here — add, update, delete. Once it gets that, it'll go ahead and pass that off to the cache. So here, like, we'd say insert, and then the insert here, back in that cache, has a big — big case statement for, hey...
A: ...if it's a secret, if it's a service, if it's an Ingress object, or whatever it is — and then we add it to the cache here. So you can add to this cache, and then, every so often — on whatever interval that is; you'll see that here in this loop — this cache event handler is what builds the DAG out. So there's a timer that says, hey, if we hit a certain time, or a certain set of outstanding unprocessed events, then we'll go ahead and build this DAG.
A: So here, update DAG says, hey, builder — so the builder is the tool that translates all of those ingress objects into what we call these DAG objects. So this is how we can handle, basically, Ingress objects, the old IngressRoute objects, proxies, and then soon we're going to do the Service APIs work in here. So this — this builder is the translation layer between those specific types and our generic internal Contour layer. Once we have that, then we can go ahead and build out the Envoy config.
A: These — these caches here should have every object in them, no matter what, and then there's a different cache for every type. So here — this is clusters, which is essentially services in Kubernetes. This cache — this set of values here — is the Envoy object type, so v2.Cluster; this is the Envoy-specific type, the protobuf type. So this type here is going to have — this is what actually — so when we walk through the builder, the builder is only going to add to this cache — this one here — the things that are referenced.
Everything
else
won't
get
added.
It
just
gets
kept
in
the
crew
Brady's
cache
and
then
once
we
have
this.
This
is
what
gets
mapped
to
the
Envoy
XDS
server
so
down
here.
We
map
here
yeah.
So
this
is
right
here.
This
is
set
of
resources,
so
this
is
what
we
get
passed
into
the
our
XDS
server.
This
is
saying:
hey
clusters,
this
cluster
cache
type
URL
here,
which
is
this
cluster
type
type
lustre
its
its
source,
is
the
cluster
cache
we
just
looked
at
so
the
cluster
cache
here
has
here
has
the
contents
things.
A
A: ...this is what Envoy's going to call: there's a Contents, and there's a Query, to get a specific name. So this says, hey, take all these instances of clusters and then convert them to protos, and I'm going to return these proto messages back to Envoy. So when this cache changes, Envoy will then get updated automatically — so that links Envoy into — into Contour here.
A: So that's — that's the time to go build the DAG; once — once we've built it, then it takes a little bit of time for Envoy to actually get the update — to push that down to them. So again, it's never really been an issue for anyone, at least that I've heard of, the DAG being too slow — even on super large clusters and stuff, we've never had a performance hit from it, you know, it taking too long to go and build the DAG and everything.
But that's kind of the coolness of it, too: because we build it all based on this cache, every rebuild of the DAG is a clean build. So there's no "take what we had last time and then figure out the diff against it" — we just go from scratch every time. So no matter what, if we've missed something in the middle or something gets messed up, every build is going to produce a new set of that config.
A
A
This endpoints translator, instead of hitting that DAG handler, is going to hit its own. So this event handler basically says: when an event comes in on the Endpoints resource, this handler is going to get it. So over here in the endpoints translator you'll see we have the same OnAdd, OnUpdate, OnDelete interfaces, and they handle it. Actually, we don't do a rebuild like the DAG does; we just handle the diffs every time.
A
So, you know, on an update we'll go and check: if there are no subsets and there are no endpoints, we'll skip it before we compute them from scratch every time. That's how endpoints look, and these become ClusterLoadAssignments — which, here, yeah, ClusterLoadAssignment — so you'll see, when we hit that, if you look at that Query — where is Query — yeah, so Query here returns from the ClusterLoadAssignment cache, which is local to this endpoints translator.
B
So, one more question — this is very basic, not only about Envoy but in general, for any ingress controller: what are the security vulnerabilities that we can expect? Maybe Contour is robust and everything, but there could be so many ingress controllers, so if we are going to look for any security vulnerabilities, we'd have to check with each vendor whether they have known issues or not. So in that sense, would you be able to give some idea on this?
A
Yeah, so for us, all the vulnerabilities we've had are in Envoy. So Envoy has, you know, different things that come out, and you'll see us patching those for different things. So our releases sort of track the latest that we support, in a way. So if you look at the latest release — this one, 1.6.1 — it's here because of Envoy CVEs. So here we have them linked in; these are all the CVEs that existed in Envoy 1.14.2 and earlier.
A
So that's the primary place that we see vulnerabilities, and it's not really maintained by us — the Envoy community itself helps manage that lifecycle. Contour itself: when we go build the Contour image and stuff, it's just from scratch. So I'll show you the Dockerfile: we basically go build it, and then we do it from scratch, so the only thing in the container is literally the contour binary.
A
So there's nothing else in there to really have a CVE, unless something in the Go toolchain — you know, the Go code base — has a vulnerability or something; say Go 1.14.2 has some sort of issue, and that would be a CVE. But the biggest attack surface is going to be Envoy, for sure. Yeah.
D
A
D
A
Yeah, a lot of it is pretty common open-source stuff — Go linting things, and then the client-go libraries are all there. There's not much that we have that would be in a repo that's not maintained all that well — you know, some — but yeah, it could definitely happen. You're absolutely right, yeah. So.
C
A
C
D
A
A
A
D
A little bit surprised that it didn't, because it should do the kind of analysis that really goes into, let's say, if you have something strange in a goroutine inside your logic. Yeah, so I would have expected at least some things to be reported, like false positives. So if there is nothing, then I would suspect that it's misconfigured, yeah.
A
Yeah, I'll try this real quick, just for fun, to see. We did — we do run that. We just — we talked about this — golangci-lint: CI runs golangci-lint. This finds a lot of stuff for us as well. I know it's not so much security, but this runs on the PR as well, and then in make check all this will get run, so yeah.
D
One curious thing that never happened: when I was doing this client certificate authentication contribution, there could have been a vulnerability in that logic — that kind of a category of bug could happen, of course, later also. In this case, I think it was James.
D
Who spotted the problem: there was this — the routing was configured in a way that one could go in with a certain SNI hostname but then put another hostname at the HTTP level. So that's a kind of a category of bug that wouldn't come up with any of the scanners, and still it would be a Contour bug, not an Envoy bug — we would have been configuring Envoy in the wrong way. But luckily it was spotted. But that's a category that could happen, sure.
D
One thing that I'm glad about is that there are now some of those cases executed in kind, in the CI. I think that was also something that James did — not only unit tests — because I know that there are certain types of problems that you cannot really catch in unit tests.
D
A
Yeah, I mean, we had an issue once where we changed, like, the logging path or something, and then we switched — I think we switched the binary underneath from Ubuntu to something else, and then the path that we were logging to doesn't exist in the other OS. So we didn't know until someone ran into the problem. But yeah, those are in this — I think, Steve, you asked about this the other day; I don't think I ever answered you, so we can explain that real quick, out here in projectcontour.
A
There is — where is it — this thing, integration-tester. So James authored this thing. So it's a tool to help run these integration tests, and it uses this thing called Rego to do the definitions of how the pass/fail should happen. So you need to install this tool and have it in your path, and then these test suites come out of here. So there's a bunch of fixtures — so here, let me explain, oops, how you can run it.
A
A
A
A
A
Probably, you know, because we're using the same proto types under the hood, but who knows until you actually run it. So we used to do smoke tests by hand before a new version came out, and the idea was that that's kind of silly — we should, you know, have some tooling around doing that for us. So that's how all that came about. So yeah, it's all pretty new; we just really haven't, I guess, gone headfirst into making it a huge priority.
A
At this point it's still very much just me; I haven't done much with it recently, yeah. This is what James wrote. So this creates a cluster, basically loads the image into the cluster — so you don't have to pull it from a repository — and then applies it and then deletes it. I'm burning CPU cycles for nothing on this one right now.
C
Yes, I have a question I had on the PR a few days ago. You know, as I'm making changes: there are unit tests within the packages, and that's easy enough. Then there's the featuretests package that you've mentioned; there's also an e2e package, which looks very similar to the featuretests package; and then there's the integration stuff. So I'm just trying to figure out where you usually make changes, and how you weigh those ones.
A
A
So what they do is they don't actually spin up, like, an Envoy or something, but they spin up everything up to the point of spinning up an Envoy. So we spin up a gRPC server locally, and then we feed it a bunch of Ingress resources or Services or all the Kubernetes things that we watch for, and then we basically assert that only when we pass in these Kubernetes objects do we get this Envoy configuration out. So we just validate, like, hey.
A
We would pass the right thing to Envoy. Whether Envoy actually processed that part correctly or not was sort of outside the scope of that test; it was just to say, hey, with these inputs from Kubernetes, we had the right outputs to Envoy. That's what those were, but they were real generic and they kind of got big — because, you know, testing, say, endpoints is a big space. So we started breaking those apart into these things called feature tests.
A
So the idea was to move things from the e2e tests into feature tests. Basically, feature tests and e2e tests are essentially the same thing — the e2e tests came first, and then feature tests were the new ones. So this was more like, hey — like we did with ExternalName, right: someone can create a Service of type ExternalName with different parameters, and this is how it should be — hey, this is how we should program Envoy.
C
A
So this setup here — hopefully this is big enough to see — this setup here will build basically a gRPC server in memory, an xDS server, and then we basically go ahead and say, via a resource handler: add the Service we've created, defined here; add this dummy Ingress resource; and then here we assert. So we should have, basically, in this discovery response from Envoy:
A
We should have a route called this, with a virtual host of star, with this path and this cluster — and there's some magic here in terms of formatting this to be smaller — and then we validate that we have, you know, these clusters here. So this is adding a cluster; this is the other way around. So it's this thing: hey, we should have this route in Envoy.
A
We should have this cluster in Envoy, and then we go ahead and delete that Ingress resource, and then we add — looks like a new proxy, okay, it's adding it — so we delete the Ingress resource, and now we're going to add an HTTPProxy resource. We should validate that we have that same route again, with the same cluster, and then, if we add another resource, we should now have that one existing in here — the second route and cluster.
A
So that's validating that the ExternalName thing works. So these are handy: if you go and change something somewhere that should have an effect on these — yeah, I mean, these are all about how every feature should function and work, from Contour — I'm sorry, from Kubernetes — into Envoy. And if you change how something functions, say this ExternalName thing, then these tests should fail instantly; you should know it easily.
C
A
A
They run fast and they don't take much time. So hats off to Dave Cheney for a lot of this stuff; he helped bring the vision of this in, so this is, you know, good pioneering, all this, yeah. It is a lot of boilerplate, and then I get hung up on all the little helpers that we create to make the code less verbose — because then, when you change something, you've gone and modified a little helper, and that helper breaks a lot of things. Yeah, it's not super simple.
D
What kind of configuration do I need, anyway, to create that feature — but then, when writing unit tests for that, I need to, like, work backwards and see that, okay, this is the code that I wrote that creates the Envoy configuration; now I put the same code here into the unit tests, to verify that it actually sends what I just wrote a moment ago. But of course, I'm not doing it for myself; it is more for the next developer, who then benefits from that.
D
D
A
Yeah, for sure, yeah. These are great just to explain, you know, how Contour should work — forgetting all the little unit tests, which are good for a smaller component — but this is very much, you know... what's a good one? Like, I don't know, retry policy might be a good one: if you pass in a Service with this Ingress resource, with these different annotations on it, then you should get this kind of output.
A
You know, so it's just very good at showing, functionally, here's how it should work — and then, if you change how these, you know, timeouts affect the output, all these tests fail instantly, which is good. You know, like you say, you don't have to understand everything that's there; you'd know with confidence that you broke a Contour test for that feature, you know what I mean.
D
One thing that I observed that surprised me: in some cases the helpers are exactly the same helpers that are used by the actual code that creates the Envoy configuration. So then there are cases when the code that is being tested is using a helper that creates a structure or something, and then the unit test is calling the same exact helper, which creates the same structure — so that doesn't give that much insight, you know, yeah, validation-wise, yeah.
D
C
So, yeah, so anyway, I'm hearing: don't touch the e2e package, 'cause that's kind of the old thing; feature tests are good when you have something that kind of bubbles up to the level of changing inputs and outputs; and then integration tests are newer, and maybe think about adding something there if you want something that's actually running in a cluster, yeah.
A
C
A
It would be great to, like, figure out — loop through all these e2e tests and figure out what all the different ones are, and then make a list in here or something, and then folks could pick off one or two at a time, whatever, and just push them through, you know, to move them into feature tests. You have to figure out what the feature is, and I mean, like, yeah — some of these are just generic, like resource filtering, I
A
think it's filtering discovery requests from Envoy — can I, you know, ask for certain things — but yeah: circuit breaker annotations, things like that, that's an easy lift-and-shift from here to there, and then there's one that's a different thing. It might — yeah, so it'd be good just to do that; it's not hurting anyone right now, it's just, you know, one of those things.
C
D
A
A
So I think our next one is — I think they're every two weeks; let me check the schedule — yeah, August 6th. So I think every two weeks we're going to try and do these, but feel free to chat, and if there's something you'd like to talk about, we can do that; we can do more of this. This has been great, just chatting about Contour.