From YouTube: Contour Community Meeting - August 25, 2020
Description
August 25, 2020
News
[youngnick] Bringing forward the 1.8 release: we have a few bigger PRs almost ready to go; we will cut 1.8 before they come in, which will be a few days early, on the 28th of August instead of the 31st.
What have we been working on?
[stevek] Guide for using Gatekeeper with Contour
Feedback
[stevesloka] Which version of Kubernetes users are using (re: CRD v1).
A
Okay, hey everyone, welcome. This is the Contour community meeting for Tuesday, August 25th. This is the U.S. time zone meeting, so I will paste the HackMD in here for folks. That's our current schedule. I guess I could share it too.
A
And
get
started
feel
free
to
add
something
to
that.
If
you
want
to
see
me
talk
about
just
pop
it
in
there
somewhere
and
we
can
get
to
it,
we
have
sort
of
light
agenda
today.
A
Okay, so I think, yeah, I don't know if Nick's here; I doubt he is, it's super early for him. But I think we're looking at releasing 1.8 sooner, meaning this week. We have a few refactoring PRs in the backlog, and the idea is that we didn't want to throw those in right before release and have that muddy up the release, just for fear of them causing issues, because they're sort of big refactors.
B
And then the other thing is that for the next release, 1.9, we're hoping, given the extended time we're going to allocate to that release, all the way to the end of September, that we have an opportunity to come up with a big feature that we're working on, so it might be a bigger release.
A
Yeah, for sure. I think we'll get some of the notes out soon with that, so that's how it's going to happen. If anything changes, we'll be sure to post in Slack what's going on, but you'll see that come out, probably this week. And then, Steve, you had some things you've been working on with Gatekeeper, and you want to chat about that?
C
Yeah, I've talked about this a little bit in some of the past meetings, but I've been working on experimenting and figuring out how to use Gatekeeper with Contour. For those of you who don't know, Gatekeeper is part of the Open Policy Agent project, which is a CNCF project, and Gatekeeper functions as an admission controller for Kubernetes that allows you to write OPA policies to perform validation or policy actions against items that you're creating through the Kubernetes API.
C
So over the past couple of weeks, I've been working on putting together a guide to go on the Contour website that walks through how to get Gatekeeper up and running with Contour, and I've also started to write a handful of sample policies that you can use: things that perform some basic validations on HTTPProxy resources that you're creating, and also things that enforce policy, so, for example, only allowing timeouts to be within a certain range, and other things like that.
C
So
anyway,
this
this
pr
is
up
for
review.
I
you
know
if
anyone
is
interested
in
this
and
wants
to
kind
of
take
the
guide
for
a
spin
and
and
provide
any
feedback,
that'd
be
appreciated,
and
then
the
hope
is
to
continue
to
add
to
our
library
of
sample
policies
so
that
you
know
we
increase
the
number
of
validations
that
you
can
implement
through
gatekeeper,
if
you're,
if
you're
using
it
and
also
have
some
additional
sort
of
policy
based
constraints.
If
basically
for
you
to
draw
inspiration
from.
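As a rough illustration of the kind of sample policy being described (the names and the exact rule here are hypothetical, not taken from the guide under review), a Gatekeeper ConstraintTemplate whose Rego rejects HTTPProxy routes that ask for an unbounded response timeout might look like:

```yaml
# Hypothetical sketch: reject HTTPProxy routes with an infinite response timeout.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: httpproxytimeoutrange
spec:
  crd:
    spec:
      names:
        kind: HTTPProxyTimeoutRange
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package httpproxytimeoutrange

        violation[{"msg": msg}] {
          route := input.review.object.spec.routes[_]
          route.timeoutPolicy.response == "infinity"
          msg := "HTTPProxy routes may not use an infinite response timeout"
        }
```

A matching Constraint resource of kind `HTTPProxyTimeoutRange`, scoped to `projectcontour.io` HTTPProxy objects, would then enforce this at admission time.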
A
Yeah, I think this is following up on some of your requests around the things that you folks needed in terms of restricting what certain users can define within the CRDs. So does this make sense to folks? Would folks find this useful for their environments?
D
This looks great; it's actually a great starting point. Are you seeing Gatekeeper becoming sort of an add-on to Contour, or would Contour continue to do validation on its side, with Gatekeeper as an earlier validation point that's optional to Contour?
C
I
think
I
think
that's
basically
how
we're
thinking
about
it
yeah.
You
know
anyone
else
can
can
chime
in,
but
I
think
you
know
we're
we're
not
going
to
require
contour
users
to
run
gatekeeper
and
to
use
these
validations,
but
if
they
are
and
we
have
the
validations
implemented
as
gatekeeper
constraints,
then
a
user
can
just
get
earlier
feedback
and
essentially
have
their
resources
rejected.
E
It's very cool; it's Michael here. I think this looks cool. Just one thing to ask is to make sure that we can reuse an existing Gatekeeper: cater to the use case where there is already a Gatekeeper installed and we can plug into that, versus deploying a separate one just for Contour.
C
Yep,
I
think
that
makes
sense,
and
you
know
what
we
have
in
the
guide
here.
As
far
as
the
deployment
is,
is
pretty
much
just
pulled
out
of
the
gatekeeper
dock.
So
I
think
if
you
already
have
a
gatekeeper
installed
in
your
cluster,
you
can
essentially
just
skip
the
deployment
section
and
and
move
on
to
applying
the
policies,
but
probably
worth
calling
that
out
explicitly
here
right.
C
Yeah, definitely, I could do that either in one of these or in the office hours, for sure.
B
And I think, to kind of look forward, this work coming along is obviously going to be super great, but then down the line, a lot of folks are using Gatekeeper and OPA for additional policy enforcement, right? So you can think of a situation where a container has vulnerabilities, for example, and you want to basically just stop all ingress to it.
B
Just
kind
of
cut
cut
the
pipe,
so
maybe
an
opportunity
for
us
to
extend
this
gatekeeper
policies
to
include
some
of
that
work
as
well.
I
know
network
policies
exist
today,
so
you
could
actually
disable
at
the
cni
level
access
to
pods
and
containers.
But
maybe
this
is
something
for
us
to
think
about
for
contour
as
well,
especially
if
it's
easy
for
us
to
pull
it
off.
A
I
was
thinking
of
another
question.
I
didn't
want
to
change
the
subject,
so
I
was
curious
about
folks
what
what
cluster
version
folks
are
using.
I
know
here
you
see
you
have
a
114.
I
know
we've
been
looking
at
moving
to
116..
A
Excuse
me
to
get
the
the
crd,
the
v1
crd
spec
stuff,
so
we
can
prune
out
the
invalid
fields
and
we've
had
a
lot
of
issues
with
folks.
You
know
having
bad,
yellow
indentations
or
some
spellings
or
something,
and
then
it
applies
to
the
server.
Then
you
think
something
should
happen
and
it
doesn't
just
because
you
know
yaml's
hard.
A
If you do, let me know; if you don't want to shout it out now, you can just chat me in private. I'm just curious, though. Just to be clear, you're talking 1.16?
E
I mean, we're actively planning: we've just got to 1.17, but we're already actively planning on 1.18. (Gotcha, very cool.) And we plan to keep pretty close, but I think we're also maybe unusual in that regard. There's a ton of enterprise people out there on OpenShift, and I don't know how those versions track.
A
Yeah
yeah
and
we've
been
reluctant
to
enforce
that.
Just
for
that.
For
those
reasons
that
you
know,
lots
of
users
are
still,
I
think,
on
older
versions
and
stuff.
I
was
just
curious
and
trying
to
you
know
my
finger
to
win
to
see
kind
of
thoughts
around
you
know.
If,
if
most
folks
are
there
that's
great?
If
not
you
know
what
can
we
do
to
support
that?
Those
new
features
that
are
coming
out
and
that
and
with
the
crd
stuff,
specifically,
I
guess.
F
So
cool,
thank
you,
kubernetes
supported
versions,
which
is,
I
think,
two
or
three
back
is
good.
Maybe
mixed
with
you
know
what
are
cloud
providers
supporting?
I
think
most
of
them
116
is
standard.
Now.
E
Yeah
yeah,
what's
interesting,
there
is
like
eks,
you
know
for
the
longest
time,
though
they
were
miles
behind
113,
because
115
is
considered
unsupported.
Now,
like
there's
no
more,
I
believe
even
patching
for
that,
but
in
the
last
couple
of
months,
eks
have
completely
accelerated
and
now
have,
I
think,
118s
out
there
so
that
they've
caught
up
and
look
like
look
like
they're
going
to
remain
current.
B
Okay,
so
our
official
support
policy
is,
you
know
we,
you
know
it's
it's
listed
on
the
control
website.
We
definitely
don't
support
that
far
back,
but
you
know
it's
not
a
hard
support
policy.
Actually.
Not
too
long
ago
we
went
and
added
the
caveat,
they're
saying
that
they
were
interested
in
working
with
you
and
if
you're,
on
a
version
of
kubernetes,
that's
older,
come
and
talk
to
us
and
we'll
discuss
it.
B
So
actually
I
found
a
note.
There
says
if
you're,
using
kubernetes
distribution
offered
by
public
cloud
provider,
where
you
don't
have
the
option
to
upgrade
to
a
more
recent
supporter
version.
You
know
talk
to
us
and
we'll
find
a
way
to
see
if
we
can
support
your
use
case
so
at
the
going
by
the
letter
of
the
law.
I
guess
you
know
we
we
support
all
the
way
you
know
with
our
latest
release.
We
support
116,
17
and
18,
and
you
know
19
is
going
to
come
out
tomorrow.
D
To
say
for
us
that
we're
also
tailing
the
versions
from
the
cloud
providers
and
mostly
like
trying
to
match
the
lowest
common
denominator,
which
used
to
be
amazon,
but
that's
they're
catching
up,
but
we
also
have
to
consider
compatibility
with
rancher,
which
we
use
for
internal
cloud.
D
So
that's
another
data
point
that
limits
us
in
kubernetes
versions.
All
this
to
say
that
we're
at
115
right
now
we
should
be
moving
to
116
soonish,
but
yeah.
The
support
policy
seems
reasonable
to
me.
I
don't
see
any
big
issues
with
that.
D
I
think
that
the
116
is
the
big,
the
big
shift,
because
it
deprecates
a
lot
of
stuff
so
that
one's
a
little
harder
to
to
go
through
114
to
115
was
pretty
simple.
D
But a related issue is remote IP preservation: basically, we're trying to reconcile all the different options for carrying the original IP address from the cloud load balancer, through Envoy, to eventually the pod receiving a request. As I understand it, there's support for enabling proxy protocol in Contour, which turns it on in Envoy.
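For reference, the Contour configuration file knob being referred to is, I believe, `use-proxy-protocol`; a minimal sketch of the ConfigMap contents (treat the exact key as an assumption to verify against the Contour docs):

```yaml
# contour.yaml (Contour configuration file, mounted via ConfigMap).
# When true, Envoy's listeners expect a PROXY protocol header from the
# cloud load balancer and recover the original client IP from it.
use-proxy-protocol: true
```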
D
I
was
running
how
much
control
we
have
over
the
the
hd
headers
that
are
then
passed
to
the
the
pod
to
sort
of
identify
the
climb
to
the
pod.
A
I'm
pretty
sure,
given
the
right
load,
balancer
that
this
should
flow
through.
I
know
with
like
like
elbs
and
aws
like
they
like,
not
that
that
you
lose
those
headers
when
they
come
through,
but
there
I
think
if
you
use
an
nlb,
then
those
headers
should
pop
through
in
the
aws
specific
world.
But
I'd
be
curious.
If,
if
you're
not
seeing
them,
then
yeah
we
should.
We
should
figure
that
out,
because
you
should.
I
think
that
it
should
just
happen
now.
It
should
just
work.
D
This is the next step: this is sort of turning that IP address into a site location, or continent, or city, or whatever. The first part of that is going to be to preserve the IP address in the first place, so we're probably going to move on that and then look at the tagging.
A
Gotcha
yeah,
no,
I
think,
yeah,
I
think
for
sure
I
think
it'd
be
great
to
either
add
more
configs
or
make
it
so
that
that
just
happens
by
default.
Getting
that
information,
because
that's
you
know
something
that
contour
should
or
contour
unvoice
should
give
you
out
of
the
box.
I
would
expect
most
people
would
want
to
see
that
information
so.
D
Yeah,
because
this
there's
sort
of
a
security
aspect
to
it
and
envoy
documents
this,
and
it's
just
a
question
of
how
much
you
trust
those
headers
coming
to
you,
which
complicates
things
a
little
bit
because
I'm
afraid,
like
the
default
opening
things
by
default,
may
not
be
a
good
idea
for
security
reasons,
but
I'm
sort
of
afraid
of
the
complexity
of
configuration.
It
might
end
up
being
as
complex
as
envoy,
which
I
know
historically
has
been
sort
of
a
no-no
for
contour.
So
but
we'll
see,
oh
I'll
start
thinking.
E
Yeah-
hey
steve-
I
haven't
done
this
yet
because
I
I'm
still
doing
some
research,
but
I
pinged
on
slack
about
the
that
nginx
annotation
that
passes
the
the
cert
to
the
stream
yeah.
So
I
haven't
forgotten
about
that.
I'm
going
to
do
some
research
into
where
the
invoice
can
even
do
that
and
I'll
I'll
raise
the
ticket
for
that
too.
A
Okay
yeah,
I
did
a
quick
search
and
it
didn't
seem
that
there
was
a
setting
that
I
could
tell
onward
to
do.
That
envoy
did
the
validation,
but
I
wasn't
sure
if
it
would
actually
pass
it
through
or
not.
G
Right, yeah. I also think I recall quite closely where it is mentioned in the documentation, and what can be passed. So after this call I can ping you in Slack, if you are there, so we can discuss this also.
F
Thanks. I'm curious if there are any updates on external auth, or if the roadmap is still, you know, solid.
A
Yeah,
I
believe
nick
pushed
a
change
to
that
to
the
we
pushed
a
few
things
back,
so
what
we
end
up
doing
so,
yes
to
answer
your
question,
external
auth
is
definitely
on
the
road
map
of
getting
done
through.
We,
we
finished
up
the
the
design
docks
for
that,
and
I
know
james
has
a
pr
to
enable
we
had
to
switch
around
some
endpoint
work
so
traditionally
contour
maps
a
a
cluster
in
envoy
to
a
service
in
kubernetes
and
then
the
endpoints
associated
with
each
go
into
each
cluster.
A
In
envoy,
there
was
a
design
idea
with
the
auth
proposal
where
we
would
have
multiple
upstream
so
so,
just
sort
of
like
how
http
proxy
allows
multiple
services
to
be
to
be
mentioned
from
a
single
route.
We
wanted
to
kind
of
make
that
same
thing.
The
same
parity
go
through
the
problem
is,
is
that
envoy
doesn't
let
you
have
multiple
clusters
kind
of
a
multi-service
upstream
in
the
connection
to
the
external
auth
server,
which
is
the
grpc?
A
So
what
we're
going
to
do
is
we're
going
to
have
the
alternative
is
to
make
a
locality
load
balanced
cluster
in
envoy
right.
So
now
we
can
basically
have
multiple
upstreams
associated
from
a
single
cluster
and
to
do
the
weight
shifting
you
can
just
supply
weights
within
that
locality
within
a
single
cluster
so
to
the
users
it
you
will.
The
api
contract
is
exactly
the
same.
It's
just
the
implementation
in
the
back
end
on
voip,
so
we're
looking
at
how
we
can
do
that.
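A rough sketch of the Envoy shape being described (service names, ports, and weights are invented here; this illustrates Envoy's locality-weighted load balancing, not Contour's actual generated config):

```yaml
# One Envoy cluster whose endpoints are grouped into weighted
# "localities", one per Kubernetes service, instead of one cluster
# per service.
name: multi-service-upstream
connect_timeout: 1s
type: STRICT_DNS
common_lb_config:
  locality_weighted_lb_config: {}
load_assignment:
  cluster_name: multi-service-upstream
  endpoints:
    - locality: {zone: service-a}
      load_balancing_weight: 80
      lb_endpoints:
        - endpoint:
            address:
              socket_address: {address: service-a.default.svc.cluster.local, port_value: 9000}
    - locality: {zone: service-b}
      load_balancing_weight: 20
      lb_endpoints:
        - endpoint:
            address:
              socket_address: {address: service-b.default.svc.cluster.local, port_value: 9000}
```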
A
I
believe,
there's
a
pr
that
we've
been
going
through
with
james
he's,
enabling
that
that
was
one
of
those
pr's
that
kind
of
changes
a
lot
of
how
the
back
end
pieces
of
contour
works,
which
is
why
we're
trying
to
push
1.8
out
sooner
to
then
implement
these
new
bits,
and
then
you
know
allow
for
more
more
validation
and
testing
of
that.
A
So
I
see
all
that
to
say.
Yes,
it's
still
on
the
roadmap.
I
was
trying
to
pull
this
up
here,
so
we
have
that
work
done.
We
just
need
to
get
get
through
those
little
bits
there
and
then
we'll
be
able
to.
You
know
finish
out
off.
I
think
that's
the
one
blocker
to
off
some
of
the
work
that
I
was
working
on
was
refactoring.
D
Yeah, if I may, this is interesting, because when we implemented access log service on our side, through an xDS proxy, we stumbled upon a problem where these types of gRPC clusters in Envoy cannot be discovered dynamically, and so we had to basically bake in that cluster definition and use DNS discovery as opposed to EDS; that's how we got access log service support in. So I was wondering: does that touch on your roadmap item for access log service? Will that be using some of the same building pieces you made for auth?
A
So,
in
the
case
of
you
have
two
services,
you're
routing
traffic,
to
and
say
that
whole
second
service
goes
offline
completely,
no
matter
what
we
do
onboard
will
still
try
to
route
to
it
unless
you
set
away
to
zero
so
doing
this
locality
stuff
will
let
us
actually
get
around
that
issue
as
well.
So
this
solves
this
plus
a
bunch
of
other
things,
so
yeah.
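The weight shifting mentioned here is expressed on an HTTPProxy route's services; a minimal sketch (service and host names are hypothetical), where setting a weight to zero stops sending traffic to that service:

```yaml
# Weighted services on an HTTPProxy route.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: weighted-example
spec:
  virtualhost:
    fqdn: weighted.example.com
  routes:
    - services:
        - name: app-v1
          port: 80
          weight: 100
        - name: app-v2
          port: 80
          weight: 0   # drained: no traffic routed here
```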
D
I had another question. We were talking about nginx feature comparisons earlier: we had another case where a lot of our users are migrating from the nginx ingress controller, and they're hosting single-page applications, and they're used to some of the path rewriting and redirection support in the nginx ingress controller. I know historically, static responses and redirections from Envoy and Contour have been sort of lacking; I was wondering.
A
Okay. We can do the path rewrite bits with HTTPProxy; we don't have that support in Ingress, and I thought maybe that's where you were going, that you wanted Ingress support for that. There was an issue, I know, on the redirects: folks wanted to redirect certain paths and certain URLs and stuff.
A
I
could
go
find
that,
but
yeah
the
rewrite
should
work
with
the
redirects.
I
think
we
don't
have
support
for
it.
You
can't
say
like
hey.
If
I
get
a
request
over,
you
know,
stevesloco.com
redirect
this
to
you
know
stevecrest.com
or
something.
A
I
think
it's
just
we
just
haven't
done
it
in
contour.
It's
just
not
a.
I
think,
we've
implemented
yet
again
yeah.
So
I
don't
think
that
I'm
available
to
do
it
for
sure
yeah
you
can.
You
can
have
static,
redirects
and
stuff.
That's
how
we
do
the
redirect
from
like
insecure
to
secure
on
from
http
to
https.
You
know
we
can
have
envoy
issue
a
301
redirect
back
so
it's
possible.
It's
just.
We
just
haven't
done
it.
I
guess
so.
I
know
there
was
an
issue
for
that.
Someone
wanted
to
like.
A
There
was
one
issue
of,
like
certain
paths
should
404.
Maybe
michael
you
had
this
one
as
well,
because
we
use
path
prefixes,
and
so,
if
you
say
hey,
I
have
a
route,
that's
slash,
who
that's
a
prefix
right,
so
that'll
match
slash,
foo,
slash
bar
and
some
folks
wanted
to
like.
Have
that
slash
bar
path
like
404
and
not
actually
go
somewhere,
because
you
want
to
restrict
what's
what
you
know,
urls
will
actually
respond
in
the
applications.
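The prefix behavior being described, as a minimal HTTPProxy sketch (names hypothetical): a `/foo` prefix condition also matches `/foo/bar`, which is exactly what the issue asked to be able to restrict:

```yaml
# A prefix route: /foo matches /foo, /foo/bar, /foo/anything.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: prefix-example
spec:
  virtualhost:
    fqdn: prefix.example.com
  routes:
    - conditions:
        - prefix: /foo
      services:
        - name: foo-app
          port: 80
```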
A
So there was an issue, I believe, on that. Yeah, yeah.
E
We had some behaving how we thought they should and some not, and we think that's a problem on our side, so I need to go back and validate that. But the general idea is that we'd like a non-existent wildcard domain, if it's not there, to 404, versus throwing a TLS error, which is what we think it currently does.
A
Yeah, and there is one in there; I was thinking of someone who wanted to do it for paths and stuff. So I'll go try and find those, and I can ping them to you in the Contour Slack. If those match, then go ahead and add whatever details you need; otherwise, maybe open up a new issue and we can definitely chat about how to make these things work in Contour, if it isn't supported already. (All right, thanks.) Yeah, absolutely.
A
Cool
we're
just
about
out
of
time.
We
want
to
have
anything
else,
I'm
happy
to
go
longer,
but
if,
if
we
can
end
it
here
too
as
well,
this
has
been
a
great
discussion
everyone.
So
I
appreciate
everyone
commenting
chatting
sweet
if
not
well,
hey
thanks
thanks
everyone
for
coming
appreciate
it.
I
think
the
next
one
will
be
the
australia
time
zone,
so
6
30
p.m.
Eastern
time
a
week
from
now,
and
then
I
think
the
next
community
meeting
office
hours,
you
could
look
here,
real
quick.
A
I
forget
when
that
is
the
first
and
there
I've
looked
at
my
calendar,
so
I
figured
we
just
happened
last
week,
so
I
think
it's
the
week
after
that
we'll
have
the
next
office
hours.
So
cool
awesome!
Well,
thanks
everyone
for
coming,
appreciate
it
and
we'll
see
you
all
soon
quick
question:
do
they
alternate
in
australia
and
eastern
time
so
like
this?
Was
eastern
next
week's
australian.
A
And then our office hours are on Thursdays, and those right now are only U.S. time; I think they're at 1 to 3 p.m. Eastern time.
H
Hey, I had a quick question as well; I realized my mic was turned off for some reason. So we are trying to, and I think Kevin can maybe help with this a little bit: what we want to do is basically to replace headers. So how would that work with Contour currently? Because we're looking at a few different PRs, I believe, or a few different issues.
A
Yes,
so
you
want
to
replace
headers
on
to
the
the
pod
receives.
Yes,
I
believe
so
yeah,
so
you
can
set
and
again.
This
is
an
http
proxy
only
thing
right
now,
but
you
can
set
a
request.
Header
policy
request,
headers
policy
and
you
can
set
or
remove
headers
depending
on
what
you
want
to
do,
and
then
you
can
also
there's
a
response
headers
which
would
go
back
to
the
client.
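The request and response header policies mentioned here look roughly like this on an HTTPProxy route (the header and service names are illustrative, not from the discussion):

```yaml
# Per-route request and response header policies on an HTTPProxy.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: headers-example
spec:
  virtualhost:
    fqdn: headers.example.com
  routes:
    - services:
        - name: app
          port: 80
      requestHeadersPolicy:
        set:
          - name: X-App-Environment
            value: production
        remove:
          - X-Internal-Debug
      responseHeadersPolicy:
        set:
          - name: Strict-Transport-Security
            value: max-age=31536000
```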
H
I
see
so
there's
no
way
to
do
it
like
with
like,
with
like
a
regulator
or
something,
because
what
we
really
wanted,
you
know
is
to
upgrade
our
connections
from
http
to
https,
and
so
that's
that's.
What
we're
kind
of
looking
at
so
is
that
something
that
is
possible
with
current
contour
or.
A
Right
now
the
weights
written
now
right,
we
don't
have
any
regex
support
for
those
regex
is
done
dynamically
behind
the
scenes,
because
you
can
set
on
a
condition
set
for
headers.
You
can
do
you
can
match
headers
and
that
ends
up
making
a
regex
behind
the
scenes.
But
yeah
we
haven't
done
any
cataracts
things.
A
I
know
in
the
past
we've
been
sort
of
hesitant
to
but
doesn't
mean
that
we
can't
can't
do
that,
but
I
know
we
had
issues
before
with
you
know
the
specific
type
of
regex
and
validating
that
that's
the
right
type.
that Envoy would take, because there are different formats of how regexes get put together, and it's a bit of a pain, yeah. And I think we had a bunch of issues before using just plain Ingress and stuff, so we just hadn't done it yet. So yeah, right now there's not a way to do it through regexes or any kind of dynamic variables, and that's come up: some folks, like in this example here, want to pop in, there are Envoy variables that you can inject into these spots, and then Envoy will...
H
Gotcha. So if we wanted to start working on, let's say, a PR for that or something like that, where would be a good place to start, you know, in the code base?
A
Well,
the
first
thing'd
be
great
is
to
open
an
issue,
so
we
can
then
chat
about
it
from
there
in
the
code
there's
a
couple
places
we
we
could.
We
could
walk
through
that.
A
I
have
to
go,
find
them
for
you,
but
there's
a
spot
in
the
envoy
package
where
you
actually
set
the
the
the
header
values
that
you
get
passed
down
to
envoy
when
you
configure
those
but
you'd
have
to
back
up
and
have
the
right
api
in
http
proxy
first
once
you
have
that
and
then
the
builder
has
to
know
that
and
then
you
represent
that
work
in
in
the
dag.
A
So
the
steps
would
be,
I
guess,
open
an
issue
and
then
we'd
have
to
add
the
types
to
the
to
the
crd
once
they're
in
the
crd,
then
you
can
have
the
builder
process
those
in
and
then
have
that.
Then
you
know
work
its
way
through
all
the
stuff
down
to
the
envoy
package,
which
actually
writes
the
envoy
config
out
to
envoy,
okay,
yeah
I'll
go
ahead
and
open
that
then
yeah.
You
should
see
that
too.
A
If
you
look
for
like
that,
replace
or
the
request,
headers
policy
in
the
search
to
search
for
headers
policy
and
you'll,
see
the
response
and
request
and
that'll
it'll
all
the
wiring
wiring's
there.
I
guess
for
it.
We
just
have
to
add
the
new
type,
then
for
the
for
the
regex
type.
A
All right, cool. Well, I guess we'll end it here. So thanks, all, for coming, and we will catch you all later. Thanks, bye bye, everyone.