From YouTube: Contour Community Meeting - June 18, 2019
Description
Our first Contour Community Meeting of 2019!
In this meeting we go over what's new in the next release, spend a great deal of time on the road towards 1.0 and what's needed there, and answer questions from the community.
For more information about these community meetings, head on over to https://projectcontour.io/community
A
Hi everyone, and welcome to the Contour community meeting. We're kicking this off again here in June 2019. Super excited to have you all here. For this community meeting we'll go through the roadmap of Contour, and we'll talk about the work that we have done and the work that we might need some help with. So I'm going to share the agenda here with everyone.
B
Good morning, good afternoon, wherever you are in the world. Thank you very much, Yanis. So the agenda we have is: first of all, we have a lovely, spiffy new website. Perhaps none of you knew about our old GitHub Pages powered website, which was this thing that took a little markdown in the repo, kind of tried to make a Jekyll site out of it, and kind of excreted it onto GitHub Pages. We never actually published it.
B
So I think it's fair to say that only a very select few knew about it, and even fewer got much from it. But the great news is that that's all in the past. We have a wonderful new home for the Contour project, projectcontour.io, and over the next release cycle that's going to become the home of the documentation. So things like the docs directory and the design directory will all be moving to projectcontour.io.
B
Now, for this next section: don't use anything less than 0.12 — and by that I mean anything that isn't 0.12, so not 0.11, and certainly nothing before that. The bug that was fixed there is to do with everyone's old friend: scaling a service to zero. What would happen is, if you had a service scaled to zero in your cluster and you restarted your Contour pods, Envoy would wedge on startup, because it's told "there is a cluster here" and it says: okay!
B
If there's a cluster, I need to go and get the members. But because the service has been scaled to zero, there are no members, and it just kind of stops. The underlying technical detail is that there's an ambiguity in the gRPC protocol that we use between Contour and Envoy that makes it difficult to know how to say "nothing's there". It's kind of the difference between an empty glass and no glass at all, and sometimes, when Envoy asks a very specific question...
B
...you have to give it an answer, even if you don't have any answer to give it. Other times, if it asks a question, it's totally OK to just ignore it. Finding these edge cases is always fun, and — as the old adage from a couple of years ago goes — our old friend, scaling services to zero, has always seemed to bring them up. So the simple case is: please be using 0.12.
B
The one we'll get to in, like, a week or a couple of days from now will be 0.13, but for the moment you really want to be using 0.12. It's a drop-in upgrade, and the release notes in 0.12 tell you how to upgrade from 0.11. I'm going to keep barrelling on if there aren't any more questions. The next...
B
We wanted a very simple way to expose this. There are a lot of knobs for how session affinity works and how you configure it — you can configure various attributes of the cookie and things like that — but after a bit of looking into it, we figured out that almost all of them only have one sane value. So, for example, Contour has to supply the cookie: it's not possible to reuse a cookie from your application, because we can't control how that cookie is created. In the question-and-answer section I can go into why, if anyone's interested.
B
...why we can't do that. But the net result is that Contour has to create the cookie, and therefore, if Contour creates the cookie, we don't need to involve you in any of the details — so we don't expose those as configuration options. It's in the IngressRoute document now, and it'll be in the release notes as well, but the way that you choose sticky sessions is through the load balancing strategy: instead of it being something like WeightedLeastRequest...
B
...you just say Cookie. That's all there is to it. That will give you your classic cookie-based session affinity, with all the advantages if you're bringing an application over from Java — like you have a Tomcat thing that keeps session data in memory, and you can't let that session wander between application servers — and with all the limitations. Sticky sessions are a very not-cloud-native pattern. They place a lot of assumptions on your cluster: that its members won't change frequently, that they won't come and go, that their IP addresses won't change...
B
...that they have stable names. None of those things is really true in Kubernetes, so we've done what we think is a reasonable job on this feature. There are always going to be some limitations, because it's not a very cloud-native pattern, but that's one of the main headline features for 0.13 that we've been working on. I'm going to pause — I can't see the chat from where I am, but I'm going to pause while I quickly review the other notes.
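The Cookie strategy described above can be sketched as an IngressRoute like this. This is a hedged illustration based on the meeting's description; the hostname, service name, and port are made-up examples, not taken from the meeting:

```yaml
# Illustrative IngressRoute using the Cookie load-balancing strategy.
# The fqdn, service name, and port are invented for the example.
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: webapp
  namespace: default
spec:
  virtualhost:
    fqdn: webapp.example.com
  routes:
    - match: /
      services:
        - name: webapp
          port: 80
          strategy: Cookie   # Contour supplies the cookie; no further knobs exposed
```

As discussed, Contour creates the cookie itself, so there are no further fields to configure.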
B
Sticky sessions is the big thing. Steve Sloka has done a lot more work around metrics and health checking. This is around making sure that the Contour pod is healthy before Kubernetes — before your ReplicaSet — says it's healthy. This is done by adding health checks there, like the pre-run and the run health checks. He's also done more work on tooling, with the Grafana metrics.
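A health check like the one described might be wired up on the Contour container roughly like this. This is a sketch only; the port, path, image tag, and timings are assumptions for illustration, not details given in the meeting:

```yaml
# Illustrative readiness/liveness probes on the Contour container.
# Port, path, image tag, and timings are assumptions, not quoted from the meeting.
containers:
  - name: contour
    image: gcr.io/heptio-images/contour:v0.13.0   # illustrative tag
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8000
      initialDelaySeconds: 3
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8000
```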
B
I'm just skipping over a bunch of internal cleanup that we've done. Something else related to sticky sessions: we have removed Maglev and RingHash as supported load balancer strategies. This will be in the release notes as well. The reason we have removed them is that we made a mistake. Envoy offers us a list of options for that field, and we just said: sounds great, we'll turn them all on. Maglev and RingHash are used behind the scenes by Envoy to implement sticky sessions, but without additional configuration they can't actually be used.
B
So if you chose Maglev or RingHash without any additional configuration — which we don't really expose through the IngressRoute API — you just got random behavior: without a hash key, Envoy just chooses randomly, I think. So, because they were not able to be used safely, we've just removed them. What you really want — the reason you would use one of these policies — is sticky sessions, and we've replaced that with the Cookie strategy, which actually does what it says on the tin.
B
So, going forward, those names will not be accepted by CRD validation. We thought about leaving them in place as kind of no-ops, but really they don't work — it's a mistake to let existing deployments think that they do. So we removed them and removed them from the CRD validation, so you can't say those names anymore, which I think is great.
B
Another thing that we've done, again around communication: since the earliest days we had a directory called deployment — deployments — which had sample YAMLs for different scenarios: DaemonSet, using it as a Deployment, things like that. Now, one of the words I used there was sample. We've always thought of them as samples. They are examples to show you how to do a certain thing, and there are examples to show you the...
B
...backends you should use, things like that. But the word deployment, and the folder name deployment, kind of has a different meaning in Kubernetes. It really does mean: here's the code, here's the deployment code — and that was never our intention. These were only examples, and they're there because we just needed a place to point the getting-started guide at. So what we've done — and we have not changed the content of them — is just renamed the directory from deployment to examples, to make it very clear that these are examples.
B
Samples, starting points. That probably brings us into a much bigger question: well, where are your actual official deployments? Where are your Helm charts, where are your manifests, things like that? And that's a very good question. We don't have an answer for that at the moment. It's something that we'll definitely be working towards for 1.0 — we need to have the actual, official "here's how you install it". But that's it.
B
Apart from just doing the work, that's kind of a complicated question, because if it drives down to a tool like Helm, that means we now have to become Helm experts. So we just don't know the answers to that yet. But I want to make it very clear that we renamed the deployment directory to be clearer that these are just examples — starting points for you to work from.
A
I think, as you're working towards 1.0, as you mentioned, you need to update the docs and make sure that everything is easily understandable. I think those are great first issues and things that the community can help out with as well, because we do have a lot of people actually running this. So getting some help from the community in making sure that the docs are updated properly — I think that's a great way to incorporate community feedback as well.
B
Yeah, that's a really good point, Yanis. You've actually hit three things there. One is our current documentation, and — with the move to the merged new website over the next couple of weeks — this will give it a whole new lease on life. To be very, very clear:
B
Take the session affinity page, for example: we will reconstitute the documentation to be more focused around goals and outcomes, rather than just "Dave says you can't submit this without some documentation". That's one of the things I'm really looking forward to. Now, Yanis touched on good first issues, and this is a great segue. We have two...
B
...we have two labels in the repo: good first issue and Help Wanted, and there's some subtlety there. Good first issue is effectively Help Wanted, but if you're looking for the one to start on, that would be it. Help Wanted is perhaps saying: we need a subject matter expert on this, we need someone who's into this. It could be a larger kind of issue; it could be quite an involved change; it could involve doing, you know, sign-off and things like that.
B
So, arguably not great for your first cab off the rank, but yeah: good first issue. I think they're even linked from the bottom — there's a sample search at the bottom of the readme and on the website — and we're going to keep highlighting those, because those are always the questions. If you want to get involved in contributing development for, you know, a few cycles, or you're interested and you want to find out more about the project, these are the places to start.
B
These are the places to start. And, like most project leads, I spend far too long staring at the GitHub issues screen, so I do try to keep those labels updated, and certainly keep them accurate. If an issue evolves to the point where it's like, whoa, I actually need to go and think about this again, then I'll definitely take those labels off, because it's not fair to anyone...
B
...to be like: "hey, you said I should work on this" — "well, actually I changed my mind, but I didn't tell anybody." That's not fair on you. So I do put a lot of effort into keeping those labels up to date. And the third one — and I do recognize that I'm monopolizing all the time — is to talk about Contour 1.0. I think that comes into what will be our focus between now and late October, November. Right: the issue says Contour...
B
...getting Contour to a point where, as the development team, all of us feel confident we can put a 1.0 on it and stand behind it — you know, for all the ways that a 1.0 is superior to a 0.something release. So that is the big overarching story that's happening over the next four to five months.
B
In support of that, we have the 0.13 release, which is happening at the end of this week; we've got 0.14, which is about a month from now; and 0.15 another month from there — we're sticking to a four-to-six-week cadence. Those two will be the last of what we call the development cycles: that's where we're going to try and get all the big-ticket feature work done.
B
I'll come back to what some of those big-ticket items are. From there, the rest will be a process of betas and release candidates leading up to late October, early November. So that's our big overriding plan, and it has some implications. Because we're no longer just kind of rolling along, picking up features as we tumble forward, we are going to become more selective about the feature set that we're going to add to the product.
B
There are milestones called "unplanned", and we will probably, in the next couple of months, have a 1.1 milestone, which is things that we intend to add after we get to 1.0. I want to say all this to make it very clear that if we say "no, we don't want to do that" — or, what's more likely, "no, we don't want to do that yet" — it's not a rejection. It's more driven by: we have to be quite strict about our goals, otherwise we'll never get to 1.0.
B
Getting the IngressRoute specification to 1.0: we've had it in beta, and really, shipping the product with it in beta pretty much means that when we take it out of beta we may have to do some major changes. So getting IngressRoute out of its beta stage is very much the key thing to getting Contour to 1.0. We're going to do that in a series of steps. We've recognised that adding to the IngressRoute specification is relatively painless.
B
We do it in a backwards-compatible manner, but we do anticipate we'll probably have to change something — and I'll come to what the thing is we're going to change next — but we want to do that as little as possible. Every time we make a major change to the kind, or the name, or some parts of the spec — every time we have to change the kind, or the version, or the name — we lose people. We orphan people on older versions of the CRD.
B
So we intend to make one change to the IngressRoute spec in the next couple of months, which will actually change the name and the kind — and I'll explain why in a second — and then we hope that'll get us to the point where the final polishing in the release candidates, if we have to do it at all, will be very, very minor. We don't intend for this to be a rolling thing where every month there's a new version of this route CRD — that would be terrible for the entire community.
B
We can't keep pushing this old brand, so there is a requirement to rename our CRD — to get the old Heptio brand out of there — and we're going to take that opportunity to do the breaking changes we need to IngressRoute. We will continue to support the old contour.heptio.com/v1beta1 IngressRoute.
B
We will launch Contour 1.0 with support for the old Heptio-branded beta IngressRoute CRD. But obviously — I've talked a lot about how changing kinds and names can break things and lose people — we're going to do this change as humanely as possible. So that is the major change that will be coming up; it'll happen in the next couple of months, during the 0.14 and 0.15 release cycles. The second change that is driving this, which we're going to piggyback on:
B
The ticket is about routing in the next generation, which is a vague catch-all topic for: we want to have more flexible routing options. I won't go into what those are yet because, quite honestly, we're still debating — it's an open field of possibility, and that's a difficult place to do a design from. But we do recognize that our current prefix-based routing is limited. While we're probably never going to go down to the level of complete free-for-all...
B
...regex routing, we'll probably have things like wildcard matching or globbing, or something like that. But the major complicating factor in this is delegation. Every time we think about routing changes, we have to think about how they affect delegation.
B
Delegation at the moment is very much path-based. It is: include, at this path, some other config from another IngressRoute CRD. That is one of the major design constraints that we're still talking about internally. When we have something to talk about, we'll be publishing design documents — I'm pushing the team really hard on this for the 0.14 cycle. This is a big, big thing we're working on.
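Path-based delegation, as it exists today, can be sketched like this. The names, namespaces, and paths are illustrative examples, not taken from the meeting:

```yaml
# Root IngressRoute delegating the /blog prefix to an IngressRoute
# in another namespace. All names here are illustrative.
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: root
  namespace: default
spec:
  virtualhost:
    fqdn: example.com
  routes:
    - match: /
      services:
        - name: home
          port: 80
    - match: /blog
      delegate:
        name: blog             # routes under /blog are defined by the
        namespace: marketing   # marketing/blog IngressRoute
```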
B
OK, so that was a lot about IngressRoute 1.0 and getting Contour to 1.0.
B
Another big-ticket item — one that we've been chipping away at in the background for a couple of releases now, and will continue to — is the ability to support Envoy and Contour hosted in different pods. At the moment, what we call the default deployment — the one that the quickstart guides you to, and the one in the majority of the samples in the examples directory — puts Envoy and Contour together as two containers in one pod. This is very convenient.
B
It's very simple: users just want one pod to track in kubectl get po. But it has some limitations. It ties the lifecycle of Envoy to the lifecycle of Contour. It means that if Envoy segfaults or Contour segfaults, the other container in that pod is taken down. This makes it hard to change each of the components without incurring an action on the other. And really, Envoy should be deployed at the frequency...
B
...at which the Envoy team do releases, which seems to be quarterly. You want to leave those pods running for a long time: they are your data plane. The Contour pods have a different lifecycle. You may want to restart them often, especially when we start moving to config in a config file rather than just the sea of CLI flags. So separating those deployments separates their lifecycles, which is the key thing, and it's going to unlock a huge number of things for us.
B
It's going to give us an answer for a number of things — just having a decent answer for doing status on IngressRoutes is a big problem, which is itself driven by the lack of leader election. We'd really like to have leader election. We'd really like to have, rather than one Contour pod per Envoy, just a Deployment of three to five Contour pods — whatever you think is a reasonable number.
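The Contour Deployment wished for here could be sketched like this. This is an illustration of the idea only; the namespace, image tag, flags, and xDS port are assumptions, not details given in the meeting:

```yaml
# Sketch: Contour control plane as its own Deployment (no Envoy sidecar).
# Namespace, image tag, flags, and port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour
  namespace: projectcontour
spec:
  replicas: 3               # "three to five contour pods"
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      containers:
        - name: contour
          image: gcr.io/heptio-images/contour:v0.13.0
          command: ["contour", "serve", "--incluster"]
          ports:
            - name: xds
              containerPort: 8001   # gRPC config served to the Envoy fleet
```

The Envoys themselves would then live in their own workload — for example a host-networked DaemonSet, as described next — pointed at this Deployment for configuration.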
B
Don't forget, they're not involved in the traffic routing — they're just there to serve config. If they're down for a few seconds, Envoy just reconnects and gets its config when it needs to. But you may want to put your Envoys, you know, in a DaemonSet, one per host, and stick them on host networking. So their lifecycles are completely different; their deployment scenarios are completely different. The big thing is to enable that, and a lot of the work behind the scenes is because, when these two things were...
B
...co-located together, it was very simple: we told them to talk over localhost. When they're separated, we need to secure that communication, and we need to make it easy for them to find each other. The way that Envoy communicates over gRPC is via mTLS with client-side certificates. So we're coming up with a pattern for where to find those certificates, how to distribute them, how to secure them. We'll probably, at some point, have a tool which will generate them, which will give us back that quickstart behavior.
B
So if this is a new installation, the first thing that happens will be — just like we have contour bootstrap — we'll have contour bootstrap certificates or something like that (I'm just making up the name) that will create all the certificate material that you need. If you're in an environment where you have Vault, or you have your own company CA, that's fine: you won't need to use this tool. It's really just a convenience.
B
So that's the mTLS work being driven by Nick Young, and that's one of the other big-ticket items for getting us to 1.0. There are, just before I pause for some questions, some other things: my long-held desire to have a better integration with cert-manager. I got to spend some time with the Jetstack folks a couple of months ago, and we came up with a plan for how Contour can learn to write out the certificate request records...
B
...that cert-manager needs. We're going to do that either in-process or as a shim, just like cert-manager's ingress-shim does, and hopefully get better integration with cert-manager there. Those are really the three big-ticket items. There are also a bunch of health-and-hygiene ones: I need to redo the performance and load tests and get some updated numbers on that — we haven't done that since 0.6 last year.
B
We'll also keep chipping away at issue 499, which needs a bunch of changes upstream that we're tracking closely, but we hope that will be resolved well before 1.0. And rather than trying to remember everything, I'll point out that we now have all the milestones between now and 1.0 in GitHub, and things are being moved around and kind of scheduled into them. Sometimes they're scheduled...
B
...into the milestone in which I expect them to happen; but if they're big items, we're going to be working on them all the way up. It's just that GitHub doesn't really have a good way of saying "we're going to work on this particular big overall epic issue over three milestones" — they don't let you select three — so I just did the best I could. So: pause, take a drink. Are there any questions in the chat? Or, if you want to, you can unmute yourselves and ask.
A
All right, I have one question for you, Dave. You talked about disconnecting Envoy and Contour — well, not disconnecting, but not having them be deployed at the same time, and separating their lifecycles. Would that then enable an easier deployment method for Contour in existing environments where you have Envoy installed — would that be correct?
B
There is usually a strong connection between the listening socket — port 443 — and where Envoy gets the configuration for all the things that hang downstream of that: the certificates, the list of routes, the list of virtual hosts. It is not impossible, but certainly quite complex, to serve those from different...
B
...sources. Envoy calls the thing that gives it configuration the management server — very exciting name. It's certainly not impossible to do that, but it would certainly be quite complicated to mix together configurations if you had multiple management servers. Off the top of my head, there are a bunch of products — like Ambassador, or Istio itself — which also fulfil that management interface, but I don't think anyone has really done a lot of work on trying to mix them together.
B
I would lean very heavily on "that doesn't sound supported", and kind of push you to ask: well, what is it you actually want to do? Even though Contour and Envoy are separate — they have separate lifecycles, they're separate products delivered by different groups — there is a strong relationship: a set of Envoys belongs to a management server. So they belong to a Contour, or they belong to Ambassador's version of whatever management server they speak to. So, yeah:
B
If you have Envoy installed, great — it's a question of pointing it at Contour. Again, this is one of the things we thought about. In our simple case, we have this thing called bootstrap, which generates the kind of skeleton config file that you need to feed to Envoy to say: go and look over here, this is where you'll find the rest of your config. Again, that's just a convenience.
B
We did that for ourselves, so that we didn't have to constantly edit a ConfigMap, but it is by no means a requirement. If you are, you know, in a shop that's using Ansible or something like that to ship the config, you can put it in a ConfigMap — you don't need to use bootstrap. But you do need to provide this little skeleton that says: here's the name of your management server, here's the address where you'll find it — and, coming up in the future, the certificate material as well.
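The bootstrap convenience mentioned here is typically run as an init container that writes the skeleton file into a shared volume before Envoy starts. A sketch; the paths, image tags, and file name are illustrative assumptions:

```yaml
# Sketch: contour bootstrap writes Envoy's skeleton config (the pointer to
# the management server) into a shared volume before Envoy starts.
# Paths, image tags, and the file name are illustrative.
initContainers:
  - name: envoy-initconfig
    image: gcr.io/heptio-images/contour:v0.12.1
    command: ["contour"]
    args: ["bootstrap", "/config/envoy.json"]
    volumeMounts:
      - name: envoy-config
        mountPath: /config
containers:
  - name: envoy
    image: docker.io/envoyproxy/envoy:v1.10.0
    command: ["envoy"]
    args: ["-c", "/config/envoy.json"]
    volumeMounts:
      - name: envoy-config
        mountPath: /config
volumes:
  - name: envoy-config
    emptyDir: {}
```

If you manage the config another way, as described above, you simply ship your own equivalent of that skeleton file instead.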
A
So, first of all, a Contour question here from someone just trying to learn a bit about Contour: how different is it from the regular ones like nginx and HAProxy, apart from apparently being tailored specifically to guide traffic to pods and services? Tom answered a little bit, but I think it would be worthwhile if you could dive a little into the differences there.
B
Okay, let me take a few minutes to answer that question, because, as I work on this product all day every day, the terminology I use can sometimes be quite fluid — you know, within the team we all know what we're talking about, we don't pull each other up — but there are some things I need to be quite precise about. Most of the ingress controllers in Kubernetes — the nginx ingress controller, the GLB ingress controller, Contour — define themselves by the tools that they use for their data plane.
B
So the nginx ingress controller obviously uses nginx: it runs an nginx process and gives it configuration. The GLB one in GKE — its job, as an ingress controller, is to mediate between you and Google's global load balancer. And for Contour and Envoy, we serve the config down, but Envoy does the actual handling of the data-plane traffic. So, in the question you asked, where you said nginx — that's the role that Envoy fills. The next question might be: well...
B
...why do we have so many different ingress controllers? Why does there need to be so many of these things? Some of it is driven by commercial reasons — the GLB is a commercial service provided by Google, so they need to provide you an operator to connect to it. But to talk about some of the differences between...
B
...nginx or HAProxy and Contour: for me, it comes down to the configuration interface. With most of the tools that I've worked with, you write out a file and you tell the process to update itself, and that's how the nginx ingress controller works. I don't know — it might be different if you have Plus; I don't know how many people have Plus — but the standard nginx ingress controller reads the Kubernetes API and writes out a config file...
B
...then tells nginx to HUP itself. We're a little bit different over in Envoy land, because Envoy provides a gRPC stream, so we stream configuration straight to it. One of the ways we think about Contour is that it's just a one-way translator: we watch the Kubernetes API using the standard watch APIs, do a bunch of data transformation to turn Kubernetes REST, JSON-y stuff into gRPC, and we just forward that straight down to Envoy. And this gives us — from my background as an operator...
B
...one of the things I'm really quite proud of is that there is effectively no latency. There is a little hold-off timer that just coalesces changes, but we keep that timer below the rate that you can observe as a human sitting outside your Kubernetes cluster with kubectl. We do that because...
B
...certainly in other ingress controllers that I've used, debugging ingress is a murder mystery. The thing isn't working — okay, go look at the Deployments. Are the pods okay? The pods are good. Is there a Service? Does the Service match? Just like playing Clue, you go to each of the rooms and you ask the question: is the problem here? Okay, no, that looks good...
B
...let's go to the next room: is the problem here? And one of the things that can complicate that process is a delay. If there is a delay applying the configuration, you now don't know if the problem is with the configuration that you gave — like it's incorrect — or that you just didn't wait long enough. One of the things that we did in Contour, because we can stream the changes directly from Kubernetes, is we just removed that delay.
B
So if it's not working, it's the config — it's not that you haven't waited long enough. And that was one of the major reasons for choosing Envoy as the product to do this: because it provides this different style of configuration API. I'm going to pause in case there's a follow-up question to that. I've kind of talked around the issue — it's kind of my stump speech for why Envoy — but there are some other reasons.
B
I mean, with the benefit of hindsight, it's clear that the industry is centralizing, to a certain degree, on Envoy. We use it as a web proxy; people use it as a service mesh; probably other people use it as a MongoDB proxy. As I've heard its creators talk about it, it's designed to be a universal data pipe.
B
It's designed to be a universal data pipe, so it's not prescriptive in the way you use it, and certainly, if you interact with its configuration APIs, they give you very little opinion on what is the right and the wrong way to do things. So we take this kind of raw-guts Envoy and we send it a configuration that makes it look like a layer-7 load balancer, and that fits really well with the ingress controller story.
C
Thanks for explaining this in detail. You know, this has been going on for a while: the other ingress controllers, like nginx's, are being used by almost everyone, and people are obviously using Google's one. So who are the main customers for Contour that you're working with and who provide feedback?
B
Given this is a public call, I probably can't talk about our commercial customers, but that's actually a really, really good question. What I can tell you about who's using Contour is based on GitHub statistics: the number of stars we have — a number of people who came across it and said "this seems good enough that I'll click the like button".
B
It's a pretty weak metric. We're about three hundred and something people in the #contour channel — not the busiest channel in the world — but there are some indicators that there are people interested. We don't have a wall of logos, we don't have a wall of testimonials, we don't have, as I said, a set of reference clients.
B
That would be very, very useful to do. There are some ways we could do this: for example, I'm involved in the Go project, and we do a yearly survey, which is kind of like the Stack Overflow survey; we just take the temperature of the developers who are interested in answering it. So we should.
B
We should do more to get those non-public testimonials, and also just to get a better feel for how big our community is, because that is certainly the reason we're having these calls: to get better in touch and reduce the distance between the users, or potential users, and us, the development team. So I'm afraid I don't have a much better answer than that; I can't go into the commercial customers we have, sorry.
C
No worries. It was just that, probably last year when you were still Heptio, I attended a talk that introduced a bunch of Heptio projects, but only Ark and Sonobuoy were the ones that stuck in my mind; Contour, this is the first time I've been introduced to it. So that's why I was curious to know more, yeah.
B
If it was last year: so, some of the commercial support offerings we offer include break/fix support on the open source products, so Sonobuoy, Velero, etc. It wasn't until the start of this year that Contour was actually included in that support offering. It wasn't mentioned last year, but we were part of the break/fix support for people who have commercial support contracts through us, so that may be why it wasn't mentioned last year.
B
We need to be better about getting our name out there as well, and better about our publicity. I'm not sure if you've ever been through a company merger; it's a very strange time of transition, and we certainly dropped the ball there. We used to have a pretty good cadence of talking about the stuff we're doing on the Heptio blog, and we kind of lost that channel. So this is why I'm looking forward to the website.
B
With that, we have an RSS feed; we can blog about the stuff we're doing ourselves, and we can give you a way to subscribe to new releases, so you don't have to try to figure it out from watching Twitter or something like that. If you're an RSS geek like me, you can just subscribe to the release announcements and things like that. We want to use the website as a focal point: if you're interested in following this project, it's the one place you need to go.
C
E
Sure, so, yeah: Contour is part of the Essential PKS package, and it is supported as part of Essential PKS. There's actually no reason you couldn't use Contour with our other offerings, Enterprise PKS as well as Cloud PKS. So yes, it's certainly something that you can take advantage of with VMware. Speaking of offerings.
A
E
Exactly. I've taken a note for myself to create the "How are you using Contour?" issue, so that people can weigh in on our GitHub and just let us know who you are, how you're using Contour, and give us some ideas about your use cases and the ways you've been successful using Contour. So look for that; we'll definitely talk about it at the next community meeting, and probably do a post on the website as well to let people know it's available.
A
F
Yeah, I had a question about the way it's integrated. You just said it was integrated into PKS, and I know my first experience using ingress was in GKE, where if you just define that you have an ingress, you don't have to install anything, like the helm install of the nginx ingress at the time, and Tiller and Helm and all that, to do the nginx ingress thing. And in Azure it was...
F
It was different: they didn't have the automatic backend that picked up the ingress. I don't know if I'm just speaking nonsense, but in GKE you kind of got the ingress controller for free, where you didn't have to install anything, and then when I went to Azure, you had to go install and set up the proxy yourself. Is that what you mean when you say that Contour is integrated into PKS, it being a VMware product as well?
E
Yeah, so with Essential PKS it's part of the package of bits that you get, including Kubernetes as well as the open-source projects. You're encouraged to use it, and it is definitely supported. You don't have to use it, but it is a configuration that you would be making on the cluster yourself; we obviously have documentation with guidance on how to do that. So when I say it's integrated, that's really as packaged software.
B
Just to give a little background on the way it works in GKE: I'm just looking at my own cluster at the moment, and there's this thing called the l7 default backend, which I think is some piece of the Google load balancer plumbed into your GKE cluster, and it comes with the extra stuff you get with GKE. They integrate at layer 7, so if you just slap down an ingress record, it works through the Google cloud load balancer, and similarly on Amazon.
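As a concrete illustration of "just slap down an ingress record": the object you create is the same plain Ingress resource regardless of which controller (GKE's built-in one, Contour, or nginx) picks it up. Below is a minimal sketch with placeholder names, using the current networking.k8s.io/v1 schema (at the time of this meeting the API group was still extensions/v1beta1), built as a plain dict and emitted as JSON, which kubectl also accepts.

```python
import json

# Minimal Ingress manifest as a plain dict. The name, host, service,
# and port values are illustrative placeholders, not from the meeting.
def minimal_ingress(name, host, service, port):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {
            "rules": [{
                "host": host,
                "http": {"paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": service,
                                            "port": {"number": port}}},
                }]},
            }],
        },
    }

# Emit JSON suitable for `kubectl apply -f -`.
print(json.dumps(minimal_ingress("demo", "demo.example.com", "web", 80), indent=2))
```

Whichever ingress controller is installed watches for objects of this kind and programs its proxy accordingly; the manifest itself stays the same.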
B
So, not to put words in Tom's mouth, what he's trying to say is that you get Contour (I mean, it's open source, you can get it anyway) as part of PKS, but one of the things with PKS is that you can get on the phone to a very capable team and ask questions about it, rather than trying to get support out of Google. So when we say it's integrated, it's part of the overall support offering. You can install it if you want.
B
You can use nginx if you want. If you talk to the team about PKS, they have a list of supported technologies across all the different verticals (cloud providers, versions of Kubernetes, ingress controllers, overlay networks, all the different pluggable components that can go into a Kubernetes cluster), and Contour is one of those, among the supported ingress controllers. I can't remember the other two off the top of my head. So hopefully that's useful.
B
No, no. I just want to give a shout out and a few thanks. First of all, Yanis: thank you very much for putting this together, especially on a short timeline. I understand this is your first week back after paternity leave, so I really appreciate you making this happen; it's magical, I love it. Thank you to my team and everybody who came along and contributed today. We want to get a regular cadence of these things going, just as a different way to reach us; you can always come find us on Slack.
B
Sometimes I'm traveling, but I'll always be here to tell you what's going on and to answer your questions, and if you want to put me on the spot about why we said no to your feature request, I'll give you an honest answer as well. So thank you to everybody who came along today, and I'll hand it back to Yanis, yeah.
A
Thank you so much, Dave, and thank you to all the community members joining us. Super, super happy to see you all on this inaugural call this year for the community meetings. We will be doing these on a regular cadence, as Dave said; we're aiming for the third Tuesday of every month. So please join us then, and until then, have an awesome rest of your week, everyone. Talk to you soon, have a good one, and see you next month, folks. Thanks!