From YouTube: Community Meeting 3.8.2018
So I will tell you how I think about it, in terms of a newbie: here's what I envisioned, what I would like to say. I'd helm install something and get it up and running in a Kubernetes cluster that I already have; that's the kind of starting point. I would like to get like a one-liner, helm install blah, and that's what I was looking for. I could not find it, yeah.
Fun, awesome. Well, for anyone else that has questions or anything that comes up, if you want to add them to the working doc, then we can address them. We've got a couple of folks that are going to do some demos today. First is Little Bobby Tables, which is my favorite Twitter handle name ever.
Great, so this is gonna be a love story. It was originally titled "Debugging Istio", but when I started writing it and started actually practicing my demos, this just became a love story about Istio internals. I think knowing those, and knowing how Istio plays with Envoy and vice versa, also serves the intent of debugging; it can help you with debugging too. So we're basically just gonna look at some real-world scenarios.
We are HR software as a service: we do payroll, benefits, and human capital management. I've been there for about two and a half years now, and currently I'm leading an implementation of Istio on top of Kubernetes. We've been running Kubernetes internally for a few months now and we've built tools against it. We've been using Istio internally since version 0.1, so we're pretty excited about the project. We've done a lot of code spelunking and we've learned a lot about how it actually works. So that's kind of it.
Okay, so one of the things that Istio is really good at is making you forget about Envoy, but that's also almost a detriment in some cases. The abstraction over Envoy, which is the layer 7 proxy that Istio uses to create your service mesh, is so well done that you just forget it's there. So sometimes, when things aren't working, it's kind of hard to know where you need to actually go and look: is it Istio? Is it my application?
So a good example is the Istio ingress. If you're familiar with the Kubernetes world, an ingress controller is something that allows traffic into your cluster, and what Istio does is actually start an Envoy process to accept external traffic into your cluster. Istio can configure it a little bit differently than the sidecars, but we'll go into that a little bit. So what Istio basically does in Kubernetes is it has a pod called istio-ingress, and it runs an Envoy process that is configured to retrieve ingress rules from the Kubernetes API.
Basically, another component of all this is the istio-pilot component, which is listening to the Kubernetes API and refreshing a cache of the actual ingress rules and its routes. So Envoy is actually retrieving these ingress rules from the Pilot API every second, though that can be configured to be a little bit more than that. It also adds a little bit of jitter, so you get some randomness. So, really quickly:
If we're looking at this diagram, starting at the top left, this is kind of the life of an ingress request and how traffic eventually gets to your pod. So let's say we've set up Istio on example.com and we've set up an ingress rule to go to a dashboard. What's gonna happen is that your Istio ingress, or rather Envoy, is going to receive this request first, and Envoy has its own cache of clusters and routes that it will use to determine where this request actually needs to go.
So you can see that after Envoy it goes down to a set of three pods that are load balanced by Envoy. What's going on kind of in the background, while these requests are coming in and flowing to these pods, is that Envoy is asking the Pilot API every second: are there any new routes? Are there any new pods I need to attach to my load balancer? And on the other side of this, the Pilot API is also asking the Kubernetes API.
Pilot long-polls, or rather it has a watch, so that anytime a new pod or service or ingress rule is added, Pilot refreshes its cache. So if you add an ingress rule to Kubernetes, Pilot retrieves that ingress rule from Kubernetes and refreshes its cache, and then every second Envoy, across your entire fleet, is asking the Pilot API: is there anything I need to know about? That's kind of how Envoy is configured, at a very, very high level, and I'll show this in a second.
This is actually the Pilot API's response for all of the services that it currently knows about, and you can see that it has all of the default Kubernetes services for Istio. If we scroll down here, there's probably one for DNS, and you can see kube-dns and the IP addresses associated with it. And what we're gonna do is add a simple Kubernetes manifest to our cluster, and we're gonna see what Istio actually does with it.
So if we look at this service definition, it's very, very simple: it's called example-web-public, it selects an app called example, and it has a port for HTTP on port 80. So if I apply this really quickly, we can do a kubectl apply -f service.yaml, and we'll see that it creates it. And then, if we head back to the Istio Pilot API, you can see that it actually just got added right here. So Pilot is actually fetching from the Kubernetes API anytime something changes.
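The service from the demo can be sketched roughly like this; the names follow the talk, but the manifest fields are reconstructed from the description, not the speaker's exact file:

```shell
# Reconstructed sketch of the demo's service; apply via a heredoc.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-web-public
spec:
  selector:
    app: example        # selects pods labeled app: example
  ports:
  - name: http          # Istio 0.x relied on port names like "http"
    port: 80
EOF
```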
So now that we have that, we can actually deploy our deployment file here. If you look at it really fast, it's a very simple Kubernetes deployment. It installs nginx with an app: example label, and that means that our service endpoint in Kubernetes is going to find this really, really quickly, and we can see here that it just spins up nginx and attaches at port 80.
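A deployment along those lines might look like this (again a reconstruction under the same assumptions, not the exact file from the demo):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example    # the label the example-web-public service selects on
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```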
So, if I clear my terminal here and we actually do a kubectl apply -f deployment.yaml, it will create our deployment. And then, if we head back to our Pilot API services endpoint, we can see that it's actually registering those pods as they came up. So if I refresh once: if you saw, I had one, and then the other pods came up and it just kept getting these IPs. So the other thing that Istio provides is the service discovery endpoint that Envoy actually expects.
So if we take this service key, we can actually go to /registration/ plus this service name, and this is all of the IPs for just this service, not the entire fleet in your Kubernetes cluster, which is nice if you just want to find those IPs. So now that we have created our endpoint, and we can see that Pilot's APIs have updated with those IPs, what I want to do is switch gears over to the ingress side of this. So we've added that endpoint.
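Pilot serves Envoy's v1 service discovery interface, so the registration lookups browsed in the demo can be sketched like this; the host, port, and the exact service key format are assumptions on my part:

```shell
# All service keys Pilot currently knows about:
curl http://<pilot-host>:8080/v1/registration
# Just the endpoints (pod IPs) behind one service:
curl http://<pilot-host>:8080/v1/registration/example-web-public.default.svc.cluster.local
```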
But if I refresh here, we're still getting a 404, and that's because we actually haven't told Envoy how to connect to these pods, which is what our ingress rule is actually going to be doing. So what we can do is actually tail the logs for the Envoy ingress. If we do a kubectl get pods in the istio-system namespace with the label istio=ingress, we can get our pods.
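The commands being run are roughly these (namespace and label as used by the Istio 0.x install; the actual pod name will differ per cluster):

```shell
# Find the ingress pod, then follow its logs.
kubectl -n istio-system get pods -l istio=ingress
kubectl -n istio-system logs -f <ingress-pod-name>
```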
These logs will pop up here in half a second, and there we go. So we can see that Envoy has received these requests, but it doesn't know what to do with them. And the easiest way that you can tell when Envoy doesn't know where to send something is to look for something called a response flag; the default log lines for Envoy will include one.
It comes after the status code, and we can see it right here, this NR. I don't know if you can see where I'm highlighting, but after "GET HTTP/1.1" 404, there's this NR, and that's Envoy's logging flag for saying there's no route. So automatically we know that Envoy received the request, but it doesn't know a route for it. So we need to give it one.
So now we can actually go ahead and add our actual ingress here. But the first thing that we can do, or another thing that we can do, pardon me, is actually find the Istio node ID, to get what routes Envoy thinks it has. We can do this by exec-ing into the pod.
Using that pod name I had right there, we can do a ps x. Now, what Istio does is it actually starts Envoy with a flag indicating the node ID. Inside of this very, very long command right here, you'll see this service node, and what we can do is highlight this name right here (it goes all the way down and wraps around), copy it, and then we can actually go look at the routes for this specific node.
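That step can be sketched as follows; the flag names match how Istio 0.x's pilot-agent launched Envoy as far as I know, and the node ID shown is invented for illustration:

```shell
kubectl -n istio-system exec <ingress-pod-name> -- ps x
# In Envoy's command line, look for the node identity, something like:
#   envoy -c /etc/istio/proxy/envoy.json --service-cluster istio-ingress \
#         --service-node <long-node-id>
```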
So if we go back to the Istio Pilot API, we can actually go to /v1/routes/80, then the service, or rather the cluster name, istio-ingress, and then paste in our node ID, and we can see that there are no routes. So we've proven what Envoy already told us: that it doesn't have any routes to actually apply anything to. So what we can do now is apply our ingress, and we can watch Istio actually update this route table.
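In other words, the route discovery (RDS) lookup looks roughly like this, with the URL shape inferred from the path the demo browses:

```shell
# Routes for port 80, as seen by service-cluster istio-ingress / this node:
curl http://<pilot-host>:8080/v1/routes/80/istio-ingress/<node-id>
```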
So I have a very simple ingress here. I can do a cat ingress.yaml, and we can see that it has a name of example-public and that it has some very simple paths: it's just going to route everything that it gets. This .* path is a special Istio way of indicating that everything after the slash should get routed to this service, and we're routing it to our service name, example-web-public.
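An ingress resource matching that description might look like this; the annotation and regex path syntax are taken from Istio 0.x conventions, so treat this as a sketch rather than the exact file:

```shell
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-public
  annotations:
    kubernetes.io/ingress.class: istio   # hand this ingress to Istio, not another controller
spec:
  rules:
  - http:
      paths:
      - path: /.*                        # Istio treats the path as a regex
        backend:
          serviceName: example-web-public
          servicePort: 80
EOF
```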
So now what we can do is actually apply this. We can do a kubectl apply, and our ingress has been added, and if we refresh our route endpoint for istio-ingress and our node, we should see that it has actually defined a route. So now we can see that, okay, it's a prefix endpoint for slash, so basically anything is going to go to this cluster. Now, a cluster is an Envoy term for a grouping of IPs, really.
So all of the services that we had, all of those different IPs, are attached to this cluster. Now, the way that we can actually look at this very easily is to go to the clusters endpoint. So if you go to /v1/clusters, we can go to our ingress here, and we can see that these are all of the clusters that our ingress actually knows about. So, to back up a little bit: there's a route that points to a cluster, and then a cluster actually points at a set of services.
So if we look here, we see this service name, and you'll notice that this is actually the same service name that we added earlier. And if we take this service name and we go to /registration/ and then that service name, these are the IPs that Envoy is actually going to attempt to route to. So, if I've done all of this right, what we should do now is go back to our little ingress endpoint here at port 32000, refresh, and we get nginx.
So this is exactly how Envoy and the ingress play together: we gave it, hey, we have this route, it goes to this service, here are all of the IPs for that service, and then Envoy knows how to route to those individual pods. And one thing that we can do now, again, is go look at our logs for the ingress, and you can see here that we no longer have a flag.
It should be available if you're following all of the installation steps that Istio provides on its website. As long as you have Istio... well, another way to say this is that Istio won't work at all unless Pilot is running. So when you apply the Istio manifest from Istio's website, it'll include Pilot and the ingress controller and the Mixer, and all of that will be installed automatically for you. Yes.
That's why, okay, yeah. I was just looking at my Kubernetes install with Istio, so that's a default thing you have to do. You have two choices: you can expose your istio-pilot service as a NodePort in a Minikube environment, or, if you are running in a Kubernetes that has LoadBalancer service type support, you can expose it as a LoadBalancer service, and then you can just access it through that. Kind of in scope, a question about mutual TLS: does your environment have mutual TLS enabled or disabled currently?
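Either option can be sketched with kubectl; the deployment name and discovery port here are assumptions about the Istio 0.x install:

```shell
# Minikube: re-expose Pilot's discovery port as a NodePort...
kubectl -n istio-system expose deployment istio-pilot \
  --name=istio-pilot-external --type=NodePort --port=8080
# ...or, on a cloud provider with LoadBalancer support:
kubectl -n istio-system expose deployment istio-pilot \
  --name=istio-pilot-external --type=LoadBalancer --port=8080
```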
Okay, great. If there are no more questions, I can move on to the next piece of this. Okay, that was enough awkward silence for me, let's keep going. So this is great and all, and we saw the traffic actually go from our ingress into an nginx pod. But the thing we're kind of missing here is: how did that pod even get that traffic, if you're coming from Kubernetes and plain Kubernetes services?
So you've probably heard this term a bunch: the Istio sidecar. What this actually means is that, when you're using Istio in Kubernetes, your pods are going to have a sidecar attached to them. So think of a sidecar like a three-wheeled motorcycle, that's kind of my picture, where you have the big engine and then a little tiny car beside it.
There is one way that you can make it not receive traffic, for the cases where you want traffic to go around Envoy, but that's not in the scope of this talk. So, continuing on a little bit: the Envoy sidecar accepts the connection on port 15001, and then it looks for something called a listener that has been configured. The Pilot API also gives these sidecars a list of listeners.
So, using this diagram, starting on the left: a request comes into our ingress, which, as we just saw, is just an Envoy binary running. Then what Envoy does is it finds, from the pool of IPs, an IP to connect to, and that IP actually has an Envoy running as well that's intercepting all requests, or all traffic, really. So that Envoy sidecar...
What it then does is it says: oh, I have a listener for the port and the IP that you're attempting to connect to, I'm gonna forward this traffic into that process, in our case nginx listening on port 80. So, kind of doing another one of these: what we're gonna do is look at Envoy and how it listens for traffic by looking at the Pilot LDS endpoint, which is the listener discovery service.
So let me jump out of there. What we need to do is grab the node ID of an nginx pod that's running, really quickly. We can do this again just by using that ps x; that's the quickest way that I've found to do it. So if we do a get pod, we have our three nginx deployments running right here; they've been running for about 21 minutes.
If I actually grab one of these, I can exec a ps x... I ran that somewhere else; I meant to run that in the istio-proxy container. There we go. So now, if we look here, we can see that Envoy is just another process running inside of this box, and we have our service node identifier here. So we can grab this long string, and now what we can do is go back to the Pilot API and go to the listeners endpoint.
So: listeners, slash istio-ingress, which is the name of our node, or the name of our cluster, I'm sorry. And if we hit this, we'll actually find all of the listeners that Envoy is currently configured to use. Now, the first one I mentioned at the beginning of this segment: all traffic is being received on port 15001.
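The listener discovery (LDS) lookup the demo browses can be sketched as follows, with the URL shape inferred from the path shown on screen and the host and port being assumptions:

```shell
# Listeners configured for this service-cluster / node:
curl http://<pilot-host>:8080/v1/listeners/istio-ingress/<node-id>
```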
So you can see that this very, very first listener that Envoy has, or that Pilot has registered here, is the address 15001, and what we're doing is telling Envoy: yes, bind this to a port, open up port 15001 to accept traffic on Envoy. And then what we're saying is: use the original destination. So if you're making a request to this Envoy that was originally intended for port 80, what Envoy will do is say: oh, I received your request on port 15001, but you actually wanted port 80.
So that's where I'm gonna send this traffic to. And if we scroll down here, all the way at the bottom, there's a bunch of listeners, because there are all types of things that Envoy needs to care about. If we scroll all the way down to the bottom here, we should see the endpoints that we care about. So the service IP, or the pod IP in our case, is 172.17.0.14, and you can see that this is just another address...
...that Envoy is currently listening for, and you can see that it actually goes through and adds this internal route as well. So it says: oh, there's a prefix for it, and it's going to go into port 80. And then it also has all of the filters as well, such as Mixer and a few others that you'd see if you were to enable custom filters, but that's out of scope for this demonstration. So we know that Envoy is actually receiving traffic... I scrolled too far.
So what we can do now is actually tail the logs of this pod. We can do a kubectl... kubectl tail... I'm sorry, I keep saying tail when I mean logs. So we're gonna do a kubectl logs, we're gonna follow, using that pod name, and we're actually gonna listen to the istio-proxy container. So one thing to keep in mind is that when you're using Istio, your pods are always going to have more than one container.
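The command here is roughly this, with istio-proxy being the container name the Istio injector adds:

```shell
# The sidecar's access logs live in the istio-proxy container, so name it explicitly.
kubectl logs -f <nginx-pod-name> -c istio-proxy
```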
So the very end of this log line is where it's actually logging its forward destination, in our case port 80 for nginx. So that's kind of how that traffic gets here. So, let's recap: we have an Istio ingress, which is running Envoy, and then Envoy knows about all of the other endpoints that it needs to serve traffic to for a certain host and route prefix. So Envoy says: okay, I just received this request...
I'm gonna pick one of these three pods that I know about, and then it'll send that traffic to that pod. And that pod has Envoy intercepting all traffic, so the Envoy on the other side of the world says: oh okay, I just received a request, it's for this IP and for this port, I'm going to forward it now to this next upstream, which is nginx in our case. So that's kind of the life cycle of a request through an Envoy Istio mesh.
We have Jaeger in production, but I don't have an actual statsd endpoint or anything set up for this demo. Maybe that'll be a follow-up, if you'll have me back. But yeah, so another great thing about Istio is that it does add observability for free, because all of your requests are going through these Envoys and they're intercepting all of your traffic.
All right, let's do the last piece. So one of the things that we've found very, very helpful when we're trying to figure out what Envoy's state of the world is, when working with Istio, is that Envoy actually has an admin page that you can access. And this is great because it will actually show you the information that Istio has actually provided you, or rather provided Envoy, because remember, Envoy is actually sitting there every second or two hitting the Istio Pilot API and retrieving new information.
But if something breaks in the middle and you don't know why, and you just kind of want to know what the state of the world is that Envoy knows about, you can actually boot up this admin page. So what we can do is port-forward using kubectl to actually see this page for our ingress pod. So let's go ahead and do that; I'll exit out of here, and here's what we're gonna do.
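The port-forward looks roughly like this; 15000 is Envoy's default admin port, which matches the port the demo browses on localhost:

```shell
kubectl -n istio-system port-forward <ingress-pod-name> 15000:15000
# Then open http://localhost:15000 and click through the routes and clusters pages.
```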
So if I come in here now, I can actually go to port 15000 on my localhost, and this is just a very simple, you know, back-to-the-1990s design page for Envoy. So we can actually look at the routes to get started, and we can see that the Istio Pilot API has provided a set of routes that Envoy actually knows about, which is great. This is exactly what we want to see: when you're actually adding in these routes and everything, you want to see them on this page.
The other thing that's helpful to see is, if you click on clusters, this is where Envoy spits out all of the IPs and the service names that it actually knows about. So we can see in here that we have, let me scroll down and find it, the example-web-public, and then over here we can see an IP that it actually knows about. And what's kind of cool here is that it actually has a little bit of stats, some simple statistics, about this actual pod and its IP.
It can happen that Pilot knows the state of the world and Envoy has a different view of the world, because it just hasn't converged yet; Envoy actually works on an eventually consistent model, and that's where that polling, hitting the Pilot API, actually comes into play. So if something isn't working, it's possible that Envoy just doesn't even know about it yet. So this is a really, really good place to go and look first, and if you can't figure it out, you can also toggle things like the logging level, which is really nice.
So when something is wrong, honestly, the best place to go is your Pilot logs. If Pilot doesn't know something, then, because it's the source of truth, things are gonna break. And sometimes Pilot needs to be restarted: we found that when we just can't figure something out, we've actually been able to just restart Pilot, and things tend to come back very, very effectively. That's not a great solution, though.
So one thing that I'll close up with is that we've really had a good time playing with Istio: the statistics, the observability of it, the meshing. The control plane for Envoy that it provides is actually very, very good, and it provides most of the functionality that Envoy provides. Because one thing to consider is that, since it is a control plane for Envoy, it's possible that Envoy might have a feature that Istio does not support yet. A good example is, you know, we found something the other day...
One thing that's kind of been hard, and this goes back to, I think, my third slide: Istio does such a good job that it kind of hides Envoy from you, which is great, that's actually kind of what you want, but it's important to keep in mind that Istio itself is not handling any of your traffic going into your cluster.
All of the traffic is actually being handled by Envoy sidecars, or processes at the ingress, and that was something that was kind of hard for us to really understand initially, especially moving from a non-service-mesh mentality to a service-mesh mentality. You just kind of have to remember that Envoy is the one doing all of the traffic load balancing, and Istio is just providing where things should go and answering questions that Envoy has. So Envoy might have a question: what is this service? What port does it listen on?
One difference, or a word that's actually a little different that I would use, is "pull". So what Istio Pilot does is it just provides a JSON API, and what Envoy then does is hit that Pilot API. Now, the way it learns where that Pilot API is, and where it should actually send these requests, is a JSON file that Envoy is actually started with, which provides all of those endpoints.
Just a quick note: that's a current limitation of Istio today, and the development team is actively working on a push model. You're absolutely right: sometimes the Envoy configuration, compared with the configuration in Pilot, doesn't match, which is also a result of the pull model, because some of the configuration could be delivered out of sequence. So we are looking, in the newer version, at our version 2 API to switch to a push model.
Right now, I want to say we have around 30 in production; we're not the biggest microservices shop. We're in the, you know, five-years stage of a startup where we're breaking up a Rails monolith into smaller components, but we are using all the transport features of Istio and Envoy. So we have HTTP APIs, but we also have gRPC APIs. All of our new services are all gRPC today; in fact, most of our services communicate via gRPC.
So again, we're relatively smaller in terms of our QPS, so my number is not a great number. I mean, we do like 120 requests per minute to our ingress, but, because it is just a pod running in your cluster, you can also spin up more ingresses. It is horizontally scalable.
So we are using route rules, but for internal use cases only; we're not using them in production. It is a little bit hard with our CI/CD pipeline to use them, mostly because we're switching to Spinnaker for all of our deployments, and just kind of coordinating how you do a canary with a route rule, and then how you delete that route rule, is something that we haven't totally gone into. But we do use them for actual internal development. What we actually do is, every developer...
We have that because we actually have our WAF, or web application firewall, installed inside of that, and then what we do is we actually let nginx just talk directly to the Istio ingress. And inside of nginx we actually have a Lua script that just appends a header for whatever the subdomain currently is, and then we're able to create route rules on that very easily.
We're using it primarily for observability right now. We want to use it for the canary deployments, but, like I said, our CI/CD is not totally there for that, so we just kind of have that on the back burner. Spinnaker is not great at deploying random Kubernetes manifest files, so we kind of have to work around that.
We are using the statsd endpoint, actually, so we're sending everything to Datadog. So I was very happy to see that the Datadog handler landed in 0.6, so we actually got tagging. We actually did build something that was a Prometheus-to-Datadog exporter, so we actually had something...
...that was grabbing all the Prometheus metrics that Istio and Envoy were providing and then sending them to Datadog with tagging, so that's something that we do right now. The other side of the observability is that we do use the distributed tracing feature of Envoy in Istio, so we actually have Jaeger set up, and we have Jaeger set up against an Elasticsearch managed by Amazon Web Services. Amazon Web Services has an Elasticsearch service that you can just spin up, so we decided that's the easiest way.
Thanks, Bobby, this has been great, and I know we are out of time for today's meeting. So I want to thank everybody for joining. Paul from Cisco was also going to share something, but we'll do that at the next meeting. And if you've got any questions or anything else that you want to explore more, please put it in one of the message groups, or you are welcome to reach out on the working doc as well; we'll keep an eye on that and can answer questions there.