From YouTube: Layer5 Community Meeting (Sept 24th, 2021)
Description
Layer5 Community Meeting - September 24th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Welcome, everyone, to the Layer5 community meeting. Today is the 24th of September. We have a couple of topics to discuss today, and we also have a demo from Jared on NGINX Service Mesh. The link to the meeting minutes is in the chat.
A
Please add your names, as well as any topics that you might have to discuss, there. Like every other Layer5 meeting, this meeting is also recorded and will be made public on YouTube.
A
We also had a couple of newcomers join us this week, but I don't think any of them are joining us on this call today. If they join in between, we'll meet them then. With that, let's get on to the first agenda item. Yesterday we kicked off a new meeting in the Layer5 community: the Meshery build and release meeting. So we kind of restructured the CI meeting.
A
I think I had the wrong thing — so, we restructured the CI meeting so that we track Meshery's release readiness.
A
What we did yesterday was go through this Meshery test plan. This is basically nearly 300 tests — user flows that a person using Meshery might follow. What we are trying to do here is test out these user flows manually at first, to ensure UX correctness and other more human-focused things.
A
We had people sign up to test some of these, or sign up to test a particular category or test group, and what we are doing now is working toward having the next release — the v0.6 release — out by next week, so probably by October 1st.
A
There are still some empty columns in this test plan, so it's open for anyone to sign their names up and start testing. A couple of considerations to state:
A
Is
on
some
of
these
some
of
these
actions.
We
have
an
expected
outcome
and
we
also,
we
might
also
have
some
testing
guidelines.
So
please
try
to
go
through
that
as
well,
and
we
also
test
this
across
multiple
operating
systems
and
across
multiple
platforms
and
across
multiple
service
meshes.
So
some
of
these
actions
service
machines.
Some
of
these
actions
are
specific.
To
a
certain
platform.
I
mean
certain
platform
where
measuring
is
deployed
yep,
so
this
is
still
open.
A
So,
if
you
are
interested,
please
sign
up,
if
you
are
interested
in
testing
it
out,
we
could
use
a
lot
a
lot,
a
lot
of
feedback
in
the
user
experience,
especially
and
yeah.
A
We
we
are
also
another
goal
of
the
meeting-
was
to
also
take
initiative
in
automating
some
of
these
tests,
so
some
of
the
folks
in
the
community,
like
push,
have
been
trying
to
trying
to
bring
more
unit
tests
and
integration
tests
into
measuring.
Yes,
especially
take
taken
measuring,
ctl
and
almost
and
is
working
on
automating
most
of
it.
So
our
another
goal
for
not
this
release
but
in
the
upcoming
releases,
would
be
to
automate
some
of
these
tests.
so that they run on our CI/CD pipelines — to be more systematic about this. The reason we are doing this manually right now is to ensure UX consistency; since that can't be automated, we have to do it manually. Meshery also has a couple of GitHub Actions.
A
Meshery integrates with the Service Mesh Performance and SMI conformance specifications — two specifications — and Meshery has a GitHub Action for each of these specs, so those specs will also be used when we create integration tests. All right, so, as I mentioned before, the next release target is October 1st, so we are trying to make sure that we are ready for the release before that. There are also a couple of open issues — which I will share in chat after this —
A
that we need to focus on more before the release. So, yep, that is it. The call to action here is to go through the test plan and, if you are interested, sign yourself up to test some of these and give us feedback. Would anyone else like to add anything more to this?
C
In the past — probably a year ago — we'd gone through and verified a couple of versions of Kubernetes, and since then, the lack of automated testing makes it extremely tough to keep up with, especially across the various flavors of Kubernetes: the managed services and then the distributions.
C
Some of the folks that are on this call have pointed out an incompatibility between Meshery's manifests and Meshery's Helm charts and Kubernetes 1.22, because of the deprecation of some APIs in Kubernetes 1.22.
C
Those issues have been addressed, so Meshery is now compatible with 1.22. But for the project itself, there's a lack of tooling, really, around being able to speak to that very well. Meshery as a project has been going on for quite some time — it's coming up on a couple of years here — so it's a couple of years' worth of keeping track with Kubernetes versions.
B
I understand that completely, but when I was running the last things I was trying to test out, I was using 1.14 and it didn't work, and then you think, okay, 1.22 — it doesn't work either, and you don't find even a pragmatic hint as to what people use.
C
That's a great suggestion. It's also curious — though it makes sense now that there have been some changes to support 1.22.
B
I think 1.22 makes sense as well, because a lot of those APIs were deprecated earlier and actually removed in 1.22, right? And then, of course, you also have dependencies — like, you know, the manifests that Istio is using — where it's not our fault if it doesn't work, since we're depending on those as well. So I think it would just be a good hint, something people can refer to: we actually have tested these.
C
So, Navendu — I don't know if you said, but the release target is about a week away from today; ideally, next Friday would be the v0.6.0. As a project, as we get more mature with the processes here, we begin to have meetings that aren't just about building but are also about releasing, which is what Navendu was referring to.
C
You know, talking about yesterday's meeting: we've got five more point releases to go before that gets really hardened. That said, that doesn't mean that Meshery shouldn't be bug-free or that it shouldn't be usable in production prior to then — existing features should be of that high quality.
C
Those can indirectly invoke quite a bit of code, and so just going out and scheduling the GitHub Actions will be massively helpful. And actually, to Michael's point, scheduling those actions three times with a couple of different Kubernetes versions, for example, is not a lot of effort. So those things were discussed on yesterday's call as well, and that's good.
C
I was telling some other people yesterday: there are people from other well-known technology companies here. Jared's on the call today from NGINX, which is great — Jared's been trying to work through some Meshery bugs.
C
So we want Jared to have a good experience, and as he begins to take pride in his work here, I'm sure he wants to make sure that everyone else has a good experience too. But there were Intel engineers this week, Google engineers, Cisco engineers, and a Red Hat engineer — and actually a principal architect from Charter, the service provider in the U.S. — all of whom messaged privately asking: hey, can you give examples of where people are running
C
Meshery? So we're hitting the tipping point. I don't know that Meshery itself has — what I would say is, Meshery itself doesn't have product-market fit yet; it hasn't seen a hockey stick of users coming through. That's coming, and from a quality perspective we really need to be ready for it collectively. Anyway, I'll get off my soapbox. So, good — hopefully we get the release out before KubeCon. That's in part what we will do.
C
You know, unabashedly, we'll do conference-driven development — it's as good a milepost as any to use. But there are a number of things that have been accomplished between the 0.5.0 release and now, and they need to be touted. Anyway — good, so we need to get the release out in, ideally, the next week.
C
Actually, next up — speaking of Jared, his ears are burning. Mr. Byers is with us today. He kindly accepted a request to tell us about NGINX Service Mesh. Most of us have not gotten our hands on it; some of us have seen some of its neat features.
C
The ways in which it treats SMI — the Service Mesh Interface APIs — as its first-class API, and how it uses other ecosystem projects, SPIRE and Jaeger. I won't steal all of Jared's thunder. Jared, welcome — thanks for being on the call again. Take it away; we'd like to hear about NSM.
E
Thanks, Lee. Yeah, so I do get a bit nervous with demos, so hopefully this goes well.
E
Oh — do you mind if I share my screen here? (Please, please — yeah.) All right.
E
All right — hopefully that's big enough. Sorry if it's a bit dry; I tend to work mostly in the command line. I've been waiting to get a fancy demo app with a nice UI and everything, but I just haven't gotten around to it yet, so bear with me here. I'm sure most of you are familiar with the concept of a service mesh — you know, working with a product that has "mesh" in its name.
E
You know, with "mesh" as the root word there, and working closely with SMI, with the conformance tests and all that — I'm sure everyone's aware of that.
E
Apps running — oh yeah, there we go. So I have my app here, the pod — think of it as a front end — and then that will make some calls to our back end here.
E
So this call went from our front end — "my app" — to the back end. "My app" is really just a friendly name for a bash container where I'm running curl, just to simplify the mechanics so that it's easy to test different things on the fly. So right now we're making a call and we're reaching our back end, no problem. And if customers have apps already deployed, we don't want to go in and make them redo everything when they install a mesh.
E
So we let people keep their existing setup — well, they will have to re-roll, but they won't have to remove and redeploy anything. So I'm going to install NGINX Service Mesh. Let's tab over here — we have our Helm repo already set up. We do create a namespace: NGINX does require a dedicated namespace to work in, so you can't deploy it in default.
E
So, security-conscious users might want to set this flag to automatically block all communication between services, so that they have to explicitly allow requests between services within their cluster. I'm setting that flag to automatically block all those connections, and then this is just a way to make sure that all the pods come up.
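As a hedged sketch of the install being described: the repo name, chart name, and the `accessControlMode` value below are my assumptions from reading the NGINX Service Mesh Helm docs, not something shown in the meeting.

```yaml
# Hypothetical Helm values for the deny-by-default install described above.
# The key name `accessControlMode` is an assumption based on the NSM chart.
#
#   helm repo add nginx-stable https://helm.nginx.com/stable   # assumed repo
#   kubectl create namespace nginx-mesh                        # dedicated namespace
#   helm install nsm nginx-stable/nginx-service-mesh \
#     --namespace nginx-mesh -f values.yaml
accessControlMode: deny   # block all service-to-service traffic until explicitly allowed
```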
E
So, while that's running — let's see. Oh, this error here — or warning, rather — is expected for now. As Lee said, we do depend on SPIRE for our identity management for pods and services within the cluster, and it's using an older API for now, due to a dependency. That should be fixed in our next release, so the warning will go away; it doesn't have any effect right now, it's just a visual warning — unless you're on Kubernetes 1.22, where it is a problem. In our next release we will have 1.22 support, but right now we're restricted to 1.21 and earlier. So that's going. As I mentioned, here's the deployment for the app that I'm running.
E
Essentially, I just built a custom container for bash that already includes tools like curl, and the back end is actually NGINX with a very simple nginx.conf — we're just returning a 200 and this text, running on port 80. Really simple.
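For illustration, a back end like the one described might be wired up with a ConfigMap along these lines. The resource name and the response text are hypothetical — the meeting only says it returns a 200 on port 80.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-conf            # hypothetical name
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;              # the demo back end listens on port 80
        location / {
          return 200 "backend v1\n";   # plain 200 with a short text body
        }
      }
    }
```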
E
I'm pulling the NGINX image here, and then it has its relevant service. All right — let's see, once it gets deployed. Yeah, so by default we go ahead and include a few common tools, just so that it's easy for users to get in and get started without having to deploy anything extra.
E
Of course, in a production environment the expectation is that people will not use these included services — actually, you'll set a flag to not deploy them.
E
You'll use your own Grafana and Jaeger — or Zipkin, or whatever tracing service you need — same thing with Prometheus. It's just a convenience to help people get up and running and see what we're about.
E
We have our control plane here, something for metrics, and then the SPIRE services here. So let's go ahead and re-roll our —
C
Jared, is it okay if I interrupt with questions as we go?
C
Quick one — by the way, if these aren't things that are on the tip of your tongue, I totally get it — but is the control plane namespace, the nginx-mesh namespace, configurable, or is that set in stone?
E
No, you can put it in whatever namespace you want. Like I said, the only caveat is that it has to be dedicated to the mesh; you can name it whatever you want. (Okay, gotcha.) Oh gosh — we do have a CLI, which, as those who work with this know, is unfortunately behind a EULA page.
E
But if you — I kind of lost my train of thought there — but you can use nginx-meshctl. (Oh yes — that's what I was — thank you.) Yeah, so you can just run —
C
Yeah — actually, a couple more. As you're going through, you're identifying the fact that NGINX Service Mesh offers people choice around some of these add-ons — the Grafana and Jaeger and Prometheus. So if people already have those tools, they can optionally choose to use their existing deployments with NGINX Service Mesh. In order to configure NGINX Service Mesh to use those existing deployments, is that just configuration — things that they do in values.yaml if they're using Helm, or what?
E
Yeah — in our docs we have the configuration options, with the list of various things. So again, if you have a private registry, you can set those here; but specific to your question, your Prometheus address: if you have your own Prometheus server already running, you can just set that, either using the --set flag in the helm command or in the YAML file like this. Like I mentioned, there are quite a few options here. You can choose whether or not you want auto-injection enabled, or set specific namespaces to have your pods automatically injected with the sidecar; your tracing back ends — we use Jaeger by default, but you can use Zipkin or Datadog, it's up to you; various types of persistent storage, which we need for SPIRE to work, to store the certs.
E
mTLS — just for convenience, and going back to it, a lot of these default values are for people who just want to try it out without deploying into a production environment. So we do deploy with mTLS enabled, but it's just set to permissive, so plain text will still go through; obviously, in production you'd want this set to strict. A bunch of options in there.
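Pulling those options together, a values.yaml for the scenario Jared describes might look roughly like this — the key names are assumptions based on the NGINX Service Mesh Helm chart, not something shown on screen in the meeting:

```yaml
# Hypothetical NGINX Service Mesh values.yaml illustrating the options
# discussed: deny-by-default access control, an existing Prometheus,
# a chosen tracing backend, and the mTLS mode. Key names are assumptions.
accessControlMode: deny                               # block service-to-service traffic by default
prometheusAddress: "prometheus.monitoring.svc:9090"   # point at your own Prometheus (hypothetical address)
autoInjection: true                                   # inject the sidecar automatically
tracing:
  backend: jaeger                                     # jaeger (default), zipkin, or datadog
mtls:
  mode: permissive                                    # permissive for trying it out; strict for production
```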
E
So yeah — we just automatically inject our — we have our init container that sets everything up, of course, and then we have the nginx-mesh sidecar, which, as I said, is just NGINX configured to work as our proxy, and we update the conf as needed to add load balancing or whatever. To be honest, that part of it I haven't — there are so many different components, and I haven't had a chance to dive into every little bit of it.
E
So
unfortunately,
I
I
wouldn't
say
that
I'm
an
expert
on
what
exactly
we
can
configure
with
the
sidecar,
so
I've
been
focused
on
on
other
aspects
of
it
kind
of
have,
subject
matter,
experts
that
work
on
individual
ones.
If
you
want,
I
could
probably
set
up
a
a
demo
or
a
q,
a
session
with
someone
who
would
be
more
knowledgeable
in
that.
If
you
had
more
specific
questions,
yeah.
C
Yeah, that might be kind of interesting. One of the really cool things about NGINX Service Mesh — and correct me if I'm wrong, but maybe it's part of the slogan, so to speak, for NGINX Service Mesh — is that the data plane matters. The data plane is the thing that does a ton of work; it does the heavy lifting.
C
But in the meantime, the exciting thing is — and maybe, Jared, you might have said this — the proxy that's used: you guys brought out the big guns. It's NGINX Plus, right? And so that's it, you know.
C
Here — I'll venture one more question toward the proxy, if I may. That is —
C
We talk about — there are some goings-on in terms of functionality, with respect to Meshery and WebAssembly filters: the extensibility of that data plane. And NGINX, I assume, is well known for a number of things, but one of those things is the ability to put in your own filters — not necessarily, or I don't think historically, in terms of WebAssembly, but maybe in —
C
I'm
missing
the
word:
it's
what's
that
language,
not
lisp,.
C
So
the
question
is
like
with
respect
to
a
web
assembly,
or
rather
maybe
with
respect
to
whether
it's
web
assembly
or
something
else,
are
those
filters
dynamically
insertable.
Or
do
they
have
to
be
compiled
into
the
image
that
you're
using.
E
Oh — I'm actually writing that down, so I'll have to get back to you on that one. I'm sorry, that's a little deeper than I've gotten into; I don't know.
E
Right, right — I clearly need to go back and dive into some of the more core NGINX server aspects. So are you wondering how filters are implemented — is that correct?
C
Or
like
yeah,
I
guess,
there's
probably
two
three
questions
in
there,
probably
which
is
a
what
like
what
types
of
filters
are
supported
by
types
I
mean
written
in
what
languages.
E
I was working in other areas, and during my time at NGINX I've just worked on NGINX Service Mesh, so I don't have a lot of the legacy knowledge about the core product — which is obviously a failing on my part. Most of my knowledge has been more around Kubernetes and clusters, and focusing on SMI and that sort of thing, rather than the NGINX implementation itself. (That makes sense.)
E
Not
for
me,
okay,
all
right
so
just
deployed
the
the
the
mesh
and
re-rolled
the
side
car.
So
I'm
gonna
try
it
and
make
that
request
again.
E
Nice,
and
so,
if
you
remember,
I
deployed
the
mesh
with
deny
enabled
by
default,
and
so
now
these
requests
are
forbidden,
which
is
what
we
expect
if
you're
having.
If
you
want
a
secure
cluster
and
want
to
really
lock
down
what
services
can
talk
to
to
other
services.
E
And then, those familiar with the SMI spec will know that you also need an HTTPRouteGroup to specify the rules around the types of requests that are allowed. So here I'm just going to make it so that only GET requests are allowed. You can make these a lot more complicated — restricted to specific paths or headers, etc.
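For reference, a minimal SMI access-control pair like the one being applied might look as follows. The resource names, namespaces, and ServiceAccounts are illustrative guesses; the API versions follow the SMI spec but may differ from what the demo actually used.

```yaml
# Allow only GET requests from the front end's ServiceAccount to the
# back end's. Names and namespaces are hypothetical.
apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: backend-routes
spec:
  matches:
  - name: get-only
    methods: ["GET"]        # only GET is allowed; a POST gets a 405
    pathRegex: ".*"
---
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: frontend-to-backend
spec:
  destination:
    kind: ServiceAccount
    name: backend
    namespace: default
  sources:
  - kind: ServiceAccount
    name: frontend
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: backend-routes
    matches: ["get-only"]
```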
E
Try making that request again, and we get our 200 — it's going through successfully now. Let's double-check that it's actually blocking things: let's try to make a POST, and a 405 Method Not Allowed is returned. Perfect — that's our access control working as expected.
E
Another feature is traffic splitting. So, say I have my back end version one released and I want to make an update to version two, but —
E
Or a use case would be, for instance, a canary release. If users are going to do a canary release, traffic split will let you do this by sending
E
limited amounts of traffic to these new services. So let's say, for our current service, we want to send 90% of our traffic there and only 10% to our new service, just to see if there are any issues without affecting too many users.
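That 90/10 canary corresponds to an SMI TrafficSplit along these lines — the service names are hypothetical, and the API version follows the SMI split spec rather than anything shown in the demo:

```yaml
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: backend-split
spec:
  service: backend          # the root service that clients address
  backends:
  - service: backend-v1
    weight: 90              # 90% of traffic stays on v1
  - service: backend-v2
    weight: 10              # 10% canaries to v2
```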
E
And then we also have — actually, let me just not worry about the matches for now; let's go with a plain traffic split here. All right, so we've got that out.
E
A few requests here — yeah, still getting v1, obviously, because we haven't deployed the others, but traffic's still flowing nicely. Here's our back end v2 — and actually, the way this one's set up, it's going to return a 500, so this will be an example of what happens if you actually deploy a broken build.
E
Oh yes — it would seem that — always the curse of the demo. Sorry, I went through this before and it worked; I think I must have messed something up.
E
All
right
actually
make
it
more
obvious.
Just
change
this
to.
E
If you have the CLI installed, you can actually run `top`. It will show you the last 30 seconds — just gives you a quick check if there are services that are not behaving. So you can see that none of the backend-v2 requests were successful; all of them failed. You could also check this in Grafana, or, if you have —
E
other dashboards that are pulling from Prometheus, view it there. But it's obvious that our backend v2 isn't working, so we can go back and use the traffic split to send everything back to v1.
E
Yes, it is — okay, yeah. And it just captures the last 30 seconds, so it's not meant to be an in-depth tool; it's more just kind of a pulse check on how my services are doing.
C
Sure. But then, Jared — the rate-limit and the circuit-breaker capability, the CRDs — maybe you can even speak to those, but also: do those build upon — it looks like they do — do they build upon SMI constructs?
E
Let's see — I'm blanking here; I don't remember if circuit breaker is actually part of — I mean, it's built on top of the constructs, but there's not an SMI API for circuit breaking, correct? Right — so yeah, we're just building on top of that, trying to have a sense of consistency.
E
Sure — but obviously it's not an official CRD. And circuit breaking is a bit harder to demo, because you have to have a fallback, and it's hard to visualize: what the circuit breaker does is, if you have a service that's failing and you have a fallback defined, you always hit that fallback. The circuit breaker works behind the scenes to —
E
It'll just prevent the failing service from being called for however long you say — for instance, here it's 10 seconds. So it speeds up requests: instead of going to a failed service first and then redirecting to the fallback, it short-circuits and goes directly to the fallback.
E
Yeah — so I made the request too quickly, and so now it's returned this 503 Temporarily Unavailable, and in about 15 seconds you should be able to make another successful request. What the rate limit does is — it doesn't split it up into four raw requests per minute; it actually divides the unit of time by the requests, so four requests per minute is actually one every 15 seconds.
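As a hedged illustration of that behavior: the CRD group, version, and field names below are guesses at the NGINX Service Mesh rate-limit resource, not something shown in the meeting — only the "four per minute means one every 15 seconds" semantics come from the demo.

```yaml
# Hypothetical NGINX Service Mesh RateLimit. With a rate of 4 per minute,
# NSM spaces requests evenly — one allowed every 15 seconds, with a 503
# returned in between (as seen in the demo).
apiVersion: specs.smi.nginx.com/v1alpha1   # assumed group/version
kind: RateLimit
metadata:
  name: backend-rate-limit
spec:
  destination:
    kind: Service
    name: backend
    namespace: default
  sources:
  - kind: Deployment
    name: frontend
    namespace: default
  rate: 4r/m                               # assumed syntax: four requests per minute
```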
E
So that's it! I've taken up a lot more time than I was planning on — apologies for wandering around a bit — but hopefully it was somewhat informative. Are there any questions?
C
Yeah, Jared, this is cool. I mean, there are some things that Jared is pointing out that some of you may not realize are fairly special. He pointed out a few already — NGINX Plus being the proxy of the data plane, which is really neat. I mean, hey, there's only one service mesh in the world — yeah —
C
Let me take that back for a moment. There's only one service mesh in the world promoting use of NGINX Plus — actually, that's true: there's only one in the world that does that. There have been other service meshes that have used NGINX as their data plane, but not NGINX Plus, which is pretty neat.
C
The other thing: you only see, like, one or two — two other service meshes out there that use SMI. Well, maybe three, but anyway, it's still really special. NGINX — Jared, correct me if I'm wrong — but the NGINX Service Mesh API is SMI's API, other than those two kind of custom definitions, rate limit and circuit breaker.
E
Yes — we implement the SMI Go library for SMI. The HTTP route groups are handled by the —
E
I just cannot think this morning — but it's njs, the JavaScript module for NGINX, handling all the regex patterns and all that. But yeah, primarily we just depend on SMI for the APIs.
C
Nice, yeah — another pretty cool thing. Anybody else have questions? I can't wait until we get Jared working with Meshery — working with Jared's setup. I think Meshery helps with sample apps to show, and helps with the performance side — like the curls that are going on — running those performance tests, which can help people familiarize themselves with NGINX Service Mesh a bit more. Really nice. Also, the SMI conformance: NGINX Service Mesh is underrepresented on the map, so to speak, like, by rights —
C
I would think — by rights, by principle — it should be pretty prominently represented in terms of, you know, passing with flying colors. My hunch is that it's somewhere between the Meshery adapter for NGINX Service Mesh just starting to work consistently, and maybe some later SMI spec versions.
C
I guess — and this isn't a fair question to ask anyone — but I don't know if the latest SMI spec version, which I think is 0.6, is necessarily supported. Anyway, there could be challenges around versioning, but I'd love to see NGINX Service Mesh beating everyone else on there — even Open Service Mesh — so that it helps spur others on to get their house in order and to pass those tests. So that's one —
E
— of my goals. And, like you pointed out, we do have a story to update to the latest API versions; I think we're trailing a version or two behind on some of those. I think that's prioritized for this next release — just making sure that we're up to date with all the SMI API versions.
C
Nice — it's cool, Jared. Thanks for doing this, kind of off the cuff as well. I'm taking away a number of things that I didn't know before. The power of circuit breakers really is such an intriguing capability of a mesh — to go play with it and configure it. It's hard to tell if you're using those correctly — how not to abuse them, but how to make sure you're taking advantage of something so smart.
E
That's great — thanks for putting up with — or, letting me practice demoing.
E
But it looks, from the chat, like Meshery might be a good option to help step up my demo game.
C
Those are biased perspectives — I acknowledge that — but yeah, cool. Good, good — okay, well, fair enough. Navendu, we had a couple of other items on — oh, I guess we're almost at time. I don't know if there's anything urgent or not.
A
We have two more items to cover, but if they are not urgent, maybe we can move them.
D
Yeah, I'm not sure if they're urgent; I think they can shift.
D
One thing, though, regarding the release blog: we still have a few sections to be covered — applications, filters, and the dynamic management UI. We need people in for that. That's one thing I just wanted to let the members know, and it is kind of important.
A
Yep — so Meghana was mentioning the release blog. As we discussed earlier on the call, we have a release coming up in one week, and we don't have enough blogs to talk about all the cool stuff we are doing here. So if you are interested, there are still a lot of blogs that need to be written. There is this open epic that you can check out — sign up on any of these and write a blog.
F
Yeah, actually, I just wanted to share where we are in the implementation of state management. I won't take much time — I'll make it short. I'll just share my screen right now.
F
So yeah — if you can see — we are not trying to make any difference in the UI as of now. This is what we have now: we have the navbar, the header, and all this stuff complete, and the dashboard page is also copied.
F
But this particular component — the service mesh component for showing data plane and control plane details of the service mesh — is still being worked on. Basically, from this point on, we just have to keep writing React components and keep integrating them with the back end. Actually, the backend integration is complete; we just have to write the React components and hook them up.
F
So, as a step towards that, I have actually created a spreadsheet, and some people who are interested in working on it are already working. If anyone else wants to contribute to this, they can — you can reach out on Slack or any other platform. That is one thing. I also wanted to discuss the user flow that any normal Meshery user would go through when opening the Meshery UI.
F
There are some things that need to be discussed and which are not yet concrete, but I'm just saying them out loud. One of those is that, basically, when you start Meshery, we have to hydrate our local state with the server cache. That is one thing that has to be taken care of, and right now the way we are doing it is that we are actually dispatching multiple actions.
F
If I could show you again — I'll just refresh the page. This is a development tool written by the Redux people, which helps us visualize the actions that Redux dispatches and all that stuff, which is helpful for developers. So we can see that multiple actions are getting dispatched, and based on those, these data are getting filled from the server and updated. My hunch is that, ideally, we should not have to dispatch multiple actions.
F
We should have a single endpoint which gives us the data that is necessary for the user to see the details when they first log into Meshery. So ideally, in the future, this would be a single action — called initialize, or hydrateServerState, or something — that would do all the work it is actually doing now, while some of the actions would remain specific to a particular page or component.
F
What else — okay, yeah. One more thing is that we have to find a way to deal with the subscriptions. As of now, we have multiple subscriptions set up, but there are many more subscriptions that we have to deal with. For example, some will be global, in the sense that we have to establish them —
F
For example, if we have multiple Meshery components — like the operator and the Meshery controllers — then for the whole lifecycle of the Meshery UI we have to be aware of the connection status. That is something that is a global subscription, in the sense that we have to start the subscription on app startup, and it has to be alive until the app is shut down or closed. So I'm actually trying to divide —
F
— you know, categorize the subscriptions into two parts: ones that are concerned with only particular features or components, and others which are global. The global subscriptions would be dispatched in a single action, as I described before for the server hydration — that would be the initialize action — and we would still be getting the data, and we would hook up a reducer which would update the state based on the data coming in.
F
We can actually discuss more about this — there are some advantages and disadvantages of doing it — but I don't think it is worth doing right now, because
F
it would not make much of a difference. So I will just try to implement it like that for now, and in the future, if it is something that is hindering our performance or hurting the user experience, we can easily switch to a different implementation. So yeah — this is basically the user flow that any Meshery user will undergo. If anybody has any questions or comments —
C
A quick comment from me, which is that — in a good way — the README for the new UI architecture is almost better than the README for the old, current UI was for a year. So we're already off to a much better start with setting people up for contributions and being organized. Now, the focus to date has been on —
F
Yes — actually, we haven't thought about integrating them into a single one. If that is something — I don't think it would be an issue, so if you want, we can do it whenever we want to. And honestly, I don't see much difference; the one difference would be developer experience: if we have it in the same app, then it's just one command that gets both the provider and the UI running. Other than that, yeah.
F
Okay — also, one more thing: for people who are willing to contribute to this, I have actually made a spreadsheet. I'm not able to find it right now, but I'll put it in the chat or in the Slack channel. There we are keeping track of the components that have to be written and the tests that have to be written, all this stuff. Let me find it real quick.
C
Yeah, it might be the second one from the bottom — you know, the state management implementation. (Yeah, yeah, yeah.) I'm sorry.
F
Oh — so yeah, there are people working on this already. As you can see, there are a lot more components that have to be put up, but we haven't had the time to write them up; I'll do it real soon. If they are interested in working on this, people can take any of these components and start writing them. They won't be any different from what we have right now, but they will be much more refactored and much more reusable.
C
That's great. So, by the way — maybe the last question from me, because we're over time — there are eight rows here; in total, how many rows do you think? This isn't the total number of rows for the different components, I suspect, but —
F
No, it isn't. I don't think I can give you an exact estimate, but around 25 to 30. I don't know how good a guess that is, but it's not —
A
Thank you — that's all we have for today.
A
I guess we'll meet next week. Have a great weekend, guys. Bye!