From YouTube: Istio Community Meeting 3.21.19
B: Sure, let's see. So, there were two basic questions asked at our previous meeting. One was whether we were going to be updating our sizing guide for perf and scale for 1.1 anytime soon, and I can report that yes, we have updated it with the release. I will drop a link into chat momentarily to the istio.io page for performance and scalability, which has now been updated for 1.1.
B: Specifically, there was some interest in how much CPU allocation was going to be needed per thousand requests, and whether or not we were going to meet some improvement targets that we had originally talked about for 1.1. To that end, I'm publishing the Istio performance dashboard from our release qualification cluster for 1.1, which shows that istio-policy, istio-telemetry, the ingress gateway, and the proxy are all hovering right around 0.6 vCPUs per thousand requests, which I believe meets or exceeds the goal that was set several months ago.
B: vCPUs divided by the number of thousands of requests per second, to be precise. And what we find is that, as we scale up from 40,000 to 60,000 to 80,000 requests per second, it does seem to remain fairly constant. Not that there's no deviation at all, but we do not see a continual growth pattern; it's more of a sawtooth in terms of vCPUs per request.
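As a rough sketch of how that metric works out, the snippet below computes vCPU per thousand RPS from observed vCPU usage; the component names are real Istio components, but the utilization figures and the load are made up for illustration.

```python
# Illustrative calculation of the "vCPU per thousand requests per second"
# metric discussed above. The utilization figures and the load are invented
# for demonstration; they are not the actual dashboard values.
vcpu_used = {
    "istio-policy": 24.0,
    "istio-telemetry": 25.5,
    "ingress gateway": 23.0,
    "istio-proxy (summed over sidecars)": 26.0,
}

requests_per_second = 40_000  # hypothetical cluster-wide load

for component, vcpus in vcpu_used.items():
    per_krps = vcpus / (requests_per_second / 1000)
    print(f"{component}: {per_krps:.2f} vCPU per 1000 RPS")
```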
A: Very good. I guess, as we were hunting for topics and looking for the community to digest some of these numbers, maybe an open question is how much of a concern performance is for folks. You know, the perf and scale community meetings that meet every other week are well attended. For those that are using Istio, is this...
A: So, you know, just reflecting on some of the tooling in this space, with Fortio being part and parcel of the Istio project, its ability to measure latency and throughput: those are two things to be measured, but certainly not the only things to be measured. There's also overhead, you know, in CPU and memory of the nodes in the cluster.
B: One of the particular focuses of 1.1 that I can speak to in terms of performance was performance for very large clusters. In 1.0, every istio-proxy received a full graph of the mesh, so even for services that a particular proxy had no path to talk to, you would still have configuration for them. With the introduction of namespace isolation, as well as the Sidecar CRD, we see much better scale in very large clusters; we're talking about thousands of services or more.
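As a rough illustration of the Sidecar resource and namespace isolation mentioned above, the sketch below emits a minimal manifest that limits a namespace's proxies to configuration for their own namespace plus istio-system. The namespace name is hypothetical, and the exact schema should be checked against the istio.io reference.

```python
import json

# Minimal sketch of a Sidecar resource enabling namespace-scoped config.
# The namespace name "bookinfo" is hypothetical.
sidecar = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "Sidecar",
    "metadata": {"name": "default", "namespace": "bookinfo"},
    "spec": {
        "egress": [
            # Proxies in "bookinfo" only receive configuration for services
            # in their own namespace and in istio-system.
            {"hosts": ["./*", "istio-system/*"]}
        ]
    },
}

# kubectl accepts JSON as well as YAML, e.g. kubectl apply -f sidecar.json
print(json.dumps(sidecar, indent=2))
```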
B: So, for the most part, in what I've seen working between 20,000 and 120,000 requests per second, I don't see a lot of things that are not scaling linearly with CPU, which leads me to think that if I were willing to spend more on my cluster, I would expect about 10 times more spend to get me about 10 times more throughput. I don't see any artificial limit to that in the numbers.
B: From what I've played with, I do hope to push past the 200,000 requests per second number in the next couple of weeks without changing too much on the cluster. So we do see some improvements coming, but there is nothing that seems to be scaling exponentially as we grow. Now, with service config there are particular ways you can configure services that do lead to exponential growth and cause problems, but I don't think there's any artificial limit there.
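A back-of-the-envelope sketch of that linear-scaling assumption, using the roughly 0.6 vCPU per thousand RPS figure quoted earlier; the component count and the resulting numbers are illustrative, not a sizing guarantee.

```python
# Back-of-the-envelope projection under the "scales linearly with CPU"
# assumption discussed above. All figures are illustrative only.
VCPU_PER_KRPS = 0.6   # rough per-component cost quoted earlier
COMPONENTS = 4        # e.g. policy, telemetry, ingress, proxies (assumed)

def vcpus_needed(rps: float) -> float:
    """Estimate total vCPUs for the mesh components at a given load."""
    return COMPONENTS * VCPU_PER_KRPS * (rps / 1000)

for rps in (20_000, 120_000, 200_000):
    print(f"{rps:>7} RPS -> ~{vcpus_needed(rps):.0f} vCPUs")
```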
B: In that case, each service will need to know about the entire service graph of the other services, which is an exponentially scaling algorithm. We are looking at algorithms to limit the impact of that and to mitigate it, but it is sort of a corner case. At this point it is not something that we hear a lot of people having trouble with. Thanks.
G: So, all right, I'd call that the hundred-K number, yep. And just before you joined, Matt, who was bringing this up, had mentioned that their meshes are very small, like 10 services with a depth of 1 to 2, but they're pumping 150,000 to 200,000 RPS through them. Okay.
H: Right, yes, so 1.1 would be a significant improvement. You would still need to make sure that you size istio-telemetry correctly and all that, but good; we have all that documented now on istio.io. But yes, as long as you make sure that things are sized correctly, it should be fine.
A: And, just before you joined, to facilitate some conversation and to help educate the rest of the community, one of the questions we had asked was just, you know, what was your favorite performance and scale enhancement in 1.1? And I know you've got a litany of them off the top of your head.
H: Yes, the thing in 1.1 that really helped quite a bit was the namespace isolation, or the sidecar resource. It did help Pilot's own scalability but, more importantly, what it did was drive down the memory consumption on the proxies, and if you take things in total, that's very much for the good, because sidecars are everywhere: you have thousands of sidecars, and if all of them shave 50 megabytes, that's a lot of memory in total.
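To put a rough number on that aggregate saving; the fleet size below is hypothetical, and 50 MB is the per-proxy figure quoted above.

```python
# Rough aggregate of the per-sidecar memory saving mentioned above.
# The fleet size is hypothetical; 50 MB is the per-proxy figure quoted.
sidecars = 2_000              # assumed number of proxies in the mesh
saving_mb_per_sidecar = 50

total_gb = sidecars * saving_mb_per_sidecar / 1024
print(f"~{total_gb:.0f} GB of memory freed across {sidecars} sidecars")
```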
H: Yes, so there was a good talk about the sidecar resource, which enables namespace isolation; the sidecar resource and namespace isolation are separate features, but essentially what it does is scope down the amount of config sent, like you said, to individual proxies. By default the scope is a namespace, and you can actually scope it down further. So, for example, if your namespace happens to have four or five thousand services, and we have actually heard this from some customers today, then...
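A sketch of what scoping it down further than the namespace could look like: instead of the whole namespace, the egress hosts list only the specific services a workload actually calls. The workload labels and service names below are hypothetical.

```python
import json

# Sketch of narrowing a Sidecar's egress beyond the namespace default.
# Namespace, labels, and service names are hypothetical.
sidecar = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "Sidecar",
    "metadata": {"name": "frontend", "namespace": "shop"},
    "spec": {
        # Only attach this config to the frontend workloads.
        "workloadSelector": {"labels": {"app": "frontend"}},
        "egress": [
            {
                # Only the two services this workload actually talks to.
                "hosts": [
                    "shop/catalog.shop.svc.cluster.local",
                    "shop/cart.shop.svc.cluster.local",
                ]
            }
        ],
    },
}
print(json.dumps(sidecar, indent=2))
```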
H: If you saw that, there are a couple of places. On the revamped istio.io performance page there is a link to, yes, that's correct. So if you go there, the default configuration and load testing actually does give you that; everything is contained in there. So if you look at the stability tests, yeah, all of these tests are fully reproducible, and we're adding more tests here and there, actually.
H: So this is a good time for me to mention that we are actually actively asking the community to add more scenarios here. I would be very much interested in getting more PRs here that stress different parts of the system, and then everything should be measurable; we're kind of building towards that, where every scenario should be distilled down to a few metrics, to summary metrics that you can track over time.
H: In this particular subdirectory there is a load test. If you go back up one, I don't know exactly, but if you go back up one level, there is also a benchmark. The benchmark test is the one that deals with latency, right; that's checking what kind of latency we get, and it is testing baseline, single sidecar, and double sidecar. So again, these are more scenarios, and I would definitely love it if we got more PRs for, or walked through, more scenarios like these, yeah.
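A small sketch of how those benchmark variants are typically compared: subtracting the baseline latency from the sidecar cases gives the per-hop proxy overhead. The latency figures below are invented for illustration.

```python
# Illustrative comparison of the benchmark variants mentioned above.
# Latency figures (milliseconds at p90) are invented for demonstration.
p90_ms = {
    "baseline (no sidecar)": 2.0,
    "single sidecar": 4.5,
    "double sidecar (client + server)": 7.0,
}

baseline = p90_ms["baseline (no sidecar)"]
for variant, latency in p90_ms.items():
    print(f"{variant}: {latency:.1f} ms (+{latency - baseline:.1f} ms over baseline)")
```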
I: Okay, what I want to do with this, I think, is compare it to some testing we did pre-1.1, you know, RC-level sort of testing. We found a lot of improvements in memory consumption, but for us, in our environment, it seemed to be mostly because of the reorganization of the statistics that Envoy was collecting and all that sort of work.
A: Bad jokes and all aside, this is good. It's good to see leaders like Mandar and others in the perf and scale group, your focus there, and just how much interest there is in those tests. Sometimes those are the boring jobs, but performance testing involves a lot of variables you've got to get right.
A: Some results, in combination with the efforts that you're leading; we'll see how much that tool helps the masses, potentially one that has a decent user experience but maybe doesn't go quite as deep as some of the tools that you guys are running. Any other comments on 1.1, maybe just in general, anyone? Certainly call out anything you want; some of you have helped with release validation, and certainly some of you have downloaded it and are running it. Comments in general, features and functionality that you like?
H: So I just want to make another comment, if it hasn't been made yet: there were several changes, and they're all documented. However, I just want to mention that Istio policy checks are now disabled by default. So if you were using Mixer policy checks in 1.0.6 and you are upgrading to 1.1, just make sure that you re-enable, explicitly enable, policy checks in 1.1. Again, this is in the release notes and everywhere else, but I just want to reiterate it.
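A minimal sketch of what explicitly re-enabling checks could look like with the 1.1 Helm chart; the value name global.disablePolicyChecks and the chart path are my assumptions here, so verify them against the 1.1 release notes and installation options.

```python
# Sketch of a Helm values overlay to turn Mixer policy checks back on in 1.1.
# The value name (global.disablePolicyChecks) and the chart path below are
# assumptions to verify against the Istio 1.1 installation options.
overlay = """\
global:
  disablePolicyChecks: false   # 1.1 defaults this to true; false re-enables checks
"""

with open("enable-policy-checks.yaml", "w") as f:
    f.write(overlay)

# Then, for example:
#   helm upgrade istio install/kubernetes/helm/istio \
#     --namespace istio-system -f enable-policy-checks.yaml
print(overlay)
```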
G: The key is that, even if you didn't have any policy checks enabled, the Envoys don't necessarily know that fact when they go to do the check, so they were doing the check and always getting back a "yeah, let the request through" answer. So even if you had no policy at all, they were doing the policy check unnecessarily, and that was costing a lot of perf, a lot of latency. This way, in the common case, which is no policy enabled, you don't need to check for policy.
C: This is Riggs. I just wanted to take the opportunity to thank everyone in the community that helped us update the docs for the release: anyone in the community that participated in the docs day and, in general, helped us update and make sure that our tasks, our examples, everything was well functioning and technically accurate for the release. So thank you, everyone.
J: I'll chime in here on what we were talking about: events. If anybody has any Istio events they want added to the Istio calendar, you can send them my way and I can add them. We have KubeCon and Service Mesh Day, of course, up there, and I've been trying to be pretty diligent about adding any meetups that come up, but if there's anything else you see that's not on the calendar, shoot me a note and I'll get it on there.
O: Nothing yet right now, but we are planning to organize Cloud Native Days Canada in June, and we want to include an Istio track, so for people who want to present about Istio I'll try to, you know, line up some spots specifically for them. So if anybody wants to come, it's Cloud Native Days Canada.
M: That's what I'm seeing, at least around North America: I haven't seen too many Istio-specific meetups; they're mostly kind of folding into talks at either cloud-native meetups or Kubernetes meetups. And in the Boston area we did start an Envoy proxy meetup, but I didn't see too many of those either.
A: Alright, well, let me ask this, since we did talk about performance as much as we did: that tool, I think, as I mentioned to people, you presented it at KubeCon. For those who have taken a peek at it, is there a desire to see a demo of that, maybe on this call at some point in the future? Maybe, yeah. I don't know; in the past we've replayed KubeCon Istio presentations here. I think that would be useful.
M: Actually, just real quick, one thing that I was kind of excited to see was that Istio is now supporting the Envoy secret discovery service, SDS, for delivering identity and delivering secrets to the pods without having to mount them into the pods themselves, and without having to issue private keys and so on across the network. If you go check out the docs on issuing identity with SDS, it's pretty good.
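A minimal sketch of opting into SDS-delivered identity at install time with the 1.1 Helm chart; the value name global.sds.enabled is my assumption of the relevant flag, and the node-agent and UDS-path settings the docs describe are omitted, so treat the istio.io SDS task as the source of truth.

```python
# Sketch of a Helm values overlay opting into SDS-delivered workload identity.
# The value name (global.sds.enabled) is an assumption; the full setup
# (node agent, UDS path, etc.) is described in the istio.io SDS task.
overlay = """\
global:
  sds:
    enabled: true   # sidecars fetch certs and keys over SDS instead of mounted files
"""

with open("enable-sds.yaml", "w") as f:
    f.write(overlay)

# For example:
#   helm template install/kubernetes/helm/istio \
#     --namespace istio-system -f enable-sds.yaml | kubectl apply -f -
print(overlay)
```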
M: Yeah, absolutely, I thought they were good. They kind of clearly explained what was going on, what the challenges were, and why we moved to that implementation, so I thought it was good. Okay, awesome. And actually, we used those docs to kind of help frame it at Solo.