From YouTube: Successes? Lessons? Istio @ Quizlet
Description
I didn't know Quizlet uses Istio?! James was an SRE at Quizlet with rich experience operating Istio in a production environment. In this livestream, James joins Lin to discuss his job at Quizlet and to share why Istio, how the Istio adoption went, and the key lessons he learned along the way while adopting Istio at Quizlet.
#istio #adoption #production
B: Yeah, I started as a senior SRE about four years ago and moved up to staff, and among other projects — like moving from VMs to Kubernetes and doing a lot of data overhaul — one of the larger projects I did there was introducing Istio to Quizlet. Why don't I go over it — would you like me to speak about that? — Yeah, that's great.
A: I am so excited to talk to you, James. In this livestream I'm so excited to talk to you about your job at Quizlet, and to have you share with us why Istio, how the Istio adoption went, and the key lessons you learned along the way while adopting Istio at Quizlet. I know you just gave a quick intro.
B: So Quizlet started out over a decade ago from a fellow named Andrew Sutherland. It was his kind of project, and it started out as just a way to create and share flashcards. So if you're studying for a nursing exam, or anatomy, or a lot of things that require memorization, you use flashcards. It was a way of sharing your stack of flashcards with someone else, which is great because, if you're all in the same textbook, maybe you want to help out your study buddy at school.

You could share it, and it's evolved way past that — there's coordination with books and advanced kinds of learning, and more AI behind how questions are formed and built and how those stacks are made — but it ultimately is a user-content-driven site that houses flashcards and a lot of learning tools. At this point the saturation is quite high; that was already true when I was there, about one or two years in.

It's a phenomenal market. I was there for about four years, and during that time there was about 10x growth, so it's become a pretty big powerhouse. I've used it on my own to study for exams; the amount of use it gets is phenomenal. Everyone's like, "Oh, it's just online quizzes or whatever," but it's really useful — people, nurses. I remember when I was having my son, I wore one of the shirts and I had three different nurses come up and thank me for working on Quizlet.

Very large user base, very large amount of user-created content. Like I said, it's been around for ten years and it's very, very popular amongst people that are in school.
B: So, as I kind of jumped the gun and mentioned, I started out as a senior SRE and became a staff SRE. The philosophy there was kind of like: hey, whatever you want to aspire to do here, go do it. I really wanted to run small projects with small teams — well, actually, really large projects from small teams is what they became.

When I started it was really about shoring up some of the things that needed to be fixed — metrics and logging and our data stores — but then one of the bigger projects was moving from VMs (we were running everything on VMs) to Kubernetes. I worked with a great team under the architecture direction of a guy named Amalda Schmuck from Twitter, and it was a longer project, very difficult, but that got us to Kubernetes. And it's funny — everyone talks about, well, when do you go to a service mesh? It's like: immediately after you figure out Kubernetes, you go to a service mesh. I can't imagine it not being the next step if you want to do a microservices architecture. So I had really early conversations about Istio, about Linkerd, about what a service mesh would mean to Quizlet, even before we had finished moving to Kubernetes.

Throughout the Kubernetes project I did vertical outreach — talking up the chain, to the SVP, to Miller — and horizontal outreach, like having these engineering-wide meetings to try and get everyone on the same page — and then immediately did the same thing with Istio. So yeah, that was my role there, as an SRE, so I had all the other fun stuff too: on call.
A: That's awesome, because the SRE is one of the key personas we're targeting for service mesh, and across the service mesh industry, right — it's not just for Istio. So that's really, really cool. Now, I believe yesterday was your first day, and so I'd like to ask you — it sounds like you had an interesting career at Quizlet — what motivated you to join Solo?
B: So this is almost a pandering answer, but part of it is actually you — unless I'm wrong, you sit in one of the seats of the TOC for Istio, right?

Oh yeah — two out of the five seats; the other two are Google and IBM, right? By the end of this talk you'll understand that I really loved working with Istio, and what better place to go than a place that's literally driving it, right? And there are some things I've very much learned about Istio that made me think a lot: someone is going to figure out a way to market this as a service in a better format than it's currently done.

Kind of the lay of the land with Istio right now is: if you wanted to pay someone to do it for you — for Quizlet especially, because they're on GCP — you'd buy Anthos Service Mesh, right, and you'd be kind of locked into what they're building. And other than that there's kind of Gloo, and kind of no one else really that was fitting into the space, for us especially. So yeah, that was a big part of it: hey, this company does something I've really enjoyed working on.

And then all the conversations I've had with everyone here have been really great. My philosophy for finding a job is right place, right time, right people, and at every single step of the interview process, whoever I've spoken to — and even after I've started, as I mentioned previously — it's gone very, very well. I really like everyone so far at Solo, and I don't think that'll change at all. So yeah, I've been very impressed.
B: Okay, so let me give some context in time, right. Like I said, I had discussions with our chief architect Amal about, hey, what do we want to do — because we're moving to a microservices architecture; at the time it's a monolith. We had very much run into Conway's law. We really needed to start breaking into microservices so we could start accelerating our engineering group, right?

That was a big push behind it, and we had these early talks about service mesh, but we were pretty far away — did we even have Kubernetes at this point? So we talked about Istio, and this would have been in the 1.2/1.3 era, when they had Citadel and a lot of separate components. It was like: that might be a little too much for us at this point, but we'll revisit it once we're there.
B: Right time, right? For us, we knew it was something we needed to think about, because we were moving to a microservices architecture and we needed to have insight and control over that — which is a big part of moving to that kind of architecture — by the time we actually moved to Kubernetes.
B: We had a core cluster that was running the monolith in a container, and we'd broken that apart into other smaller services, sidecar services, and then we had another cluster that was actually starting to run real, dyed-in-the-wool, actual microservices. And immediately after getting everything linked up — Kubernetes was successful, we had good scaling, everything else was going pretty well for us — we realized: hey, we don't have good insight or control over the actual services underneath.
B: What was happening is some services weren't responding quickly enough, or they were responding in a way that we didn't understand, because we had no observability into it — and we had no control over it either. So if we needed to throttle something, or we had some other design need for a break-off point, we had nothing we could do about it. We just assumed the services were going to work perfectly, and that is a terrible assumption.
B: Any SRE worth their salt will say: no, you need to assume they're going to work poorly, and work backwards from there, right? And so at the time, after working on VM-to-Kubernetes, we started another group that was doing traffic management and security, under Boris Federer — that was myself, Tarek Skeens and Graham Trummel — and Tarek comes from a really heavy, great networking background. So the two of us kind of conspired on what it would mean to use a service mesh.
B: And we had a lot of these long conversations with Amal about, hey, what do we really need here, and we identified some key things. I think for anybody who's looking at a service mesh — and this is kind of true for all projects — identify two or three things that you really want from it, and create some measurability around that. So for us it was: observability, the ability to control break points at any point, and then repeatability around creating new services.
B: Part of the problem we had was that creating a new service was pretty onerous on the service owners, and then they couldn't see what was going on, right? That's really difficult. I have an engineering background, I've built stuff like this — I can imagine making a service and going, well, in my tests it worked okay, but I don't even have a good test platform. So, alright, we need to make that a much, much better experience for our service owners.
B: And as SREs, we need to be able to see what our traffic is like, and we need to be able to control it as we need, right? A service mesh does those three things very, very well. It does way more than that — there's encryption across the board, you can hook into OIDC, there are all these other things that are really important about a service mesh — but they weren't key for us to begin with.
B: So we settled on those principles and we started looking: okay, what's the lay of the land? What companies, what projects can provide this? And so we looked at Istio, we looked at Linkerd, we looked at Consul mesh — those were the big three we looked at. There were others we were sort of looking at too, but we weren't super sure on them.
B: Ultimately we picked Istio, not just because — I feel, and I think Tarek would agree — it's the most mature product, but it also had heavy tie-ins at Google. If you're in GCP, and one day you maybe do want to move to Anthos Service Mesh or one of their other products — well, they're running Istio on at least one of their service meshes (they have a few) — you want to keep in lockstep with them, right?
B: Quizlet is spread across — it's primarily Google; there's an AWS presence for a few things, but—
B: So security was important to us, but it wasn't a key driving factor. There are things that were left on the table by the end of the project that I wish we had done better, but one of the things we did do — that we didn't necessarily need to do right out of the gate — was we actually set up a whitelist to make sure our application only accessed things it was supposed to. We did set up mTLS for things like our Redis cache.
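In Istio terms, that kind of egress allowlist plus targeted mTLS can be sketched with a `Sidecar` and a `PeerAuthentication` resource. The names, namespaces, and labels below are illustrative, not Quizlet's actual config:

```yaml
# Restrict what a workload's sidecar is allowed to reach: only services in its
# own namespace plus the Redis service it legitimately depends on.
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: web-app
  namespace: web
spec:
  workloadSelector:
    labels:
      app: web-app
  egress:
  - hosts:
    - "web/*"                                  # same-namespace services
    - "data/redis.data.svc.cluster.local"      # the one cross-namespace dependency
---
# Require mutual TLS for all traffic reaching the Redis workload.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: redis-mtls
  namespace: data
spec:
  selector:
    matchLabels:
      app: redis
  mtls:
    mode: STRICT
```

The `Sidecar` resource limits outbound reach (and shrinks each proxy's config), while `STRICT` peer authentication rejects any plaintext connection to Redis.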
B: Things that weren't necessarily top-of-mind priority, but we were like: hey, these are good, easy wins that we're going to get by doing this. So the security thing wasn't the biggest thing in the world. The biggest win, easily, was the observability — being able to see, hey, what is the latency like from these services? When we get these spikes — the biggest thing is understanding where the origin of your issue is.
B: One of the things that Envoy does — and Istio does as an extension — that is a great, great thing for any SRE, is it tells you which end of the pipe is having a problem. If your service is going slow and it's an upstream issue, and you go to your service owners and they're scratching their heads — and this happened to us — they're sitting there banging their heads trying to figure out why some of these connections are failing. It turned out it was the main application killing its connections during scale-up and scale-down, and without Istio, without that insight, we never would have seen it.
B: There was still an issue, but at least we knew where the problem was. A big chunk of being an SRE is understanding systems and understanding where things go wrong. We jokingly call it the four fundamental resources of computing: CPU, RAM, disk and network. Well, network's a big part, and if that's failing, you need to know where it's failing, right?
B: That was a huge, huge thing for us.
A: That's great to hear. So I'd like to turn our attention to the first question we got, from Candy — apologies if I mispronounced your name: "Somehow, I don't see very good learning material for rate limiting in Istio. Would it be possible to share an enterprise architecture suggestion for Istio global rate limiting?" So — I wasn't sure if you were using rate limiting in your program?
B: We did it for testing; we didn't actually need it for anything we were doing. We ended up not having to limit things so much as debug them — that was really the major source of solutions for a lot of this. I do recall there was some rate limiting testing we did through the Istio docs, but I cannot remember where, sorry. That doesn't practically answer your question well, but, speaking from my experience, it wasn't something that was immediately needed by us.
A: That's useful. So I guess from an Istio perspective I'll try to chime in to answer your question too. From an Istio perspective, I believe one of the challenges with the Istio project is the way rate limiting works. If you go to istio.io right now — which I am — and you search for rate limiting, we actually have documentation on how to do rate limiting, but the challenge is, I believe, that it doesn't have an official API.

I'm going to put a link in here to the guidance on doing this with Istio, so feel free to go to that page. Essentially, it shows how you can configure rate limiting with an EnvoyFilter. The challenge is, EnvoyFilter is not a mature API in Istio: it can change from one release to another, and it has had long-standing upgrade issues in the past — which, James, you probably ran into if you ever used EnvoyFilter in Istio.
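For reference, the istio.io approach being described patches Envoy's local rate limiter into the sidecar via an `EnvoyFilter`, roughly like this. The workload label and token-bucket numbers are illustrative, and — exactly as the caveat above says — the Envoy type URLs and field names can shift between releases, so check the docs for your version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-ratelimit-example
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: my-service          # illustrative label
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          value:
            stat_prefix: http_local_rate_limiter
            token_bucket:
              max_tokens: 100        # burst size
              tokens_per_fill: 100   # refill amount…
              fill_interval: 60s     # …per interval → ~100 req/min
```

Because this reaches directly into Envoy internals rather than a stable Istio API, it is the kind of config most likely to need attention on every Istio upgrade.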
A: So that's, unfortunately, the recommended solution in Istio. With rate limiting in Gloo Mesh or the Gloo Edge gateway, we do support rate limiting as a first-class citizen API, if you're interested in exploring that — essentially you can do rate limiting through an official API, and you can just configure your rate limit resource.
A: So that might be something interesting to you. Please let us know if that answers your questions — thank you so much for the question, we appreciate it. Alright, with that, I'd like to ask you, James: what went well? You talked about why service mesh, which starts with observability, and you talked about why Istio — because a lot of your workload was running on Google Cloud, and Istio was also the most mature service mesh out there, which I completely agree with.
B: Our cutover — Amal has a saying that he repeated, and I often repeat, which is: it's very easy to be 90% done and then have 90% to go. For us, we had completed all the testing we needed. We did stress testing with Fortio — we actually built out a Fortio service for other people's stress tests as well. We had to answer a lot of questions around latency.
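A Fortio run of the kind used to answer those latency questions is a one-liner; the target URL, QPS, and connection count here are illustrative:

```shell
# Drive 100 requests/sec over 8 connections for 60s and report latency
# percentiles (p50/p75/p90/p99) for the target service.
fortio load -qps 100 -c 8 -t 60s http://checkout.my-app.svc.cluster.local:8080/
```

Running the same load with and without the sidecar injected is a simple way to measure the per-hop Envoy overhead mentioned below.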
B: The general rule of thumb was basically two milliseconds of latency per Envoy hop, and that's kind of the upper bound of it. And then we had a lot of very real questions about reliability.
B: People saying, hey, I need to have istiod running in order to do releases — which is an interesting caveat that I think most people don't understand: you can kick over istiod while it's running and your traffic will still route. As long as Envoy's going, you're fine; if you deploy while it's down and knock out your sidecars, you have a problem. But it's not as though your traffic is running through istiod, so you haven't introduced yet another single point of failure. And the other parts that can feel like one — the ingress gateway, things like that — those are equivalent to the other single points of failure you already live with, like any other gateway you're using.
B: So we had to answer a lot of those questions, and when we were confident with it, we were like: alright, what do we really want to do about cutting over? So we started doing stress testing of all the services through Istio to make sure everything worked correctly — and now that we had the observability, we could verify that, which was great — and we started cutting over from one cluster to another.
B: We got it a little easy, because that's kind of a cheat in a way: if you're cutting from one cluster over to another, you're basically just routing traffic and saying, hey, instead of going to cluster A, go to cluster B.
B: We had introduced correct DNS names for the new services, and we could see everything routing, and we pre-warmed them. So we actually cut over all our services during peak traffic — basically three or four in the afternoon — with zero hiccups, not even noticeable. That's a rare thing. I talk about it being the equivalent of changing your tires on a moving car: generally you would not do that during peak time, but we were so confident in it, and we had so many ways of observing what was going on, and of controlling it, that we were fine with it. And then, ultimately, I think within a week we actually shut down the old cluster, and everything was running smoothly through Istio.
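A cutover like that is, at its core, weighted routing. A sketch of the Istio shape of it — hostnames, namespaces, and weights below are illustrative, and the weights would be shifted toward the new cluster over successive applies:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout-cutover
spec:
  hosts:
  - checkout.example.internal          # the stable name clients call
  http:
  - route:
    - destination:
        host: checkout.old-cluster.svc.cluster.local
      weight: 90                       # old cluster still takes most traffic
    - destination:
        host: checkout.new-cluster.svc.cluster.local
      weight: 10                       # start draining to the new cluster
```

Because the mesh reports per-destination latency and error rates, each weight shift can be verified before the next one — which is what made a peak-hours cutover defensible.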
B: It worked out really phenomenally well. Several service owners came and thanked me personally, and I think a lot of our service owners were quite happy, because it really made their lives so much easier. And then, from an SRE and traffic management perspective, it really gave us a lot of insight. So if you have an incident — for Quizlet, a lot of what they observed was the responses. The SLA was like, hey—
B: We want 99.95% of our responses to be good, right — which is about 15 minutes of downtime a month, a very hard target to get to. And so when you started seeing 500s at the application level, you're like: alright, well, why is that? For a long time it was very hard to determine whether that was a downstream service or not, but now, all of a sudden, we knew exactly when services were having problems, and we knew exactly what kind of traffic caused it.
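As a side note on the arithmetic: the "about 15 minutes" figure is in the right ballpark, though the textbook error-budget calculation for a 30-day month comes out slightly higher, at roughly 21.6 minutes:

```shell
# Allowed downtime per 30-day month for a 99.95% availability SLO.
slo=99.95
minutes=$((30 * 24 * 60))   # 43200 minutes in a 30-day month
awk -v slo="$slo" -v m="$minutes" \
    'BEGIN { printf "error budget: %.1f minutes/month\n", m * (100 - slo) / 100 }'
```

Either way, the budget is small enough that you cannot afford to spend hours just locating which hop produced the 500s.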
B: We knew exactly if it was upstream or downstream. And — I mentioned this to you — we really wanted to get rate limiting in there, and we had the mechanism to do so, but we never actually needed it, because the thing we were going to rate-limit, we just fixed. It was like: we don't need to.
B: Fixing the underlying service is the best answer anyway, and being able to put alerting and a bunch of other observability around it, and actually empower our service owners to take care of it, is, I think, ultimately the absolute best answer you can have to that problem. So yeah, we got a lot out of it; those were all really great successes. Service owners were happy, from a traffic management perspective we were quite happy, and there was no real impact to customers — nothing observable.
B: We didn't have anything pop up at all, and it was done. It's a weird thing.
A: Yeah, that's great! No, no — I just want to make sure we capture this, because what you said was really, really interesting. You said that within a week you were able to move to Istio and transition your services to run through Istio. That's really impressive, and it's kind of counter to what we've heard from a lot of people on social media talking about how hard and how lengthy it is to adopt Istio. But for you it actually went pretty well, from what I heard — within a—
B: Short time. Our POC phase, and us understanding it, was I think a month or two, but the actual cutover — the actual "hey, we're really doing this" — yeah, that was a very short amount of time. I will say, for anybody who thinks that Istio is very complex: you're not wrong. You can do a lot.
B: It would really behoove you — the most powerful thing that we did was we sat down and went through all of the examples in the Istio documentation, on all of the components, virtual services and how all of that worked, one by one by one. I think that took maybe a week or two at the most, and we just did them in a POC cluster, so we really understood them. They're phenomenal.
B: Here I am again, so yeah — a big part of it was us sitting down and going through those examples and having the confidence that we knew what we were getting into. And then some of it was general SRE paranoia: we kicked over istiod to see what would happen, over and over again, and we verified the two-millisecond number we brought up about adding Envoy.
A: Yeah. Now I want to say hi to Bojang — he's actually currently upgrading his Istio from version 1.12 to version 1.14. Good luck with that; make sure you use revisions. And can you talk about what versions you were using initially, and also a little bit about upgrades, for our audience?
B: Yeah, interesting.
B: So we started on 1.8, and I think we ended on 1.10, when we were doing POC work for multi-mesh. A little more context: we had these two clusters; the Istio control plane existed in one, and it did not exist in the other. We wanted to make a multi-cluster environment, and we went through the POC process and had everything kind of written out.
B: I had left the company shortly before that production work started, so I don't know where that ended up, but I can tell you what we learned and what we were doing — that was on 1.10, and we did upgrade from 1.8 to 1.10. That's a small upgrade. I think the only thing we were called out on was that the Kubernetes API was upgrading at the same time, and we had stuff that was v1beta1 in the API, and that needed to be changed.
B: It's changed now, and it's better, but when we were doing 1.8 there was no official Helm repository — there was just the one you could get from git — and we had to work around that. And we were very careful about only changing things in the values file as needed, making sure that the path to upgrading to a newer version was going to be as clear and easy as possible.
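These days the charts are published officially, so the Helm path is much shorter. A minimal install following the Istio docs looks roughly like this (repo URL per the docs; release names are the conventional ones):

```shell
# Add the official Istio Helm repo.
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

# Install the base chart (CRDs and cluster-wide resources),
# then the istiod control plane.
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
```

The "only change what you must in the values file" advice above applies directly: a small values overlay is far easier to carry through `helm upgrade` than a forked chart.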
B: Don't try to deviate from the path. Try to keep everything in a place where — whether you're turning on Jaeger, or your own CRDs, or any other kind of customization you're doing — it stays as far out of the way of a future upgrade as possible. That worked out really well for us.
B: I don't know what it'd be like to go from 1.8 to 1.14, or 1.11 to 1.14. It's just like Kubernetes, right — a lot of stuff is rapidly changing. I will say that, looking at Istio's longer-term support of older versions, I felt confident that we could slow-roll our updates. But it is toil.
B: It's something I've mentioned among the things that can potentially not go right: if you're at an organization that can't support Istio — that doesn't have the actual resources, in human terms, to support it — it's going to be very difficult for you. It's got the same kind of pitfall that Kubernetes has, to a degree: the upgrade cycles are very fast, and keeping on top of them is difficult. That's part of the SRE game, right.
B: People ask me what it's like being an SRE, and I'm like: it's like going to college for the rest of your life, because you're constantly studying something new. That's just the way it goes — part and parcel of the job. But if you don't have the resources to support Istio, you've got to think long and hard about what value you're getting from it. Quizlet was at a point where it was very clear—
B: We could get great value from it, but it does take effort: someone has to keep their eye on it, someone does need to maintain it, and you do need to upgrade at a certain pace. I would say — and this is anecdotal, not data-driven — the pace of upgrades for Istio was less than the pace we were facing with the Kubernetes we were using.
B: We followed the stable release branch, and so yeah, that's something we had to keep an eye on.
B: I think all good project plans will involve understanding what the resource requirements are long-term, and you have to make that assessment: hey, are we going to be able to support this long-term? What happens when this upgrades? How do we do that? And we had that, I think, fairly well understood.
A: So were you using revisions when you upgraded from 1.8 to 1.10? The reason I'm asking is that Bojang just made a comment — unfortunately he's not using revisions; he's doing an in-place upgrade with rollout restarts. Good luck with that; there may be some downtime.
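For contrast, a revision (canary) upgrade of the kind being recommended runs the new control plane alongside the old one and moves workloads over namespace by namespace. The version tags and namespace below are illustrative:

```shell
# Install the new control plane under a revision label, next to the old one.
istioctl install --set revision=1-14 -y

# Point one namespace at the new revision and restart its workloads so the
# fresh sidecars connect to the new istiod.
kubectl label namespace my-app istio-injection- istio.io/rev=1-14 --overwrite
kubectl rollout restart deployment -n my-app

# Once everything has moved over and looks healthy, remove the old revision.
istioctl uninstall --revision 1-12 -y
```

The appeal is that a bad upgrade affects one namespace at a time and rolling back is just flipping the label back, at the cost of briefly running two control planes.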
B: That's exactly what we did — we had a rolling upgrade, basically. We did not use revisions.
B: For us, we were using Argo CD to apply Helm templates, and so Helm was the thing applying the Istio templates and building Istio for us. Helm does not have any kind of hooks into istioctl. We had talked at length about actually building our own tooling to run istioctl commands, but we found that might be too complex, with moving parts we weren't comfortable with. So yeah, it is kind of a concern, and one of the things that did not go well: it's hard to make the determination — and this is also true for the Istio operator — of what the best way to install Istio for your company is. I think in another world, if istioctl had an Argo CD integration, or if—
A: If we added support for Helm and made it official, like stable, yeah.
B: At the time there was istioctl, which has a lot of great features — and you should understand what istioctl does, for multiple reasons, not least of which is diagnosing things, and revisions are another one, obviously. But then there is also Helm, and now there's an official repo you can pull down from, and that seems to be the direction I think Istio may be going — and you can correct me, because you would know if that's where—
A: Yeah, we're trying to promote the Helm support in Istio to beta. In fact, Daniel on my team at Solo has contributed a bunch of tests in Istio upstream. So if you go to istio.io and look at the Helm install tests and the Helm upgrade tests, those are all automated, and that gives a good foundation for the stability of the Helm support.
B: That's fantastic news, because that was an ongoing concern of ours — it seemed like Anthos Service Mesh also relied on istioctl commands, and we were like: okay, well, if we move there, we have to think about what that means and how we're going to do it. And then there was the Istio operator. From an SRE perspective, I like operators — that is a personal opinion; they're not always the greatest, it depends on what the operator is doing and how it's built.
B: But the idea of having an operator that handles your installation and controls it was really attractive, and there was an Istio operator, but it seemed like it was — yeah.
B: Hard — fortunately, because it seems like it's maybe not the route to go.
B: I mean, yeah, I love operators, but I was glad it was clear — there's nothing worse than installing something and finding out, oh well, that was actually abandoned six months ago. I'm glad it's very clear that's not the recommended path. And ultimately, running Helm — for us at Quizlet, that was how we were doing everything else anyway, so that wasn't a big deal.
B: It's not like we came from Kustomize and had to move over to Helm — it was something we were very used to. And once Helm 3 came out — I think Helm 3 was when they stopped using Tiller — I was like: alright, bye-bye Tiller, bye-bye anything else other than Helm, right? No more Kustomize.
A: Yeah, I think that's where the community is going. By the way, we just heard from Bojang — he's also a big fan of the Istio operator, and he's still using it today. When he does the in-place upgrade from 1.12 to 1.14 it's 30 seconds of downtime, so I guess that's not bad, if you plan well at the right timing.
B: That's the beauty of it: okay, so you knock over istiod — you basically lock your deploys, you knock over istiod, you upgrade it, and then you redeploy. And as long as your Envoy proxies can speak to everything — which they should be able to — that's pretty painless. I mean, there are a lot worse ways of upgrading things.
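That sequence, sketched as commands — the application namespace is illustrative, and the "lock your deploys" step is whatever freeze mechanism your CI uses:

```shell
# 1. Pause application deploys first: while istiod is being replaced,
#    existing sidecars keep routing, but *new* pods would start without
#    proxies — that is the failure mode to avoid.

# 2. Upgrade the control plane in place.
istioctl upgrade -y

# 3. Restart workloads so their sidecars are re-injected at the new
#    proxy version, then unfreeze deploys.
kubectl rollout restart deployment -n my-app
```

The brief downtime Bojang reports corresponds to step 2; the data plane itself keeps serving throughout, as described above.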
A: We haven't called the operator deprecated, because we do intend to keep it around for a long time. We don't encourage new users to use it — there's Helm, there's istioctl, so we would prefer new users use those — but for existing users like you, who are already using the Istio operator, you can continue to use it.
A: My understanding is that the Istio operator has some issues with the way revisions are supported, and we didn't have enough people in the community willing to fix those bugs, particularly related to the operator with revisions. So if you're just doing in-place upgrades, there's much less chance you'll run into issues, because that's been very well tested for a very long time.
A: If that makes sense — good questions. Alright, I think we've been chatting about what went well, and you've kind of talked about what could have been done better. Any particular lessons learned? I want to ask if there's anything in particular you want to highlight as part of your learning journey of adopting Istio at Quizlet.
B
I think it's important to understand — and I mentioned this earlier, but it really is kind of the answer to the question of, okay, I have Kubernetes, now what? To me, and part of the reason why I was stoked to come work for Solo, is that I think Istio and service meshes are going to be every bit as important to the industry and its infrastructure as Kubernetes is, as time goes on. You know, microservices and the services architecture are not going anywhere.
B
They fit within Conway's law from an organizational perspective. The larger the company you have — especially when you get to a growth stage where you really want to scale up — it's kind of what you tend to have to do. I mean, there are other things like serverless, et cetera, but even that fits in with having to be able to understand and control your mesh. I will say that there were a lot of especially high points of joy working with Istio.
B
You can see the ants move everywhere — I was very, very excited about that. It felt very good. It was one of those first times where I was like, oh — I picture a lot of things in my head about how things are hooked together, but to see it mapped out, and to actually know that you have insight and control over it — that resonated very, very strongly with me. It was a very good feeling, to say the least.
B
I would say the other really great thing about Istio, as I mentioned before, was the documentation. Coming from a GCP world, where some of the documentation is very terse and hard to get through, Istio's has been phenomenal. And there are also so many things to come with it. In the background of this slide, actually, I think that's the Cilium logo up in the upper corner there — or maybe it's not — but there's the eBPF integration into Istio, there's —
B
Ongoing changes to Istio's onboarding, ongoing changes to Envoy — there are a lot of great things to come with service mesh in general. I think it's a technology that has great legs and will keep going forward. Anybody who asks me what technology they should look into over the next few years: service mesh, and usually specifically Istio, because I do feel like it is the most mature of the offerings right now.
A
Oh yeah, we could never have enough fans of Istio. I mean, what you were describing is actually the vision we have for Istio, where we want Istio to be part of the infrastructure, so people would be running a transparent mesh where they don't even have to realize the mesh is there. And what you were describing about the ants on the diagram — that's exactly what we want you to have as a user: you don't have to do anything beyond injecting the sidecar.
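That "transparent mesh" onboarding is mostly a single namespace label. A minimal sketch — `my-app` is a placeholder namespace:

```shell
# Label the namespace and Istio's mutating webhook injects the Envoy
# sidecar into every new pod automatically -- application teams don't
# have to change their manifests at all.
kubectl label namespace my-app istio-injection=enabled

# Recreate existing pods so they come back up with the sidecar attached.
kubectl rollout restart deployment -n my-app

# Verify: pods should now show two containers (the app + istio-proxy).
kubectl get pods -n my-app
```

From there the mesh telemetry — the "ants on the diagram" traffic view — comes for free, without touching application code.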
B
I will also say: if you were scared of Istio because you looked at it three years ago, or two years ago, or however long ago, give it a revisit. That was the thing — when we first looked at it, we were like, oh, this is going to be a lot for us. And again, this goes back to understanding your organization.
B
If you only have so many people to work on this — if you only have four or five SREs and they're already completely taxed keeping things going — you're probably going to have a hard time running Istio. Maybe look at solo.io for a product, or, you know, anyone else; I'm not going to try and pitch our own product. But for sure, understand what your organizational requirements are. I will say also —
B
I do feel like — and this is somewhat anecdotal, but somewhat data-driven — the requirements for actually running Istio, from a human perspective, are less now. It's gotten a lot easier, and the support is much better, and again, it's like any technology.
B
As more and more people adopt it and the zeitgeist gets behind it, the more brains you have behind it, the easier it is to work with — the more questions you can find on Stack Overflow, or in the documentation, or on solo.io, or everywhere else. All of that gets easier and easier. So yeah, don't be scared of it by any means. It'll make your life a lot easier. Certainly, that's how I felt about it.
A
But, as you said, the community has put a lot of focus on making the docs really easy to consume, and also on making the product more mature by fixing a lot of the bugs we heard about from our users. So the product has definitely evolved as far as maturity and usability — definitely plus one. Yeah, give Istio a try if it's new to you, and, you know, Solo is here to help along with your adoption journey for Istio service mesh. All right, Punjab.
A
I couldn't quite understand your question, so I didn't quite catch what you were trying to say. "Migration will be a pain" — that part I understood. Yeah, so there's no plan to deprecate the Istio operator, if that's what you were concerned about. We discourage new users from using it, but there's no plan to deprecate it for existing users. And Rohit, thank you so much for saying hi — appreciate that, thanks for joining us. Is there anything else you want to highlight or add as we wrap up the session, James?
B
No, I mean, I'm really stoked on the space in general — I'm really excited about working on service mesh stuff. I interviewed at a few places, and in I think every interview I asked, what are you guys doing with service mesh? And surprisingly, most companies actually had an answer. Some of it was, hey, we're looking into it; some of it was, we've got one; and sometimes it was, well —
B
you tell us. But I think that from an SRE skill-set standpoint, understanding what service meshes do — any of them, but I think especially Istio — is nothing but a good idea for you.
A
Yeah, totally. Now we've got clarification: "Where do I sign up for help with the Istio operator?" So certainly you can go to the community for help.
A
You can also go to solo.io for help — we provide enterprise support for Istio, so that might be useful to you. There are also other vendors in the community who provide support, but I definitely recommend Solo. You can talk to us about any issues you have with the Istio operator; we're also using the Istio operator in our product as well. So yeah, with that, James, I am so happy we had this conversation. It's just so interesting to hear your journey of adopting Istio.
A
At Quizlet you started with observability, your rollout story went smoothly, and you were so excited to see the observability dashboard and be able to talk to the specific service owner when things go wrong. That's totally amazing, and it's definitely one of the scenarios we intended people to use Istio for. So really, really excited that it went very well at Quizlet, and I'm very much looking forward to working with you at Solo.
A
So I want to take a moment to thank you so much for your time, and also to thank everyone for joining us on this livestream — really appreciate all the questions you guys are sending.
A
So if you guys find this interesting, give us a thumbs up and also subscribe to our channel so you don't miss any of our future education on Istio, Envoy, eBPF, GraphQL and service mesh. I'll see you guys next week. Thanks, everybody, bye now.