From YouTube: Layer5 Community Meeting (Sept 10th, 2021)
Description
Layer5 Community Meeting - Sept 10th, 2021
Agenda:
- [Zain/Meghana] v0.6.0 release announcement
- [Aditya] Discuss letter template
- [Tanuj] Preview of Meshery marketing video. CLI splash asset
- [Anirban] Review of Nighthawk’s adaptive load control
- [Utkarsh] MeshSync update
- [Utkarsh] Move to Helm in mesheryctl
- [Piyush] Mesheryctl roadmap update.
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A: Hey everyone, welcome to the Layer5 community call. Today is September 10th, and we will be starting off with our Layer5 community call. I'm dropping the meeting minutes in the chat box, so please drop in your attendance.
A: Okay, so this week we have many newcomers joining us. By tradition, our newcomers introduce themselves on this call, so that the community gets to know you and you get to know the community. So, any newcomers who would like to say hi on this call?
A: Yeah, that's great that you're joining. There are a lot of areas to cover in the unit tests, so that's great, okay. So, would anyone else like to introduce themselves? Am I missing anyone, I think?
F: Yeah, by the way, who was the last person that just introduced? Was that...

A: We can start off with our announcements for today. The first announcement is that Layer5 and its projects are highlighted in the Service Mesh Ultimate Guide.
F: Well, there are a couple of things to say about this. The work that goes on in the community gets noticed by a lot of folks. That's kind of an interesting-looking article on the right-hand side, on adoption of cloud native architecture and service mesh orchestration. Interesting. Anyway, the related content, the second one, kind of... but this particular article: this is the second edition of this overview of the service mesh ecosystem. There are a lot of things intertwined with what we do here that I don't know that everyone is quite aware of or pays attention to. By the way, I've randomly got this in my background, which is a wall plate for a light switch that one of my sons smashed on his wall.
F: If you haven't seen it, it was worked on for quite a long time, and there are a couple of neat things to look at. It's really related to this particular article. If you go to layer5.io/landscape, you'll see it. This was built by a lot of people, actually, notably Nikhil Ladha, MeshMate of the year, MeshMate of this year; we'll see what's going to happen next year. But anyway... actually, yeah, I don't know.
F: I don't know if it's obvious, but the timeline keeps on going for a long time. So anyway, there's a lot of info in here. The tables in here hide, or have, a lot of information. Those tabs on the green table: that's non-functional versus functional tools. So yeah, right there, if you tab across the table... there you go, nice. Yeah, there's a lot.
F: There's a lot going on in here, a lot of info. Anyway, that article we were just looking at: I think there'll probably be an email going out about it. I think they just published it, or are about to announce it, and they've sourced a lot of info from our landscape, which is, you know, great.
F: They asked me to review it. I actually didn't get a chance; I was so busy doing other stuff. It turns out the guy who authored it lives here in Austin, Texas, which is where I live. The CNCF was asking me to review a couple of things I want to talk about.
F: If you go back to the article for a moment, you'll find that both the Meshery landscape and Service Mesh Performance, the projects that we have here, are referenced with links in this article. I haven't read the article; I just noticed that we were called out a few times, and rightfully so.
F: There are a lot of projects inside of it now, all of the service mesh projects. We also run the CNCF Service Mesh Working Group, which is in part where Service Mesh Performance, one of our other projects, is advanced.
F: So I guess the big point of this was: puff your chest, take pride, feel good about the fact that even though I was asked to go and help uplift this, I didn't need to go be our own cheerleader. People already are. There are at least three links, I think, to the work that you all have done.
F: So it's pretty cool; that's good. It's also a reminder that layer5.io was revamped from being based on Jekyll to being based on Gatsby. There were a bunch of reasons to do that. So, Shaytan, if you don't mind, I'm just going to, what, backseat-drive your mouse for a while, I guess.
F: If you go to layer5.io, there are a few things to say. Some of you here on the call have contributed to this site; there are over 200 people who have, which is great. Some contributed on the old version based on Jekyll, some have done it on Gatsby. Part of the reason we moved to Gatsby was to advance, to be able to host a lot of content about these technologies.
F: We offer a lot of learning material to the public. If you click on Learn, there are three types of learning material that are public right now. There are two free books and two books that you'll have to subscribe to or buy from O'Reilly, but two of those are free. And there's a bunch of interactive learning labs.
F: Those are free as well. They teach people how to service mesh using Meshery, and there's a new lab, or a set of labs, coming forth. All of you are welcome to work on these; they're open source. Talk to Suhani Aggarwal or Aditya Chatterjee, who's on the call right now, about those. Anyway, the point is, the reason we made this site and revamped it...
F: ...was to represent all of the work, because there are a lot of projects that go on here, and to help people, to acknowledge that it's going to take people a long time to come to service meshing. It's a lot to learn. Networking is difficult and goes a long way; it's a deep topic area. But we have put up a ton of content, and we have more. This community has written more books on service mesh than any other community in the world.
F: It is true that we represent more service mesh projects than any other place in the world. There are some really neat things going on, but we're not telling that story very well. We're not answering simple questions like: what is a service mesh?
F: The world doesn't necessarily need that answered, and it is spoken to in a number of other areas. Shaytan, if you don't mind, can you browse to the other types of learning while we're here? There's another set of learning that's coming out: it's layer5.io/learn-ng, which again is an open source set of learning paths, different courses that you can take to go through and learn how to master a service mesh. These are incomplete and yet to be publicly promoted.
F: Other people are talking about what we're doing. Awesome. We're not really talking about what we're doing all that much. There are a few blogs that we have, a few posts, but there could be a whole lot more. And it's not just about blog posts; it's also about, how do you say, long-standing articles, if you will, or just sectional categories of info: what is the difference between one service mesh versus the next, or which one is the fastest, or any number of learnings. Like, how do you...
F: How do you use the Kong ingress controller with Istio? Or can't you? Or can you? There are just so many things to discuss. So I wanted to call out that this is all community-built. The vast majority of those blog posts are written by all of you, so write more of them. There's one that's pending: an announcement of the four interns that just started this week, some through LF, or all of them through LFX, and that needs to get published.
F: So writing is a good way to reinforce your own learning, and also to get a byline out there, get your name out there as well.
C: Okay, I don't know what to speak about, okay. So, yeah, thanks. I guess it's been a great journey and I've learned a lot. Thanks to everyone who's present on this call, and everyone who's not. Everyone has taught me things that I had not learned before. So thanks a lot.
A: Okay, so thank you, and congratulations. So basically, the MeshMates are the ones whom newcomers come to and ask about how to set up your development environment, or any questions you have while starting off. Many of us face questions like: why is it not running? The MeshMates are the people who are good to go to for that. So yeah.
F: Yeah, that's great. So, by the way, there'll be a little more information, some links to the talks that will be given. All of you are encouraged to show up; some of you that are on the call will be in those talks, which will be really nice. We'll have more info on that next week for sure, with some links. Speaking of those links, the links will be pointing to layer5.io.
F: Fortunately, it's not a marquee-style thing, so it's not too obnoxious, but it's good. So, most of you all know Aditya, and if you don't, you probably will soon. He ends up spending time across a lot of the repositories, engaging with people as they come in.
F: That's part of it: he espouses the culture of the community, about helping others and learning together.
F: That's good. Congrats, Mr. Chatterjee.
A: So yeah, now we are moving on to our topics for the day. The first topic is by Zain or Meghana. So actually, we are moving towards the v0.6.0 release of Meshery. If you go to this blog: when we had the v0.5.0 release, we had a blog on the feature highlights and all the new features that got introduced. So now we are moving on to the v0.6.0 release... oops.
C: Sure. So we have started creating issues for particular blogs, and we have created an epic, which I have put in the chat here. Everyone's welcome to go to that epic and get an issue assigned to yourself if you can take it up. There are nine of them right now, and three of the features have individual issues as of now; we'll be creating them for the rest as well.
C: So if you're wanting to write, you can go to this epic, go to a specific issue, and comment below it. And it's not necessary that you specifically take one of the... can you open Applications or Filters?
C: So you can see that each section has subsections; there are three of them here. You may choose to write on the whole of it, or if you wish, you can select one of the subsections and write on that. So yeah, that was about that. We'll be updating the remaining individual issues soon, so keep an eye out if anyone wants to be contributing to the blogs.
A: Yeah, thanks, Zain. So actually, this is a very good place to start contributing; contributing has different meanings. You can also help us by writing these blogs. We are going to release this very soon, so yes, you can start writing. You can take any issues you want from this epic.
C: Actually, if you go to Filters, I think we have a newcomer's comment over there. I do not have access to assign, so maybe, if you want to comment on something... was it Filters or Patterns? I guess... yes, yeah, Satyakshi had wanted to write on it.
C: Yeah, she wanted to write on that, but she had also said that Patterns is going to take a bit of knowledge, and maybe if someone who's already working on it were to write it, that would be better. So I was confused about having it assigned or not.
F: Good, okay, good. It's not going to... I will get to it by the end of the day for sure. Hey, let's collaborate, let's do this. I'll crack open a Google Doc, just because it's probably easiest to collaborate there. And what we can do is: there's an existing service mesh patterns page in the Meshery docs.
F: We can source kind of a lot of that info; that's a good one to digest. There's also a repository, or I'm sorry, there's another GitHub org, called service-mesh-patterns in this case, and there's a repo there that also kind of describes patterns. So yeah, let's do it. It's a big old topic and I'd love to collaborate on that one. There's a couple of... yeah, no, good.
A: Yeah, thanks, Lee. So now we are moving on to the next topic, by Aditya. He wants to discuss the Discourse newsletter template. Discourse is our discussion forum, where you post your doubts and we try to solve them, whether by ourselves or by any of the community members, and you get an answer. So Aditya would like to discuss the email template that he made to send out some announcements.
C: So here we are. There's a little bit of a problem with this, because... if you preview the summary from here, does it look good? Yeah, I feel like it's aligned over here. So I totally changed the email, and I have been able to send myself a couple of test emails, but it becomes like this when we send the email. I have tried to do a lot of things, for that matter.
F: Cool. And so, if folks are interested in having a look, or assisting, you know, collaborating with you on this, do you want to talk about how they can do that?
C: Yeah, you're more than welcome to. So if you want to contribute to Discourse, the first thing you need to do is go to staging-discuss.layer5.io and create an account over here. Then you have to text a MeshMate, who'll give you access to it. After that, you'd be able to see the email template right over here on the site itself. When you go to Admin, you will come to your dashboard; there's Emails over here, and then you go to Preview Summary.
C: The email template is right over here, and if you want to add to it and style it, the HTML and CSS are right over here itself. So if you want to make any changes to it, you're more than welcome to. And if you can figure out how to lay it out so that we get good-looking emails: you can send yourself a couple of test emails from here and see how it's turning out right now. So basically, the content of the email is dynamic; it is generated from Discourse itself.
F: Cool, okay. Hey, real quick, just to try to help make that real, and hopefully I don't say some of the same things that you just said: there are two sites that... okay, there's actually a bunch of stuff here. So on layer5.io, the website that we were talking about earlier, there are one or two callouts that tell people to go engage in the discussion forum.

F: So there's one right here, great, and there's one that's not even a callout; it's just in the social icons in the footer of the site. There's a link; that's about it. Actually, someone needs to propagate that callout onto a few other pages on the layer5.io site. Otherwise, it's not the most prominent link to the discussion forum.
F: I think if you go to the menu, or the mega menu, under Community there's a callout to the discussion forum, but there are not a lot of those. We need more of those. Once people do get onto the discussion forum: there are two servers running an instance of Discourse. One of them is the production site; the other one is staging-discuss.layer5.io.
F: And so shortly there will be... there are actually a couple of open items: requests for leaderboards.
F: This forum allows us to identify and assign solutions. If someone asks a question and someone else answers it, you can mark that as a solution. There will be leaderboards for those who are providing the most solutions, over a month's timeframe or what have you. There'll be a couple of those, and those actually need to be written; there are fresh repositories that have been made in GitHub to hold those leaderboards, like the logic for them. So there's a lot going on over here outside of just asking questions and engaging.
C: Does anyone have any opinions on the template? Does it look good?
F: It looks great; it looks like the same one that we said, yeah. And even if they do, there's no, like, "this is the one we're going to use."
A: So, thank you, Lee, and thank you, Aditya, for the update. Next we are moving to the next topic, which is by Tanuj. Tanuj has made a Meshery marketing video. Tanuj, would you like to talk on this?
D: Okay, so this is just an initial draft of the video. Basically, currently all the content is just filler content and the screenshots are temporary.
F: So in the meeting minutes, Shaytan, if you go up, there's a link at the top: there's a link to YouTube, and yeah, Tanuj has made the... oh, look at that, the newcomers meeting from last week is up. That's great. Does anybody know who that young lady is?
F: It's not... that's okay. Wow, see, she's been here doing all kinds of things, and that's what happens when you don't turn on the webcam: you just don't know. Okay, anyway, back to this. If you play any of these, you'll get a small jingle, an intro that Tanuj made. How long ago did you make this intro, Tanuj?
F: And it's still not old; I still don't get tired of watching it. So anyway, the point is, as Shaytan was saying before, there are a lot of ways to contribute. Obviously, there's a ton of PRs that get merged around here, a ton of code.
F: This is the community meeting, so we often try to talk about some of the softer side of things, and there's a lot that goes into what goes on here. There's an individual, a MeshMate, called Aditya Krishna, "Adi" for short, because we have 19 Adityas. As a matter of fact, he just got an internship at Red Hat because of the work that he was doing here.
F: That's probably why he's not on the call at the moment. But he curates, he encodes every single video, and he puts in the intro and the outro, and that's part of his contribution to the community, which is really helpful. It's really fantastic.
F: So a lot of you are doing a lot of different things. It's certainly the desire of the community managers here that your work and your efforts don't go unrecognized, which is in part why I call out MeshMate Adi for his work.
A: Yeah, so thank you, Tanuj and Lee; that was wonderful. Next, we are moving on to a topic by Anirban: a review of Nighthawk's adaptive load control. Nighthawk is a new project, mostly a new project, under Meshery; you'll find it under the Layer5 org in this repository. Nighthawk is a load generator. So if you have run Meshery on your localhost:9081, you would have seen that there is a section called performance testing.
A: Now, if you have clicked on that and filled out the details, then I think you have already encountered three options for load generators: the first one is fortio, the second one, I guess, is wrk2, and the third one is Nighthawk. So that is the project, and that is what we will be talking about. So, Anirban, would you like to talk on this?
B: Sure, yeah. So, guys, am I audible? Yeah? Yes. So I'll just share my screen. I could not prepare much, because most of the things from the code perspective I'm going to present next week, but I'm just trying to present something based on my understanding of adaptive load control. This might not be fully related to GetNighthawk; it's more of what I understand about the ways load control is done in an adaptive way.
B: So, before the design considerations which we have taken, there are some important terms, and I'm just covering them. The first thing is RSS. RSS is nothing but receive side scaling.
B: Basically, RSS selects the queue, and the desired CPU will run the hardware interrupt for processing the request. What happens is, it actually uses the hardware interrupt handler to do the processing, and hence this is more at the hardware level. This is also used as a mechanism for adaptive load balancing, so it depends on which method you want to select. This is one of them which I found in the RFC, and another one is RPS.
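For reference, this RSS queue-to-CPU steering is typically tuned on Linux by pinning each receive queue's interrupt to a CPU. Below is a minimal sketch in Go (an editorial illustration, not from the talk); the interface, IRQ numbers, and masks are assumptions you would look up in /proc/interrupts.

```go
// rss_affinity.go: a sketch of pinning a NIC's per-queue interrupts to
// specific CPUs, the usual knob behind RSS queue-to-CPU steering on Linux.
package main

import (
	"fmt"
	"os"
)

// setIRQAffinity writes a hexadecimal CPU bitmask to the IRQ's
// smp_affinity file; e.g. mask "2" pins the IRQ to CPU 1.
func setIRQAffinity(irq int, cpuMask string) error {
	path := fmt.Sprintf("/proc/irq/%d/smp_affinity", irq)
	return os.WriteFile(path, []byte(cpuMask), 0o644)
}

func main() {
	// Hypothetical IRQs for eth0's rx-0 and rx-1 (check /proc/interrupts).
	queueIRQs := map[int]string{
		120: "1", // rx-0 -> CPU 0
		121: "2", // rx-1 -> CPU 1
	}
	for irq, mask := range queueIRQs {
		if err := setIRQAffinity(irq, mask); err != nil {
			fmt.Fprintf(os.Stderr, "irq %d: %v\n", irq, err)
		}
	}
}
```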
B: I think we are already considering RPS. This is called receive packet steering, and it is the software, or logical, implementation of RSS, and it is called in the data path. We have seen that RSS is at the hardware level, and if you want to tweak the load, or the performance of the load, you can use RSS; but if you want a software implementation, then you should select RPS.
B: What RPS does is select the desired CPU to perform protocol processing at the software level, above the interrupt handler. We were talking about the hardware interrupt handler; RPS does the protocol processing, or packet processing, at the software level, and this is very important. How RPS works is: it is accomplished by placing the packet on the desired CPU's backlog queue and waking up the CPU for the actual processing.
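On Linux, RPS is switched on per receive queue through sysfs. A minimal sketch, assuming an interface named eth0 and a mask that allows CPUs 0 through 3 to do the packet processing for that queue:

```go
// rps_enable.go: enable RPS by writing a CPU bitmask to a queue's rps_cpus.
package main

import (
	"fmt"
	"os"
)

func main() {
	dev, queue := "eth0", "rx-0" // assumptions: adjust to your system
	path := fmt.Sprintf("/sys/class/net/%s/queues/%s/rps_cpus", dev, queue)
	// Bitmask "f" = CPUs 0-3 may process packets from this queue.
	if err := os.WriteFile(path, []byte("f"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("RPS enabled for", dev, queue)
}
```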
B: We know how load works on the CPU, how load processing or packet processing happens: the CPU will be idle, and once we actually run some tasks, we make the CPU busy.
B: And if we do too many memory-intensive tasks, then it is going to overload the memory, or overload the CPU, I'm sorry. At that point we have to take care of a few things like SMP, symmetric multiprocessing, where we distribute the load: either you do it symmetrically across the different CPUs, or there will be specific CPUs to perform specific actions and we distribute across those CPUs.
B: So this is what is very important here: the packet processing is done via the CPU's backlog. The packet processing is done by the CPU itself, but we are just going to place the packet on the CPU's backlog queue. With RSS we are allowing the hardware interrupt handler to process it, but here we are putting it in the backlog queue; this is the major difference. We can use either of them. And there are actually six ways which I found.
B: But this is the second way, and the third is receive flow steering. We will talk about something called a hash, how the packets are pushed into the hash table, just the basics. We can understand that RPS steers packets based on the hash: based on the hash value, it is going to decide what to do. It also provides a good load distribution. Our major consideration here is load distribution, and that load distribution is done well by RPS.
B: But one thing it doesn't take care of is application locality. You know what application locality is, right, at the local level or at the system level?
B: RPS doesn't take care of how to handle that scenario, and that is where RFS comes into the picture. It is used to increase the data cache hit rate by steering the kernel processing of packets to the CPU where the application thread consuming the packet is running. So, wherever the CPU is on which the application thread is running...
B: ...on that CPU we will actually increase the data cache hit rate, and that is going to increase your performance. And not just the performance, but also the application locality. One important difference between RFS and RPS is that in RFS...
B: ...packets are not directly forwarded based on the value of the hash. In RPS it is done based on the value of the hash, but in RFS it is not; instead, the hash is used as an index into the flow lookup table. There is something called a flow lookup table which is used to maintain the flows, and based on the hash value it is going to decide what to do.
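In practice, RFS is enabled by sizing that global flow lookup table and then giving each receive queue its share. A minimal sketch, with the interface name and table sizes as assumptions:

```go
// rfs_enable.go: size the RFS socket-flow table, then set each RX queue's
// per-queue flow count (global entries divided by the number of queues).
package main

import (
	"fmt"
	"os"
)

func write(path, val string) {
	if err := os.WriteFile(path, []byte(val), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, path, err)
	}
}

func main() {
	// Global flow lookup table (sysctl net.core.rps_sock_flow_entries).
	write("/proc/sys/net/core/rps_sock_flow_entries", "32768")
	// Assuming 2 RX queues on eth0: 32768 / 2 = 16384 flows per queue.
	for _, q := range []string{"rx-0", "rx-1"} {
		write(fmt.Sprintf("/sys/class/net/eth0/queues/%s/rps_flow_cnt", q), "16384")
	}
}
```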
B: Actually, this is how it is different from RPS. Another design consideration we can consider is accelerated RFS. Accelerated RFS is RFS with hardware support. We talked about RSS using the hardware interrupt handler for processing the request; here, if we use a similar approach to how RFS works, we can use accelerated RFS as well, and this has to be supported at the driver level.
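Besides a supporting NIC driver, accelerated RFS requires ntuple filtering to be enabled on the device. A small sketch that shells out to ethtool for that step (assuming ethtool is installed and eth0 is your interface):

```go
// arfs_ntuple.go: enable the ntuple filtering that accelerated RFS needs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	dev := "eth0" // assumption: adjust to your interface
	cmd := exec.Command("ethtool", "-K", dev, "ntuple", "on")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enable ntuple:", err)
		os.Exit(1)
	}
}
```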
B: Since I'll talk about it: this is a method for transmitting packets. The packet is already generated and ready to send, but we just select the best queue to send it with, to make post-processing easier, like freeing the SKB.
B: We know that for packet processing we use the SKB buffer at the Linux kernel's packet-processing phase, and accelerated RFS will select the best queue to send with. So basically, the packet is already generated, but it selects which is the best queue to send that packet with, so that it is properly distributed. And we can say that accelerated RFS is to RFS what RSS is to RPS.
B: So we know that accelerated RFS is similar to RFS, but it puts the packet in the best queue; that is the only difference. And RSS, we know, uses the hardware mechanism, while RPS is the software mechanism. So again, accelerated RFS uses RFS in an accelerated fashion: a hardware-accelerated load balancing mechanism that uses soft state to steer flows based on where the application thread consuming the packets of each flow is running. This again is a hardware feature, so it must be supported by the driver. This is the internal parameter which will be used for it, and it basically has to be supported by the network driver. Okay, and next is transmit packet steering.
B: So this is another way we can consider. In transmit packet steering, a mapping from CPU to hardware queue is recorded. The goal of this mapping is usually that queues are assigned, or used exclusively, by a subset of CPUs, where the transmit completions for these queues are processed on a CPU within this set.
B: So basically, we know that there is a hardware queue that is going to process the packets, and XPS, transmit packet steering, actually records that mapping from CPU to hardware queue. It does the mapping of the CPUs based on how much is completed for the queues and how much is processed.
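The XPS mapping is likewise recorded through sysfs, one CPU bitmask per transmit queue. A minimal sketch, with the interface and masks as assumptions:

```go
// xps_enable.go: record the CPU-to-TX-queue mapping for XPS.
package main

import (
	"fmt"
	"os"
)

func main() {
	dev := "eth0" // assumption
	// CPUs 0-1 transmit via tx-0 (mask "3"), CPUs 2-3 via tx-1 (mask "c").
	masks := map[string]string{"tx-0": "3", "tx-1": "c"}
	for q, mask := range masks {
		path := fmt.Sprintf("/sys/class/net/%s/queues/%s/xps_cpus", dev, q)
		if err := os.WriteFile(path, []byte(mask), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, path, err)
		}
	}
}
```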
B: So this is another consideration. And there are two important terms which we must know. One is the RPS flow limit.
B: This is an optional RPS feature that prioritizes small flows during contention, by dropping packets from large flows ahead of those from small flows. So basically, with this flow limit: we know that the packets will be processed, and there is a flow limit with which we can fine-tune the flows which are happening. It penalizes the large flows in favor of the small flows. Okay, and so what happens is, it does a better...
B: It identifies when there is a large flow, and the small flows that need to be processed first are put ahead, so that they are processed first and faster. This is useful on a system with a large number of concurrent connections, where a single connection taking 60 percent of the CPU load is a big issue. We can consider this where we know that there is SMP, symmetric multiprocessing, where we can just distribute the load across all the CPUs.
B: But there are systems which use a single processor, wherein the single processor takes all the hits and gets overloaded. So, whenever there is a network receive request...
B: ...this feature has to be enabled where there is a very large CPU load, and it has to be enabled per CPU. Whichever CPUs, that can be fine-tuned at the kernel level or using application-tuned parameters; we can set it for specific CPUs or for all the CPUs, based on the design, and that is going to control the flow limit. So this does two things: one, it does very good identification of large flows; and second, it has very few false positives.
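Concretely, the flow limit is turned on per CPU via a bitmap, and its hash table can be resized. A minimal sketch (the bitmap value, CPUs 0 through 3 here, is an assumption):

```go
// flow_limit.go: enable the RPS flow-limit feature and size its table.
package main

import (
	"fmt"
	"os"
)

func write(path, val string) {
	if err := os.WriteFile(path, []byte(val), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, path, err)
	}
}

func main() {
	// net.core.flow_limit_cpu_bitmap: turn on flow limit for CPUs 0-3.
	write("/proc/sys/net/core/flow_limit_cpu_bitmap", "f")
	// net.core.flow_limit_table_len: bucket count (default is 4096).
	write("/proc/sys/net/core/flow_limit_table_len", "8192")
}
```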
B: So these are the important things, and next is the per-flow rate. This is calculated by hashing each packet into a hash table and incrementing a per-bucket counter. We were talking about the hash table, right, in RFS. So basically, each packet which comes into the queue for processing is put into a hash table bucket, and the buckets have a value based on which what has to be processed is decided.
B: And there is this incrementing of the bucket: there is a counter which actually tracks how many packets have landed in each bucket. The hash function is the same one that selects the CPU in RPS, but the number of buckets can be much higher than the number of CPUs. We know that CPUs can be limited, octa-core or quad-core or somewhat more, but there can be many more buckets.
B: The default table has 4096 buckets, and this kernel parameter can be modified at runtime using sysctl. So basically, these are the six ways, or I'll say five ways, by which we can fine-tune the adaptive load, and these are the two important considerations which we can, or must, consider for calculating load. And there are a lot of preconfigurable kernel parameters which can be modified at runtime.
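To make the per-flow-rate idea concrete, here is a toy sketch (mine, not kernel code) of hashing each packet's flow key into a fixed bucket table and bumping a per-bucket counter; a large flow shows up as a hot bucket:

```go
// flow_rate.go: toy per-bucket packet counting over a 4096-entry table.
package main

import (
	"fmt"
	"hash/fnv"
)

const tableLen = 4096 // mirrors the default flow-limit table length

var buckets [tableLen]uint64

// account hashes a flow key such as "srcIP:srcPort->dstIP:dstPort"
// into a bucket and increments that bucket's counter.
func account(flowKey string) uint64 {
	h := fnv.New32a()
	h.Write([]byte(flowKey))
	idx := h.Sum32() % tableLen
	buckets[idx]++
	return buckets[idx]
}

func main() {
	for i := 0; i < 1000; i++ {
		account("10.0.0.1:4242->10.0.0.2:80") // one large flow
	}
	account("10.0.0.3:5151->10.0.0.2:80") // one small flow
	fmt.Println("large-flow bucket count:", account("10.0.0.1:4242->10.0.0.2:80"))
}
```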
B: So we can also consider that, but I need to look at the code in detail, or into what we are using, because I have not had a chance to look much into the detail. These are some things which I found out. Okay, and these are the general things which I think people have already covered, but we'll just talk about the design strategies. So there are two things. We have this adaptive controller, this adaptive load controller.
B: What it does, we are just going to talk about. Meshery uses Nighthawk as one of the load generators to perform performance benchmarks. When we talk about performance benchmarks, this is nothing but wanting to fine-tune the performance of the load, and we want to see, like we were talking about open loop and closed loop: there are a lot of methodologies which we can take up later, like how, based on load, we do the benchmarking. Okay?
B: But basically, we can do the performance testing and we can fine-tune it using the load generators, and Nighthawk supports adaptive load control as a feature. This basically is used for performance tuning, or performance benchmarking, of the load. And Meshery uses GetNighthawk to manage the lifecycle of Nighthawk; GetNighthawk is used to manage the lifecycle of Nighthawk, and this functionality has to be included to be usable by Meshery.
B: So basically, on the fly, it should be able to control the load, and it should be adaptable enough that we don't need much user intervention. Say, for example, the CPU gets too overloaded: it should be able to adaptively, on the fly, balance that so that it doesn't get overloaded. And this can be used by Meshery; the Meshery project can use this feature as an adaptive load controller. Okay.
B: So by default, the adaptive load controller in Nighthawk runs benchmarks with different RPS values based on the latencies. We've talked about RPS; this runs with different RPS values based on latency, and it adjusts the RPS values. So basically, here I think they are mostly using the RPS to control the load, and it has different values based on latency, and we can use these values for controlling.
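One terminology caution: in Nighthawk's case, RPS means requests per second (the offered load), not receive packet steering. Below is an illustrative sketch, not Nighthawk's actual code or API, of the latency-driven idea: binary-search the request rate and keep the highest one whose p99 latency stays under a target. measureP99 is a made-up stand-in for running a real benchmark attempt.

```go
// adaptive_rps.go: toy latency-driven adaptive load controller.
package main

import (
	"fmt"
	"time"
)

// measureP99 pretends to run a load test at the given requests-per-second
// rate and returns an observed p99 latency. It's a synthetic model in which
// latency grows as load approaches a made-up saturation point.
func measureP99(rps int) time.Duration {
	saturation := 8000.0
	base := 5 * time.Millisecond
	factor := 1.0 / (1.0 - float64(rps)/saturation)
	return time.Duration(float64(base) * factor)
}

func main() {
	target := 25 * time.Millisecond
	lo, hi := 0, 7900 // search bounds on the request rate
	for lo < hi {
		mid := (lo + hi + 1) / 2
		if measureP99(mid) <= target {
			lo = mid // latency OK: try a higher rate
		} else {
			hi = mid - 1 // over target: back off
		}
	}
	fmt.Printf("max sustainable RPS under %v p99: %d\n", target, lo)
}
```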
B: And this is done at the software level, and this can be controlled with these RPS values. Again, as I said, this is similar to RSS, which is used to direct packets to specific CPUs for processing; however, RPS is implemented at the software level, and it helps prevent the hardware queue of a single network interface card from becoming a bottleneck in the network traffic. If instead of RPS we were using RSS, then it uses the NIC only, and uses the hardware interrupt to do it.
B: So basically, we don't want it to get overloaded, so we are putting that control at the software level using RPS. And with the metrics from this test, Meshery should be able to adjust the resiliency characteristics of the mesh automatically, so as to improve these metrics and in turn improve performance. So I think this is the thing, and I think I have already talked about it. And the next is another design goal.
B: Although it was mentioned as a design strategy, and it's part of the strategy, I will talk about this as a design goal only, because with this custom plugin we should be able to bring our own inputs as well as metrics to Meshery. This is basically part of the same thing: we can use the metrics with the adaptive load controller feature of Nighthawk, and this is used to do the benchmarking.
B: So we can have this custom plugin, and we can bring loads, and we can also set up the metrics which we want to measure. Okay. And with the metrics from this test, Meshery should be able to adjust the resiliency characteristics of the mesh automatically, so as to improve these metrics and, in turn, improve performance.
B: So it should automatically calculate, from the Meshery-managed service mesh, what the metrics are, what the bad values are, what the good values are, and automatically improve it; it should be automatically fine-tuned, I should say. Okay, and this is the same thing: using the adaptive load controller feature, Meshery should additionally be able to use the adaptive load controller capabilities of Nighthawk and add custom plugins for metrics and values, that is, define custom plugins for capturing metrics and changing values for running performance tests.
B: So, as I said, the RPS values can be used for benchmarks based on latency, so this can be used to fine-tune the performance. And: define MeshMark. MeshMark is a performance index, a scale to provide people the ability to weigh the value versus the overhead of their service mesh. So this is basically, I'll say, a scale which compares what the current performance is with what it should be.
B: This is what I understand, and MeshMark can do some kind of benchmarking and find out what that is. This also is a design goal. And this is how the architecture of Nighthawk looks: we will have a Nighthawk client and a Nighthawk server.
B: This is the process. So basically, this process will spawn, I believe it is going to spawn threads, and there will be different worker threads which are going to do this kind of...
B: ...benchmarking, and they are going to give the statistics. So this is what I understand, but correct me if I'm wrong, or anyone from the Nighthawk team. Yeah.
F: You know, I'll talk... so let me toss in a couple of small bits of context, and maybe close the loop, so to speak, on some of the efforts that you've given previously. So the last time that school was in session from Professor Anirban...
F: ...we were learning about, it was on the topic of, Envoy's capabilities around traffic capturing and potentially traffic relaying, or traffic shadowing. And so, if you think about it: Nighthawk is a load generator...
F: ...but Envoy is a load router, if you will, you know, with a number of different things. So it's fantastic to see the design strategies and some of the design goals, because I wanted to try to toss in another one to think through, or to kind of add in there. And that is: in a vacuum, in a laboratory, we can have Nighthawk generate load...
F: ...and do some low-level analysis of how to configure, you know, what type of configuration you should be running based on the sort of load that is being generated, for your environment, for your application, for the type of request, etc. Like, hey, what does that look like, what's ideal for you? And the tooling that we'll have around, that GetNighthawk and Meshery bring, will help people answer those types of questions. But part and parcel to those questions...
F: Well, if Envoy facilitates traffic shadowing, or traffic capturing, then to the extent that that pcap, that same traffic signature, can be represented, can be, you know, generated through Nighthawk, it's kind of represented there.
F: I don't know, for my part, whether Nighthawk has the ability to do traffic replay. From what I understand of Envoy, it has the ability to do active traffic shadowing, to basically replicate the request.
F: It receives a request and it puts it out on two wires, okay, good: traffic mirroring, if you will. But then the ability to, if we had the ability to internalize that in Nighthawk and say, hey, take this real traffic and run it many times over, and, you know, augment the configuration based on it, but use that real traffic, those real requests...
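The mirroring idea Lee describes can be sketched in a few lines: a proxy that forwards each request to a primary backend and asynchronously shadows a copy to a second one, discarding the shadow's response. A toy illustration (not Envoy; the backend URLs are assumptions):

```go
// mirror.go: toy request mirroring, one request out on two wires.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
)

const (
	primary = "http://localhost:8080" // assumption
	shadow  = "http://localhost:8081" // assumption
)

func handler(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	r.Body.Close()

	// Shadow copy: fire and forget, response discarded.
	go func() {
		req, _ := http.NewRequest(r.Method, shadow+r.URL.Path, bytes.NewReader(body))
		if resp, err := http.DefaultClient.Do(req); err == nil {
			resp.Body.Close()
		}
	}()

	// Primary copy: this response goes back to the caller.
	req, _ := http.NewRequest(r.Method, primary+r.URL.Path, bytes.NewReader(body))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	log.Fatal(http.ListenAndServe(":9000", http.HandlerFunc(handler)))
}
```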
F: That's an open question in my mind. I don't know to what extent we could achieve that kind of thing, or to what extent Nighthawk directly facilitates it. If Nighthawk doesn't facilitate it directly, I would suggest, I would ask the other Meshery maintainers about incorporating pcap, the ability to replay traffic, whether it was through Nighthawk or something else. Yeah, you know.
B: So it's an open question, and we need to check with the team how it is, because I also have to see what the features currently are; I'm also not fully aware, so I need to understand that too. Thanks.
F: Yeah. And so, just noting the time: Anirban, this is perfect. I have to use him as an example for just a moment, to say what a great way this is to accelerate your knowledge and understanding at the same time that you're taking everyone else with you.
B: Sure. So, is there a design strategy as of now? Because I have just seen that there are some prototype definitions which you have done, but is there a design strategy which you are discussing, or what exactly? Because I've seen that adapter code, but yeah, there will be a lot of questions there. So it's good.
B: Not currently. I have been out for a couple of weeks, so I haven't been able to work on it, so I guess we can look into it this week. A couple of folks are trying to build Nighthawk and test it out, and Nighthawk doesn't really have any documentation, so we have to figure things out on our own. We are getting there, I guess.
F: And part of it has been, well, we know that there are a number of questions that we want to have answered, but part of it has been: well, geez, can Nighthawk, can these adaptive load controllers, even answer that type of question? And so it's a little bit of the egg before the chicken, the chicken before the egg.
B: Great. And do you need any kind of driver support for this? Because, yeah, I understand RPS doesn't need that driver support, but if we talk about the other approaches, they might need driver support. So are we going to use a couple of these, or are we only going to use RPS? That's the question, because RPS is from the software side, I understand. Yeah.
F: Is it good? I don't know; it's a good question. But part of it is: how far down the rabbit hole do we want to go? I guess that's part of the question.
F: There's kind of a whole other world of, was it, VPP and DPDK, like there is literally, not literally, figuratively a whole other world of networking in that regard. Take a note to ask that same question, if you would, on the next Service Mesh Performance call; hopefully we'll have a couple of the other Nighthawk maintainers there.
B: Yeah, I'll definitely try to attend it. The reason I'm saying this is: see, if there is a problem... we are assuming that RPS, from the software aspect, controls this application performance, but in case there is a problem, then we might need to look at the lower layers, or we might need to talk to the kernel team, who are actually in a different geography, or we don't know who to contact.
B: So if there is nothing, if it works fine, then it's okay; but if it doesn't work fine, and we actually need to look at the low-level stuff, that is where the problem will arise. Yeah.
F: They are very clearly there for hardware-centric purposes and for lower-layer drivers like that. There's another individual who is actually a maintainer of VPP who will be joining that call as well; he's a distinguished engineer at Cisco. But I do a little, I'm somewhat, yeah: "I don't know" is kind of the answer. I'm hesitant to go down a whole other rat hole; it would be nice to get some higher-level questions answered first.
B: But we could have a design spec, and we can get it reviewed by multiple people. I think that is the best solution, before actually starting to look at the code. Because, yeah.
F: Yeah, yeah, and yep, it's kind of hard, and there's no documentation, so yeah. I don't know; it'll be a little bit hard. This is why it's been hard to form the project, in part, because it's like, well, there's something there. But not at this next Service Mesh Performance meeting, but the one after, there'll be a Googler there to walk us through it: the guy who wrote the adaptive load controller, Eric. I don't think it's this coming one.
F: I think it's the next one where he'll be there to take us through that. So yeah, it'll take a couple of iterations. But in the meantime, for my part, I can certainly articulate some of the questions that we would want to try to answer through using this, I mean some of the answers that we want to try to achieve. So yeah.
B: Yeah, yes, surely. So definitely I'm going to put in my effort, because I'm also part of my own company and I have my own work, so it's a little difficult to balance, but I will try my best. And the code which is written is very good. My only concern was, in situations where we are relying on RPS, how it takes care of everything, because we have to ensure that we are only taking the best-effort...
B: ...part, not the bad-effort part. So that is where my question was: how do we come across these kinds of situations? So that was the only thing, yeah.
F: Very good, all right. Well, we're ten, eleven after, so... I think, Shaytan, we didn't have any other urgent items, right? Those can wait.
A: Yeah, I think Utkarsh has two items, if that's not urgent, yeah.
F: Very good, all right. Once again, congrats, Aditya, MeshMate... MeshMate of the month, maybe. Oh, he had to go; it's okay. All right, very good. Thank you all. I'll see you next week, same time on Friday.