From YouTube: CNCF SIG Network 2021-01-21
B
Hi, great, hey, hey, Neil, good. Oh, just finishing up lunch, actually. I'm sure you're most interested in what that is. It's wontons, which had the effect of leaving me with some bad breath, which means any amount of affection with my wife is out of the question for the next day or two. So now we're on public record about my affections with my wife.
B
Thankfully she doesn't watch these. So, nice. Bear with me one moment; there are a few folks messaging, looking to join.
B
Oh nice, hey, oh, there's Otto, very good, hello. Hey, there he is. Did you make the big transition, like the big sort of role change? Is that a publicly talkable thing, or...
B
Nice, yeah. Boy, I feel kind of awkward saying this, but maybe I said it to you before. I know a lot of Red Hatters, and there's something about their talent acquisition team where they really hit close to the mark very frequently, with quite intelligent people who are genuine and open; they have time for you. They want to engage and share, and, well, that obviously doesn't apply to you, but I'm just saying that, you know, you've landed in a good spot.
C
To that is, is...
B
Telling, like, yeah, all right, cool. Well, so in the Zoom chat I posted a link to today's meeting minutes. We're three after, and now we're four after, so let's get going. I'll share the minutes and we will kick off today's SIG Network meeting. So it's January 21st; I think this is the second meeting of the year, so welcome to 2021.
B
Well, that's kind of, what are we? What are we, '21? 1-21-21? We don't quite have the trifecta, but we're close on the date, yeah.
B
There it is, yeah, okay, I knew there was something special. So, very good. We've got Anish here with us as well, Mr. Nema, and Mr. Otto, who's...
B
You will correct me if you're also on the call. Everybody should have access to the notes, so please fill them in. Like I was saying, this is a CNCF SIG Network call; we meet twice a month, every first and third Thursday.
B
We might do a little bit of introductions, since there's a smaller group of us today, or at least so far; that would probably be nice. The CNCF SIG Network, just for those that haven't been around for a long time, unlike Nikolai, who has been around for a long time.
B
So Linkerd and gRPC, NATS, and there's a long list: Service Mesh Interface, Network Service Mesh, Kuma. I'm going to do a disservice to all the other ones we've had that I didn't mention. We generally start off our meetings with those topics first and then move into our working group topics, and the working group is where I expect we'll spend most of our time today; we've got a few different work streams there. So, before we get into that working group, its charter, and what we're doing within it...
B
The working group itself is a subgroup of CNCF SIG Network. Some of these particulars really are neither here nor there, but I'm mentioning them for clarity, because we're going to talk a lot today about service mesh things, and a lot about Nighthawk and load generation things, but that's not all of what SIG Network focuses on.
B
So our topic for SIG Network hasn't changed since we last met, which is to acknowledge that the Ambassador project, based on Envoy Proxy, has been submitted for donation to the CNCF. It's been submitted at an incubation level; there are sandbox, incubation, and graduated levels in the CNCF, in terms of measuring the maturity of a given project, its adoption, et cetera.
B
Speaking of Kuma, Kuma is at a sandbox level, but I suspect, Nikolai, you're probably hinting toward, or maybe thinking about, that next step soon.
F
Yeah, I'm sorry, my camera is not working; after some of the meetings today I need to reboot, but I haven't found the time yet. Yes, we are definitely looking. I'm one of the maintainers there on this service mesh, or, as I'd like to refer to it, Envoy control plane.
F
But yes, we have been a sandbox project since, I believe, June, or late June or early July last year, maybe June. And yeah, we are very, very much looking into gathering the needed, mostly, case studies, or how would you call it, success stories of the users, to actually be able to qualify for incubation.
B
No, no, sorry, yeah, absolutely. So yeah, case studies, user stories in preparation for incubation, that makes a lot of sense. And kudos on the clip at which Kuma is moving.
F
Yes, yes, it's interesting. I joined about nine or ten months ago, and it was a pretty young project by then, and since then you can literally see people... You know, of course, as with every project, people come and go, but it's interesting how the profile of the people that come changes. First you get some kind of explorers, people that just go there to poke a little bit, send some feedback, and then disappear.
F
And
now
you
get
people
that
stick
a
lot
or
like
come
and
start
contributing
directly.
So
it's
an
interesting
experience
for
the
whole
lifetime
of
open
source
project.
B
That's funny. Nice, okay, we're good. Well, I don't have any further update on Ambassador, at least as much as I'm aware; it's just out there for review, and everyone's encouraged to go comment.
B
This is just intended to be a recap and introduction for folks about what I was articulating before: the fact that there's CNCF SIG Network, sort of its mission statement and things. So I'm going to touch on the slides; we're not going to cover them. There are some other co-chairs here. Matt Klein is, well, at least was formerly, or still is, our TOC liaison; I don't know, I've got to go look that up.
B
This is dated as of this last KubeCon, so, oops. We've got some projects that are on the horizon; Ambassador is coming in. As a sub working group of SIG Network there is a service mesh working group. Just briefly, its initiatives include a collection of service mesh patterns.
B
The link to this slide, this deck, is in our meeting minutes, and so too is the link to the full list of patterns that are being described and articulated.
B
So, if you can't tell, I've been, for many of you, trying to corral us into getting a lot of our conversations into this meeting channel, because there's a lot of work going on.
B
I don't know that I would characterize it as behind the scenes, but there's just been a lot of work going on in these various initiatives, and we're trying to organize those here. Another one of those is Service Mesh Interface conformance; some of that is driven from the SMI meetings, but those are 30 minutes long, every two weeks, and there are a lot of service meshes to coordinate with, so there's work that goes on outside of it. Service Mesh Performance.
B
This specification, we'll probably talk about it a little bit later today. The individuals, and a university, that are working on MeshMark are not on the call today, so instead we'll talk about Nighthawk and GetNighthawk; GetNighthawk as a project is where we're driving to some particulars.
B
So people don't need to listen to me speak the whole time: Otto, on the call, is probably a core Nighthawk maintainer, or rather the core Nighthawk maintainer. Otto, do you want to introduce Nighthawk to folks?
D
So there's a CLI, which allows you to control load generation, and it comes with a gRPC surface.
D
And, well, in short, I guess that's it. And then there's obviously the "why Nighthawk", because there are a bunch of load generators out there, and one thing that we've been trying to make Nighthawk shine in is being super sensitive in measuring latencies, very fine-grained. The target was 50 microseconds of precision, and then there's also multi-protocol support in there, so HTTP/1 and HTTP/2, and, well, yeah, I can go on for a while.
D
You know, about all the features it has, but I think the sensitivity is a key thing. So that's kind of a very short introduction, I guess.
B
And so, yep, that's part of the problem statement that Nighthawk is aimed at solving, and has been solving. Nighthawk, from my vantage point, has been growing in popularity.
B
And
those
that
have
been
using
some
other
load
generators
have
also
you
know-
are
also
turning
their
eye
tonight,
like
it's
compelling
enough
that
they're
people
looking
at
switching
off
of
some
of
their
load
generators
to
to
nighthawk.
D
Yeah, so there are quite a few features around these days. One of them is the adaptive load control that was contributed by Google not too long ago. With that adaptive load controller, you can research questions like: what QPS can I sustain, given that p90 stays below a certain threshold? It will then automatically try different RPSs, converge towards a certain frequency, and then attempt to sustain that. Obviously that's just a sample, because the principal piece of it is fairly generic.
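The convergence behaviour described here can be sketched roughly as a search over request rates. This is an illustrative toy only, not Nighthawk's actual controller code; the function names and the fake latency curve are invented for the example.

```go
package main

import "fmt"

// measureP90 stands in for a real benchmark run. In Nighthawk the
// adaptive controller fires actual traffic at the target and observes
// latency; here we fake a monotone latency curve for a hypothetical
// service whose p90 grows past ~2000 RPS.
func measureP90(rps int) float64 {
	if rps <= 2000 {
		return 5.0
	}
	return 5.0 + float64(rps-2000)/100.0
}

// maxSustainableRPS binary-searches for the highest RPS whose p90
// stays at or below thresholdMs, mimicking the "try different RPSs
// and converge toward a sustainable frequency" idea.
func maxSustainableRPS(lo, hi int, thresholdMs float64) int {
	for lo < hi {
		mid := (lo + hi + 1) / 2
		if measureP90(mid) <= thresholdMs {
			lo = mid // still within budget, try higher
		} else {
			hi = mid - 1 // over budget, back off
		}
	}
	return lo
}

func main() {
	// With the fake curve above, p90 <= 10ms holds up to 2500 RPS.
	fmt.Println(maxSustainableRPS(1, 10000, 10.0)) // → 2500
}
```

A real controller would repeat measurements to smooth out noise and sustain the converged rate for a confirmation window, but the search loop is the core idea.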
D
So you can iterate on some other things as well; that's all extensible and pluggable, but this is the primary use case that it was built for. And then there's, well, something I've been working on myself, which is horizontal scalability.
D
So when you try to scale out a bunch of load generators, there are a couple of challenges that arise. I think two big ones are keeping these clients synchronized, so that, you know, if you want to achieve a certain global request frequency, we try to make that easy and accurate; and then the other part is collecting all the results and presenting them, aggregating them in a way that makes sense.
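Those two challenges, splitting a global rate across workers and merging their results, can be sketched in a few lines. This is a simplified illustration, not Nighthawk's implementation; the helper names are invented.

```go
package main

import "fmt"

// perWorkerRPS splits a global request rate across n workers so the
// fleet as a whole converges on the target frequency. The remainder
// goes to the first workers so the sum stays exact.
func perWorkerRPS(globalRPS, n int) []int {
	rates := make([]int, n)
	for i := range rates {
		rates[i] = globalRPS / n
		if i < globalRPS%n {
			rates[i]++
		}
	}
	return rates
}

// mergeCounts aggregates per-worker result counters into one view:
// the "collecting and presenting" half of the problem.
func mergeCounts(workers []map[string]int) map[string]int {
	total := map[string]int{}
	for _, w := range workers {
		for k, v := range w {
			total[k] += v
		}
	}
	return total
}

func main() {
	fmt.Println(perWorkerRPS(1000, 3)) // → [334 333 333]
	a := map[string]int{"2xx": 990, "5xx": 10}
	b := map[string]int{"2xx": 995, "5xx": 5}
	fmt.Println(mergeCounts([]map[string]int{a, b})["2xx"]) // → 1985
}
```

Real synchronization is harder than dividing a number, since workers must also agree on start time and pacing, but the shape of the problem is this.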
D
A third challenge is abstracting away from all that. I think, ultimately, it would be super cool if you could run a horizontally scaled remote execution basically by specifying one or two flags, and that would be the only difference between using the CLI to execute a local test and a horizontally scaled test. So basically that means that if you deploy the right services to a couple of nodes, then you can easily orchestrate those and make them work together to send load somewhere, and you'll just get the results out as if you were running a local test. So that makes deployment of the thing easy.
D
I'm trying to remember, you mentioned something: did you call out something else for me to dive into a little deeper?
B
No, those two are the ones that continually pop to my mind as being really intriguing. Coming from spending a lot of time on the Meshery project, those two enable a tool like Meshery, or our users, to answer a bunch of questions that the community has had sort of sitting out there, latent.
D
So one thing that's also, I think, nice to mention: I'm describing the adaptive load control and the horizontal scaling together because, thanks to that abstraction, the adaptive load controller can also talk towards that horizontally scaled system and, well, basically barely be aware that it's driving something that's actually running remotely. Yeah.
B
To me, that type of capability opens up the ability to answer questions that my little brain has yet to ask. I think you can go like, performance characterization, or describing Nighthawk as a layer seven performance characterization tool, is exactly what you had said.
B
And that's exactly what I think of here: the high-fidelity ways, the new fidelity ways, in which you can characterize the performance of your environments. I mean, there's one other load generator that comes to mind that isn't nearly as sophisticated, but has some amount of horizontal distribution, and I want to say it's like Octopus, or, no, that's not the name of it.
B
I can't remember; I wish I could, because I'll go on record that that maintainer is not a friendly individual, not welcoming and collaborative. Moreover, the project just isn't as... so, anyway. Super pleased about the discussions that we've been having and the work that's been going on, Otto, with you and Jacob and Hutch, and just the feedback.
B
That's been gotten. There are a number of other folks on the call today who, you know, broadly participate in the service mesh community, participate in and around some of Layer5's initiatives, one being Meshery and Service Mesh Performance, and some of these other things. They've been, well, my wife hates it when I say this, but they've been hot to trot on this.
B
This initiative, they've been, I think, excited to answer some of those same questions. I mean, part of the mission of a tool like Meshery is to help people adopt service meshes, do it a little easier, and answer questions like: what should I expect? The exact question that you just phrased, which is, if our requirement is, if we have this SLA, this SLO, how do we stay within that?
B
Given the fact that we consistently have this QPS, what are those inflection points for us? When do we trigger? And that's just one of any number of other questions that could potentially be answered, or at least problem statements that could be characterized much more fully, to more intelligently give people a bunch more info that they need to run their systems better. Yeah. It could have been Locust, Adina; that's a good call, wow.
B
If,
if
the
subtitle
of
that
maintainer
is
douche,
then
that's
probably
the
one-
I
don't
know
anyway,
bad
jokes,
so
so
there's
a
project.
That's
coming
forth
here,
get
nighthawk
that
has
been
pleasantly
warmly
received.
It's
the
notion
that
nighthawks
been
growing
in
popularity
and
we
and
but
but
there's
only
auto,
there's
there's
only
there's
a
single
single
distribution.
Artifact
that's
available
for
nighthawk.
Is
that
an
inaccurate
statement
like
are
there?
D
That
that's
an
accurate
statement.
There
are
actually
actually
they
are
too,
but
they
are
similar.
So
right
now
we
push
docker
images
to
docker,
app
and
and
and
that's
it
yeah.
B
So
so,
on
this
initiative,
get
nighthawk
is
to
well
uplift
nighthawk
and
get
it
get
into
other
people's
hands
to
spend
some
more
time
with
otto
and
the
other
maintainers
of
nighthawk
to
help
help
and
take
advantage
of
from
a
measuring
perspective,
help
and
take
advantage
of
nighthawks
capabilities,
and
you
know
get
expose
those
to
more
folks.
I
mean
in
different
ways:
okay,
so
there's
a
couple
of
a
couple
of
early,
so
we
actually
covered
this
topic
a
bit
last
time
we
met
so
so.
B
This
is
why
I'm
sort
of
skipping
I'm
through
a
little
bit
of
this
and
that's
to
say,
let's
dispense
with
the
pleasantries
and
let
the
rubber
meet
the
road
on
some
things
like
there's
a
few
there's
a
few
folks
that
are
weren't
able
to
make
it
today
by
the
way,
though
I
three
of
them
will
will
be
re-watching
this,
but
they're
they're
raring
to
go,
and
so
so
let
me
introduce
a
couple
and
that's
some
so
pratia
banerjee.
B
So, thank you, Neil. Neil has done this successfully before, by the way. Neil is, well, Neil's a maintainer of the Service Mesh Performance website as well.
B
So
he's
no
he's
quite
familiar
with
these
types
of
websites,
moreover
ones
that
are
right
within
the
realm
of
what
we're
trying
to
do
ultimately
part
of
what
I'm
hoping
that
we'll
accomplish
with
get
nighthawk
and-
and
these
initiatives
is
a
few
different
things-
it's
a
little
bit
of
potentially
some
compatibility
with
service
mesh
performance
as
you
go
to
as
you
go
to
inform
nighthawk
of
a
load
that
you
would
like.
B
You
know,
of
a
load
that
you
would
like
for
it
to
generate
and
the
ways
in
which
you
would
like
for
it
to
do
that.
Nighthawk
has
its
own
mechanism
for
doing
that.
Smp
the
service
mesh
performance
is,
you
know,
coming
forth,
hopefully,
as
a
standard
is
a
strong
word,
but
just
coming
forth
as
a
specification
for
doing
that
consistently,
and
so
there's
discussions
to
be
had
around
that
and
where
or
how
that
might
happen.
B
Wow, I'm digressing; there are a lot of different things to connect here. What I was going to say is: very pleased that Neil is here; he's done this before for another relevant site. The link to Figma is at the bottom of the GetNighthawk project, so hopefully everyone can access it, and hopefully you're able to comment on it too. I'll put a link into the chat.
B
You
know
please
comment
the
designer
here
he's
not
on
the
call
right
now,
his
name's
augustine
a
lot
a
lot
of
different
people
coming
to
bear
on
what
what's
sort
of
on
the
surface
of
it
looks
like
a
small
project,
but
my
hope
is:
is
that
it's
not
that
it
ends
up
being
ends
up
in?
You
know
popularizing
nighthawks
capabilities
a
bit
enabling
people
with
a
few
different
things
that
they
couldn't
otherwise
do.
B
Here, this is just a draft of what could potentially come to be; not necessarily, well, not that that is Nighthawk's logo. But to the extent that Nighthawk doesn't have another site, then it's something, Otto, for you to kind of think about and internalize: how closely you'd like for this set of work to embody Nighthawk directly versus sort of sit on the side of it. But so, Neil, you had put together, and all of this is up for comment, which is why we're walking through it, Neil put together sort of a project site: its purpose, some sections that it would have, to try to indicate how much content would be on there, the scope of it, and so, structure.
B
Some
designs,
an
early
domain
has
been
registered.
I
wouldn't
click
on
the
link
right
now,
because
it's
just
is
just
an
early
design.
It's
just
totally
under
construction.
A
If
I'm
not
that
wrong,
there
is
another
design
of
the
gatekeeper
logo
in
the
figma
file.
Can
you
open
it
yeah
very
nice.
B
So I did this one and I did the other one, so anyone can say anything that they want about it without fear of hurting feelings. This was sort of inspired by the fact that a nighthawk is a bird.
D
So personally I like that second one, but maybe we should raise a vote on some Slack channel on what the best option is.
D
I remember how the name Nighthawk came to be, and that was actually quite a bit of bikeshedding; in the end we just raised a vote, and, well, that ended up being it.
B
Nice,
okay,
very
good,
I
will
take
a
and
so
by
the
way,
adina
is
another
individual
on
the
call
who's
been
engaged
with
these
projects,
she's
been
helping
with
continuous
integration
on
in
the
meshery
project,
and
thankfully
she's
also
intrigued
by
get
nighthawk,
which
has
a
lot
to
do
with
you
know.
Part
of
its
initial
challenge
is
around
continuous
integration
and
producing
distributions
of
nighthawk.
B
Guidance, Otto, for people: there are a few contributors who are looking to spend time getting fancy inside of GitHub Actions, inside of the workflows there, and getting familiar with Envoy's, well, with Nighthawk's toolchain, Bazel, and the whole... How do these folks get ramped? Where do they go to look for the current build process, and what gotchas, what caveats should they watch out for?
D
Yeah,
so
to
be
honest,
it's
it's
a
little
bit
more
involved
still
than
I'd
like
it
to
be,
because
there's
kind
of
like
we
piggyback
on
envoy
and
that
well
building
through
through,
like
broker
images,
that's
easy
of,
of
course,
but
like
preparing
your
own
environment,
to
do
the
same,
that's
well!
Then
you've
got
to
be
pretty
pedantic
about.
D
You
know
the
specific
build
needs
of
the
project
and
that
that
may
require
a
bit
of
tinkering
and
once
you've
gone
through,
that
you'll
also
find
that
the
project
is
it's
not
like
a
very
small
build.
It
takes
quite
a
bit
of
time
or
maybe
even
half,
of
the
battery
of
my
laptop.
So
it's
it's
a
significant
build.
D
Yeah,
so
that
that's
you
know,
if
it's
possible,
I
would
actually
consume
the
docker
images
that
get
pushed.
But
if
it's
necessary,
then
yeah
building
is
possible.
I'd
start
out
with
the
readme
and
at
some
point
that
punch
you
towards
and
voice
read
me
for
building
the
docs,
because
basically
the
requirements
are
exactly
the
same
and
from
there
you
know,
once
that's
set
yeah
you
should
be
set
to
go.
B
Okay,
and
so
so
so
adina
and
anish
is
here
as
well
and
actually
wrong
thanks
thanks
for
coming
thanks
for
joining.
H
Yeah, hi. So actually I go by Sunku, more or less; nobody knows me as Anish. But yeah, good to be here.
B
No, yeah, see, the thing is, I sit high and mighty; nobody ever asked me how to pronounce my name. I don't know.
H
This
easy
one
good
yeah,
actually
yeah,
thanks
for
sharing
this
so
right
now,
I'm
looking
into
my
docs
I'll,
come
back
to
you
with
certain
questions
about
how
it's
composed
and
components
are
how
it's
testing
it's
right.
Now
I've
been
doing
analysis
with
fort
io,
so
nighthawk
is
kind
of
the
next
one
that
we'll
be
working
on
so
yeah
soon
I'll
have
some
feedback.
C
Right, very good, so.
I
Yeah. So, from what I started and did not continue, it was that I need to build Envoy first if I want to build Nighthawk.
I
Actually, I just answered my own question: no, we need specific GCC libraries, from what I think. Yeah, that was the thing.
D
I think this is kind of... so first, I don't know off the top of my head, so I should check what's the oldest Ubuntu version that can be used to build the thing on. We've picked the oldest one, hoping that the resulting binaries will be compatible on all the newer releases.
D
Well, and also, you know, I don't have every requirement on the top of my head, because Envoy is a fast-moving target, in the sense that things change at a rather high pace.
D
Right, right. But if you have any questions when you actually start cracking on this and run into anything, just feel free to ping me on Slack or some such.
I
Actually, what would help me have more confidence would be, if at one point you have time... the current workflow, like, I don't know exactly now what the bottleneck was, but there was a point where I did not get how the Envoy build is getting built. I mean, let's say you run the build and then you have the console output of the build, and you see all the things that happened; I didn't see it, or actually, yeah.
I
No, but I think I was missing something, and if I would have an output of what is executing...
D
Maybe it's good to iterate on that offline, then. If you can reproduce that issue you ran into and copy-paste it to me, then I can help.
D
Well,
I'm
in
the
layer,
five
slack
channel,
so
maybe
over
there.
My
nick
is:
let
me
paste
it
something.
D
Yeah
yeah,
that's
my
handle
in
in
the
chat
right.
There.
B
Quick confirmation: the current Nighthawk workflow that's in CircleCI is used for that. So this is the current build workflow.
D
Yes,
that's
right:
okay,
but
but
that's
piggybacks
on
the
built
image
from
envoy
right,
so
that
well,
that
makes
things
easier,
because
that
has
all
the
prerequisites
already
there.
B
Nice,
okay,
maybe
to
so
it's
a
good
suites.
We
talked
about
the
where
the
site
designs
are
the
site
structure,
the
site,
designs,
sort
of
draft
logos,
draft
draft
designs:
everyone
here
is
welcome
to
assert
opinions,
and
I
will
do
you
know
like
auto
a
great
suggestion
on
on
a
poll
and
voting
and
things
rudolfo
martinez
who's.
Another
individual
who's
will
hopefully
collaborate
with
with
adina
and
make
some
waves
around
ci.
So
he
he's
able
to
join
today
he's
over
at
rackspace,
actually,
okay
they're.
B
It's been convenient for a Golang-based project like Meshery to be able to use Fortio as a Golang-based utility, and basically as a library. In that respect, Meshery provides a Golang wrapper, if you will, around wrk2, which, I don't know if that's C or C++ or what, but it's not Go; it's one of the C's.
B
And so that was the original approach taken, or it is sort of the original and current approach taken, to integrating Meshery with Nighthawk, and that has been to wrap some Golang around Nighthawk's CLI, the Nighthawk command-line interface.
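The shape of such a Golang wrapper around a CLI looks roughly like the following. This is a hedged sketch, not Meshery's actual code; the flag names (`--rps`, `--duration`, `--output-format`) match common Nighthawk usage but should be checked against `nighthawk_client --help` for your build.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// nighthawkArgs builds the argument list for a nighthawk_client run.
// The flags here are assumptions about the CLI; verify them against
// your Nighthawk version's help output.
func nighthawkArgs(rps, durationSec int, target string) []string {
	return []string{
		"--rps", strconv.Itoa(rps),
		"--duration", strconv.Itoa(durationSec),
		"--output-format", "json",
		target,
	}
}

// runNighthawk shells out to the CLI, the same general shape of
// wrapper being discussed, and returns the raw output for parsing.
func runNighthawk(rps, durationSec int, target string) ([]byte, error) {
	cmd := exec.Command("nighthawk_client", nighthawkArgs(rps, durationSec, target)...)
	return cmd.Output()
}

func main() {
	// Inspect the constructed invocation without needing the binary.
	fmt.Println(nighthawkArgs(100, 30, "http://localhost:8080/"))
	_ = runNighthawk // actually running requires nighthawk_client on PATH
}
```

Wrapping the CLI this way keeps the Go side decoupled from Nighthawk's C++ toolchain, at the cost of needing the binary available in the same container, which is exactly the packaging trade-off discussed next.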
H
This is important, you know, the difficulty.
B
Of kind of getting it running is separate. So it's sometimes ideal that it might run separately, in a different container, because, to what Otto described before, distributing, literally littering a cluster with multiple instances of Nighthawk is probably easily done using a container and scheduling that container in a Kubernetes cluster.
B
But
it's
not
the
only,
but
it's
also
highly
convenient
for
a
tool
like
mystery
to
have
nighthawk
built
in
or
within
the
same
container
or
available
there,
and
so
I
don't
know
what
I'm
what
I'm
trying
to
say.
I
guess
I'm
guessing
I'm
sort
of
trying
to
switch
so
say:
hey,
we
covered
these
topics
now.
One
of
the
other
topics
is:
did
the
horizontal
distribution
and
nighthawks
support
for
that,
so
the
you
you'd
auto
earlier
you
described
what
that
capability
is,
but
the
the
current
state
of
that
capability
is
in
flight
available.
D
I
think
I've
like
everything,
ready
for
it,
except
one
challenge,
and
that
is
that
so
currently
what
it's
able
to
it
hasn't
landed
yet
so
this
this
needs
to
go
through
review
still.
But
the
current
state
of
my
working
branch
of
that
is
that
it's
able
to
collect
all
the
outputs
but
the
outputs
they
come
in
streaming,
because
we're
also
preparing
for
our
force
well
aggregating
like
the
raw
or
high
fidelity
results,
so
not
aggregated
results.
D
So what I'm trying to say here is that there is one slightly challenging part about it, and that is: if we're going to aggregate very large responses, then first all these large outputs need to be sent in chunks towards a central aggregation point within the cluster, like the horizontally scaled load-generating cluster, and that's the part that I'm trying to solve now.
D
Having
said
that,
what
is
working
is
actually
quite
a
bit,
and
that
is
that
you
just
can
get
the
unaggregated
outputs
of
all
the
instances
involved,
but
that's
just
not
super
convenient
yet
because
for
us
humans,
that's
kind
of
like
a
lot
to
judge
at
digest,
in
the
sense
that
you
know.
If
you
have
like
200
notes
generating
loads,
then
you
get
200
result
sets
in
and
then
you
need
to
go
over
these
to
make
sense
out
of
them.
D
Ideally,
we
do
something
with
that
and
the
plan
is
then
to
so
we're
using
hdr
histogram
as
one
of
the
technologies
for
histograms
on
the
hood,
and
that
one
is
able
to
merge
these
statistics
we'll
be
able
to
do
that
with
the
current
state
and
that
that's
pretty
easy,
so
long
term
short.
I
actually
think
it's
about
time
that
I
create
a
pull
request
for
that
and
then
several
on
a
separate
track
finish
up
some
stuff.
That's
related
towards
well
streaming
raw
statistics.
So
to
speak.
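The reason histogram merging makes the 200-result-set problem tractable is that bucketed histograms combine by simple addition, so workers can ship compact summaries instead of raw samples. The following is a toy fixed-bucket illustration of that idea, not HdrHistogram itself (which uses logarithmic bucketing); all types here are invented for the example.

```go
package main

import "fmt"

// Histogram is a toy fixed-bucket latency histogram standing in for
// HdrHistogram. BucketMs holds each bucket's upper bound in ms.
type Histogram struct {
	BucketMs []int
	Counts   []int64
}

// Merge adds another worker's histogram into h bucket by bucket,
// which is why per-node results can be combined centrally without
// shipping every raw sample.
func (h *Histogram) Merge(o *Histogram) {
	for i := range h.Counts {
		h.Counts[i] += o.Counts[i]
	}
}

// Percentile walks the buckets until the cumulative count covers the
// requested quantile and returns that bucket's upper bound.
func (h *Histogram) Percentile(q float64) int {
	var total int64
	for _, c := range h.Counts {
		total += c
	}
	need := int64(q * float64(total))
	var seen int64
	for i, c := range h.Counts {
		seen += c
		if seen >= need {
			return h.BucketMs[i]
		}
	}
	return h.BucketMs[len(h.BucketMs)-1]
}

func main() {
	// Two workers' latency distributions, merged into one view.
	a := &Histogram{BucketMs: []int{1, 5, 10, 50}, Counts: []int64{60, 30, 9, 1}}
	b := &Histogram{BucketMs: []int{1, 5, 10, 50}, Counts: []int64{50, 40, 8, 2}}
	a.Merge(b)
	fmt.Println(a.Percentile(0.90))
}
```

The trade-off is bounded precision per bucket, which is exactly why streaming the raw, high-fidelity samples remains a separate work track in the discussion above.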
D
...compute state of this on the fly.
D
Yeah
so
I,
but
I
actually
think
I'm
able
to
make
a
pull
request
that
would
would
like
following
the
800
rule.
You
know
it
would
be
quite
useful,
as
is
so
nice
and
and
then
we'll
be.
You
know
we'll
be
needing
some
time
to
land
that,
and
I
think
that
that
might
be
oh
well
weeks
to
months.
It
will
go
in
tiny
parts
and
it's
quite
a
bit
of
code.
So.
H
No,
I'm
saying
no,
it's
just
this
makes
sense.
So
one
of
the
models
that
we're
looking
at
performance
is
sorry
and
not
sub
traffic,
in
the
sense
like
focus,
is
on
like
a
telco
workload.
So
from
that
perspective
it's
more
measuring
from
outside
the
cluster
or
within
the
cluster.
H
How
would
how
much
would
a
node
what's
the
performance
of
a
node
for
the
micro
services
that
are
running
on
the
node,
so
in
that
sense
would
send
in,
like
a
you,
know,
gigs
of
traffic
across
the
node
to
see
how
the
microservices
perform
within
the
node,
if
not
we're,
scaring
across
two
or
three
nodes
but
yeah.
So
that's
another
model
that
looking
into
see
how
not
south
traffic
model
works
and
how
these
tools
are
helping
us
to
kind
of
achieve
that
type
of
results.
H
And
right
now
I
see
for
tayo,
I
mean
there's
still
things
to
investigate
to
see
why
it
behaves
at
the
way
it
does
to
a
certain
when
we
scale
beyond
certain
qps
like
10,
000,
qps
or
whatnot,
so
to
figure
out
how
nighthawk
does
and
how
it
scales.
B
Here, let me toss in a thought, unless, Otto, you... no, go ahead? Oh, yeah. So one of the things he said hits home from the perspective of tooling to support it: it sounded like you were looking at characterizing, or you wanted to make sure that, as you are characterizing the performance of large volumes of requests, like a telco-sized environment, you're doing so in consideration of the impact of generating load from within the cluster.
B
Like
you,
you
don't
have
a
clean
scientific,
you
don't
have
a
clean
vacuum,
you're
you're
dirtying,
the
lab
with
which
can
which
I
think
is
which,
which
is
actually
why
the
horizontal
distribution
capability,
horizontal
scale.
Scalability,
is
interesting
to
me
because
it's
like
to
me
all
the
test
cases
are
valid
like
is,
is
it?
Are
you
dirtying
your
environment
if
you're
generating
load
and
burning
some
cpu
and
from
within
the
cluster?
Yes?
Is
that
valid?
Well,
I
think
so
because
did
you
have
microservices
deployed
and
spread
across
your
cluster
yeah?
B
Are
they
talking
to
each
other
yeah,
it's
one
generating
load
against
the
next
yeah
okay,
but
but
for
in
other
tests
situations
it's
like
look.
What
we
want
to
do
is
pretend
that
we're
a
user
that
all
and
we
want
to
generate
from
user
traffic,
and
then
you
won't
have
this
the
direction
controlled.
Here
all
everything
gets
generated
externally.
B
Maybe
multiple
sources
externally,
maybe
multiple
endpoints
at
the
same
time,
which
is
another
exciting
capability
of
nighthawk
that
hence
that's
been
a
focus
of
the
meshery
project,
is
to
deploy,
is
from
shrey
to
easily
deploy
outside
of
a
cluster,
generate
love
or
help
use
nighthawk
or
the
other
others
to
generate
load
or
to
do
it
internally
and
to
give
people
hopefully
easy
to
use
tooling
that
they
can
repeat
those.
B
...results. Sunku, are both of those valid for you? Am I putting words in your mouth when saying that you would want to do both, and that there are certain situations, certain test cases, that are appropriate for one versus the other?
H
Yeah
I
mean
yeah
you're
right,
I
mean
both
are
valid.
Surely
I
think
from
a
telco
deployment
standpoint
generally,
each
node
might
not
have
a
tons
of
micro
services
where
you
want
to
do
east-west
across
like
a
tons
of
microservices
within
a
single
node.
I
most
likely
have
a
few
important
couple
of
important
cnfs
to
say
as
a
container
network
functions
and
I
said,
deployed
in
probably
a
microservice
fashion
and
and
sorry
that's
why
you
know
so.
H
The traffic going in and out of a node is crucial; characterizing that is crucial. At the same time, of course, they are deployed in a clustered fashion, so surely we need to understand the performance across these microservices scaled across a few server nodes. So yeah, both kinds of models are surely important, and I guess the key part there is, you know, what kind of network characteristics, in the sense of network parameters, do they consider: scaling TCP sockets, or?
H
How is the layer 3/layer 4 tuning done in these tools, in order for sidecars to process them and deliver the HTTP packets to the actual application? So that's something to consider in leveraging these tools, and part of my effort is to understand that: to see how these tools are performing, how they would satisfy telco needs, or what could be tweaked a little bit to satisfy the telco needs.
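The layer-4 knobs in question can be made concrete with a small sketch of two TCP socket options a load tool or sidecar proxy might set. The specific values here are illustrative only, not a telco-validated tuning, and they are not taken from Nighthawk or any tool discussed on the call.

```python
import socket

def tuned_client_socket(nodelay=True, rcvbuf=1 << 16):
    """Create a TCP socket with a couple of example layer-4 knobs applied.

    The values are illustrative; real tuning would be driven by measurement.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small request/response messages
    # (typical of RPC-style microservice traffic) are not batched.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, int(nodelay))
    # Request a receive buffer size; the kernel may round or cap this.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    return s

if __name__ == "__main__":
    s = tuned_client_socket()
    print("TCP_NODELAY:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
    s.close()
```

Whether two load tools set knobs like these the same way is exactly the kind of difference that makes their numbers hard to compare.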
B
Yeah, to me it's very clear why you're on this call. I'm glad you're on the call. There are not that many people I've been able to connect with who are trying to study that, and I think it becomes more meaningful to study: the higher the volumes you have, the more impact performance tuning has, but...
D
So actually, you know, with respect to the internal and external...
D
...load generation: I would be the bigger fan, I think. I totally agree with that; putting the load generators outside of the test subject, so to speak, makes a lot of sense. But the thing is, so far, most of the open source systems that I've seen that actually do this type of testing...
D
Being able to drive a separate cluster that you could set up for load generation, which then sends the test workload towards another cluster that you're actually interested in measuring.
D
The cluster under test could then also have an egress, you know, if you need origins that reside in another cluster. That seems to be a cleaner approach, and it's also easier to set up.
H
No, it's just to say, in terms of load generation: traditionally, at least from a telco model, we have RFCs provided by the IETF. For example, at layer 3, RFC 2544 is a popular one, and that determines how many frames, what the rate is, when to back off, when to open sockets, and it's all layer 3: what kind of packets, how to measure these packets, and how the packet drops are measured.
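RFC 2544's throughput test works by searching for the highest offered rate at which the device under test drops zero frames. A toy sketch of that search, with a simulated device standing in for real traffic generation; the 7.4 Mpps drop threshold is a made-up example, while ~14.88 Mpps is the real 64-byte line rate for 10GbE:

```python
def zero_loss_throughput(send_trial, line_rate_pps, tolerance_pps=1000):
    """RFC 2544-style binary search for the highest rate with no frame loss.

    `send_trial(rate)` runs one trial at `rate` frames/sec and returns the
    number of frames dropped; the search narrows until within `tolerance_pps`.
    """
    lo, hi = 0, line_rate_pps
    while hi - lo > tolerance_pps:
        rate = (lo + hi) // 2
        if send_trial(rate) == 0:
            lo = rate   # no loss: try faster
        else:
            hi = rate   # loss observed: back off
    return lo

if __name__ == "__main__":
    # Hypothetical device under test that starts dropping frames above 7.4 Mpps.
    dut_limit = 7_400_000
    trial = lambda rate: max(0, rate - dut_limit)
    print(zero_loss_throughput(trial, line_rate_pps=14_880_952))
```

Real RFC 2544 runs additionally fix frame sizes, trial durations, and repeat counts, which is exactly the kind of procedural standardization the layer 4-7 tools lack.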
H
So all of these go per standard, and these tools that we are looking at, I know they're layer 4 to layer 7, but that's something we need to standardize, in my opinion: to say, what's your TCP scaling algorithm? How are you scaling your HTTP packets? What kind of codes do you return, and when? So that's something.
H
I see a difference between these tools when you're measuring performance, latencies especially. Based on the tool you use, the latencies are a different number, even though you're configuring the same environment. So that's not necessarily a standardized representation of what your latency is, unless your whole company uses the same tool forever, kind of thing. And I think that's the gap I need to address.
B
You have a captive audience here. If you don't mind, I've got a request of you, and as we go to wrap up today's call, maybe we can recap a couple of action items if we can. Sure. So feel completely at leisure, Sunku.
B
Did
you
just
throw
one
your
way,
which
is
I'd,
be
really
curious
for
your
thoughts,
kind
of
feedback
about
the
concept
of
mesh
mark,
which
is
articulated
in
a
slide
here
so
I'll
put
that,
but
it's
also
a
bit
further
described
on
the
s
p
spec.io
site,
so
here
auto
has
offered
some
thoughts
on
the
subject
in
the
past
as
well,
and
we
are
so
to
say
things
to
articulate
this
really
concisely
or
just
to
say:
hey
we're
looking
to
pick
up
this
this
thread
and
this
piece
of
work
and
engage
in
academia
to
do
so,
and
so
we
have
a
couple
of
different
universities
with
supporting
professors
to
do
to
to
hopefully
create
an
algorithm
or
define
how
this
should
be
measured
and
how
it
would
work,
and
so
I'd
be
curious
for
your
feedback.
B
Next
time
we
meet
or
before
next
time
we
meet,
I've
asked
for
a
mailing
list
separate
from
the
sig
network
mailing
list
for
the
service
mesh
working
group,
so
that,
as
we
potentially
use
that
to
drive
some
of
our
collaboration
that
that
we're
not
spamming
the
cni
guys
with
nighthawk
stuff
or
what
you
know
whatever.
So
hopefully
that's
coming
forth.
That's
an
action
item
for
me,
neil.
I
know
you
were
probably
moving
fairly
briskly
through
iterating
on
the
site
designs.
B
I've
seen
some
commits
coming
through
from
you
adina.
It
sounds
like
you're
gonna
go
off,
read
some
readmes,
we'll,
probably
bring
rodolfo
up
to
speed
as
well
and
make
an
attempt
at
some.
Some
builds
and
anish
has
been
here
absorbing
so
I
don't
know
if
he's
still
on,
but
he
is
yeah,
so
you're
very
much
in
danger
of
being
put
to
work
so
just
be
yeah,
I'm
just
waiting
for
it,
cool
good,
I'm
gonna
catch
up
with
you
just
after
the
call
just
not
to
make
everybody
but
and
nikolai.
B
I
I
dare
not
try
to
try
to
task
you
and,
if
that,
if
that
was
to
happen,
I
would
talk
to
you
about
smi
conformance,
but.
B
No
comment,
don't
don't
say
anything
fair
enough.
Did
we
nikolai
or
other
did
we
miss
anything?
Are
we
is
that
a
wrap
for
today.
B
Nice,
actually
just
one
last
question
for
me
is
sinclair:
do
you
you
characterize
some
of
the
your
current
focus,
any
particular
goals
that
you're
chasing
after
other
than
the
one
you
generally
described,
or
any
particular
questions
that
you're
looking
to
answer.
H
Yeah
I
mean
I
recently
started
this
work,
so
I'm
a
little
bit
in
a
still
early
stage,
and
these
are
some
of
the
gaps
I'm
noticing
with
respect
to
what
we
want
to
help
telcos
with,
but
yeah
so
soon
I'll
have
some
more
data
and
have
some
more
information
as
to
what
tools
and
I
used
how
and
or
what
could
tools
look
like,
or
things
like
that,
so
in
coming.
H
Folks, yeah, I'll take a look at Meshery, surely, and yeah, probably we can have a chat offline or so going ahead.
B
That'd,
be
great
that'd,
be
nice,
yes,
well
much
appreciate
it
all.
We'll
have
this
topic
in
a
couple
of
weeks
from
now,
but
I
anticipate
some
slacking
in
the
meantime.
B
So,
thank
you
all
see
you
in
a
couple
weeks
talk
to
you
later
all
right.
Thank.