From YouTube: Meshery Community Meeting - May 17th 2019
Description
Great community call today: preparing new Meshery test results for #KubeCon, and delivery of functional (but still needing testing) @hashicorp Consul and @octarinesec adapters.
See https://layer5.io/meshery for more Meshery project details and other recorded meetings.
B: Guys, if you don't mind, jump into the meeting minutes while Santosh joins; we'll get him introduced here. I just put in the link to the minutes — maybe pre-fill in your areas. Greetings, Pratik, and, while I'm at it, Pavan. I guess you guys can tell me: am I slaughtering "Santosh"? What is it, Santosh?
B: Maybe we should get started anyway. So, Pratik, you might be the man of the hour.
D: Nothing new on the meshes yet, but there are some interesting things that I have noticed. Let me share my screen.
D: Yes, so this is a test of what the overheads are that service meshes like Istio and Linkerd have on top of Kubernetes — when there is a load test running, when there are no load tests running, and things of that order. So let me explain these graphs first, and then I'll go to the upper charts. This is idle CPU utilization — the utilization of four CPUs. Ignore the percentage numbers; those aren't right.
D: I just haven't updated the graphs here. The blue ones are the plain Kubernetes utilization of the cluster, and the red ones are the overheads that Istio or Linkerd add. As you can see, I tried out various configurations of Istio. This is the demo with TLS on — I've only tested with TLS, because Linkerd has mTLS on — and regardless of removing telemetry or tracing, this CPU overhead is significant. The same goes for memory. These are the first two graphs; these two graphs are the control plane overheads.
D: These are the graphs where the red one is basically the httpbin application's CPU utilization. Those numbers are actually the same regardless of whether it is in a meshed setting or a non-meshed setting — the httpbin application's utilization of memory and CPU remains the same. But what differs is the proxy's overhead when a particular mesh is injected, and here also we see that Istio has quite a significant overhead.
D: The Istio proxy overhead is quite high at first; then, after a couple of minutes, it settles down, but it is still significantly more than what the loaded httpbin application's CPU utilization looks like. Whereas in Linkerd's case, the linkerd proxy is a very thin layer — it is hardly any overhead on top of httpbin. And this is when a load test is not running. The interesting case is when I do run a load test.
D: This is the case when I actually do run a load test. The httpbin application in both cases shoots up, and so do the linkerd proxy and the Istio proxy, but the overall utilization is about 0.12 for Istio, whereas for Linkerd the overall — including both the proxy and httpbin — goes to approximately 0.18, sometimes 0.2. So that is interesting: Linkerd's overhead during a load is higher than Istio's, whereas under no load it's the reverse.
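As a back-of-the-envelope check on the numbers quoted above (roughly 0.12 cores total for Istio versus 0.18–0.2 for Linkerd under load, application plus proxy), the per-sidecar overhead falls out once an unmeshed httpbin baseline is subtracted. A small illustrative calculation — the 0.10-core unmeshed baseline is an assumed figure, not one quoted on the call:

```python
def sidecar_overhead(meshed_total_cores: float, unmeshed_app_cores: float) -> float:
    """Overhead attributable to the mesh = meshed total minus the bare app's usage."""
    return meshed_total_cores - unmeshed_app_cores

# Totals mentioned on the call; the unmeshed baseline below is an assumption.
UNMESHED_BASELINE = 0.10
istio_overhead = sidecar_overhead(0.12, UNMESHED_BASELINE)
linkerd_overhead = sidecar_overhead(0.19, UNMESHED_BASELINE)  # midpoint of 0.18-0.2
```

The point of the subtraction is that the meshed totals alone overstate the proxies' cost, since the app itself consumes most of the CPU under load.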
D: In the cluster, the memory utilization just keeps increasing over time. This is over two and a half to three hours of a cluster which is not being hit with load, and the memory utilization keeps increasing — and the same is seen with Linkerd as well. I do notice that there is a sharp jump here; that jump goes back down after a couple of hours. I am not sure why, but regardless of that jump, the increasing trend of memory utilization is the same for both Linkerd and Istio.
D
We
do
notice
that
stos
memory
overhead
is
close
to
eight
hundred
to
one
inch,
whereas
for
linka
d,
it's
about
160
to
200
men
megabytes
only
so
that
is
true,
like
anybody
overall
in
any
experiment
that
I
have
run
I
have
seen
the
linker
D
has
very
low
overheads,
at
least
in
unloaded,
says
the
environment
and
loaded
environment.
Sometimes
the
performance
is
comparable.
Sometimes
sto
is
better.
D: This is one other graph, of Istio control plane memory utilization. As we can see in this graph, the idle memory utilization is tending towards one gig, and the three peaks that we see here are when I was running tests on Istio. Obviously the memory utilization — the resident memory — shot up, but it did not come back down to one gig again; it shot up, came down a bit, and then remained constant at a higher utilization.
D: Those numbers are also shown here, under the load-testing setup. These are the memory and CPU utilization of the control plane and of the application — and these are... this should have been memory and CPU. I think I screwed up something and made the same memory graph twice. I have the data; I should just update it quickly so that we can compare.
B: Well, so, a few things, both to make sure that I'm oriented in the right way, and the rest of the community — yeah, here in just a few minutes we're going to do a little bit of a community introduction for some folks that are here for the first time. But as a reminder, this particular environment that these tests are being run in, Pratik — is this still a single-node, Ubuntu-based Kubernetes cluster?
D: Yeah, this is my environment that I'm running. We are in the process of acquiring some servers — I am not sure when, but it should be sometime over the summer — and when I get those, it is at least a couple of servers, so I will be testing similar things on those. But as of now, it's on my local machine.
B: And the interesting thing is — I think in 1.1, or maybe just prior — by default, one of the performance issues that Istio had been experiencing was, well, an old and presumptuous default configuration. The default configuration was continually sending out telemetry reports or doing — no, I'm sorry, let me back that up and say it was continually performing checks — authorization checks or quota limit checks — each time a request was received, and if that request wasn't cached, it would be performing those checks.
B: Anyway, the reason I mention this nuance is because even when I was speaking just then, saying "wow, look at that significant, order-of-magnitude difference" — in fact we're not necessarily comparing apples to apples. But that can be okay, because that's just one of those things where we show "here's a big difference, and by the way, here are probably the logical reasons as to why it's different": they have different designs. This design always includes, you know, the overhead for authorization checks, whereas this one maybe doesn't. And so, yeah.
A: A quick point on the authorization — that's a good point, Pratik. If you are actually using the tweaked version that we created using Helm templates, then I think by default the proxy's Mixer policy verifications are probably turned off. I think that's right — turned off. Okay, so by default I think it is turned off, but, you know, I think after 1.1 — I don't know the precise version —
A: — at some point they turned all the checks off by default, so you have to actually turn them on. I don't know about the default demo version; I can check. But the one which you said — where you have the tracing turned off — that was generated out of Helm; it was created by using a Helm template command. So I did not — I do not remember — specifying that the policy checks have to be enabled, so by default...
A: I think they are turned off. But the telemetry is still on, which means whatever the default telemetry limits or level is, I think that's still there — not tuned back down. The tracing is, again by default, at one percent. So I think the comparison you have here is precisely, like, you know, the default configuration. Yeah.
D: Even with the telemetry turned off, I did not see a very significant difference. There was a memory consumption difference, which I have made a note of over here — I have not made a graph out of it — like, with TLS and without TLS the memory utilization was similar; without telemetry, though, we did see a small change in memory. But the CPU utilization in all those cases was similar, so I think the base reason why we have this overhead might be something in the implementation, which I am planning to pursue. Cool.
B: I was on the verge of putting in the request to leverage the CNCF cluster — to get some time there, for about a week to two weeks. Some of the tests, I think, will be illuminating at scale. I think, for those that are running — I mean, this is kind of a point of frustration; I think he is familiar with this, at least from the panel that he was on at DockerCon. That —
B
Many
will
be
interested
in
performance,
characteristics
of
large
scale,
environments,
those
with
X
number
of
cert
like
a
high,
a
decent,
recently
high
number
of
services
you
know
and
which
it
is
somewhat
natural
in
that
that's
where
you
get
the
most
benefit
from
a
service
mesh,
and
that's
also
where
you
can
have
to
pay
the
most
overhead.
That's
also
where
you've
got
engineers
that
are
more
dedicated
to
this
type
of
a
focus.
They
have
more
time
to
focus.
B: It's slightly a shame, but it's natural, that those that have smaller environments with fewer services and fewer nodes won't see as big of a performance difference, and they're probably not gonna get as much return on investment for the time that they spend tweaking, because they're only gonna save, you know, ten times versus a thousand times. Yeah.
B: Yeah, I'm a bit surprised — given it's a single-node, very small application deployment environment — I'm surprised that there's that much overhead in the Istio proxy. You know, Pratik, maybe one final thing on this: you'd said that it wasn't just the Istio proxy that seemed to be having a bit of overhead, but the Linkerd container as well.
B: Anyway, at scale those end up having a bit of an issue because of the way in which Istio was trying to push basically a full network topology down to each Envoy. It wasn't partitioning those configurations; it wasn't being realistic about only giving one proxy a partial view. It was trying to give every proxy a full view of the mesh, and it became too much. Yeah.
D
That
might
be
the
reason
why
we're
seeing
this
overhead,
and
maybe
Lincoln
he
does
the
same
in
a
multi,
node
environment
and
in
a
multi
node
environment
may
be.
Liquidy
would
also
be
somewhat
similar
to
what
we
are
seeing
for
a
steel
in
a
single
load,
environment
yeah,
but
that
is
left
like
that
is
something
that
I
haven't
tested
because
I
don't
have
the
answer.
I.
Am
it
right
now
so
so.
B: That — well, do you want to just give a brief introduction about yourself to the crew here?
E: ...They built it from scratch, where it would get the data from different platforms and then present it on the UI. So I'm familiar with — comfortable with — Angular and AngularJS on the front end. In the previous application we used AngularJS on the front end, and on the back end we have used Python — Flask and then Django. Coming to the database, we have used MySQL — a combination of both. And in my current project I'm totally a full-stack developer, so every sprint I work on different technologies.
E: One sprint I work on AngularJS; one sprint I work on Flask; one sprint I work on the database; one sprint I work on Kubernetes — we use Kubernetes; everything is containerized here. So, you know, recently I have automated a few — automated some manual things. For example, I built a tool which could analyze the performance of the web application.
E: Some queries perform better and some of course don't, and there are many API endpoints, so I have automated that. I wrote a script which accepts a HAR file as an input, automatically grabs the APIs in the archive, then hits the database and gets the queries, and then populates those results into an Excel spreadsheet, in which, you know, we can analyze which queries are running faster and slower. So that's my daily routine here, yeah — that's about it.
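The kind of script described — accept a HAR capture, pull out the API endpoints, and emit a spreadsheet-friendly report — can be sketched roughly as follows. This is a minimal illustration, not Santosh's actual tool: the function names and CSV layout are made up, and the database-query step is omitted.

```python
import csv
import json
from urllib.parse import urlparse

def har_to_rows(har: dict) -> list[tuple[str, str, float]]:
    """Pull (method, path, total time in ms) out of each captured request."""
    rows = []
    for entry in har.get("log", {}).get("entries", []):
        req = entry["request"]
        rows.append((req["method"], urlparse(req["url"]).path, entry.get("time", 0.0)))
    return rows

def har_to_csv(har_path: str, csv_path: str) -> None:
    """Read a HAR capture and write a report, slowest endpoints first."""
    with open(har_path) as f:
        har = json.load(f)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["method", "path", "time_ms"])
        writer.writerows(sorted(har_to_rows(har), key=lambda r: -r[2]))
```

Sorting by descending time makes the slow endpoints stand out at the top of the sheet, mirroring the faster/slower analysis described.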
B: Right there — very good, very good. Well, Santosh, we'll try to come back to you in just a few minutes, but let's jump into the meeting minutes if we can; I'm gonna go ahead and share them here. There's another fresh contributor that has just started this week. Her name's — I won't slaughter her name — she's coming from not just Lumina Networks but the Network Service Mesh project, and she's submitted a couple of very small PRs today, just to get the ball rolling. She's gonna focus on the NSM adapter.
B
So
that's
that's
great.
Her
Prem
her
director
is
very
kind
towards
the
project,
and
so
so
we'll
certainly
make
mention
of
them
at
the
well
there's
about
five
mystery
talks
at
cube
con
and
around
Q
Khan
this
next
week.
So
we're
gonna
get
some
mileage
out
of
the
efforts
that
you
guys
have
given.
It's
just
good,
there's
also
sort
of
the
the
surprise
announcement
around
service
mesh
interface
that
specification
and
last
week,
Girish
demoed
the
hour
in
or
measure
ease
initial
compatibility
with
that
new
spec.
B
So
with
that
we're
those
are
the
general
updates.
That's
so
purty
I
think
you
went
through
many
of
you
err,
some
of
your
findings,
I
guess,
maybe
last
question:
there
is,
as
we
look
toward
Wednesday
as
kind
of
the
big
presentation
day
what
what
further
remains.
C: So — for the application, I saw that Google has some polyglot app; they're more like microservices. I haven't tried that; that's one option, and the other is maybe some simple apps — one to check TCP, one for gRPC. I haven't decided, because the sample application, Bookinfo, has MySQL, and that one is TCP.
B: ...the Octarine adapter, including setting up the resources on the control plane. And remember, guys, the Octarine adapter is gonna work with a managed control plane — one that Octarine, the company, hosts. Okay. And then he also completed the coding of deploying the data plane and then deploying Bookinfo. He still needs to set up the Dockerfile and, you know, do a lot of testing, but I don't think he has pushed this yet, so I don't know that anyone in the community can necessarily test this.
A: All right, cool. So, you know, I tried to make a note of some of the work that I've been doing. The initial major thing is that I've actually been working with Pratik and Saco to come up with our next set of metrics that will be interesting for comparison — that's one of the key things, which Pratik actually presented. Hopefully Saco will have some similar things to share.
A: Probably next week — like, you know, on gRPC and maybe on GCP — sorry, on TCP. Sorry. With respect to Meshery itself, what I'm gonna try to ship is being wrapped up. There are some really minor changes — like, you know, nothing major — but the major one is... just give me one second. So the major one is the adapter for Consul. We thought it'd be nice to actually have it for next week. Can you guys see my screen? Yes? Yeah.
A: Okay, cool — I hope you guys can see my screen. So some of the things that, you know, like I mentioned, are mostly UI changes. For example, the header is now slightly taller, and the title of the page is actually centered.
A: Nothing specifically changed on the performance page. Now, the other major change is actually the Consul adapter. So I have the Consul adapter configured here, and you can now see, like, you know, the three different types of adapters: Linkerd, Istio, and Consul. For now the logo has not been updated — it's just in a practical, functional state, like you saw yesterday. But these are the very basic functionalities I thought it would be good to start with, and the first one is installing Consul.
A: — with the sidecar injector. Unfortunately, Consul does not work at the namespace level, so, you know, every service has to be annotated the right way, and if services have upstream services, then the main service will also need to have those annotations in place. So the Bookinfo app and the httpbin app —
A: I think, you know, the same applications will actually help. So the Bookinfo app and the httpbin app, as part of this adapter, are annotated the right way, so that you will have the sidecar proxy injected and also have, like, you know, the upstream services included as part of the annotations. So that's the Consul adapter now.
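The per-service annotation requirement mentioned here can be sketched as a pod-template fragment. This is a hypothetical illustration: the annotation keys are Consul Connect's, but the service name, image, and upstream are made up and are not the adapter's actual manifests.

```yaml
# Illustrative Deployment fragment: per-pod Consul Connect opt-in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage
spec:
  selector:
    matchLabels:
      app: productpage
  template:
    metadata:
      labels:
        app: productpage
      annotations:
        # Required on every service -- there is no namespace-level injection.
        "consul.hashicorp.com/connect-inject": "true"
        # Upstream services this workload calls, as service:local-port pairs.
        "consul.hashicorp.com/connect-service-upstreams": "reviews:9080"
    spec:
      containers:
        - name: productpage
          image: example/productpage:latest  # placeholder image
```

Because the opt-in is per pod rather than per namespace, every workload in the sample apps needs this annotation, and any workload with upstreams needs the second one as well.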
A: The other thing I've done is a nice port for Linkerd as well. For Linkerd I'm now providing the same Bookinfo app and the httpbin app, because, you know, like I said, for an apples-to-apples comparison we wanted it across the board — sorry, across Istio and the same thing for Linkerd. So the canonical Bookinfo app is now available across the board, and if people want to compare, it'll be a lot easier for them — for Pratik and Saco, who have been testing, or conducting, a lot of tests.
A: The other thing, like, you know, if you guys have seen, is the interaction on this page. Earlier it used to be a radio button with, you know, some submit and preferred options, but this time I'm giving this option where, individually, like, now they have their own buttons. Of course, eventually I'm working on some color schemes, but those are some tiny ones. And the custom YAML option comes in a centered modal, so that, you know, people can still, like, work on the YAML — so there's no surprise; it's pretty much what you know.
A: That's exactly the same behavior across the board — again, it's internally using React components. So this is one of the major changes that was actually done. The funny thing is, if people have really been paying attention to the snackbar behavior — I mean, earlier, it used to be that a notification would actually come up once, and other consecutive snackbars would usually not show up unless the first one went away.
A: You know, so that was actually not the best behavior, so I tried to bring in the stacking behavior, where, you know, when there are more, they'll keep stacking one on the other. Looking at the screen, I'm trying to showcase that, but for it I need to have a simple test container running, so I just want to check if I can actually bring one up very quickly.
A: Sorry, I wasn't prepared to actually show this. All right — okay, so I have a server that's running locally. All right, so this is a URL to use — of course, like, I know that's not gonna work, because now I think it's supposed to be on port 9080. So I'm just gonna give that a try — one second — it should technically fail. You can see that it actually failed.
A: So if I keep hitting it, like, multiple times, you will see — so this just reproduces the behavior: like, you know, the snackbars will actually stack up the same way, and, you know, they stay there for some time. And then, you know, you can also post them on the other side — like, now a valid one, like this.
A: Just on that for, like, one second — and you can see the green one just appeared. Oh, if I do it again, another one. So the same stacking behavior applies here as well, which is kind of a decent thing; earlier this wasn't the case. So this is now kind of a common framework across all the pages.
A: So, you know, it'll be uniform across pages. And with the playground — again, in these cases, working, you know, with Consul and Linkerd, when they do not have an ingress gateway, people will actually, you know, be able to see the port on which the application is exposed in the notifications that come up asynchronously under this section.
A: So these are some of the changes that I've been working on for Meshery. There are some other ones, like, you know, which are probably, like, not visually seen much. For example, you know, with Istio — SMI, you know — so many changes are coming at a rapid pace, so, you know, anytime I see some major changes, I try to kind of update our YAMLs with the new images, because, again, they do not have images at this point in time. So I have to build — create an image, push it to Docker Hub — and I'm using that as part of our Istio adapter, so I have to kind of keep that up to date.
A: Now, while I was working on, like, you know, multiple adapters, like, I saw quite a lot of functionality that I could actually move into a separate library. So, you know, I'm going to be creating an issue to kind of keep track of that, and I'll be getting to that — but, you know, more probably after KubeCon there.
A: You have references to it — yeah, that's pretty much most of the things I'm working on. So me and Lee will be at DockerCon; I'm starting my journey there. That's pretty much it — like, you know, we are presenting Meshery at several places, so hopefully, you know, we get more attention to the Meshery project. That's pretty much my updates, yeah.
B: Speaking of — there's a network engineer who started to take a look at Meshery today; he lives in Austin, actually. And then, actually, speaking of — there's a Google Cloud solutions architect who's giving a couple of talks on Istio, and I think he's just become aware of the project. Sandeep Parikh — that's his name. And then the other guy, the network engineer in Austin, is Tristan Mendoza. And so it's good — just a small uptick of folks.
B: Yeah — you mean for the — well, yeah, I think for the meetup, for sure, we'll show a demo of Meshery so that people can see it, understand it, use it, and contribute. And then for the community meeting — there's a number of folks in Austin; I don't know if there's an... it's near guaranteed that if I were to try to get one in person, I would probably fail the community every time by not being there. So — but yeah, Santosh, there's another couple of other friends, Sunil and Ravi — or Ravi, aye.
F: Compared to the last week: I tried the Vagrant installation of Meshery — Meshery in Vagrant — and I was, like, getting stuck on that, so I took a different way. I just installed an Ubuntu machine in VirtualBox and then installed Meshery, and, like, I was able to connect to the Kubernetes cluster from the Ubuntu machine.
F: We have a three-node cluster — one is the master, and two nodes — and in the test node I installed Meshery. I was able to log into this machine and connect to the Kubernetes cluster — this is the basic part I did since last time. And I'm running on Windows 7, so instead of running it in the Ubuntu machine, I tried to install it on my laptop, on Windows 7, but, like, with the Docker container, when I start installing the Docker —
F: No — like, well, what I would say is, like, it was only the YAML file: I changed the version from three to two, that's it. Apart from that I didn't do any changes. Oh — like, when I ran the Meshery start command, it gave me some error message where, like, it says that, like, it won't run — an issue with the YAML which is in the root directory — so I just changed the version from three to two. Nice.
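For reference, the fix described here — assuming it was the Compose file format line at the top of Meshery's docker-compose.yaml — amounts to a one-line edit. The service stanza below is illustrative, not the repo's actual file:

```yaml
# docker-compose.yaml -- downgrade the format version so an older
# docker-compose binary (e.g. on a Windows 7 setup) can parse it.
version: "2"   # was: "3"
services:
  meshery:
    image: layer5/meshery   # illustrative service definition
    ports:
      - "9081:8080"
```

Older docker-compose releases reject files declaring a newer format version than they support, which matches the error behavior described.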
B: Okay — let me — I'll get you — I mean, actually, in the Meshery repo there's a docs folder; I'll try to point it out to you, where we can update it and have a troubleshooting section for folks. We also — and this goes for Saco and you, Pavan — we need a compatibility matrix inside the Meshery docs, and that matrix needs to include GKE, EKS, VirtualBox, Minikube, etc., and any caveats that are there with them. Yeah.
B: Well, fair enough. With that — Pratik, best of luck on the move. Thanks for the results; I'm gonna go through them myself and try to ascertain why we're seeing the deltas, yeah — outside of just the straight-up fact that these are two totally different code bases and they could perform totally differently — just in case there's some environmental config of the tests that you ran. A quick confirmation, so I get the environment that you ran them in: and then, last time we spoke and we were presenting results —
D: I do have the charts — at least the bar graphs. All of those numbers are in that format, just above the graphs in the same sheet. But the line graphs — the screenshots that I have — those are basically picked up from Grafana; like, those were a lot of data points to just put on sheets, so I just screenshotted interesting points from Grafana and put them there. We can have a discussion; we can go through them — just, you know, ping me.
B: There's a couple of smaller ones: there's one on Sunday, one on Monday night, another one on Tuesday night, and I'll be talking about it on a panel on Wednesday morning. But then the main presentation is on Wednesday the 22nd at noon, Central European time — which is, like, basically end of day Tuesday, I think, you know, for you guys. Yep.