From YouTube: 2022-01-06 meeting
A
Hi Amir, I was just checking with Jesse offline, and he believes that the meeting's cancelled this week, because Germany's still on vacation.
B
Yeah, actually Volton said on Slack that we will only be meeting on January 13th. Okay, I was just logging in to see if someone is here. Maybe... oh.
B
So yeah, so I guess we have the day off.
B
For a small startup called Aspecto, located in Tel Aviv, and we're doing distributed tracing solutions.
A
Okay, so you have like a back-end product, or...?
B
Yeah, we have a back end for traces. We accept traces, ingest them, process them, index them, and we have a website where customers can view them and some details. We also have SDKs, OpenTelemetry distributions that wrap OpenTelemetry and make it easier for clients to use OpenTelemetry.
A
Okay, and so what are some of your differentiators from some of the other products? I think there are quite a few options in this space.
B
Yeah, we believe that we do it better than the others; there's a lot of competition. What about you?
A
We also now have a software broker, and we have sort of free-to-try stuff and everything, but it is a closed-source, proprietary solution. We do implement some standard protocols, like MQTT and AMQP and REST, but we also have our own proprietary protocols that have slightly better performance and that sort of thing, with our own APIs and everything. And we have JMS APIs as well.
A
Customers are always looking for new things. We started off in the early days with our appliances being very financial-services oriented: a lot of high-end, very low-latency, high-performance market data and stuff like that. But now, as the product's matured, it's gone into all types of use cases. Anywhere anyone does messaging, it's being used all over the place now.
A
No, we're developing it right now, so it's a new thing for us. We're in the process of developing it, and our interest in this specification in particular is because we found the existing experimental one and we're using it as a guide to implementation, but we're finding all sorts of things really lacking in what it specifies. And Jesse, who I work with, also joins these calls.
A
He's involved with CloudEvents, and he sort of knows Clemens through that involvement, and found out about this group through him, and then we joined in that way. But...
A
I find these group meetings interesting. The pace moves fairly slowly, but you have such a diverse group of people that all speak their own languages, and it takes a long time to get everybody to understand things the same way that maybe a small subset of people already do. But I think it's tremendously useful. I think it's a good project, to have everybody collaborate and make sure we get this stuff figured out right.
A
So what we're implementing is this: as you can imagine, our broker has a bunch of queues for customer data. We'll have a queue that the broker dumps trace data into, in our own proprietary format, and then we're going to implement an OpenTelemetry Collector receiver that connects to our broker, pulls the data off the queue, and converts it to OpenTelemetry span objects in the collector.
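A real Collector receiver would be written in Go against the Collector's component APIs; purely as an illustration of the idea, here is a self-contained Python sketch, with invented record and field names, of draining a broker's trace queue and mapping each proprietary record onto a span-like object.

```python
from dataclasses import dataclass, field
import uuid

# Hypothetical proprietary record, as the broker might dump it onto its
# internal trace queue. All field names here are invented for illustration.
@dataclass
class BrokerTraceRecord:
    message_id: str
    operation: str        # e.g. "receive", "enqueue", "send"
    queue_name: str
    start_ns: int
    end_ns: int

@dataclass
class Span:
    """Minimal stand-in for an OpenTelemetry span object."""
    name: str
    trace_id: str
    span_id: str
    start_ns: int
    end_ns: int
    attributes: dict = field(default_factory=dict)

def convert(record: BrokerTraceRecord, trace_id: str) -> Span:
    """Map one proprietary record to a span, as the receiver would."""
    return Span(
        name=f"{record.queue_name} {record.operation}",
        trace_id=trace_id,
        span_id=uuid.uuid4().hex[:16],
        start_ns=record.start_ns,
        end_ns=record.end_ns,
        attributes={
            "messaging.operation": record.operation,
            "messaging.destination": record.queue_name,
            "messaging.message_id": record.message_id,
        },
    )

def drain(queue: list) -> list:
    """Pull everything currently on the trace queue and convert it."""
    trace_id = uuid.uuid4().hex
    return [convert(r, trace_id) for r in queue]
```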
A
Look, ultimately, I don't know what the typical use cases are, but a lot of our customers are very interested in use cases for tracing where they want to use it as proof of delivery. If someone didn't get a message somewhere, they want to be able to look at the trace and figure out whose fault it is, right? This is for high-value messages, and it sounds like that's maybe not typical.
A
So using the queuing capabilities in our broker allows us to avoid building something that we already built, right? We can persistently store messages in that queue, they'll pile up on disk, and there's a way to connect and pull the messages off. If the collector were to be down periodically, the broker can buffer them up and store them on disk until you connect in and get them off again.
A
So
it's
it's
definitely
a
performance
drain
right
because
it's
a
lot
of
overhead
per
message,
but
customers
seem
interested
in
it
even
if
the
product
couldn't
perform,
they
would
rather,
you
know,
buy
more
and
spin
up
more
instances
to
handle
the
load
then,
and
not
have
the
capability.
A
I'm guessing a bit; we're kind of still in the architecture phase for it, and we're getting very close to starting development. I would hazard a guess that it's ballpark a year away from a product that we're selling. But it actually has enough market interest that, unlike most features, where we add something to the product to make it more enticing to buy, it's actually a separate feature that we sell on its own.
A
I'm the architect, so I'll be involved throughout the implementation process, but not so much writing every line of code. With a broker like ours, it's hard to appreciate unless you get into all the details, but the high-performance nature of the product means that the design is quite complex, and the downside is that it takes a long time to implement features. The threading model and everything, the sharding and persistence and reliability stuff, is all quite involved.
A
So I'm really writing up all the stories for the individual development groups to work on; they break it down and implement it, and I tend to oversee it and meet with them all regularly. But then I kind of move on to the next feature once they're well underway in development, and I just keep tabs on things as they go through development and test.
B
I think you will be the first message broker to implement internal tracing. That's a powerful point.
A
Yeah, that's what we found too. I was hoping that we could just look and see what Kafka did and use that as a guide, but we couldn't find it, or whatever; we couldn't find any other broker that did. We found all these references to Rabbit and Kafka and stuff, but it's all API, client-side tracing. That is all we found. We didn't find any other brokers that did anything.
A
Yeah, so we're actually having a bit of a debate on naming internally. I'll use the terms that seem most natural, and then I'll explain the debate on names a bit.
A
So when we receive and persist a message, that'll be a span, and at that point the broker can decide to enqueue it on any number of queues, so we'll have enqueue spans for each queue that it enqueues the message to. Then, as it tries to deliver it to a client, it'll generate a send span where it sends it to the client.
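The lifecycle just described (one receive span per persisted message, an enqueue span per queue, a send span per delivery attempt) can be sketched roughly like this; the span names and the parent/child structure are our own illustration, not a settled convention.

```python
import itertools

_ids = itertools.count(1)

def make_span(name, parent=None):
    """Toy span: just an id, a name, and parent/child bookkeeping."""
    return {"id": next(_ids), "name": name, "parent": parent, "children": []}

def trace_message(queues):
    """Build the broker-side span tree for one message fanned out to
    several queues: receive -> enqueue (per queue) -> send (per queue)."""
    receive = make_span("receive")
    for q in queues:
        enqueue = make_span(f"enqueue {q}", parent=receive["id"])
        receive["children"].append(enqueue)
        send = make_span(f"send {q}", parent=enqueue["id"])
        enqueue["children"].append(send)
    return receive
```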
A
And then, when the client settles that delivery, we would have an acknowledgement or settle span, where we remove it from the queue. One of the debates we had was on the send side: should it be one span, where you start it when you send it and end it when the client acknowledges and you remove it from the queue, or should it be one span to say "I sent it" and then another span to say "he acknowledged it"? And we see there being a problem with that.
A
I like the idea of a single span, and then, if the client reports an error, you put an error in the span. I like that idea, but we think there's value in being able to say "I'm sending it to him" and see that you're sending it and he's not acknowledging it, because that can point to a problem on the application side. Whereas if you wait until you get the acknowledgement or the error to generate the span, you can't tell that you've sent it by looking at your telemetry data.
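The two options in that debate can be sketched side by side; both functions below are illustrative, with invented names, and just show what telemetry exists before the acknowledgement arrives.

```python
def single_span(send_ts, settle_ts=None, error=None):
    """Option 1: one span opened at send time, closed (possibly with an
    error status) when the client settles. Nothing is visible until the
    settle (or error) actually happens."""
    span = {"name": "deliver", "start": send_ts, "end": settle_ts}
    if error:
        span["status"] = "error"
    return [span]

def two_spans(send_ts, settle_ts=None):
    """Option 2: a send span emitted immediately, and a separate settle
    span once the acknowledgement arrives. An unacknowledged delivery is
    still visible as a lone send span."""
    spans = [{"name": "send", "start": send_ts, "end": send_ts}]
    if settle_ts is not None:
        spans.append({"name": "settle", "start": settle_ts, "end": settle_ts})
    return spans
```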
B
Yeah
yeah,
I
I
also
thought
a
lot
about
it,
like
for
four
instrumentations
that
I
wrote.
Sometimes
I
just
put
it
all
on
the
same
span,
and
sometimes
I
just
like
spam
for
receiving
and
spend
for
acknowledgement.
A
I find that, because messaging is very asynchronous compared to some of the other use cases I've seen, the whole start-and-end thing, the start and end timestamps, is often very academic. It's there because you put the two calls almost side by side in the code, where it really doesn't mean much, so it sort of feels like we're not using it the right way. But because it's asynchronous, I think you can't wait for some other thing to finish its job.
A
When you don't know when it's going to happen, right? Well, that's what I wonder: should you try to make it one span, but if you exceed a certain time, you kind of close it off? But then, if the acknowledgement does happen after the timeout, do you have to generate another span? Then your model is: it might look like this, or it might look like this, depending on the timing.
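That timeout idea can be sketched as follows; the names and the late-settle follow-up span are hypothetical, just to show the two shapes the model can take depending on timing.

```python
def close_with_timeout(send_ts, ack_ts, timeout):
    """One-span-with-timeout model: close the deliver span at
    send_ts + timeout if no ack arrived in time, and emit a separate
    follow-up span if the ack lands after the cutoff."""
    cutoff = send_ts + timeout
    if ack_ts is not None and ack_ts <= cutoff:
        # Ack arrived in time: one clean span.
        return [{"name": "deliver", "start": send_ts, "end": ack_ts}]
    spans = [{"name": "deliver", "start": send_ts, "end": cutoff,
              "status": "timeout"}]
    if ack_ts is not None:
        # Late ack becomes its own span, so the model's shape depends
        # on timing, exactly the concern raised above.
        spans.append({"name": "late-settle", "start": ack_ts, "end": ack_ts})
    return spans
```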
A
I don't know; I don't fully... Because I don't have much of a general telemetry background (I'm more a messaging guy who's figuring out telemetry), I don't know the importance of naming. Does naming even matter, right? Is it just something human-readable that means something, or is there more importance to naming? I don't know.
B
So if some libraries are doing A and some other libraries are doing B, then it makes it really hard to process. So I don't think the naming itself is very important; the thing that is important is that everyone does the same thing. But since you're the first one to implement it, you can think about it, and then people will follow you, I guess.
A
The potentially confusing part is this: a broker receives a message and calls that, you know, "receive" or whatever, but there might be other messages going through the broker that don't get traced, and then, if it gets to a consumer and he has a "receive" as well, I think it might become confusing. You see something that says "receive", and you don't know if that means the broker received it or the client received it.
A
Yeah, we have two related features along those lines. One is a delivery delay, just exactly as you described. The feature we've implemented is not quite like... I don't know if you're familiar with the JMS delivery delay, where the publisher says how long to delay the delivery; that's a per-message property in JMS. Ours is a per-queue property, where, as messages show up on a queue, all messages on that queue get delayed a certain amount of time.
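A minimal sketch of that difference, assuming integer timestamps and an invented queue-config mapping:

```python
def eligible_at_jms(arrival_ts, per_message_delay):
    """JMS-style delivery delay: the publisher sets a delay on each
    individual message."""
    return arrival_ts + per_message_delay

def eligible_at_per_queue(arrival_ts, queue_config, queue_name):
    """Per-queue style described above: every message arriving on the
    queue inherits the queue's configured delay (0 if none is set)."""
    return arrival_ts + queue_config.get(queue_name, 0)
```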
A
That's one configurable item on the queue. Another delivery-delay-related feature is delayed redelivery, where, if an application receives a message and then says "I can't take it", it gets scheduled for redelivery later, whereas if you don't configure that feature, it immediately redelivers it. And so, if you have a use case where, I don't know, you're overloaded and you're kind of just saying...
A
That's the end of one trace, and when we try to deliver it, that's the start of another trace. Even if we're redelivering right away, just for a consistent model, we always end one and start a new one. And when we start the new one, obviously we'll link to the trace that put it on the queue. The big debate there is that, because links aren't necessarily well supported in a lot of the back ends, we're hesitant to do that.
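A rough sketch of that model, using a plain dict in place of real OpenTelemetry span objects: the delivery attempt starts a fresh trace and carries a link, rather than a parent reference, back to the enqueueing trace.

```python
import uuid

def start_delivery_trace(enqueue_trace_id, enqueue_span_id):
    """Start a new trace for the (re)delivery attempt. Instead of
    parenting it under the enqueueing trace, attach a span link pointing
    back at the span that put the message on the queue."""
    return {
        "trace_id": uuid.uuid4().hex,   # fresh trace, not the old one
        "span": {
            "name": "deliver",
            "links": [{"trace_id": enqueue_trace_id,
                       "span_id": enqueue_span_id}],
        },
    }
```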
B
I think we kind of decided that we're going to use links, yes, for sure. So it will apply for the broker as well: if there are links in the messaging semantics, you can safely use them in the broker instrumentation as well.
A
Yeah,
I
mean-
I
guess
you
know
the
devil's
advocate
on
that
is
that
as
a
as
the
broker
instrumentation,
we
wouldn't
care
so
much
if
the
customer
was
unable
to
link
the
application
layer
stuff
because
you
know
they
can
still
view
our
tracing
right
if
we're
being
selfish
about
it.
But
you
know
in
the
discussion
of
the
group,
we
try
not
to
think
that
way,
right,
we're
trying
to
think
about
the
greater
good,
but
but
I
do
I.
I
do
hope
that
the
more
links
are
are
as
yeah.
The
more
links
are
used.
A
Hopefully
it
does
just
push
these
back
ends
to
get
links
supported
better
and
I
I
and
the
flip
side
is:
if
we
don't
do
that,
then
we
run
into
the
problem
that
back
ends
have
with
long
traces
and
and
what
do
we
do
with
those
because
often
the
long
traces,
the
ones
that
took
a
long
time
to
get
delivered,
are
the
ones
you
are
interested
in
because
something
went
wrong
so
and
if
the
back
end
isn't
helping
you
see
what's
going
on
there,
that's
a
problem,
I
think
so.
A
Well, that's up to the customers; we're just providing an OpenTelemetry receiver, and it's up to them what back end they plug in. We're kind of testing and focusing on Zipkin and Jaeger, and probably Datadog initially for our managed software. We do have a managed broker, and the back end that we already use for other things besides tracing is Datadog, so we might...
A
We were looking at whether we would use Datadog for this in our managed software solution, and the jury's still out on whether we'll use Datadog for this or not in the managed solution.
B
When you have something ready, feel free to write to me, and then we'll see how it looks in Aspecto and give you feedback.
A
Yeah, I think so. It's one of these features that's been getting more internal heat in our company than any feature has in quite a while, because it just seems like customers are very, very interested in this. So everyone's saying, "When can we have it done?" You know, they want it tomorrow, but it's a lot of work to get this done in a way...
A
That's
not
you
know
it's
not
gonna
like
it's
gonna
hurt
performance
but
we're
very
performance
oriented
and
so
we're
trying
to
do
it
in
a
you
know
the
best
way
we
can
so
it
hurts
performance
as
little
as
possible.
Let's
say
but
yeah.
A
I'd be guessing a bit, but I would hazard a guess that it's going to be about a year before it's done. I think around, let's say, June. The way our broker is implemented...
A
So
I
think
that
we'll
have
something
that
is,
you
know
a
good,
a
pretty
good
working
prototype
for
the
ingress
side,
stuff,
maybe
in
six
months,
but
we
haven't
even
entered
the
planning.
Yet
I'm
kind
of
at
the
tail
end
of
architecture
and
probably
the
next
couple
weeks
we're
going
to
start.
You
know
breaking
down
tasks
and
estimating.
A
So
it's
a
bit
of
a
guess
right
now,
but
they
are
planning
on
on
applying
as
many
resources
as
we
can
to
this,
as
it
is
practical
to
get
it
done
as
fast
as
we
can,
so
it
might
might
be
sooner.
I
don't
know.
B
We're two years old. Okay, we're focusing on tracing specifically. We were mainly focusing on Node.js; we have an SDK for Node.js, but we are accepting OpenTelemetry data from any source and integrating it into our system. And we are currently a small company with 10 people, but we plan to grow in the near future.
A
Sounds similar to me a while ago. When I started at Solace, I was the 11th employee, but that was in 2004, and I think we're around 400 people now, or something like that. So maybe in a few years you guys will be 400 people.
A
Well, okay, so headquarters are in Ottawa, Canada: Ottawa, Ontario, Canada. It's actually interesting: one of the guys who worked for us in sort of this area of the product, the data path, he moved...
A
He was from Tel Aviv and just recently moved back to Tel Aviv, and he worked there remotely for, I don't know, maybe the last year. But unfortunately he's leaving us now; he just had a child, and I think he's just taking some time off for a bit. But yeah, I guess it's funny: Tel Aviv's a big place, so I can't say, "Hey, do you know him?", but it's funny, you get that a lot when you say you're from Canada.
B
Yeah, cool. What's his name? Maybe I know him.
A
Galad
shh,
I
don't
know
how
to
pronounce
the
last
name.
It's
s-h-a-I-k,
I
don't
know
if
it's
shayack
or
sh
sh.
A
Yeah, if we have something, for example, if we get a prototype up and running and have some trace data, I might send something to you and see how it looks in Aspecto. I'd really love that.