From YouTube: 2022-11-16 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
E
Hey, I've got a PR out right now to add drop to the transform processor. I tried to find all the different issues related to that functionality — and there were quite a few of them — and added as many as I could to that PR. I'd love to get eyes on it. It would be amazing to get that merged before we do the next collector release.
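As a sketch of the kind of configuration such a drop capability might enable — the function name and config keys below are assumptions for illustration, not the final shape from the PR:

```yaml
processors:
  transform:
    traces:
      queries:
        # Hypothetical: drop any span matching a condition expressed in OTTL.
        - drop() where attributes["http.target"] == "/healthz"
```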
E
Yep — I've already updated the documentation to cover both orphan telemetry and the identity crisis caused by dropping any sort of telemetry in the processor.
E
We technically already allow that, yeah. That's one of the warnings we have on the transform processor, and there are actually a couple of issues open right now to add a function to re-aggregate after doing transformations.
F
Reproducing the metrics — go ahead. I don't think we should do that in the transform processor — the aggregation and so on. I think we should use the same OTTL language, but we should have a dedicated processor for doing metric aggregation. It's a hard problem that requires a lot of code and a lot of configuration, and I believe we shouldn't do it in the same processor as other basic transformations.
E
Well, there are some issues open on that topic right now if you want to throw some ideas out there. Someone created a bunch of issues for reproducing metrics transform processor functionality in the transform processor, and one of them is aggregation via labels. So there are a couple of issues about that if you want to share your thoughts. The concept of dropping attributes and not re-aggregating is something the transform processor allows us to do today.
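For illustration, dropping an attribute with OTTL looks roughly like this (exact config keys have changed across collector versions, so treat this as a sketch). The hazard being discussed: series that differed only in the deleted attribute now share one identity, and nothing re-aggregates their values:

```yaml
processors:
  transform:
    metrics:
      queries:
        # Series that differed only by host.name now collide on the same
        # identity; their data points are NOT merged or re-aggregated.
        - delete_key(attributes, "host.name")
```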
E
So we're already in the space where the transform processor lets you hurt yourself, and we have taken steps to help users understand that you can do that — and that this is the outcome if you do it wrong.
E
I'm
very
interested
I'll
get
the
I
can
post
the
issues
about
the
metric,
transform,
processor
and
aggregation
in
the
Sig
dock.
For
you
perfect,
thank
you.
B
I had a quick question — not to block anything, because obviously I'm not reviewing this, so who am I to — but I'm curious: when dropping, since that's specific to traces, is it dropping a span or a trace? A single span, I think. And in that context, say it's an internal span whose parent is a server span.
B
There's a string of internal spans. Its parent — the root span within the trace — is a server span. You drop all the internal spans.
E
Nope — just like the filter processor, the transform processor doesn't provide any guardrails on what you're dropping. That's why Bogdan asked about orphan traces: you'll easily create orphan traces if you start dropping telemetry left and right.
B
That's — yeah, for sure. Is there functionality to allow you to sort of…
H
Like, there's a…
B
The context is, we have an internal implementation of a sampler which bridges — which connects to the parent span.
F
Essentially, yeah — right. Eric, to answer your question: I think that should be a separate processor, and we already have what we call the group-by-trace processor, or something like that, related to this. Essentially, you need to group the trace and then apply this logic: if you don't have the trace grouped — the child span together with the parent span — you can't modify the IDs; with grouping, you might be able to do something like this. So you need state, and there is none in the transform processor.
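The groupbytrace processor mentioned here buffers spans until a whole trace has been assembled, which is exactly the state a trace-aware drop would need. A minimal pipeline sketch:

```yaml
processors:
  groupbytrace:
    # Buffer spans so that a trace's spans arrive together downstream —
    # state the stateless transform processor does not have.
    wait_duration: 10s
service:
  pipelines:
    traces:
      receivers: [otlp]
      # A trace-aware processor that drops or re-parents spans would sit
      # after the grouping step.
      processors: [groupbytrace]
      exporters: [otlp]
```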
B
Right — yeah, I agree. And the additional context is that our internal implementation lives as a subclass of the Ruby SDK sampler. So it's not like we have something — I'm just evaluating whether this would be an alternative. I don't think it would be, but it's good to understand the behavior, so I appreciate it.
E
Cool. Bogdan, I threw out the two issues that have been opened recently around re-aggregation of metrics with labels, if you want to take a look. One of those issues is nested under a larger issue about what features the metrics transform processor has that the transform processor can't do.
C
It stayed from the previous — exactly, state in that case, yes — between different metrics.
E
So I think my only question would be: the metrics transform processor is a stateless processor today, so is it safe to assume that anything the metrics transform processor does, the transform processor could be made to do, and we could deprecate or sunset the metrics transform processor?
F
Now you get a message and you expect that all the data points are in the same batch, which is not the case, because somebody made a change in the batch processor — which I want to revisit — to split at the data-point level, not at the metric level. That causes you to not have all the data points of the same metric in the same message, even though you may have sent them that way. So that's another problem we have if you have the batch processor in front of this.
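The batch-processor behavior being described: when an assembled batch exceeds the maximum size it is split, and the split can fall at the data-point level, separating points of the same metric into different messages:

```yaml
processors:
  batch:
    send_batch_size: 8192
    # Batches larger than this are split before export; the split can
    # separate data points belonging to the same metric.
    send_batch_max_size: 10000
```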
E
Okay, yeah, that's fine. If the problem is outside of both the transform and the metrics transform processor — if it's more a problem that they would both have — then yeah, we can have a larger discussion about that. I'm not super into metrics, so someone else might end up driving that work with the transform processor, but that definitely is a conversation that should be had at a future time.
F
I think we should find a champion — a person who wants to lead that conversation — and I can share all the thoughts and all the caveats there. But unless we have somebody actively looking into this, I don't think it's a particularly useful use of our time.
C
Hello, it's Andre. I wanted to try to help with triage in the opentelemetry-collector-contrib repository, and I had trouble understanding what I actually need to do when I see an issue that has a needs-triage label. I posted on Slack today — thanks, Ivan, for your thoughts, that's very helpful. There's actually a nice diagram in the specification repo…
C
…that I wasn't aware of. I wonder if we want to do something similar for contrib — for the collector.
J
I can speak to this a little bit. We've been revising how we do triaging as we get more workflows to automate a lot of parts of it. I wouldn't mind moving toward something similar to what the specification does, although I don't know that the way the specification does it is really going to work for us, mind you.
J
This is mostly for the contrib repo. I think that for the collector core repo it's a bit of a different beast, so the process there would look different.
J
But I do think that, now that we have some of these workflows in place, it would be good to document this. Again, I'm not sure it'll look exactly like the spec, but I think having something like it would help.
C
I think currently the issues in contrib are not being assigned the way they are in spec — they're being automatically assigned to one of the approvers, right? I think so, but maybe not — I don't know. Would it make sense for these issues to be assigned to code owners of specific components? If it was for the host metrics receiver, then assign it to one of the code owners of the receiver.
J
Code owners are pinged automatically when most issues are opened, since we have the component dropdown. It used to be that you would remove the label when the code owners were pinged, but I think now we should move toward a process where this label is removed when the issue is considered accepted — whatever that definition is, but it's likely something like: this is an actual bug, or the enhancement actually makes sense, and so on and so forth.
J
We don't have that documented anywhere, so it's kind of up to the code owners what they want to do right now. But at the end of the day, I think the most sensible definition for needs-triage is: we don't know whether this issue is valid or not, and the label should be removed once we've determined that.
J
So triagers are helpful because of what they can add: anyone with the triager role in a repo can add and remove whatever labels they like. We do expose a handful of labels through the comment workflow, which you can see in the link on your screen right now, but not all labels are exposed that way, and generally that's where triagers can come in. They usually have a bit more knowledge of the repo, and they can help apply additional labels and route issues where necessary.
E
It's
another
thing
that
the
triagers
can
help
with
is
like.
If
they
know
a
component
well
enough
and
they're
like
super
active
in
the
community,
then
they
can
help
with
the
user.
Before
maybe
a
code
owner
may
appear,
because
sometimes
our
code
owners
they
may
not
be
as
active
as
other
community
members
and
I
know.
The
triagara
is
react,
really
active.
F
I
also
think
that,
thanks
to
even
the
the
job
changed
this,
this
job
initially
was
about
applying
labels
and
then
doing
this
work,
but
even
try
to
move
us
to
a
more
automated
way,
which
is
better
so
I
think
the
trial
now
becomes
more
or
less
building
this
automation
that
even
was
driving.
So
if
you
want
to
help
with
this,
you'll,
probably
better
help,
even
with
with
all
these
automation,
so
we
we
make
the
repo
self-served
or
the
owner
self-served.
F
That's probably — so the goal would be to not have this dedicated triager role for this; it's more or less to automate everything in the repo so that we don't have to have a dedicated person just to do this. Though we still need people in the triager role — what I think they should do is raise awareness about some of the bugs or things that we may miss, and escalate them.
H
So we were looking for some input on this issue we opened yesterday. We're looking to add some new functionality to the filelog receiver to remove log files after they're fully read — delete them from the file system. For context, we have a customer with a bunch of log files that are complete; they're moved into a directory, and they want them read by the collector and then removed, because they're no longer useful.
H
At that point all the telemetry data is in the collector and exported. So really what we're looking for is: is this a feature worth adding to the collector? Do we have any concerns about it, or are there other ways to solve this besides having the collector delete the file?
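A sketch of what the proposed configuration might look like — the option name here is an assumption for illustration, not the final design from the issue:

```yaml
receivers:
  filelog:
    include: [/var/log/finished/*.log]
    # Read completed files from the beginning...
    start_at: beginning
    # ...and (hypothetically) delete each file once it is fully read
    # and its telemetry has been consumed.
    delete_after_read: true
```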
K
Yeah, I hadn't gotten around to responding to this yet — just giving some thought to the security concerns. I think there's a valid use case here, and there are examples of other log agents handling things like this. I think it makes sense; I just want to make sure we can do it in a way that's secure.
L
Sure — hey everybody. Yeah, so there's a little bit more context on this.
L
Basically, we have a use case where a customer has a bunch of different accounts, and they want to send telemetry data to a lot of different accounts based on some custom logic. The number of accounts could be tens, dozens, hundreds. So I'm trying to figure out whether the collector can even support something like this in a generic way. When I look at the collector, I see that it's using gRPC, so what I was attempting to do is create an interceptor that can dynamically set secrets in the headers — because we don't want these API secrets leaked in, say, resource attributes. So I'm trying to see…
F
Yeah — have you looked at the load-balancing processor — or exporter, I think it's called? The one where you can configure rules on how to load balance — I think one is by trace ID, otherwise by other things — and that allows you to route. Or, I know, it's the routing processor or something. Let me find it.
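For reference, the routing processor is configured with a static table — which is what makes it awkward at tens or hundreds of accounts, as discussed next. The header name and exporter names here are illustrative:

```yaml
processors:
  routing:
    # Route on a request header (requires include_metadata on the receiver)
    # or a resource attribute.
    from_attribute: X-Tenant
    default_exporters: [otlp/default]
    table:
      # One entry (and one configured exporter) per account —
      # hard to maintain at scale.
      - value: account-a
        exporters: [otlp/account-a]
      - value: account-b
        exporters: [otlp/account-b]
```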
L
Yeah
the
issue
with
the
routing
processor
is
that
the
number
of
I
guess
the
number
of
accounts,
or
so
like
the
number
of
API
Keys,
is
large.
So
like
we're
talking
about,
like
you
know
dozens
or
maybe
hundreds
so
like
for
the
routing
processor,
it
makes
sense
for
like
a
small
number
of
routing
use
cases,
but
when
you
talk
about
like
like
being
able
to
dynamically,
determine
like
hey
I
want
to
send
it
to
this
particular
account
and
I
understand.
L
This
is
probably
like
a
very
specific,
like
The
Relic
use
case,
but
I
looked
at
the
routing
processor
and
it
it
it
does
work
for
some
stuff.
But
the
number
of
the
number
of
API
keys
that
we're
talking
about
it.
It
would
be
difficult
to
configure
that
what.
E
What about the headers setter extension? That's the one that knows how to get the API key from the incoming request and then pass it along. Is that useful?
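The headers setter extension forwards a value from the incoming request's context into an outgoing header, so the secret never has to appear in resource attributes. A rough sketch — the header and metadata key names are illustrative:

```yaml
extensions:
  headers_setter:
    headers:
      # Copy the tenant's API key from incoming request metadata into the
      # outgoing request header.
      - key: Api-Key
        from_context: api-key
receivers:
  otlp:
    protocols:
      grpc:
        # Required so incoming request metadata is kept in the context.
        include_metadata: true
exporters:
  otlp:
    endpoint: backend.example.com:4317
    auth:
      authenticator: headers_setter
```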
E
Me,
a
relatively
new
one,
I
think
I
think
that
jerosity
and
Co
Bruce
worked
on
it.
F
No,
that's
you,
you
mean
the
conflict
about
that
Anthony
posted
so.
F
I
can
figure
out
that
so
the
config,
oh
it's
an
interface
at
an
extension.
You
can
Implement
Alex
and
is
part
of
the
configuration
for
almost
every
component
that
we
have.
Every
exporter
accepts
out
and
you
specify
an
extension
and
that
extension
looks
more
or
less
I
think
actually
is
a
grpc
Interceptor
in
case
of
grpc.
F
So,
even
though
it's
not
on
out,
you
can
hijack
Us
in
whatever
hackers
a
bit
and
and
put
your
logic
there
for
for
this,
via
that
I
I
expect
that
will
work,
but
still
still
the
user
has
to
configure
the
100
accounts
correct
in
this
extension.
So
so
I'm
not
sure
this
is
more
scalable
than
the
routing
processor.
You
still
need
to
to
configure
the
thousand
hundred
tokens
that
you
mentioned
right.
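Wiring a custom auth extension in, as described: every exporter accepts an `auth` setting naming an extension, and for gRPC the extension effectively acts as a client interceptor. The extension name and its settings below are hypothetical:

```yaml
extensions:
  # Hypothetical custom extension implementing the configauth client
  # interface; for gRPC exporters it behaves like a client interceptor.
  myaccountauth:
    accounts_file: /etc/otel/accounts.yaml   # still one entry per account
exporters:
  otlp:
    endpoint: ingest.example.com:4317
    auth:
      authenticator: myaccountauth
```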
L
Yeah
and
there's
other
issues
with
this,
because
it's
kind
of
like
the
way
that
it's
kind
of
accepted
right
now
is
that,
like
there's
one
collector
that
is
managing
all
of
these
secrets,
and
so
presumably
there's
like
a
there's
like
a
fan
in
where
you
have
a
bunch
of
it.
You
have
a
bunch
of
collectors
that
are
sending
Telemetry
to
one
kind
of
service
level
collector
and
so
like
there's
potential
that
that
has
to
scale
out
horizontally
to
handle
the
volume
of
of
telemetry
and
so
like
I,
haven't
really
solved
that
problem.
Yet.
F
Yeah,
but
indeed
it's
it's
possible
to
do
with
the
config
out
hack
or
to
the
process
Matrix
processor,
so
both
of
them
are
valid
questions
regarding
your
issue,
I,
don't
think
I,
don't
think
there
is
any
other
way.
I
mean
the
other
way
would
be
if
we
have
another
section
in
the
config
called
interceptors
or
something
like
that
and
transform,
and
you
can
build
an
extension
that
implements
the
inter
the
grpc
Interceptor
interface
and
then
we
can
install
them
correct.
L
Yes — yeah, I'll take a look at that. It may solve my use case.
K
Yeah, I don't see that one — Dan's joined, so it looks like we have to skip that one again. Sure. So the last item here: I just wanted to call attention to this connectors implementation. I'm looking for feedback on it, and if anyone wants to kick the tires — I think it's a pretty interesting feature set, and I feel pretty good about where it's at, but I'd definitely like some more buy-in on trying to move this thing forward. Please take a look.
F
So
there
are
two
problems
here
that
we
need
to
to
get
feedback.
First,
about
the
configuration
and
second
about
the
implementation.
F
My
top
priority
would
be,
let's,
let's
make
sure
we
get
the
configuration
right
and
then
the
implementation
we
can
play
with
it,
and
we
can
change
that
if,
if
needed
so
focused
on
policy
review
on
the
configuration
I
already
reviewed
that.
So
that's
why
I
was
talking
to
Dan
about
other
things
in
the
PR.
But
I
would
like
any
new
person
to
look
at
the
configuration
and
see
if
that
makes
sense
to
them,
and
they
would
understand
how
to
configure
this.
How
to
use
this.
F
So
so
this
is
just
an
action
item
for
us
then
to
to
look
into
that.
I
saw
your
comment
on
slack.
I
will
try
to
to
do
one
more
round
today
to
to
tell
you
the
definitive
answer
for
that
I
think
we're
getting
there.
But
let
me
let
me
take
another
look.
There
are
lots
of
things
happening
right
now,
but
I
I
will
try
to
find
time
to
to
make
sure.
Looking
at
that,
thank
you.
F
And
for
the
other
dance
question,
Alex
I
think
we
need
to
find
somebody
who
who
we
want
to
answer
that
I.
Think
it's
it's
reasonably
good.
I
I
mean
it's
a
good
question
and
they
they
have
a
very
reasonable
use
case.
I
and
I.
Think
we
should.
We
should
care
about
that
and
provide
some
guidance
there.
D
I
I
have
one
more
item
that
wasn't
on
the
agenda.
It
seems
like
in
the
contributor
CI
has
been
failing
on
the
Windows,
build
a
hell
of
a
lot
I,
don't
know
if
we
have
anybody
here
that
works
on
Windows
machines
or
has
access
to
Windows
machines
to
try
and
make
some
of
the
workflows
work
better.
D
My
current
work
to
work
on
Windows
is
not
great,
so
I
can't
really
spend
much
time
trying
to
improve
make
improvements
there,
but
it
seems
like
it's
failing,
basically,
every
other
build
and
I
think
we
should.
We
should
consider
almost
disabling
those
builds
if
they
keep
getting
in
our
way,
because
it's
it's
almost
ridiculous
to
wait.
D
45
minutes,
I
I
spent
a
little
bit
of
time
digging
into
some
of
the
reasons
why
the
windows
builds
were
so
slow
and
it
turns
out
that
the
implementation
of
gzip
is
really
slow
on
windows,
so
even
like
extracting
the
go.
Cache
takes
10
to
15
minutes
at
times
and
I
tried
a
couple
workarounds
and
couldn't
actually
get
anything
to
work,
but
anyway,
so
that's
only
one
part
of
the
problem.
D
The
other
part
of
the
problem
is
that
tests
are
failing
intermittently,
so
anyways
just
wanted
to
call
that
out,
because
I
do
think
we
need
to
spend
some
time
there.
It's
it's
killing
our
pipeline.
F
I'm not sure we should completely disable Windows tests, given that we claim that we support Windows. I mean, if we disable all the tests, we shouldn't say that we are completely compatible with Windows, right?
E
Are the tests that are failing on Windows the same every time, or do different tests just fail constantly?
F
Let's start filing issues and skipping tests: if we see a test fail two times, let's document that and start skipping it on Windows. At least that's a good start toward making our runs faster. The other idea, by the way, if we don't want to run the Windows tests on every change, Alex: one option is to not run Windows tests for every PR.
F
So
essentially
what
we
can
do
is
run
it
only
on
the
merge
PRC
in
Main
is
that
will
remove
a
lot
of
the
things.
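One way to express "Windows tests only on merges to main" in GitHub Actions — a sketch; the real workflow files in the repo are more involved, and the action versions here are illustrative:

```yaml
jobs:
  windows-test:
    # Skip on pull requests; run only when commits land on main.
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: "1.19"
      - run: go test ./...
```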
F
I
I
believe
that
there
are
lots
of
similarities
between
Windows
and
Linux
in
terms
of
the
the
how
go
is
implemented
and
stuff
and
I
I'm,
confident
that
if
we
run
Windows
only
on
the
merge
PRS
and
we
cannot
release
on
unless
the
windows
is
green,
it's
good
enough
to
not
stay
into
our
our
way
of
of
making
progress
and
it
is
going
to
improve
a
lot
of
our
GitHub
action
usage
because
lots
of
time
is
spent
on
windows.
So,
if
that's
an
acceptable
idea,
I
think
we
should
proceed
that
way.
F
I
I
mean
if
we,
if
there
is
no
change
on
specific,
like
a
receiver,
so,
for
example,
if
we
have
one
change
only
one
receiver,
why
don't
we
run
Windows
tests
on
under
that
particular
receiver?
And
that's
it.
A
Linux tests run on everyone and everything, but Windows tests just on the specific components — okay.
F
But even — yeah, I understand the dependency problem. Even if we keep it only for clear changes to a specific component, where you're not touching shared paths, it's still better than not running anything at all. That's my idea.
F
Or maybe we add some more automation: whenever a build on main fails, we send an email or a Slack message — whatever we can think of. I think the Slack integration would be good: if the build on main fails, it pings us on Slack.
D
About the question from Brian: can we throw more compute at the Windows tests? Yes—
F
I tried that — I don't know if you saw the community issue — I tried it for the demo repo, and it cost fifty dollars for one build.
F
No, it's absurd how much it costs — and that was Linux, not Windows, by the way, the fifty dollars. I think it's one cent per minute or something like that — we have probably 100 or 200 minutes... no, that 600 was for fifty dollars, so it's 10 cents per minute, around that. So at 10 cents per minute of running, and with builds of 45 minutes or so, money is going to flow a lot here.
D
You
know
the
odds
that
I
got
that
VM
running
the
day
before
the
release
that
broke
windows
again
are
just
astronomical.
E
It's
you're
stuck
with
it
now.
F
Oh — the next release. Tigran looks like he's not responding there; I will ping him. When do we want to do it — do you want to start it Monday, or what is the preferred timing, Alex and Pablo, whoever's involved in that discussion?
G
I'd say Monday makes sense. Okay — usually it takes us a couple of days to do it, so yeah.