From YouTube: 2021-10-13 meeting
Hi everyone, it's three minutes after, so I think we should probably get started. I know Tigran can't attend and I don't see Bogdan, so I think I'll jump in and try to lead here. First of all, please sign in if you're on the meeting. First item, from Tigran: review the config and value sources proposal.
B
All right, so yeah, please take a look. There's a poll out there for how to schedule the working group, so please fill that out if you are interested in joining.
B
The third issue is mine. I'm curious, maybe those who've been involved with the collector longer than I have can say, if there's any plan, concrete or otherwise, or any real way currently to consume the obsreport package and the signals that are provided to it. It strikes me that the collector is in need of more internal instrumentation and, as I understand it, we are eventually going to be using the OpenTelemetry Go library to instrument the collector, but aren't doing that until it reaches a certain point of stability.
B
All right, I think I'll write up a proposal and send that out.
B
Juraci, did you want to go next?
C
Sure, yeah. I think it might be kind of related to the previous one, but it's actually something that came up today. So someone asked...
C
...whether we had latency information for receivers and exporters, and I thought we had. Does anyone else remember having that kind of histograms or, you know, things like that for receivers and exporters, and potentially processors as well?
B
Yeah, do you think the obsreport package is capturing, like, a trace or a span for the life cycle of...
C
Okay, so I'll take a look now. If it's not there, is it something that we want to have there?
B
I would think so, as long as it's not, you know, affecting performance of the collector too much. That's just my opinion.
C
Right, so I think end-to-end is kind of difficult if we have things like the batch processor in between.
D
Sure, so I created...
D
Basically, when we are using the Prometheus receiver and we try to relabel the job label in the tags for the targets, or relabel the job or instance labels for the metrics...
D
So we are unable to do it because of the way that Prometheus sends us the labels, and we try to derive the metrics from the labels.
D
So since the labels that may be relabeled are considered unique, and we try to identify the targets based off of these labels, and since they are relabeled, it's not able to find the right targets, because of which the metric collection is failing.
D
So that's the summary of the issue and that's what I found by digging into the code so far, and there are a couple of ways to solve this. One of them is to basically change the way Prometheus sends the metrics, by adding both the old and new labels, just for job and instance. So that's one of the ways, but that would be very custom to the OTel Collector.
D
I'll give it a try, and I will check if it works. So far I've seen that they create hashes for the label sets, which might not be a problem in Prometheus, but...
B
All right, Travis.
A
Yeah, so we're just wondering what the steps are, once a pull request has been approved, to get it merged. We just fixed the lint error, but we saw we don't have...
A
We don't have permission to merge, so when we're ready to merge, we're just wondering what the steps are.
B
Yeah, I think... sorry, go ahead.
C
Yeah, I was just gonna say, I mean, once the build is green, once every step is passing, then a maintainer would merge this PR, namely Tigran or Bogdan. Okay, so they both seem to have approved already.
C
There are also a couple of other approvals, so they're probably just waiting for the build to be green again.
C
Now, if you don't see that merged in, you know, a reasonable time, feel free to ping them directly, or you can try pinging in the channel, and they will merge it.
G
Hey everybody. Sorry, I'm, like, not very good at managing my Google tabs. Cool, yeah. I've been poking or looking around, and I'm not sure if any relevant folks are in the meeting. I know it's a busy day because of all the conferences and stuff. Yeah, I was kind of checking in: one, if I'm even in the right place; two, I was curious around who to kind of work with, or liaise with, around the semantic conventions.
G
There's some work on underlying components going on in the OpenTelemetry Go SIG, sorry, OpenTelemetry Go repo, that Tigran is working on, so I was kind of hoping to ask him whether there were plans to then add a, you know... basically, what are the next steps around this processor? It seems like there's some foundational work being done to support translating semantic conventions based on schema URLs in the collector, but I wasn't clear on, like, what the roadmap is, and also, like, what the feature set of that processor would be. Those are specifically my questions, and if no one knows, that's fine, you know, we can...
F
As well as with Bogdan and Tigran, but I think mostly Bogdan and Anthony have been working together on the, you know, collector semconv support based on the Go updates, and we are kind of working in lockstep on the Go SDK and reusing that for the collector, with, you know, a subset being applied for the collector.
F
Right, so again, please feel free to ping him there. Here he is. Bogdan, did you want to talk about... you're on mute... Bogdan, on semantic conventions?
G
Sorry, which semantic convention? So, this OTEP 152 merged schema URL, like the kind of first support for schema URL, and with that comes, sort of implied, a bunch of future work around, like, using schema URL to enforce schemas, and translating between schemas, and all that fun, incredibly fun stuff. And so I'm trying to... one, I'm confused: there's work going on in the OpenTelemetry Go SDK, but I'm confused on whether we're doing any of this translation work in the Go SDK. Like, is it meant to be done in all language clients?
G
I have questions on, like, what's next. The OTEP 152 has a bunch of proposals around, like, kind of future work, what's next, like defining, for example, root schemas, defining backwards compatibility. And it's clear that people are working on something, but it's unclear what that is. And I'm not asking to say, like, work harder or faster (although, like, yes, everyone should do that all the time), but yeah, more like, I'm trying to understand where I might be able to jump in and help. Okay.
H
So I think people discussed, and they found that it is a strong requirement to have some kind of support in the SDK, especially around the fact that if you link together multiple components with different schema URLs, you may have trouble, especially with the current definition of schema URL, which applies to the entire span and so on. So they kind of found out that there needs to be some kind of support in every SDK. That being said, definitely we need to have something in the collector, at a higher level.
G
Okay. Yeah, like was said, I can sort of ping Tigran and Anthony on that issue, and, you know, they're nice people, and I'll try not to bug them too late at night, and whatever, I'll be able to interact with them, yeah. I think, personally speaking, we're interested in having some support for schema URL. Specifically, like, maybe a custom schema URL that handles some sort of, like, root parentage relationship, like a superset of OTel that can also incorporate, like, a company's specific...
G
You have one open, or... so, I see in the OTEP there's, like, things to do, there's like a section of stuff that might get done next, and it's a bunch of relevant stuff. So I don't want to say... so I don't have an issue, I'm just working on implementing schema...
G
...URL in Ruby right now, that's, like, my first thing, and then it's unclear whether I should jump around to some other languages that we use internally and implement it there, or whether I should maybe jump into, you know, helping get some sort of, you know, parent root thing or backward compatibility work done. So I think the TLDR is: I need to stop talking here and start talking to Anthony and Tigran. The other thing...
I
Hi, so this is an existing issue that Josh created some time back. At a super high level, the thing we're trying to get done is to allow automated processing of collector logs for when interesting things happen, right? So you have some integration that you've set up, and you find out, oh, I can't talk... my MySQL receiver can't talk to MySQL, because the username and password are invalid, right?
I
So ideally, we'd be able to find a precise error code, like a string or a number, to match on, rather than saying: here's a long, natural-language, English-localized string that we're matching on, and then hoping to keep it up to date.
I
I mean, I think there's a lot of prior art for doing this sort of thing in compiler error messages, for example. So I think, Bogdan, you had some feedback on this being similar to returning the error in the message, and I think Josh responded to that. I just wanted to draw the community's attention to it, get some more feedback or thoughts on the issue.
H
So yeah, the whole idea...
I
The gRPC errors? Yeah, I think that was proposed as an example. The point I was making was, like, this is a strawman that people have previously considered, but just some way of saying: here's how you put precise information into your messages.
H
So that's why... I know you work for Google, but those are not gRPC errors; they predate it, they are way back in time. Google uses them not only for RPC but for IO operations and for a bunch of other places. So I'm fine with using this, but I think, if we want to do this, maybe an option for us would be to copy the codes and create our own errors instead of using the RPC protocol. Just because...
H
So I think it's a very good idea to have these errors and adopt the Google work, especially in...
H
More or less, the way it was designed and stuff, nobody had a half-pager explaining why, but I understand now why you want to have this. And in the future, I recommend starting by filing an issue: instead of dumping the PR on us without any context, give us a half-page issue explaining, hey, I have this problem, I think we should adopt this, and here is a prototype.
J
So I have a quick query. Just recently, the 0.36.0 OpenTelemetry Collector image was not pushed, so we have been spamming the release channel and CNCF, and Bogdan, it's like: can you just push that image? Because there was a small mistake in the release PR which we created, so there was a "v" as the prefix, so now the image doesn't exist. So, Juraci...
H
I'm confused. So did we not publish the...
F
It's not really gone; there's an error in the naming beneath it.
C
Just... yeah, I just pasted the commands here in the agenda. It's basically that the new releases repository was tagging with a v0.36.0, whereas we need it without the v. So previously we published without the v, right? So, to keep consistency with the previous versions, we should publish images without the v.
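The fix described could look roughly like this retag-and-push sequence (the image name and registry are assumptions; the tag-stripping is the point):

```shell
# Strip the leading "v" from the release tag before pushing the image,
# so image tags stay consistent with earlier releases (0.36.0, not v0.36.0).
TAG="v0.36.0"
IMAGE_TAG="${TAG#v}"
echo "$IMAGE_TAG"

# Hypothetical retag-and-push; the image name is an assumption:
# docker pull otel/opentelemetry-collector:"$TAG"
# docker tag otel/opentelemetry-collector:"$TAG" otel/opentelemetry-collector:"$IMAGE_TAG"
# docker push otel/opentelemetry-collector:"$IMAGE_TAG"
```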
H
Okay, live pushing... so it's there now. Nice. Thank you. Any other topic?
K
Yes, yeah. So I just wanted to bring this up to the community here: there was a change in the behavior of the OTLP exporter, to connect insecurely rather than securely by default, in the current main branch. So I opened an issue for it, and Juraci, thank you, you've already commented on it. I just wanted to ensure that we don't make a release of 0.37 that changes that behavior, if possible.
F
Yeah, what changed, Alex? What behavior changed?
K
Yeah, so by default, the OTLP exporter will try and connect securely to whatever endpoint you specify, and now it no longer does this, unless you add the TLS insecure-false setting.
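Until the default is settled, pinning the old behavior explicitly in the exporter config sidesteps the change (the endpoint here is a placeholder):

```yaml
exporters:
  otlp:
    endpoint: ingest.example.com:443  # placeholder endpoint
    tls:
      insecure: false                 # state the secure default explicitly
```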
C
So the previous default was that connections would be secured by default, right? Yeah.
C
The change here was not related to that. It was, I think, just adding, or aligning, the TLS options with the other options, making TLS one extra node in the configuration. And as part of that change, the default for insecure has changed: previously it was, you know, insecure false, so a double negative, meaning it should be secured by default, right?
C
So now we are kind of breaking, especially, the default configuration for distributions, or for exporters that rely on the default being secure. Meaning, if I'm sending data to, just to say a name here, Lightstep: they're using port 443, so they are secure, but the new default is going to be insecure, so it's going to fail.
M
My name is Jonak. I have a question for Bogdan: I have a PR, which is about a compression method for gRPC, and it's still waiting to be merged. But are there any concerns, or changes that are required of me, to do that?
H
To be honest, my main concern about this is the dependency it uses. Is that a very well-known dependency that we need to bring in? Can we bring it into core or not? Because it brings, for example, some very unstable dependencies, like snappy 0.0.4 or something like that. So are we okay with bringing that whole package into the core, or not yet?
M
To solve this problem, we can implement it on our own.
H
Yeah, but so that's my curiosity: is this the standard library that everyone uses for Go gRPC compression, mostly?
H
I mean, I'm trying to start limiting the adding of random dependencies to the core, and I want to understand: is this the library that everyone uses, or should we not use it?
H
I'm looking for feedback from anyone. If anyone knows that this is the right library to use for this thing, I'm fine using it.
C
I think we might have some compression on Jaeger, and I can take it as an action item to come back and say what we use there. Okay.
C
I know that for the query part, which is, you know, not the gRPC part, we're using plain Gorilla compression of the HTTP stream, and I think it might be the case as well for gRPC, but I have to look into it. Yeah, or perhaps you can take a look and tell us what you find on the other side.
H
Yeah, sorry for not telling the reason, but I kept thinking about whether this is the right thing to do. So yeah, I think we should support multiple encodings, I don't mind that. It's just about: how do we support them? Do we depend on these kinds of libraries, or do we implement them ourselves? Another option, FYI...
H
Do you work for AWS? Yeah. I'm wondering, if Alolita is willing to put her name on an issue to do this later, I can merge it now and you fix it later. But unless she puts her name on the issue, no.
L
No, I... I will, Bogdan, definitely.
H
It's better to create an issue sooner to follow up and determine what's the right dependency. Because I did look: it doesn't look like it brings too many other dependencies, so great, agreed. But I think you raised a good point.
F
Totally agree. I mean, I think the suggestion that Juraci gave is very good. I'd really like to understand, you know, what's the most common set of libraries that are used in the modules, yeah, and then use the best... you know, use the most popular one, really.
H
The other option is to look into maybe AWS: if they have any compressions, even for HTTP stuff, what libraries do they use? Okay.
H
Again, what I'm trying to say is: let's define it. We can merge it right now, as long as, as I said, Alolita, you create an issue, put your name there as owner, yeah, and we follow up.
J
I have a quick question. So I've been trying to use the OpenTelemetry Collector with the OTLP exporter. I have a tracing backend which accepts the OTLP traces from the collector over gRPC. If the backend is not available for a couple of minutes, the collector just says, let's say, context deadline exceeded, or errors of that sort, and after a couple of minutes the collector says this:
J
The queue is full and I'm unable to retry it. So the collector gets into a stuck state, and after a couple of hours I see that, okay, my backend didn't get traces and the OpenTelemetry Collector is no longer retrying. So the only way I can ask the collector to retry is by restarting the pod, which means all the in-memory queued traces are just dropped. Okay.
H
So, let me understand better. What you are telling me is the following: the queue gets full...
J
Yep. So, like, I've been seeing that if the collector says that the sending queue is full, then even for 10-15 minutes I didn't see it retrying, or, like, sending the data back to the backend when it's available. So I'm just thinking that it's not retrying once the queue is full; that's my understanding from looking at the logs and the data.
H
It shouldn't be that, I think. Can you... I'll give it a try and reproduce this, but can you point me to the configuration for the queue-retry part of the OTLP exporter, and the timeout? So the config for the exporter, essentially.
J
Okay, so I'm just using the defaults currently. Like, do you recommend me to use any other? Just tell me the section of the exporter... okay, it's just...
J
With... yeah, that's it. Like, I can just paste it to the issue which I created, yeah.
H
But yeah, just paste it to the issue. It doesn't matter; it has to be a paper trail, not only speaking, so people should know about it. Indeed, it is exponential, Eric.
H
It is exponential backoff retries, and I don't know if the problem is that... if it gets to a point... I think we have it at 10 minutes, if I'm not mistaken, the default for the max exponential retry, and it may get to the point of waiting 10 minutes to do a retry, and you may want to be more aggressive on the retries. So you can play a bit with the retry parameters: set it to a maximum of one minute, to not wait for more than one minute in total for retrying, and maybe the exponential interval should be a maximum of 10 seconds or something like that. Be more aggressive with the retries.
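The more aggressive settings sketched above could look like this on the OTLP exporter (the field names follow the exporterhelper config; the endpoint and values are illustrative, matching the one-minute / ten-second suggestion):

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317  # placeholder endpoint
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 10s     # cap a single backoff interval at ten seconds
      max_elapsed_time: 1m  # give up on a batch after one minute total
    sending_queue:
      enabled: true
      queue_size: 5000      # how many batches may wait in memory
```

This is a sketch, not a tuning recommendation; the right values depend on how long the backend typically stays down.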
H
Then, because once we fail the retries, we will drop that batch and we get another one. It is a queueing problem in general, how much we can keep in memory, but the fact that, if the backend gets back alive, we could not restart sending, makes me feel we are doing something wrong. Do you control the backend? Do you know, about the backend, when it goes down, how does it go down?
J
So, like, we... I work for Timescale, and we are just building a new tracing backend, and when we are trying to insert, like, for some queries, we get this context deadline exceeded in our initial experiments, and when I see this context deadline exceeded, more and more traces come up.
H
Okay, if you can describe this as well in the issue, because I did not see this whole explanation, I will try to reproduce it a bit on our side, to see if I can reproduce it and understand better. Sure, I'll add more details to this. But also, as I said, play with the parameters of the retry, be more aggressive on the retries and wait less, because otherwise you get to the point of waiting a long time and the queue gets full.
H
Yes, doesn't it show this? Oh, it is the last one: it says "queuing", and it's a link.
H
And you see the whole explanation about this, with the picture of what is happening, and...
B
Thank you. Good, all right, I think we've gone through the entire agenda now. I would actually ask my question again, now that Bogdan joined, because I suspect you may have some context here. Basically, is there any detailed plan to make the signals from obsreport available for consumption? I think right now maybe there's, like, a web endpoint, a localhost endpoint you can use, but anything else, like a receiver that will pump them into a pipeline, or anything like this?
H
If you want to put them into a pipeline right now, then the simplest way is to put a custom Prometheus receiver that points to that endpoint. Oh, I see, okay. That's the simplest way right now. We are working on switching to OTel metrics, and then we'll give you better things, but that's the current solution.
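A sketch of that workaround, a Prometheus receiver scraping the collector's own metrics endpoint (8888 is the collector's default internal-metrics port; the job name and the downstream exporter are made up):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector-self   # hypothetical job name
          scrape_interval: 10s
          static_configs:
            - targets: ["127.0.0.1:8888"] # collector's own metrics endpoint
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]  # forward self-metrics like any other pipeline
```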