From YouTube: 2021-07-15 meeting
F
You, I guess... and yeah, don't embarrass me.
G
I just wanted to say hi. I'm Ludmila, I'm from Microsoft, I'm working on the Azure SDK, and I came back after some break. Before that I was working in this same area on a sister team, so I just wanted to say hi and introduce myself. I will be participating from time to time; you might have seen me somewhere on GitHub.
D
Yeah, I've spent time discussing all sorts of interesting nested span issues. So nice, yes, welcome. Thank you.
B
I guess I should say hi as well. My name's Ryan. This is also my first meeting of this, but I work at Traceable, so, Paval — I worked on the same project as Paval.
D
So, Ryan: is Adrian working with Traceable at all, or has he stepped away from that?
E
He mentioned he's doing service mesh something, somewhere, now. Interesting.
E
Honor,
I
did
mention
that
he's
happy
having
moved
on
from
tracing.
D
For people who don't know Adrian: Adrian Cole is the — I don't know, founder, primary maintainer, I'm not sure exactly what it was — so he's a Zipkin lead at the very least.
G
Yeah, sure. So I want to describe my mental model for thinking about these nested spans. Imagine there is a database and it makes a call. We call it the client span, but it's actually a database call, right, and underneath it may have different protocols.
G
HTTP, gRPC, you name it — whatever proprietary binary protocol, anything which can also be instrumented, and it's also a client span, unless we want to suppress it somehow. Let's forget about suppression for a moment. And then, in the future, under HTTP we might have more, right? You can imagine long-polling HTTP with micro-operations internally, which people may want to instrument someday, so there could be potentially more layers there, and there could be even more potential layers higher than that, above the database.
G
If
there
are
some
complex
operations
that
people
want
to
trace,
I
think
about
it
in
the
way
that
okay,
we
have
those
layers
and
it's
it's
totally
valid-
to
have
multiple
layers.
G
It
may
be
two
verbose
for
some
customers.
Some
of
them
only
want
the
outer
layer.
Maybe
others
only
want
inner
layer
and
probably
by
default.
We
should
show
everything
I
think
the
problem
we
have
today
is
that
first
we
never
agreed
on
this
layering.
We
never
talked
about
the
http.
How
how
we
think
about
http
instrumentation,
for
example,
but
also
the
the
immediate
problem
we
have
is
that
we
have
duplicates
of
the
same
instrumentation,
for
example
http.
G
So
we
have
the
client
libraries
like
azure,
sdks
or
aws,
who
do
http,
calls
and
instrument
them
as
well.
We
also
have
multiple
http
clients
using
each
other
and
producing
the
same
http
spends.
E
So, from Nikita: in our code base, currently we have a client span key to prevent nesting of any client spans.
E
So if we instead made that key a database client span key and an HTTP client span key, then we could allow database and we could allow HTTP — we could capture HTTP under database, but we could suppress the two databases or the two HTTPs.
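A minimal sketch of what per-convention suppression keys could look like. This is a toy immutable context for illustration only, not the real `io.opentelemetry.context` API, and the key names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Toy immutable context sketching the proposal: one suppression key PER
// semantic convention instead of a single "client span" key.
final class Ctx {
  static final String DB_CLIENT = "db-client-span";     // hypothetical key name
  static final String HTTP_CLIENT = "http-client-span"; // hypothetical key name

  private final Map<String, Boolean> entries;

  private Ctx(Map<String, Boolean> entries) { this.entries = entries; }

  static Ctx root() { return new Ctx(new HashMap<>()); }

  // Returns a new context with the key set; the original stays unchanged.
  Ctx with(String key) {
    Map<String, Boolean> copy = new HashMap<>(entries);
    copy.put(key, Boolean.TRUE);
    return new Ctx(copy);
  }

  // Suppress only same-kind nesting: HTTP under HTTP or DB under DB is
  // suppressed, while HTTP under DB is kept, preserving the layering.
  boolean shouldSuppress(String kind) { return entries.containsKey(kind); }
}
```

With a single shared client-span key, the HTTP-under-database case would be suppressed too; splitting the key per convention is exactly what keeps the layering.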
C
Remove the duplicated span which would have the same semantic convention — applied to that duplicate HTTP, duplicate DB over there. Okay, and the mechanism for that, how we want to do that, is via specific keys in the context, and specific keys means like DB client, HTTP client, whatever, messaging, what not.
E
Yeah, so my thought here would be — I mean, I don't know how to avoid having to add those hacks that essentially access the context keys. I know we don't expose the context keys themselves, but wrappers around that that, you know, users would have to call; they would have to participate in that dance.
E
We,
we
have
instrumenters
for
each
of
the
semantic
conventions
that
already
handle,
like
the
the
client
span,
key
on
the
nesting,
so
that
could
that
will
be
hidden
away
from
users
of
the
instrument
or
api,
and
they
wouldn't
need
to
do
anything
to
extract.
C
Okay,
so
to
summarize
again,
if
I
understood
correctly,
we
will
provide
like
convenient
api
for
manual
instrumentation
to
act
to
participate
in
this
dance,
but
we
still
accept
that
the
client,
the
users
will,
we
are
going
to
you,
not
use
our
convenience
api
and
there
will
be
nested
spans
like
one
for
client
and
one
for
instrumentation,
that's
okay!
We
we
have
politicians,
okay,
okay,.
C
Lower
level
details
like
kls,
dns
connection,
pool
retriever
retrieval
what
else.
G
Oh,
you
try
send
redirects,
I
think
it's
a
different
beast,
but
I
think
we
should
assume
there
are
lower
levels,
so
there
might
be
lower
levels
in
future,
maybe
not
for
every
customer,
not
all
the
time,
but
there
there
might
be
lower
levels
but
like
we
we
should.
This
list
should
be
extendable.
The
list
of
these
keys
right,
but
do
do
you
think
we
need
to
consider
all
of
this
possibilities
right
now.
E
Sort
of
where
it
made
sense
to
me
when,
where
this
proposal
started
me
really
made
sense
to
me
was,
was
that
it
sort
of
apply
each
layer
is
one
of
the
existing
semantic
conventions.
C
Okay, and does it mean — first, a quick question: will we allow some configuration to say "I really do want all those nested spans"?
C
One client of ours wants to have exactly all those nitty-gritty details, like DNS, connections, retries.
C
But, for example, in the Java agent right now we have this very specific case: we have Reactor Netty, and the WebClient on top of Reactor Netty, which uses the Netty instrumentation underneath — connection establishment, for example, that's Netty instrumentation — but currently that Netty instrumentation in Java produces HTTP semantic convention spans.
G
And here it is — I think this is where we don't have clarity around the HTTP convention, so it should tell us exactly, and we should make this clarity. We should say: is the HTTP span a high-level span that may include retries? My argument is that's not feasible, and my argument is that each retry is an HTTP span, but I don't want to argue about it here. Basically, what I want to say is that the HTTP convention should provide this clarity.
C
Oh, that's maybe a separate question. So, okay: a messaging client's producer span should inject its context into the message. The underlying HTTP connection should inject its context into the HTTP request, and if you have socket instrumentation, you are free to inject that into the TCP packet. Makes sense — okay, works for me, yep.
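The injection rule just agreed on — each layer injects its own span's context into the carrier it owns — can be sketched like this (the names are illustrative, not a real propagation API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the layering: every layer injects the context of ITS OWN span
// into the carrier it owns. The span id stands in for a full traceparent.
final class LayeredPropagation {

  // W3C-traceparent-style header injection into an arbitrary carrier.
  static void inject(String spanId, Map<String, String> carrier) {
    carrier.put("traceparent", spanId);
  }
}
```

So the messaging producer span ends up in the message's application headers, while the lower-level HTTP client span ends up in the HTTP request headers that carry that message — two independent injections into two independent carriers.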
E
Cool,
that's,
I
think
what
we
were
looking
for
was
just
sort
of
buy
in
to
proceed
with
prototyping
the
proposal
in
java
instrumentation,
seeing
how
that
worked
and
then
taking
that
to
the
spec.
G
Where exactly is the context manipulation problematic? Which part of it?
D
Well,
just
that
we're
going
to
be
using
the
context
more
we're
going
to
be
putting
potentially
more
things
into
the
context
more,
maybe
put
in
there
just
something
to
I'm
not.
I
have
no
idea
if
it
would
be
a
problem,
but
I
know
that
honorable
at
least
has
had
concerns,
as
the
author
of
our
context
has
had
some
concerns
around
the
extensive
usage
of
context,
although
that
more,
that
is
probably
more
about
pulling
it
out
of
the
thread
local
than
putting
things
into
it.
D
So
manipulation
of
the
thread
local,
so
onrag
is
a
better
person
to
to
explain
if
there,
if
he
thinks,
there's
any
concerns,
but
I
just
thought
I'd:
throw
it
up
there
so
something
to
look
out
for.
E
Yeah,
that's
a
good
point
and
from
previous
discussions
with
honoree,
I
agree
it's
mostly
about
the
accessing
the
thread
local,
which
luckily
in
at
least
when
you're,
using
the
instrument
or
api.
We
already
have
the
context
and
we
return
the
new
context.
E
So
that
wouldn't
add
any
additional
reading
from
the
thread:
local,
but
certainly
adding.
It
would
be
adding
more
things
in
there.
E
So in the Instrumenter API, when you start a span, we already add the new span to the context and return that context, and the caller has to make it current — set it into the thread local.
E
So
we
can
add
more
things
into
the
context
before
we
return
it.
It's
still
the
same
number
of
reading
from
and
writing
back
to
the
thread.
Local.
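A toy illustration of that cost argument — enriching the context before it is made current adds plain map writes, while the thread-local is still read and written exactly once per span start (all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Counts thread-local accesses to show that extra context entries
// (suppression keys etc.) do not add thread-local reads or writes.
final class ThreadLocalCounting {
  static int threadLocalReads = 0;
  static int threadLocalWrites = 0;

  private static final ThreadLocal<Map<String, Object>> CURRENT =
      ThreadLocal.withInitial(HashMap::new);

  static Map<String, Object> current() {
    threadLocalReads++;
    return CURRENT.get();
  }

  static void makeCurrent(Map<String, Object> ctx) {
    threadLocalWrites++;
    CURRENT.set(ctx);
  }

  // Start a "span": read the parent once, derive a child context with any
  // number of extra entries, write it back once.
  static void startSpan(String... extraKeys) {
    Map<String, Object> child = new HashMap<>(current());
    child.put("span", new Object());
    for (String key : extraKeys) {
      child.put(key, Boolean.TRUE); // extra entries: no thread-local cost
    }
    makeCurrent(child);
  }
}
```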
D
Well,
I
guess
each
level
of
instrumentation,
though,
under
this
under
this
scheme,
would
be
doing
that
as
well
though,
and
that
would
that
would
have
all
of
them
would
have
to
be
they're,
not
not
necessarily
coordinated,
but
they're
not
going
to
be
all
using
the
same
instrument,
or
instance
right
they're
going
to
have
their
own.
Each
one
will
have
their
own
instruments.
G
And I guess these low-level things — let's say network-level things — would probably be of less interest to customers, so maybe they shouldn't be on by default. Basically, most customers would probably not be interested, and we should let them disable it or configure it somehow, so the performance hit will be on the customers who actually need it.
F
If there's a way we could roll it up, though — I know from my standpoint, on that low-level bit: we had an issue with a message queue, and the way we detected it was, you know, the latency from when we scheduled an HTTP request to go out until it actually hit the wire. For this low-level messaging, we actually have a thing that tracks the amount of time from when you schedule to when it actually hits the wire.
F
I
I
think
that
configuration's
awesome,
I
would
say
disabling.
It
is
option
one,
but
I'd
love
an
even
better
option
where
we
like
roll
up
interesting
information
from
it
instead
of
just
disabling
the
event
so
like
take
the
event,
take
the
time
of
the
event
and
roll
some
sort
of
statistic
or
something
would
be
even
cooler
anyway.
I
like
this
proposal
and
I
wanted
to
say
something
so
apologize.
E
And
josh,
it
sounds
like
that:
you're
in
that's
a
messaging
layer
and
an
http
layer,
so
I
think
the
the
default.
What
we're
proposing
would
be
to
capture
both
of
those
anyway.
It's
just
if
there's
two
http
layers
together
that
we
would,
by
default,
suppress.
F
Anyway, having done a bunch of 99th-percentile latency fixes in a past life, this kind of stuff is super critical for figuring out what the hell went wrong. But I think some of it can be rolled up and made easier to consume; the high-fidelity modeling is nice.
E
Yeah
retries
redirects
that's
going
to
be
a
tough
conversation
in
the
spec
but
needed,
and
I
think
our
instrumentation
today
I
think,
is
a
little
bit
all
over
the
place
because
different
http
client
libraries
handle
that
differently.
Some
do
it
internally.
G
And I think this is where consistency matters, right? We want the behavior to be consistent between client languages and usage cases. So even if some HTTP client can handle redirects internally, that doesn't necessarily reflect how people use it. So I think this is why retries have to be separate spans; otherwise we cannot do it properly.

E
We do have tests for that. All of the HTTP client libraries reuse the same tests, essentially, and we do have tests for retries and redirects, so that might be a good, interesting place to start.

E
All right, let's move on. I gave this to you, Nikita, because I was dying to hear your investigation findings. I quickly glanced at the PRs this morning and saw you found something interesting.
C
Yeah,
so
the
problem
that
we
recently
had
is
that
our
especially
pull
requires
built
all
of
a
sudden
started
to
take
like
huge
amount
of
time
like
two
hours
and
three
hours
compared
to
like
30
mi,
many
minutes
or
20
million
before
one.
One
of
the
of
the
reasons
is
certainly
that
for
some
reason
we
started
to
have
build
cash
thrashing
sometime
in
the
past,
which
means
that,
as
during
all
pull
requests
and
nightly
bills
and
ci
bills,
we
have
this
example.
C
So
one
small
job
publishes
a
small
cash,
this
all
the
dependencies
that
you
need
and
then
all
other
jobs.
Okay,
I
will
take
that
cash,
but
they
actually
need
like
a
tons
more
dependencies,
and
so
every
bill
started
downloading
those
dependencies
again
and
again
why
this
broke.
I
have
no
idea,
I
looked
all
our
github
action
history,
it
it's
actually
quite
small
like
several
weeks
only
and
it's
it
was
slow
for
all
these
weeks.
C
So
I'm
not
sure
that
one
pull
request
actually
fixed
that
probably
we
have
to
separate
our
smoke
bills
smoke
tests
in
in
some
other
way
as
well
to
to
new
to
not
share
the
cache
with
main
builds,
but
that
that's
essentially
a
problem
of
false
sharing,
just
taken
from
the
cpu
up
to
continuous
integration.
Server.
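The separation being described could look roughly like this with `actions/cache` — distinct cache keys per build class, so the small smoke-test cache can no longer shadow the full dependency cache. Paths and key names are illustrative, not the repo's actual workflow:

```yaml
# Hypothetical actions/cache configuration. In the main-build workflow:
- uses: actions/cache@v2
  with:
    path: ~/.gradle/caches
    key: ${{ runner.os }}-main-gradle-${{ hashFiles('**/*.gradle.kts') }}

# ...and in the smoke-test workflow, a distinct key prefix:
- uses: actions/cache@v2
  with:
    path: ~/.gradle/caches
    key: ${{ runner.os }}-smoke-gradle-${{ hashFiles('**/*.gradle.kts') }}
```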
E
So there's the GitHub Actions cache — is this that, or is this different? Is this the one that we put into S3?
E
John, I don't know if you saw this proposal — so, just as a hello for everyone: we have instrumentation that instruments the OpenTelemetry API itself in the Java agent, and this is how we consume the manual instrumentation that the user brings, that the user writes. Currently, for most of our instrumentation — like against Netty — we test against different versions of Netty, we check that the API signatures haven't changed, things like that; we have a lot of infrastructure around that. But the OpenTelemetry API...
E
The way we do this is weird, because we have the OpenTelemetry API internally in the agent, and we're instrumenting it also.
E
We
have
to
shade
the
open
telemetry
api
so
that
we
have
essentially
two
different
copies
so
that
we
can
pass
data
back
and
forth,
and
so
what
we
want
to
do
is
move
that
open
telemetry
api
that
shaded
for
instrumenting
artifact
to
the
well.
We
want
to
start
publishing
those
for
each
version
of
the
open,
telemetry
api,
because
that's
then,
what
we
use
for
running,
that's
what
we
use
for.
E
We want one of these to be published to Maven Central for every version of the OpenTelemetry API and every version of the extension annotations. And so, if it's part of the SDK repo, that will just happen automatically as part of your build: every time you make a release of 1.4.0, 1.4.1, 1.4.2, it will automatically publish those to Maven Central. Whereas our tags — we do try to keep our tags in sync, but not necessarily the minor versions.
D
...out of that repository. I need to ponder this. I mean, it's going to be weird: people are going to look in Maven Central and go, what the heck is this weird thing? Why are they repackaging everything in this strange way, with exactly the same code in it? So it's definitely weird — like, do any other projects work like this, publish repackaged versions of themselves? No.
D
Alternative: what is the thing that automatically builds based on a tag and pushes it up to a repository? It's not... sorry, what was it called — yeah, JitPack, yeah. Is there some way we could use JitPack for this? Just wondering if there's a — I guess there has to be a build script somewhere that's gonna do this.
E
Yeah
part
of
the
trick
is
that
our
our
infrastructure
for
the
muzzle.
E
For
verifying
signature
changes
across
different
if
signif,
if
there's
any
signature,
changes
that
are
problematic
for
our
instrumentation
relies
on
pulling
down
maven
repo
dependencies
versions.
F
One suggestion: you can have a GitHub Action upstream that, on a tag, will run your build and pull down two different GitHub repositories. So your GitHub Action could pull from instrumentation: you write a GitHub Action on the SDK that, when a tag comes in, pulls the instrumentation — like, whatever you want to pull — and runs a build between the two. So the code can live in instrumentation, but there would be a tag that would auto-publish when the SDK gets a tag.
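That suggestion might look roughly like this as a workflow in the SDK repo — the repository names and the publish task here are assumptions, not the project's actual setup:

```yaml
# Hypothetical workflow in the SDK repo: when a release tag is pushed,
# check out the instrumentation repo next to it and run the publishing
# build across the two checkouts.
name: publish-shaded-on-tag
on:
  push:
    tags: ['v*']
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2   # the SDK repo at the tag
      - uses: actions/checkout@v2
        with:
          repository: open-telemetry/opentelemetry-java-instrumentation
          path: instrumentation
      - name: Build and publish repackaged artifacts
        run: ./instrumentation/gradlew -p instrumentation publish
```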
F
That
that's
a
thing
you
can
do.
I'm
not
suggesting
this
is
a
great
idea,
but
maybe
maybe
it's
an
interim
like
we
can
try
this
out
for
a
while
see
if
there's
a
whole
ton
of
friction
and
then
decide
where
to
move
it
eventually.
But
it
would
give
you
that
hook
that
you
want
in
a
in
a
tool
we
already
use
yeah.
That
is.
D
All right, anyway, let me ponder that. We can chat about that this evening as well.
E
I
wanted
to
try
and
get
some
more
feedback
on
this.
Our
hibernate
session
modeling
and
actually
the
hibernate
instrumentation,
I
think,
is
a
good
example.
Potentially
of
ludmila's
layering
like
we
could
potentially
like,
should
hibernate
spans,
be
client
spans
because
the
modeling,
the
database
semantic
conventions,
or
at
least
some
of
them
and
suppress
or
optionally,
suppress
jdbc
spans.
G
Well, I can tell from our Azure SDKs experience: you see people — they write code against Hibernate, or, you name it, a client library, and internally it does some stuff. So people who don't have expertise in how this library works, or how Hibernate works — they don't know what happens under the hood. They might not care, or they might care, right? But this span helps them understand the connection between the code they write and the telemetry.
E
So
I
think
I
confused
things
by
there's
two
different
there's
two
different
issues
here.
I
think
we're
talking
about
one
is
the
hibernate
spans
in
general,
which
can
be
like
the
inserts
and
queries
which
can
be
considered
duplicates
of
jdbc
potentially,
and
then
there
is
this
weird
session
span,
specifically
that
we
create
which
models
sort
of
the
whole
session
life
cycle
of
the
session,
that
the
user
has
open
and
all
of
those
hibernate
queries
and
updates
are
parented
to
that
session
span,
which
is
weird.
E
It
gives
a
nice
view
of.
If
somebody
wants
that
view
of
their
session,
but
it
sort
of
breaks
our
normal
modeling
of
it
should
have
the
parent
of
whatever
is
in
whatever.
The
current
span
is
like.
If
you
do
a
hibernate
query,.
C
If
you
have
lazy
loading,
and
so
those
sql
queries
should
be
attached
to
something
if
they
attach
directly
to
viewer,
rendering.
That
may
be
strange-
I
don't
know
so
so,
essentially
yeah.
If
you
have
open
session
in
view,
then
you
want.
You
want
to
see
those
sql
queries
for
lazy
loading
sometime
in
the
future,
which
may
be
totally
unexpected
to
you.
C
E
Yeah, I mean, the thing that they attach to would probably typically be the controller span — say, the Spring controller span that we capture.
A
I guess it depends exactly, but yeah, in theory, in an ideal world, you could see that they're linked by the span link, and it won't look that weird, because, I mean, when you have your controller and it executes a Hibernate query, you'll see that your controller span is the parent of the Hibernate query, which is linked to the session.
E
Okay, those are excellent points — much more complicated than I was hoping it would be.
E
Yeah,
this
is
a
good.
This
is
a
good
point.
It
kind
of
reminded
me
of
like
the
controller
returning
a
future
or
something
like.
D
While you all were talking about all that super awesome stuff, I've been thinking about the previous issue, around publishing the shaded stuff.
D
Could
we
have
I'm
just
I'm
just
brainstorming
a
little
bit
more
while
it's
on
the
top
of
mind,
could
we
have
a
nightly
job
in
the
instrumentation
repo
that
goes
and
pulls
versions
and
publishes
and
publishes
the
shaded
version
to
or
the
repackaged
versions
to,
the
github
package
rather
than
trying
to,
rather
than
having
the
sdk
repo
published
in
even
central
on
publish,
because
you
can
use
github
packages
as
a
maven
repository,
I
believe
so,
just
as
just
a
thought
that
this
is
something
that
could
happen
on
a
as
a
nightly
job
and
instrumentation
repo.
D
The nightly job could be scripted, so it'll go and grab all the versions, see if they're already there, and then only generate what's missing. It works — okay, I mean, my brain can imagine writing a program that does this. It seems like it might be an easier, more self-contained way to do this. So, just a thought. The main thought here was also just to publish this stuff not to Maven Central but to GitHub Packages.
E
Yeah, I think that's not really like a search feature of Maven repos, but a metadata file that... oh, but I don't know if they update that metadata file.
F
Right
it,
it
should
be
updated
by
sonotype
the
that's
it's
part
of
nexus
does
that,
but
the
the
ossrh
they
should
always
update
that
metadata
repos.
You
should
be
able
to
just
pull
that
and
check.
F
Yeah
to
just
check
if
they're
maven
central
and
there
might
be
a
delay
before
it
hits
your
particular
shard,
but
you
should
be
able
to
have
that
maven
metadata
xml
and
I
don't
remember
gradle's
hipster.
You
know
I
do
my
own
dependency
resolution
thing
to
remember
if
it
makes
maven
metadata
xml,
but
I
would
suspect
it
does
by
this
point.
If
it
doesn't,
I'm
sure
someone
opened
a
bug
somewhere
and
we
could
find
it.
F
Right,
I'm
suggesting
you
could
grab
both
of
them,
so
you
could
grab
both
of
those
metadata
from
avon
central
and
check
to
see
if
it's
public
and
then
or
or
are
you
suggesting
sorry,
you
could
pick
the
metadata
packet,
the
metadata
from
maven
central
for
the
open
telemetry
java
sdk
grab
the
maven
metadata
from
your
github
package,
which
should
be
generated
for
any
repository
that
is
maven
friendly.
If
not
that
was,
we
could
look
into
that.
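The version check that nightly job would need can be sketched against `maven-metadata.xml` directly. This parses a sample document to stay offline; a real job would first fetch the file from Maven Central and from GitHub Packages and compare:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch: before repackaging a version, ask a repository's maven-metadata.xml
// whether that version is already published.
final class MavenMetadata {

  // True if the given version appears in the metadata's <versions> list.
  static boolean hasVersion(String metadataXml, String version) {
    try {
      Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder()
          .parse(new ByteArrayInputStream(
              metadataXml.getBytes(StandardCharsets.UTF_8)));
      NodeList versions = doc.getElementsByTagName("version");
      for (int i = 0; i < versions.getLength(); i++) {
        if (version.equals(versions.item(i).getTextContent())) {
          return true;
        }
      }
      return false;
    } catch (Exception e) {
      throw new IllegalStateException("malformed maven-metadata.xml", e);
    }
  }
}
```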
F
Like, yeah — any -SNAPSHOT is broken if you don't have maven-metadata in the modern Aether world, so I'd be surprised if they don't have it, and if they don't have it, it's something you could synthesize when you make the package.
F
I
know
entirely
too
much
about
packaging
systems
from
the
hellish.
You
know
ivy
sbt
battles,
but
that's
a
that's
a
different
story.
F
Yeah
effectively,
if
you
don't
have
a
metadata
file
in
there
it
you
should
be
able
to
get
one.
And
if,
if
you
can't
I'd,
consider
that
kind
of
a
major
bug
in
github
packaging.
E
Cool
hey.
We
just
had
our
five
minute
our
time
time
box,
and
this
was
just
me
asking
for
help.
Intellij
helps.
Maybe
my
intellij,
knowledgeable
friends
can
take
a
look
at
this.