From YouTube: 2022-03-16 meeting
C
Yeah, I'm really just here to meet my team. I started on a new team right at the beginning of 2020, and a lot of the people that I work with I hadn't really met yet, so it's good to meet everybody. But we also did some road-mapping workshops and stuff like that.
C
Splunk seems to be pretty global, spread all over the place.
B
Yeah, although people working on auto are only located in Estonia, Poland, and the States. No, actually in Canada as well; never mind. Yeah.
C
Yeah, it's mid-May. Are you planning on going to KubeCon? I saw you mentioned there that you were at least thinking about it.
B
Yeah, well, I honestly wasn't up to date about the schedule and when it's going to happen; I'd kind of forgotten about it. But now that you mention it, that kind of put the idea in my head. So I'll probably see if I can do something about it.
C
I think you probably just missed the early pricing, but I assume Splunk would pay anyway, so it probably doesn't matter that much.
C
I don't have very much on the agenda here today, so if anyone wants to add anything, as usual, go ahead.
C
Yeah, it would be good to finally see some people. The last KubeCon, I mean, you were there; it was relatively empty. I'm hoping more people will go to this one.
C
Yeah, the JSON protocol is still not stable, so that one can't be released, and the others depend on it. So I don't think we should release them as stable until we stabilize the JSON exporter or remove that dependency.
C
So the first step of that is done, and the module which does the transformations, at least for traces, is done. I don't think it's merged yet, but it will be. The next step is to merge that and actually utilize it in the exporters.
C
So I put a little timeline here: release the stable core, because it's been a while since we've done that; move the exporters to experimental; do an experimental release, because that's also not been done for a while; and then update the exporters to use the new module, and probably release again fairly soon after that. Does anyone have any issues with this timeline, or with this sequence, anything that I missed, or something that I should do in a different order?
C
The transformer package is probably still important for that, but for now, since the trace exporters are becoming very out of date, haven't been released in a while, and are supposed to be stable, I think I'd rather prioritize that. There's other metrics work that needs to go on anyway before we can do a release of the metrics SDK. But as soon as this is done, metrics is still the focus, and I hope to release it as soon as possible.
C
It should do a lot less, though. I apologize, I don't remember who wrote the PR, but we recently changed the metrics data model used by the SDK so that when it exports, it exports in a format more similar to OTLP. So the transformation should be much easier to do for metrics; the trace module does a little bit more.
C
Okay, I did create a PR, maybe 10 or 15 minutes ago, to release 1.1.0.
B
I think, yeah, we just dropped our release, and the new one is due to the colors package removal, which actually didn't change much in the packages themselves. So I don't see a need for a release as of now, but once the SDK is updated, it makes sense. Okay.
C
So this only releases the core, or releases the stable packages. Usually we do a release of stable and experimental at the same time, but because the exporters still need to be moved to experimental, I'd like to do this release, then move them, and then do the experimental release, so that we don't miss those packages on the release.
C
Now we automatically generate these changelogs based on the labels on PRs, using a tool called lerna-changelog, and it tries to detect which packages have been changed. But because we have two Lerna folders in the repo, all of the packages in experimental just get marked as "other"; the tool is not really smart enough to figure out what's going on with them. So I wanted to consider the idea of making the changelogs a manual process.
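For context, lerna-changelog groups PR titles into changelog sections based on a label mapping declared in the repo's package.json; a minimal sketch (the label names and section titles here are illustrative, not the project's actual configuration):

```json
{
  "changelog": {
    "repo": "open-telemetry/opentelemetry-js",
    "labels": {
      "bug": "Bug Fixes",
      "enhancement": "Enhancements",
      "internal": "Internal"
    }
  }
}
```

The tool then walks merged PRs since the last tag and emits one section per label, which is where the per-package detection described above falls down for the second Lerna folder.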
C
I know a lot of the other SIGs do this, but essentially it would just require that when you make a PR, you add an entry yourself, manually, to the changelog. Then, when we do a release, instead of creating the changelog for the whole release, it would already be there; we would just bump the version and make a new section.
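A manually maintained changelog kept this way usually accumulates entries under an "Unreleased" heading that gets renamed at release time; a hypothetical sketch (the section names and the entry are illustrative, not real entries from the repo):

```markdown
## Unreleased

### Bug Fixes

* fix(otlp-grpc-exporter): respect an explicitly configured insecure setting

## 1.1.0

* ...entries from the previous release...
```

Cutting a release then reduces to renaming "Unreleased" to the new version number and starting a fresh empty section above it.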
C
Obviously it's more manual work for everybody, but the automatic changelog generation is actually quite a bit more work for me when I do the releases than I would hope, and, you know, it slows the releases down a little bit. It makes it a lot more work to actually cut a release.
C
Yeah, only for the core repo. The contrib repo release automation is a little bit more intelligent and actually generates the changelogs per package, and does a pretty good job from what I've seen.
C
I
assume
silence
is
at
least
not
having
an
objection.
I
guess.
C
So what I've seen the other SIGs do is create a GitHub Action that just requires that you at least change the changelog on each PR, as a reminder, or you can add a label that says "this does not require a changelog" or something like that. So I'll steal the GitHub Action configuration from the specification repo or something like that and try to get that rolling, probably tomorrow morning.
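A sketch of what such a workflow might look like, loosely modeled on the checks other repos use; the file path, label name, and action version here are assumptions, not the configuration that actually landed:

```yaml
name: Changelog check
on:
  pull_request:
    types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
  changelog:
    runs-on: ubuntu-latest
    # Skip the check entirely when the PR carries an explicit opt-out label.
    if: "!contains(github.event.pull_request.labels.*.name, 'Skip Changelog')"
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Require a CHANGELOG.md entry
        run: |
          git fetch origin ${{ github.base_ref }}
          if git diff --name-only "origin/${{ github.base_ref }}..." | grep -q 'CHANGELOG.md'; then
            echo "CHANGELOG entry found."
          else
            echo "Please add a CHANGELOG.md entry or apply the 'Skip Changelog' label."
            exit 1
          fi
```

The opt-out label keeps purely mechanical PRs (CI tweaks, typo fixes) from being blocked by the check.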
F
Yeah, so I'm working on an issue that's adding the insecure environment variable config option, and in this PR I'm kind of at a stopping point where I can't move forward until I get some feedback, because our current implementation of the transport security is against what the spec says.
F
So I just had a few questions, and I've made a comment in that PR with a bunch of use cases when it comes to the endpoint versus what the user set. So I just wanted to see if people can share some thoughts on it.
G
But it...
G
Force down below.
C
It's kind of conflicting with itself. Is there anyone here that is familiar with the other SIGs and knows what they're doing here?
F
I've looked at the Python example, or their exporter, and they are using a default of secure, and they are looking at the endpoint to determine if it should be secure. So if they didn't specifically set the insecure option, and they have http in their endpoint, then they go with the insecure option.
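That decision logic, as described here for the Python exporter (combined with the proposal later in the discussion that an explicit user setting should win), can be sketched roughly like this in JavaScript; the function name and shape are hypothetical, not the actual exporter API:

```javascript
// Hypothetical sketch: decide whether to use an insecure channel.
// `insecureOption` is the user's explicit setting, or undefined if unset.
function shouldUseInsecureChannel(endpoint, insecureOption) {
  // An explicit user setting wins over whatever the endpoint scheme says.
  if (insecureOption !== undefined) {
    return insecureOption;
  }
  // Otherwise fall back to the scheme: plain http implies insecure,
  // anything else (https, or no scheme) defaults to secure.
  return endpoint.startsWith('http://');
}
```

Under these rules, `http://localhost:4317` with nothing else set comes out insecure, while `https://collector.example.com` stays secure.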
F
So,
basically
the
the
comments
I
made
in
that
pr.
I
am
mentioning
what
what
it's
currently
doing
and
what
it
should
be
doing.
Based
on
what
I
saw
with
python.
C
Okay, and that's this comment.
C
So is it possible to create a secure transport channel without the certificates, though? If you just supply a URL, will it still be able to make a secure connection? I thought you had to supply certificates manually. I apologize, I'm not that familiar with gRPC, honestly.
F
From what I've been testing, if you don't provide a certificate, it just uses the default root certificate.
F
Yes. I mean, the spec doesn't specifically say what you should do when it comes to conflicting things, like the endpoint is insecure but you set a secure transport yourself. Which one should you go with, what the endpoint says or what the user set as the security? So I also propose that if the user specifically sets the security, we should go with that and not look at the endpoint. Yeah, yeah.
H
Yeah, I'm trying to dig some stuff up. I'm looking at the Ruby implementation, and there's only a proto-over-HTTP exporter, and it solely looks at the scheme of the endpoint to figure out secure or insecure. And, I don't know, there have been so many changes in the spec world over this, but I feel like the insecure option might only apply to gRPC; I'm trying to actually confirm that.
F
Yeah, in the spec, under insecure, it says this option only applies to OTLP/gRPC.
H
So yeah, I don't know if that answers anything, but that's actually how Ruby is working, then. So the insecure option is totally irrelevant for anything other than gRPC, and for any of the OTLP/HTTP protocols, either proto over HTTP or JSON.
C
Yeah, I mean, your point A here, using all defaults: it says the spec says we should create a secure channel, but the current implementation is insecure. The default is localhost, so without supplying certificates, or ignoring certificate validation or anything like that, I don't believe it would even be possible to make that secure by default, because the certificate wouldn't be validated.
F
I only question that because I set up a localhost app that uses the gRPC exporter and exports to New Relic, and it will only accept data if it's secure. So I'm able to create a secure channel by not providing a certificate path and just using the default.
C
Other than that, I don't know how to answer this question without the spec being clarified on this, though.
C
Correct. Okay, well, that's a bug, then, because it says here: a scheme of https indicates a secure connection and takes precedence over the insecure configuration setting. So if the user sets https, then it should be secure, regardless of what this one is.
C
So
I
guess
the
insecure
false
by
default.
I
just
don't.
C
Right now we drop the protocol; we don't use it at all, because that's what the Node.js gRPC module does. It doesn't use a protocol. But I guess we should be using one anyway, even though it's not going over HTTP. So it doesn't make a whole lot of sense to me, but...
F
Is that, like, for the default config settings, or does it also take precedence if the user specifically created, let's say, an insecure transport, but they have https?
F
Like, they can use the gRPC client and create it as insecure, but the endpoint that they're exporting to is https, and that works just fine. Right now, the current implementation will just create an insecure connection, even though they have https in the endpoint.
C
Is
the
can
the
user
provide
a
like
a
grpc
client
to
the
exporter
right
now?
I
didn't
realize
you
could
even
do
that.
C
...and set insecure on that, yeah. So, I mean, if the user does anything like that in code, I think we should respect what they do, even if it doesn't make sense. I think it's unlikely that users would configure it that way, but maybe they have a reason; I don't know.
C
You know, when it's creating that stuff itself, or when it's configured with environment variables and stuff like that, where there isn't that much control: for one, I think case one makes sense. If there's no endpoint specified, it's going to localhost anyway; we can't do it securely. So I think that's just... The only thing that confuses me is the spec saying insecure is false by default.
C
I think that... it just doesn't make any sense to me to even have that option if the default is an insecure localhost connection. And if the endpoint overrides the insecure variable, I don't know what the insecure variable is even doing.
C
Yeah, I mean, that makes sense to me based on the current wording of the spec. I'll probably still create a spec issue anyway, just because it is not very clear.
C
Yeah, so I guess, from what Amir just mentioned, case B here, where there's no scheme: would it be secure? You have it as insecure; I think it would be secure.
C
Because the spec doesn't use the grpc scheme at all, so they don't really have wording for that. I guess I'll ask for clarification on that in the issue that I create on the spec. I think grpcs is obvious enough, but grpc on its own maybe is not obvious.
C
Yeah, well, with B we don't know; it's just a scheme-less URL, and we don't know whether it has a certificate or not. With A we know there is never going to be a valid certificate, because we are setting it as localhost, so we know that it won't be. I mean, we could make a special case for the localhost URL, but I think it's just: if the user provides a scheme-less URL, then we would treat it as secure by default.
C
I
mean
I,
I
don't
think
this
email
is.
I
think
the
specification
is
written,
assuming
the
user
is
going
to
provide
a
schema.
The
way
that
I
read
it,
it
seems
like
they.
They
just
assume,
there's
always
going
to
be
one.
F
If I recall, we briefly chatted about this default endpoint for gRPC a few months back, where it should have been http, and I recall something being said about: well, we don't even use the scheme, it just gets tossed out, so it doesn't matter what the default endpoint is.
C
Yeah,
so
the
I
think
we've
actually
switched
to
using
grpc
js
since
then,
but
when
it
was
originally
written,
the
module
that
we
used.
If
you
supplied
a
scheme
it
failed,
it
complained.
C
I
think
grpcjs
ignores
the
scheme,
if
I
remember
correctly,
but
don't
quote
me
on
that,
but
it
doesn't
fail,
but
that
was
why
we
went
with
the
schema
less
default.
Just
because
supplying
a
scheme
to
it
was
was
causing
issues.
I
think
liz
fong
jones
is
the
one
that
I
wrote
that
made
that
pr
originally
and
the
old
spec
like
several,
not
not
that
long
ago,
a
couple
of
months
ago,
I
think,
used
to
be
secure
by
default
and
it
was
switched
to
be
insecure
by
default.
C
So
that's
probably
why
some
of
this
spec
is
a
little
bit
ambiguous
because
it
was
originally
written
when
secure
was
going
to
be
the
default.
But
then
I
think
people
were
running
into
problems
because
the
you
know
localhost
certificate
was
not
working.
You
know
it
was
not
validated.
C
Anything else is invalid. So if the user says grpc, that's invalid; if the user says grpcs, that's invalid. Just require one of the two and throw an error at configuration time, you know, at startup, if a bad scheme is supplied, with an obvious error message that just says you must use http or https. Then I guess the insecure flag is maybe only important...
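That fail-fast check could look something like this sketch; the function name and error wording are hypothetical, and how to treat scheme-less endpoints is deliberately left open, since that is the unresolved question in the discussion:

```javascript
// Hypothetical sketch: reject unsupported endpoint schemes at startup.
function validateEndpointScheme(endpoint) {
  const match = /^([a-z][a-z0-9+.-]*):\/\//i.exec(endpoint);
  if (!match) {
    // Scheme-less endpoints are a separate open question in the spec;
    // pass them through unchanged here.
    return endpoint;
  }
  const scheme = match[1].toLowerCase();
  if (scheme !== 'http' && scheme !== 'https') {
    throw new Error(
      `Unsupported scheme "${scheme}" in OTLP endpoint; you must use http or https`
    );
  }
  return endpoint;
}
```

Throwing at configuration time surfaces a typo like `grpc://` immediately, instead of letting the exporter silently drop the scheme and connect with whatever security happens to be the default.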
C
So maybe, if you don't have the s in the scheme, then the insecure setting matters. But the spec is just contradicting itself, I think. Until we open an issue about the insecure flag, we can't know a hundred percent what they want. But I do know that the default, if you try to enable security, will just fail on localhost for most users, because their root certificate is not going to validate.
I
Hey, nothing huge here, just want to keep this rolling. I think the main outstanding thing left is just talking about what version we should reference for the basic SDK packages. I guess the options would be, like, an asterisk, or, you know, 1.0.3 or above, or something like that. I just wanted to try to make a decision there, and then I think everything else is addressed in it. I'm trying not to complicate the PR as far as modifying the example much; I'm trying to do a pretty direct port.
C
Just
putting
a
star
in
there-
or
I
guess,
a
a
carrot.
C
My
screen
sharing
thing
is
in
the
way
I
can't
see
my
own
tabs
all
right.
So,
let's
see,
let
me
open
this
pr
just
so
that
we're
all
looking
at
the
same
thing.
C
Yeah, so I guess the point that Florina was making here is that they're already out of date. So when we create this, it's starting out of date. I guess the question is: do we put a star in there so that it updates itself, and then potentially, you know, all of the contrib PRs would break if we made a release of the core repository.
C
Yeah,
but
it
should
broken
examples
block
like
unrelated
pr's
right
like
if
I'm
making
some
fix
to
the
the
file
system.
Instrumentation,
then
that
pr
would
break,
even
if,
like
the
fastify
example
just
happens
to
be
broken
because
of
a
update
from
core.
I
think
that
that's
that's!
C
That's
the
risk
with
the
star
dependency,
the
risk
with
the
just
using
like
carrot
dependencies
is,
is
exactly
this
we're
out
of
date,
so
I
guess
we
either
have
to
commit
to
trying
to
do
as
good
of
a
job
as
we
can
of
just
updating
these.
C
...when we release core, which will be easier to do once they're TypeScript. I think nobody ever updates them now, because they're JavaScript, and when you go to update them, you can never be sure whether there's some breaking change that you're not thinking of. Once they're TypeScript, I think it's easier to just, like, update all of them, run a compile, and see what happens.
I
Well, maybe it would make sense, then, to start this one with an asterisk. Let's give it a try. It would only be this express example, because that's the only one moved in this PR, so it'd give us a chance to keep an eye on that, because we're not, like, going to convert all the rest overnight.
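For illustration, the two options being weighed differ only in the version range the example's package.json declares; the package names and floor version here are just examples, not the actual PR contents:

```json
{
  "dependencies": {
    "@opentelemetry/api": "^1.0.3",
    "@opentelemetry/sdk-trace-node": "*"
  }
}
```

A `*` range always resolves to the latest published version, so the example surfaces breakage immediately after a core release; a `^1.0.3` range stays within 1.x and relies on something like Renovate to bump the floor over time.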
C
Does
the
example
compile
as
a
part
of
the
main
compile
like
if
I
just
run
npm
run
compile?
Does
that
include
the
examples
or
is
it
a
separate
command.
C
We're going to know anyway; like, we'll see the list of broken checks, and if the only one that's broken is the examples one, we'll probably just know to ignore it. I think it's fine, honestly.
B
So, sorry, I didn't catch that: like, are you going to change it to asterisk, then?
B
I'm justifying using that because then we would get, like, unexpected breaking of the code. Otherwise we would get the alert from Renovate or something telling us we have an outdated dependency, which is, you know, signaling kind of the same thing, but one would be with an error, the other one with, like, an automatic PR. Although, like, the first solution would be less work until everything is working, for sure. Yeah.
I
I think the situation it sets us up for is, like, everything can break at once, but it's also, like, a good leading indicator, in the sense that if a user was installing that instrumentation package with the latest version of, like, the related SDK, you know, core or whatever else, like, in theory they're going to have broken code as well.
G
Yeah, but I think our instrumentation packages are bound to a specific major version of core, right? It's not possible that we release a new major of core without making sure the instrumentations are compatible, or also bumping them. What do we gain from it?
G
...dependency. We have many other dependencies in code, like both dev dependencies and regular dependencies, and they all depend on a specific current version of core, so the compatibility is stated in the package.json. And here we want to, like, stop doing this only for the examples; I'm not sure if it makes sense to me.
I
It would be nice to, like, match, I guess, the ones that are in the top level, although it's kind of like there's a different, overlapping set of dependencies. Like, for example, the api package is referenced as a dev dependency specifically, but there are some other ones that aren't, like other instrumentations. Like, in the express example...
I
It's referencing, like, the http instrumentation, which is not referenced in the, like, express instrumentation package itself. So it's pretty specific to the example.
C
Yeah, so I mean, the alternative is to leave them like this, with the caret, and rely on Renovate to keep the package.jsons up to date. I mean, Renovate is pretty good; it runs overnight. So in the worst case, we go one day before it opens a PR, and, you know, we don't notice for 24 hours, in the worst case, assuming that we actually pay attention to those. There's only, like... there's less than 10 open PRs; yeah, there's 10 open PRs in contrib right now. So it's not like we're, like...
B
If that wouldn't be the case, like, the alternative would be that we just would have, like, packages in our code without us knowing, if we would use looser version specifiers, right. So in that sense, it's the preferred solution out of the two to actually make Renovate catch those issues and force us to fix them, if they are a problem.
I
Yeah, I guess, like, looking at these, you know, as far as, like, the other dependencies: keeping the existing caret and just bumping, like, the base version might not actually be that bad. Because the biggest thing, now that I think about it, that we were trying to fix as far as referenced dependencies was specifically the particular instrumentation being updated, and that's now solved, because we reference, like, the actual express instrumentation, for example, relatively. And that's really the important piece to me.
I
It's up to date with itself; the rest is kind of gravy. So I'd be fine just keeping the carets kind of as-is and just bumping the base version; like, that would be fine if we're worried about the other risks.
I
Yeah, I think the one other thing I want to add, maybe to, like, either the example readme or maybe the top-level contributing markdown file, is instructions for how to migrate a given example. That way, if other people want to contribute moving individual examples, they can do that easily and kind of follow the same steps.
I
Yeah, it could be worth it. I don't know if we have many people, like, looking for work, but, like, it could be good to create issues for each one and then, like, mark them as, like, a good-first-contribution type of thing, yeah.
B
Yeah, if you would do that, that would be awesome. I have been surprised sometimes by just creating an issue for the future me, and then someone else picking it up the next day or something.
C
Yeah, especially if you mark it with good first issue and stuff; those seem to be... I think that there are probably more people than we realize, sometimes, that look through the issues and then just go "I don't know where to start" and close it. And then, if there's something obvious, like, oh, I can move the example: why not?
A
Anyone else have... I guess that was the last topic on the agenda here. Surprised; I thought it was going to be a short meeting based on the agenda, but we used most of the time.
A
Okay,
well
have
a
good
week,
everybody-
and
I
will
talk
to
you
all
next
week,.