From YouTube: 2020-12-01 meeting
B: Looks like a pretty small turnout, don't you think?
A: It's just gonna be us. Robert might be out running an errand; he's gotta get his Manitoba driver's license, so yeah, he won't be joining us today.
B: I think the main thing is they want to start transitioning a lot of the metrics work back to the specification SIG. There's been a separate metrics meeting on Thursdays; they're going to keep both for a little while, but I think they want to dedicate at least half of the spec meeting to metrics talk. I think there's a realization that most people have been so focused on tracing that, you know...
B: Only a select few have been participating in metrics, so they still want to keep that meeting to talk about the hard problems, I guess, but also talk at the spec meeting about the decisions that have been made, just to get a wider audience spinning up on metrics. I think the long-term plan is that when they feel like they don't need that Thursday meeting any longer, it would just take place at the Tuesday meeting.
B: It does seem like this is something that people are very interested in doing, despite the fact that most metrics backends expect values to be labels. Wait, what did I say? Values to be strings, yeah. It's true that that is the case today, but I think that might not be the case for metrics systems of the future, and you are kind of limiting them...
B: That's if you do restrict things to strings. I think there's still this idea that the APIs, at least, should accommodate more types, and then the export pipeline can handle those as needed for the backend. I wouldn't necessarily agree with that for most things, but once you start to get into the exotic value types that we have for attributes, like arrays or maps, it gets weirder for sure. That part was glossed over a little bit.
B: And the other reason, I think, as we've covered before, is that you want to be able to attach a resource to your metrics, or reuse some bucket of attributes that you're putting on a span, and you can't really do that. It's painful for the user to have to understand the metrics restrictions from that angle and convert attributes to labels and vice versa.
A: ...bother me at the moment, yeah. I think it'll be a little while before we start tackling metrics.
B: Yeah, and I think that's fine. The people who have been on the bleeding edge of metrics have implemented their metric systems a few times, and I feel like some of the existing implementations out there are definitely dated; they reflect a previous version of the budding metrics spec, and I think that just makes it more confusing.
B: Mostly, the rest of the meeting was spent talking about this, so these other items didn't really get a fair share. This was talked about yesterday at the maintainers meeting and again today, and I think the plan is that this document is going to become an OTEP. It's really about: as we start releasing things as stable, like a stable tracing release, how should things work? What are the stability guarantees?
B: So yeah, I think initially people were thinking about... actually, I don't know all of the details of what's going to happen, but the one thing that is becoming obvious is that things will have to follow semantic versioning, for reasons of package management if nothing else. I know there were definitely some other schemes people were looking at that cut corners on semver a little bit, and people were bringing up the point that even if metrics is unstable, if people are using it and you have fuzzy minor-version matching or something, you can pick up something that will end up breaking.
B
You
can
actually
pick
up
breaking
changes,
so
I
know
they
want
to
make
sure
that
that
this
doesn't
happen.
In
most
cases.
A: So we're going to have, you know, stable tracing APIs. If we do a 1.0 release with stable tracing but metrics not yet implemented, then as we add metrics support while the spec is changing, are we going to be doing a new major release every time we do a breaking change to the metrics API?
A: And then... so this implies that we are doing a major... sorry, there was some text up above here: "if this occurs, tracing, metrics, blah blah blah, we'll do a major version bump for each signal release." So every time we do a breaking change for metrics, that's going to be a major version bump.
B: Yeah, for languages that compile. I think there are potential issues when OpenTelemetry starts shipping first-party instrumentation with other libraries, where you kind of don't have control over your API dependency. The expectation is that things should work: people should always be able to be on the most current version of OpenTelemetry, and there should not be issues.
B: Yeah, I guess we'll have to think about that, I suppose.
B: With metrics API development, I'm just thinking out loud: would it make sense, given that we know the metrics API is going to repeatedly break until we reach stability in 2.0, that if people are trying to experimentally use metrics, they should have fairly tight versioning? If we just made a policy that we will increment the minor version for every breaking metrics change, then people who want to experiment with metrics but not be broken would have to...
B: They could pick up patch releases, but they would have to knowingly and willingly pick up the minor releases and expect some breakage. I think that sounds somewhat reasonable.
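The "tight versioning" idea can be illustrated with RubyGems' pessimistic constraint operator. This is only a sketch of how a user would pin under the hypothetical policy being floated here (breaking metrics changes land in minor releases); the version numbers are made up:

```ruby
require 'rubygems'

# Hypothetical policy: breaking metrics changes bump the minor version,
# while patch releases stay compatible. A user who wants to experiment
# with metrics without surprise breakage pins with "~> 1.3.0", which
# allows 1.3.x patch releases but excludes 1.4.0.
req = Gem::Requirement.new('~> 1.3.0')

patch_ok   = req.satisfied_by?(Gem::Version.new('1.3.5')) # patch release
minor_bump = req.satisfied_by?(Gem::Version.new('1.4.0')) # breaking minor
```

Picking up a 1.4.x line would then require a deliberate edit to the Gemfile, which is the "knowingly and willingly" step described above.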
B: I think that's something to keep in mind, because some people might not be interested in metrics at all until they're GA. For people who are just using tracing, as we keep pushing out 1.1, 1.2, 1.3... 1.90, people who are just interested in tracing might not really care, and we might spare them the worry of trying to upgrade regularly.
A: ...diverging versions of some of the gems. Right now we've been releasing all of them every time, so we're at 0.9.0 of everything, but now that the API has stabilized, particularly for tracing, we're soon going to want to start...
A: Is there an expectation with this versioning document that we're going to continue to have the same version number for the API and SDK moving forward, or is it okay for them to diverge?
A: I mean, for tracing, the API is evolving quite slowly at the moment. There's only one significant change, and I don't know if it would be considered a breaking change, which is the addition of TraceState that I'm working on. Everything else is pretty stable at this point, whereas the SDK is probably (I want to say probably, but possibly) going to have breaking changes, especially if we change the configuration mechanism, for example.
A: I don't know if that would be true of instrumentation gems, but certainly the SDK and API gems: it seems like they will need to remain in lockstep. How do...
B: I don't know; we're fairly autonomous with these things. Is it that every time we push out a package, that's a release? Or is it more centralized, as in every time something has changed at the spec level? I think... but I feel...
C: So yeah, I'm actually not sure. I would actually love to... is there a link to this doc? I'm sorry, I'm a little bit late here, but I'd love to look at this a little more closely, because it seems to imply that releases span packages; there is a larger concept of a release, rather than "oh, I updated something in a package, I'm going to do a release."
C: The implication here seems to be that they've defined some concept of "okay, we're doing a milestone release of whatever is affected by that milestone," and I think it would be useful to understand what that is. Otherwise, say for whatever reason we make a breaking change to the SDK because we decided we need to change how configuration happens: it doesn't make sense for that to then force a semver major release of the API and everything else because we made that change. So yes, I kind of want to understand this a little bit more before we put policy in place.
B: Yeah, I chatted the link, and I know they talked about possibly trying to spin up a meeting around this document itself.
B: So if I hear anything, I can at least mention it in the Ruby Gitter, but I think the goal is to turn this document into an OTEP.
B: Ultimately, there will be some debates over a PR at some point, and I think Ted wanted to have that up sometime this week. But as we talk through this, it does seem a little ridiculous to have to bump the API version, which should be incredibly stable, when you make a breaking change to the SDK.
B: Yeah, and if you wanted to keep any kind of versioning in sync, at most I would say major versions, because if you fix to a major version, that would at least give you some signal as to what's compatible with what.
B: But yeah, I think keeping version numbers in lockstep is probably a huge hassle and hard to keep up with. Keeping major versions in sync might be worth considering, but still possibly not feasible, at least for things like the API. I could see maybe doing that for the SDK and instrumentation.
B: Yeah, should we move on to our repo?
A: Sure, yeah. We cut a new release, a little rocky, but we got it up, so 0.9.0. I think we have one breaking change that went in since then, and a couple of other minor things. This actually brings up version numbering: we may want to do another release of the SDK and possibly, I think, the Common package.
A: Was this the thing that Eric ran into? I think so. I haven't replicated it, but it made sense: Common was requiring the API, but it didn't add the dependency to the gemspec.
A: So I fixed that. I was hoping Eric would be here today so I could ask him whether that actually fixed his problem. Actually, was that a breaking change? I don't remember. Do you want to take a look at the pull requests here, at the closed ones?
A: It's actually a slightly frustrating change, because it requires an allocation where before we didn't need one. But anyway, this only affects the SDK; it doesn't affect any other gems.
A: Yeah, now we can wait until I release the TraceState change. Right now TraceState is going to be somewhat unusable for most people, so once I actually implement TraceState as a class, that's going to affect the propagators and everything that's using it, so the API will also have a breaking change. If we wait for that, then we can just do 0.10.0 and release all the...
B: ...gems, yeah. What is the motivation for potentially cutting a release sooner?
A: I probably should have cut a release before this breaking change. The main reason is fixing the SDK install problem.
C: I am actually not entirely sure that's true, but I'll get back to you. Okay, thanks.
B: Yeah, I mean, either way, if it's easy enough to just release this change as a patch release, that might be the easy thing to do, and just...
A: Cool. So yeah, as I mentioned, I'm working on the TraceState implementation; hopefully that'll be done this week.
A: Robert and I, or Robert in particular, are focused on getting things to a point in OpenTelemetry Ruby where we can start deploying it in production as soon as possible, so the ruby-kafka instrumentation is one that he needs to get out soon. He's got a PR open for that.
A: No, no. What I want to do is actually instrument the OTLP exporter so that we have metrics about dropped span counts and, you know, the number of export calls, successful calls, failed calls, all that stuff. We want to be able to report those metrics from the OTLP exporter, but report them using some other mechanism, so basically something pluggable, so that we can, in our case, use statsd to do the metric reporting. Eventually, once we implement OpenTelemetry metrics, the metrics should be reported using that API, but that API doesn't really exist yet, so we need to do something else, whether that's something where we end up monkey patching a bunch of stuff, so we just have a bunch of empty hook methods, or we actually formally (or more formally) define something. I don't know.
B: Yeah, so it sounds to me like we would need an at least semi-functional metrics API, and I think our API is actually somewhat out of date compared to what is currently out there. I would have to look again, but when I was glancing at it, it was not looking a lot like the current spec. I think it's changed.
A: I was thinking of having either a bunch of methods in the OTLP exporter that we can just subclass and override, or monkey patch: these empty methods that are reporting metrics. Or, you know, you could arguably use ActiveSupport notifications if you want, but we don't really want to have that dependency. Or we just define some interface that allows somebody to plug in an implementation of the metric reporting from the OTLP exporter. And actually, in fairness...
B: I'm just throwing out ideas; I'm not saying they're good or should be done. But would it potentially work to have a richer export result, and just subclass the exporter from Shopify, to add this quick-and-dirty statsd metrics recording? It might... I don't know if it's going to be...
A: ...a fairly complex export result. Like, you'd want to know information about how many retries were there, what were the error codes from the retries.
A: These are also things that we can declare API-private or something and just document them as being there for monkey patching if you need it. Ultimately, those same spots, those methods, would eventually be filled in with things that record metrics using OpenTelemetry's metrics API.
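A minimal sketch of the empty-hook-method idea being discussed, assuming nothing about the real OTLP exporter; every name here (HypotheticalExporter, on_export_success, and so on) is invented for illustration:

```ruby
# Exporter with empty, monkey-patchable hook methods for metric reporting.
class HypotheticalExporter
  def export(spans)
    # ... send spans upstream (omitted) ...
    on_export_success(spans.size)
  rescue StandardError
    on_export_failure(spans.size)
    raise
  end

  private

  # Empty template methods: API-private hooks with no implementation,
  # to be overridden or monkey patched until a real metrics API exists.
  def on_export_success(count); end
  def on_export_failure(count); end
end

# A user could subclass and override to report via statsd, for example:
class StatsdExporter < HypotheticalExporter
  attr_reader :successes

  def initialize
    @successes = 0
  end

  private

  def on_export_success(count)
    @successes += count # stand-in for a statsd increment
  end
end
```

The base hooks cost almost nothing, and they can later be replaced with real OpenTelemetry metrics calls, which matches the "fill them in once we have a metrics API" plan below.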
B: Yeah, I guess that makes sense. We can add these quick methods; they can be API-private, or they can be legitimately private, and pretty much just be template methods with no implementation. Then we could safely remove them when we introduce a metrics API.
B: Right, yeah, that sounds good. If we can add in the methods for the behavior that we want, retain them, and just leave them as something that's monkey-patchable for now, and then fill them in with the implementation once we have a metrics API, that sounds like a good plan. So yeah, tl;dr: you know what you want, you know what you need, and anything reasonable...
A: ...add it in. Cool, yeah, we can hash it out in a PR, that's fine. Actually, this is a question for you: there was a question that came up about Lightstep propagation. Does Lightstep have a propagator for OTel Ruby? I noticed that you have wrapper packages for a bunch of other languages (Go and Python and whatever else) that set up sort of reasonable defaults for things, so pretty similar to what we have in Shopify; we have wrapper packages for Go and for Ruby.
A: Is this something that you're planning for OpenTelemetry Ruby?
B: Good question. We'll probably end up with a wrapper package at some point. We haven't introduced Lightstep propagators for OTel yet; it's something that we've been talking about. Most of our tracing clients support B3, so there was kind of this idea that that was a common denominator between everything.
A: Does Lightstep have its own propagation header?
B: There is definitely the Lightstep format. A few other people have adopted it, but it's kind of one of the more official unofficial OpenTracing headers, if that makes sense; OpenTracing went so far as to specify a tracing API and stopped there.
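For reference, the B3 single-header format mentioned as the common denominator carries the trace context in one `b3` header (`{trace_id}-{span_id}-{sampling_state}`, with further optional fields). A minimal, illustrative inject/extract pair, not the OTel Ruby propagator API, might look like:

```ruby
# Write a B3 single header into an outgoing headers hash.
def inject_b3(headers, trace_id:, span_id:, sampled:)
  headers['b3'] = format('%s-%s-%s', trace_id, span_id, sampled ? '1' : '0')
  headers
end

# Read the B3 single header back out of an incoming headers hash.
def extract_b3(headers)
  value = headers['b3'] or return nil
  trace_id, span_id, flag = value.split('-')
  { trace_id: trace_id, span_id: span_id, sampled: flag == '1' }
end
```

Real propagators also have to handle the multi-header B3 variant, malformed values, and defaulting, which is exactly the kind of edge-case surface the conversation below warns about.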
B: So we'll at least mention B3 as a solution for now, and then I'll have some conversations at Lightstep and see what our plan is for this, yeah.
B: Yeah, I will lay out some options. I think B3 is an option. I think cutting and pasting some stuff from our tracing clients and turning it into an OTel propagator would also be an option. I always hesitate doing this, though; anybody who has implemented propagators for multiple formats will probably have similar stories. I hope that they have similar stories, because it's always like...
B: "Oh, it's easy, we'll just add another propagator." And then you add another propagator, and then it takes like six months of people using them to find situations where it didn't work for them, and then several days of debugging to figure out exactly what went wrong, and then fixing the problem. It's just so easy to mess up, and hard to find out why and where.
A: Not really. Oh yeah, sorry, there was actually a request on the pull request from, I think, the other Francis. He was interested, yeah; he was asking whether we want him to continue working on this or on something else.
B: Or we at least wanted some of the behavior from this. The one thing that I'm just remembering... I think maybe I created the issue that this...
B: Yeah, that this was trying to solve, and I think the issue still remains to some degree. The issue is that if invalid attributes are added to, say, a span, at a bare minimum the Jaeger exporter was breaking.
B
It's
like.
I
think
one
one
of
the
common
things
just
like
a
nil
like
vanilla's,
like
a
value
for
a
span,
will
break
the
jaeger
exporter,
and
I
think
the
spirit
of
this
pr
was
just
that
like
we
should
keep
invalid
data
from
entering
the
export
pipeline
and
that
would
free
all
exporters
to
be
able
to
kind
of
just
trust
the
data
that
they're
getting.
A: I guess that would be one question to ask. Since then, the spec has become pretty clear about null handling... well, sorry, maybe not super clear, but it at least calls out null handling and says that you can't have null-valued attributes and you can't have nulls in array-valued attributes, but that doing so is undefined behavior.
A
We
certainly
don't
want
to
be
raising
exceptions
at
that
point,
because
the
error
handling
spec
says
that
we
shouldn't
be
raising
exceptions
there
so
yeah.
I
think
the
the
question
here
was
whether
we
wanted
to
be
doing
that
validation
eagerly
when
the
attribute
is
added
or
lazily
when
the
span
is
actually
finished.
B: If I recall, one of the ideas was that things that are obviously invalid early on should be dropped, so there was some validation that happened up front, and that data would be dropped so that we don't retain memory for it.
A: Dropping attributes that exceed max limits: that's not talking about the size of attributes, it's talking about the count of attributes.
B: This is based on a meeting we had back in March. So, like, you don't...
A: Yeah, I think this makes sense, because if you're not checking types until finish time, then you're not determining "is this a string that is longer than x," right? So really, the only thing we're looking at up front is making sure we're not retaining more than n attributes per span, and then we do the actual type validation and length validation at the end.
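The split being described (a cheap eager check on attribute count, with type and length validation deferred to finish time) can be sketched as follows. The class, limits, and accepted types are illustrative, not the actual OTel Ruby SDK behavior:

```ruby
# Eager count cap bounds memory; type/length validation is deferred.
class SpanAttributes
  MAX_COUNT  = 128  # hypothetical attribute-count limit
  MAX_LENGTH = 1024 # hypothetical string-length limit

  def initialize
    @attrs = {}
  end

  # Eager check: only the count, so setting an attribute stays cheap.
  def set(key, value)
    return if @attrs.size >= MAX_COUNT && !@attrs.key?(key)
    @attrs[key] = value
  end

  # Lazy check at finish time: drop invalid types and oversized strings.
  def finish
    @attrs.select do |_k, v|
      case v
      when String then v.length <= MAX_LENGTH
      when Integer, Float, true, false then true
      else false # nil and other invalid values are dropped here
      end
    end
  end
end
```

This also exhibits the corner case mentioned just below: invalid values occupy count slots until finish, so they can crowd out valid ones.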
B: Yeah, I think that makes sense, and it's a very pragmatic way for us to not retain too much memory. Those upfront checks on just the number of things seem pretty easy to do and low overhead, whereas eventually we're going to isolate the collection and do some work on it, and there we can do the type checking and the size/length checking (whatever you want to call it) on each attribute.
A: I think it seems reasonable, yeah. There are always caveats: if you're adding invalid attributes, they're potentially pushing out valid attributes, and then at the end, when you do the final validation, you end up with no attributes. But I think that is a corner case that maybe we don't care about dealing with at this point. So I guess the only question here is: do we want Francis to pick this up or not? I think we probably do.
A: Yeah, we're a couple of minutes over. I didn't have anything else. Was there anything else you wanted to chat about?
C: So I was looking at how things worked. It is possible, but there's some manual work required to do a release from a branch. The good news is I don't think we need to here. So, Common...
C: I think we can go ahead and release 0.9.1 of Common just normally, because nothing else has changed in Common, and I don't think we need to do another release of the SDK with the dependency bump on Common, because the SDK already depends on the API. The only change to Common is that it would depend on the API, so that change would be a no-op on the SDK side.
A: And the way the dependency from the SDK to Common is specified, will that result in new installations just picking up 0.9.1 instead of 0.9.0 of Common?
C: Only if they bundle update. But that's also true if we bump the SDK; people won't get it unless they bundle update. So again, I don't think there is a benefit.
C: But the question about releasing from a branch is a good one. It's complicated, because it means that we have to update the changelog in a branch and then copy that update back into master, and there are additional steps that need to happen there. So again, it's possible to do some of those steps manually, and I can write up how to do that.
C: That is, if it's something that we think will be useful to have. I'm not yet sure whether it makes sense to build the capability into the tooling; we'll have to see how often we run into this sort of thing.
B: Yeah, I think if we don't have a need to regularly release from a branch, it probably doesn't make sense to accommodate it, but if we do make that part of our workflow, then we can investigate it. For now, silly mistakes are probably gonna happen, but hopefully we can work around them.
B: I was just gonna mention that I will get some eyes on this ruby-kafka instrumentation. I know Robert asked about it last week.
B: It was Thanksgiving here, so we had a few days off, but I know this is something that Shopify would like so you can start using everything in production. So yeah, I'll have a look through it, but generally I feel like, if you know what you need and it's working for you...
A: ...then I think it's fine, yeah. This is not tested in production or anything like that yet, so I am also going to review it. I started reviewing it yesterday, but I will keep plugging away at it today, and when I'm happy with it I'll give it a thumbs up. But if you can also take a look before we actually merge it, that'd be great. Cool, will do. All right, well, see everybody next week! Cool, yeah, see ya, thanks.