From YouTube: 2022-01-13 meeting
B: It might be the rain, but, like, I know, I cannot.
C: I added the link. There was a request in the pull request to discuss it at the SIG meeting, so I just added it in here; Anthony had made the request to discuss it at the same time.
F: Okay, as I pointed out in my whole argument there, I'm just following whatever we decided a long time ago to do, and indeed this interface, I'm pretty confident, will be implemented by users. So if we ever want to support extensions, we have multiple options, but the project decided, as I said, a long time ago to go with this pattern for allowing extensions of the interfaces. I'm not sure what we need to discuss in that PR.
F: We can have a proposal to change that decision and to use a different pattern, which I'm happy to review, comment on, and brainstorm about, if that's the case, but I'm not sure what we'd discuss about that PR.
A: I have no strong opinion on whether this is the right or wrong way to do the thing, but I can have a look at the PR and comment there, if you want.
A: No, that's fine. I'm saying, independently from that, because you're bringing up a broader issue, right: whether this practice is generally what we want to do in the code base or not. So I guess, independently from the PR (I mean, I'm not saying the PR should not be accepted; that's an independent decision), you're bringing up a kind of practice issue, right? Is that a good practice or not? Should we discuss that separately, so that it's not... it's not...
F: Propose a solution? I have proposed a solution; there's a solution proposed in the PR, okay. No, you mentioned... you asked me: why do we use this pattern? You didn't say: don't use this pattern. You said we decided, in the Go SIG, to go with a different pattern, and I said: okay, sure, we have used this pattern here, and this is how we use it.
E: I think I did make that concrete suggestion. I said: if your goal is to have this interface and to be able to extend it later, you can simply declare the interface. You should declare the interface and then, if you need to extend it later, declare an additional interface; and if you need to ensure that both interfaces are implemented by something, you then have a third interface that composes them.
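A minimal Go sketch of the composition pattern described here; the interface and type names are hypothetical, not taken from the collector code base:

```go
package main

import "fmt"

// Exporter is the original, stable interface.
type Exporter interface {
	Export(data string) error
}

// Flusher is a later extension, declared as a separate interface
// so that adding it does not break existing Exporter implementations.
type Flusher interface {
	Flush() error
}

// FlushingExporter composes the two for callers that need both.
type FlushingExporter interface {
	Exporter
	Flusher
}

type stdoutExporter struct{}

func (stdoutExporter) Export(data string) error { fmt.Println("export:", data); return nil }
func (stdoutExporter) Flush() error             { fmt.Println("flush"); return nil }

func main() {
	var e FlushingExporter = stdoutExporter{}
	e.Export("span")
	e.Flush()
}
```

The extension interface is purely additive: existing `Exporter` implementations keep compiling, and only call sites that need both capabilities ask for the composed interface.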
A: So let's do this, guys. I didn't have a chance to have a look at the PR at all. I think it will be useful for me to go over it and understand what it is that we want to change, and I'll post my opinion, if that works. I'm unfortunately not able to offer any help right now, because I just don't know what the code is about, but I promise I'll go over the PR and post some comments after, and I think we can find the solution to this. Does that work?
E: Sure, and to be clear, this is precisely why I put a hold on it. I want to ensure that other people have a chance to review and weigh in, so that we can actually decide what to do here, rather than have it become part of the code base and then have to deal with the fact that it's there and, you know, have a higher bar to changing that.
G: So this is something that we brought up, I think, last fall: for the Google Cloud exporter we're still using the convert-to-OpenCensus helpers and then tossing the data out to our OpenCensus exporter, and we'd like to change that. The other thing that we're interested in is adding some integration tests against the live...
G: ...Google Cloud APIs. When we previously discussed it, we were interested in hosting a large portion of the exporter code somewhere else, so that during presubmit we can run tests, using Google Cloud Build or something, that run against the actual APIs, since I don't think it's possible to do that with it living in-tree.
G: But this of course brings up some other issues, namely that now there's a less nice dependency tree, right, because no matter how we slice it, if the majority of the exporter lives somewhere else, we're going to have dependencies on the collector core, meaning that in order to update something in contrib, we would need to update something in our repo first before we can update contrib.
G: So I wanted to raise that before I go ahead and open any PRs. Just to make sure, I opened an issue so that we can have the discussion there, and maybe there's some compromise or proposal that would make it better than just hosting the entire exporter somewhere else. So I'm looking for feedback, and... yeah.
A: And the parts that you would want to have in an additional repository would have a dependency on the core of the collector, on pdata namely, right? And then, if we make any changes to pdata, you would have to update your other repository first before we can actually use that. That's the problem, right? Right. So.
G: Libraries, yeah. If it helps, it may be possible for us not to depend on some of the component interfaces. But I'm not sure what the high-churn pieces are that we should avoid depending on, versus the ones that maybe are acceptable for us to depend on in order to do our testing.
A: I think this ties back to the discussion we had about declaring the stability of the internal APIs, right? If we declared pdata as stable, then this would be completely fine to do.
A: I guess this is yet another reason why we should maybe accelerate that effort, at least for pdata, and just declare it stable; maybe for traces and metrics for now, because logs are not stable themselves as a data model, but at least for traces and metrics it's, I think, possible to do. Until we do that, I think it's probably not a very good idea to host it elsewhere in a different repository, precisely for the reasons you outlined, David. So maybe wait a bit, and hopefully we can do that work quickly.
A: I don't know if this two-phase approach works for you. I mean, initially that means you're probably not going to be able to do that type of integration testing in this repository, but there's nothing preventing you from having your own repository with just that component, as a custom build of the collector, where you run those integration tests on your infrastructure. You could do that, right?
G: Right, we just don't have it... yeah. Sorry, am I muted? No: we just wouldn't have it at presubmit time, which is when it's useful to us to have it. Sorry, say again? Well, you wouldn't have...
G: Do we have a timeline for when we think pdata will be stable?
A: No... we do have issues tracking it. There's a roadmap... sorry, there's a milestone of issues which are about making the API stable, but I don't think we have a date attached to that. But I think this is yet another reason (and this is the second time today I'm hearing yet another reason) for us to do the declaration of stability of the APIs.
B: So how about we use this one here? David works very closely with, you know, the core components and so on, and is very familiar with the code base. So how about we use this example here as our step right before declaring stability, because, you know, we don't know whether we are stable if we don't actually have people using it in a way where stability is expected.
B: So how about we make a promise here today that, you know, you can start using it, and we are going to try to keep it stable from now on: doing breaking changes in three steps so that he can adapt his code first, or promising not to break compatibility within one version, or, you know... And then, if we are happy with the process, we start the actual process of declaring stability. Because if we just declare it and then, you know, we find something that forces us to break it, it's going to be very hard for us.
E: Yeah, maybe. I've run into issues a couple of times when trying to do contrib releases with that, where even when we do a two-phase change, like was suggested, when we get to that second stage and actually remove a deprecated function, we need to update core first before we can update contrib.
E: We need to get the Influx changes made, and if we add a Google one, we need to track down both Influx and Google, and if we expand that out, it'll get very hard to make changes to core and then contrib. I think if we can stabilize pdata, that becomes less of an issue, because we'll stop making those breaking changes, yeah. But that has been a pain for me a couple of times in the past, trying to do contrib releases.
A: That's why it is a problem, right? Why we don't want more of that. Okay, Dmitry, you're here in the call, right? Can you maybe tell...
I: Yeah, I was looking through the code base, mostly, and trying to break it down into the modules that we can possibly declare stable one by one, and, like, break down the dependency chain, so that each of these stable modules depends only on another stable one. It's pretty hard, but pdata is definitely the one that we can already go with, and then we can decide how to break it down further.
I: Yeah, but the question is: do we want to break down the telemetry types, so that we declare each model per telemetry signal? Or we can, for example, for the whole pdata, mark logs as experimental: providing the whole model for all the telemetry types, including the log structures, while saying that the logs part can still be changed even if we declare it 1.0.
F: We do have a couple of stubs for gRPC that use pdata directly instead of proto, and marshalers and unmarshalers for JSON and for Protobuf for pdata. I think it's pretty minimal.
A: Code, yeah. And then there are the semantic conventions, which are auto-generated; I don't think that's a problem either. That's fine.
E: I do worry. I worry about proposing to mark pdata, or the model module, as 1.0 while expecting to make breaking changes to it. I don't think we can simply say: oh, logs are special and we're going to continue breaking those, but it's still 1.0.
F: I mean, it depends. We can follow gRPC, which does have this rule with "experimental", but I don't know if that's the best.
F: That may be good, but I would ask: how do you do experiments if there is no way you can experiment with an API once you put 1.0 on it? Like, besides logs (logs are a different piece, because they're pretty spread out and large), for any experiment, how would we do it if we don't allow any kind of experimental notice or something? Do you do that in a separate module, or...
F: Then it becomes a breaking change if you want to bring that module into the stable one, correct? So you are creating an experimental module (call it "experimental" or whatever you want to call it), and then, when you are done with it and you want to bring it into the core, you're going to break.
E: That depends on what the experiments are. If you're making additive changes, but you're not sure about the API of those additive changes, keep them separate temporarily, so that you can make those changes without breaking what you're going to add them to, and then, once they're stable, pull them in. That's what we're doing in the Go SIG with metrics: metrics are all in separate modules, and we haven't integrated any of them into the core module, for the global metric meter provider and tracer provider and things like that.
E: It depends on what you're moving. In Go, we're talking about moving some methods, and we'll leave forwarding methods behind; I think you can do the same with type aliases for some types. But yes, I think there are ways of working around that and, as David mentioned, it would be a break in the experimental part anyway. But obviously we do want to minimize the transition effort as that transition happens.
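A small, self-contained sketch of the type-alias technique mentioned here; the names are invented for illustration. In a real move, the alias would live at the old import path while the definition moves to the new module, but a single file can still show why an alias keeps the old and new names fully interchangeable:

```go
package main

import "fmt"

// newLogs stands in for the type at its new location
// (package paths are elided in this single-file sketch).
type newLogs struct{ records int }

func (l newLogs) LogRecordCount() int { return l.records }

// Logs is an alias declaration at the "old" name: both names denote the
// identical type (not a copy), so existing code that names Logs keeps
// compiling and interoperating after the definition moves.
type Logs = newLogs

func acceptsOldName(l Logs) int { return l.LogRecordCount() }

func returnsNewName() newLogs { return newLogs{records: 3} }

func main() {
	// A value of the new type flows through an API declared with the old name,
	// with no conversion needed.
	fmt.Println(acceptsOldName(returnsNewName()))
}
```

If `Logs` were instead declared as a distinct type (`type Logs newLogs`), callers would need explicit conversions, which is exactly the kind of break the alias avoids during a module split.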
F: Anyway, okay, we can do that, but that means we will not be ready to put 1.0 on it, David, very soon; I mean, we have a strong dependency on the logs, if there is no other...
A: What good does it do us to prematurely declare stability? Because people start depending on the code base anyway: they lose patience and depend on it, and then we break something, because formally we can, but then we hear complaints, right? We saw that happening.
A: I think it's important. Without that, people either avoid depending on us, or they depend on us and then start making assumptions that, okay, it's going to be fine, which is not true, because we work on this thing and we break things.
A: I don't know what the solution is. I mean, the granularity level is the module; I understand that's how things are supposed to work in the Go world, right: you declare the module, the entire module, as stable. But for that to work for us, I don't know, we'll probably have to split the code base into a much, much larger number of modules.
F: You'd need the entire collector to be split, correct? Because of every component: we declare components for logs as well, we declare things, so you'd need to do a lot of work to split that. I mean, you can argue: okay, let's split the logs out of pdata and declare pdata stable, sure, but still it's a reasonable amount of work. It's not trivial.
G: Getting back to our issue, or Google Cloud's desire to do things: I'd be happy to do some of that work, at least if it meant that we could do integration testing in our own repository, so I can sign up for some of that. I do think we would only need to split pdata, and not everything, in order to achieve that.
F: If we separate logs into another module, it means that we have to move it back once it's stable, and that is going to be a pretty breaking change for the customers. It's going to be pretty inconsistent for the end user, once we declare GA of the whole collector, if we have one module for metrics and traces and a separate module for logs; I don't think this is desirable. We should either split them all right now and have separate modules going forward, or we keep it in one module, because if we separate logs, we would have to bring it back into the other module once it's stable, and that's going to be a breaking change for many users. But as of now, it may be that logs will actually stay the same: maybe nothing will change, or only some additive change to the pdata logs will happen, which is good, I think.
D: Alex had a proposal: what if we split everything into pdata...
F: Into pdata common, pdata trace, pdata metrics and so on, and we just create a module around pdata logs. Then, even if we bring them back, it's not going to be that bad a breaking change, because it's just a go.mod change: you'll need to remove the dependency to make sure you have the new code, not the old one. But that's very simple to fix. But, Alex, that's...
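For a consumer, the go.mod change described here might look roughly like this; the module paths and versions below are illustrative assumptions, not the collector's actual layout:

```
module example.com/mycollector

go 1.17

require (
	// Stable signals (common, traces, metrics) stay in the main model module.
	go.opentelemetry.io/collector/model v1.0.0
	// Logs live in a separate 0.x module until the data model stabilizes;
	// folding them back in later is just a require-line change for consumers.
	go.opentelemetry.io/collector/model/plog v0.44.0 // hypothetical path/version
)
```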
F: That may be good for pdata, but I don't know if we can do that for all the components, all the parts where we have all of these things, just as an FYI. Because, first of all, it would be super confusing to have everything called "logs", since you'd have to alias the imports and stuff. And one of the things in Go is that the name of the package together with the name of the struct...
F: The package name and the struct name should nicely compose a statement, or a sentence, there. So yeah, maybe it's worth doing this for pdata. Another option, David, is: if we guarantee you that we won't break those things, I don't think you necessarily care about it being 1.0 for your purpose, for your solution. Like, if we give you the guarantee that we're not breaking the metrics and the traces that you care about, I think you should be fine.
F: Yeah, I mean, with Influx, to be honest, my experience was interesting, because back then we were doing a couple of breaking changes; it was a problem initially, and a couple of times, but in the past four or five months we haven't broken anything in pdata, because we kind of reached pretty good stability there. There is only one big change in pdata which may affect some of these things.
F: That's the only remaining issue that I know of. It shouldn't change the API, because that's why we built that API independent of the internal implementation, but it will be a good exercise to prove us wrong or right on that. So that's the only change, and he's actively working on it, so I would expect to be done with that change in a month or so, and after that there is not going to be any big change to pdata at all. So.
A: So why don't we do this: we have the logs SIG meeting later today, and I'll see what the sentiment there is, whether we can actually make it happen quickly, in a few weeks maybe. And, Bogdan, I need your help, because we have a blocking issue there.
A: If we can, then I think it's still more desirable to keep this as one module; so let's say, give us a couple of weeks to make a decision about that. If not, if we're not able to quickly declare the logs stable, then we explore the option of splitting this into four different modules, right, and then we independently declare that common, traces, and metrics are stable, and logs remain at 0.x.
F: As David pointed out... I think, David, from your perspective, what I would do is start working on that independent of whether we call it 1.0 or not. And once you are done with the implementation, first of all, we'd like your feedback about our APIs and stuff, and whether things can be improved, because, being one of the trusted...
F: Being one of the trusted extensions, or users, of our pdata, it would be very good to get your feedback there. And second, after we finish this (I think it's gonna take two, three weeks), maybe we'll have a better understanding of: can we do 1.0? Can we just guarantee you some stability? Can we maybe just put down a plan of how to work with this? Or whatever... something to do. Okay.
G: One of the goals is writing integration tests, and we've actually already started doing some work towards that in a separate repository, okay; but, you know, we haven't put anything in yet and it's not being used by anyone, so we can always make changes if everyone in this group thinks there's a better way to do things.
G: It's an entire rewrite right now, so it's easy for us just to do the rewrite and the integration tests and leave the existing stuff as-is, just so we don't break people. It's like no one's going to be using it anyway, because we're still finding bugs in our new implementation and fixing those.
F: So, again, it's not going to be plugged back into contrib until we finalize these decisions. Okay, this is what we agreed on: we're gonna continue working on it independently, but we're not gonna bring it back to contrib, which would cause us headaches, if any, until we finalize this decision. But I think this is another request to prioritize this, so let's treat it seriously and try to see what is left there. And Alex, it's on you, man.
G: I'm happy to help too, Alex, so feel free to ping me if you want feedback on something. Obviously we appreciate everyone's help here, so thank you.
A: Let's go to the next item: auto-assign and Prow merge.
J: Sorry, Bogdan, I was just saying that I did some research on this. I don't know, Bogdan, do you want to lay out what you were looking for first, before I go into what I found? Maybe that'll help. But there's not a whole lot of options, yeah.
F: So for me, I was looking at the experience they have there with the OWNERS file: as Sergey pointed out, we want to be able to have owners per directory, and, if a change spreads across directories, to make sure we have approvals from every directory approver, or a top approver, whatever. So I think they have a very good workflow already defined; I would prefer not to have to define another workflow, and to just point users to that document.
F: I understand it's a bit of a problem with maintaining that; I have no clue how hard the maintenance process or other stuff would be, but yeah, this is... yes. And I'm looking to have an experiment, or some period of time with this, and then make the final call on whether this is the thing we want to do, or...
J: Mergify is a SaaS product, and there are also questions about whether or not that can get approved. But it doesn't have, you know, the owners-file-across-components that Prow provides, and we already kind of do code owners and auto-assign, but that's not really the intent of the issue; it's really to get, you know, that extra functionality. And it's really a question... I think Prow would be the best solution for what Bogdan wants, and, you know, that PR workflow, it's...
J: I don't think it'd be a whole lot of work to set up; it's just getting the requisite permissions, and: where is it going to be, who's going to maintain it, where is it going to be hosted, and whatnot? Because we don't need to do much to set up the code review workflow; there's not a whole lot of configuration.
F: Yeah, so the whole purpose of this experiment: we had long, long chats about increasing productivity and velocity in that repo, and we believe that giving components more freedom to move faster and stuff will solve the problem, and that's why we are looking into having the capability of having owners and stuff per component, and not the code owners that we have right now. I mean, it's not like this is what I want, by the way. It's like...
B: Brian, so, can you expand on why Mergify would perhaps not get approved, or, you know, what the blockers are in using it?
J: So first there was a callout by Alita and Anthony that we're not sure if the CNCF allows SaaS products to have write permission on the repository. That's an open question; we didn't really dig into it. We were going to look into that if we wanted to still experiment with Mergify. But the other issue is that Mergify doesn't have, you know, the PR workflow that the OWNERS files would provide. It really would just kind of be a stronger CODEOWNERS file, and it seems like we'd probably even leverage the CODEOWNERS file already, so I'm not sure how much extra functionality it would actually provide, or how much streamlining of the PR workflow it would help with.
B: Yeah, so we used to use Mergify for Jaeger and, to answer Alex's question here, we were using it with the merge-on-green feature: whenever a PR had all the approvals and so on, it would automatically merge. Now GitHub provides that out of the box, so that's why we're not using Mergify anymore for Jaeger. But I remember seeing that they did have an extensive set of rules for approvals and for auto-merging and so on.
B: It's been a while since I last used Mergify, so I don't know. And I really like Prow; I mean, if it can manage Kubernetes, it can manage everything, right? But the problem is, when we looked into Prow for Jaeger, for that kind of use case, the problem was: we had only a couple of maintainers on Jaeger, and none of us would really like to spend time doing CI. So I guess the question here is, you know, who is going to maintain it?
G: Not really. They used to allow that; for example, cAdvisor, I think, used the Kubernetes Prow, which is a Google-owned project, but they've started kicking people out of the nest, and... yeah.
A: Yeah, I understand. Am I remembering incorrectly? Didn't we have Mergify at some point, somewhere in OpenTelemetry? Yeah.
B: The same use case that we had for Jaeger. So I was using it for merge-on-green and for Dependabot auto-merge, merging Dependabot's PRs automatically whenever they pass CI, but we disabled it for a couple of reasons. First, we started requiring a review from a code owner before merging, so I had to go to the Dependabot PRs anyway, so I just, you know, started merging them myself. And the second use case was the merge-on-green.
B: So I've never requested permission, but I've never heard of the CNCF requiring anything like that before. So if I knew that I had to ask for permission, I would have, but it would be a surprise for me to hear that they have this kind of requirement for individual projects. My understanding is they let us manage things the way that we need, the way that we want. That's what I understand.
E: I may be misremembering, but I seem to recall that in the Go SIG we discussed having some bots added to the repo, and there was concern that we would not be able to get permission to have bots with write access to the repositories, because of policy; so we backed away from that. Maybe whoever told me that was mistaken, or maybe I'm misremembering, but that was a concern that I thought needed to be brought up and clarified.
B: So Mergify has a few different features. One of them is the code owners handling that we are interested in, and the other one is Mergify automatically merging things once they get into a certain state. We're not using that, and we're not going to use that, so Mergify would not need write permissions to the repositories, from what I understand; it just needs read permission, for the code owners, and it's just going to write a comment on the PR, or it would just be a check on each single...
F: I mean, it would be nice to have that as well in that component, but even if we end up with... if we follow the flow and we end up with a label, we can build a small GitHub Action that, based on that label, merges the PR. So whatever we do, it's more important to make sure we have a bot or something that enforces the flow that we want; and that's why Prow was nice.
B: The thing is, you know, it's a pain to maintain; it's not easy. I mean, even if it works today, our CI will need a change in the future and things will break; that's just how it is, for sure.
F: I mean, what are we hearing here? We are afraid of using Prow because of the maintenance burden, and nobody volunteers to say: I'm gonna maintain this for you guys. So that's one thing. Maybe an option for us would be to raise this at the community level and formally ask for some volunteers to help us with this.
F: If possible; I don't know if we can find that or not. In the meantime, I think we should look into Mergify and see if we can achieve the same solution. I'm not 100% convinced we can, but let's give it a try and see if that's something we want to do, even though it may not merge or whatever. Anthony, one thing you mentioned: approvers...
B: Let me try again, then. So even if the feature that you want is a premium feature, an enterprise feature, or whatever we have to pay for, we should still consider it. You know, it's not hard... it's not difficult to get money from the CNCF for those types of solutions, yeah.
F: So I would not be worried about that, but I would be worried if it doesn't give us the full experience that we wanted. And, I mean, personally I really like the experience that the CNCF has with the code owners: having people with review rights and with approval rights matches exactly our approver and maintainer rights and stuff. So I think it's very reasonable to have that.
C: Yeah, I just wanted to bring this up. So I've been on this journey of trying to add support for optional fields in our proto in the collector. I have gone through three different prototypes that I haven't listed here, but I'll be sharing a document at some point, probably next week, on my current approach. So: I've gone through the route of using a forked version of gogo; that was a bit of a pain.
C: I've gone through the route of trying to use custom classes with gogo; that also didn't quite work, because it was quite hacky. The last prototype I put together was using the Google protobuf library directly with an extension called vtproto; that also didn't work, from a performance standpoint. The performance results, listed in a PR that's in the repository today, were quite poor.
C: So now I'm on to prototype number four, which is manually serializing and deserializing the data from OTLP directly into pdata. This is the same route that the Java SIG took, where...
F: This we'd have to do for everything... I mean, we may be able to do partial things. But okay, let's see how it works, Steven; it's an experiment, we are not committing to it.
C: That's exactly my reaction when I first started this. When I heard that Java went and implemented manual serialization and deserialization, I thought this seemed like the wrong approach, but having gone through three different prototypes, now I'm out of options. So, if people have alternatives...
F: I mean, maybe we can do smarter things and modify pdata to generate this code for us, based on the structs that we have there and stuff, instead of maintaining it manually.
F: That would be easier, right? It's limited a lot to our use cases, and also, probably for the moment, we don't have to read the protos and be a plugin for protoc and stuff; we can still do our manual redeclaration of the structs that we do right now. Let's give it a try; I don't know if we'll be able to achieve...
A: That may be more feasible, and actually preferable at least, to writing all the code manually. I suspect that, as we write it manually, we'll introduce tons of serialization bugs there, and we'll have to chase all of those bugs for years to come, I think. I'm worried about it, to be honest, and that's on the input side of things, which is concerning; I don't know, it opens up the possibility of lots of security issues there.
B: So, just to share a data point here (I'm not saying we should do that), it is common for client libraries to do manual serialization or deserialization of that kind of data. One example I have in mind is Apache Thrift, I think it is, with the Brave clients of Zipkin for Java: they do that serialization manually because they know exactly what they need, and they just do whatever they need; they manually encode the data and send it to the server. It's very minimal.
F: It was not a problem in Go in general (the dependency... I'm referring to the dependency management), but for us it's the performance. Hence, I think if we do it manually, we will get even better performance than right now; I'm pretty confident in that, because we can do some hacks around some of the things being heap-allocated or not, or other stuff. There are lots of things that we can do. It comes with a lot of maintenance cost.
F: I know that, but for that I think we can find people willing to maintain that part, for the performance gain. Even if we cannot find people for Prow, I'm confident in this: we can find people to maintain it.
C: So we're over time, but I just wanted to bring that up in case people had any strong opinions, one way or the other.