From YouTube: Community Meeting (March 21st 2023)
A: And we are live, hey everyone! Welcome to today's community meeting. I'm Ace, your regular guy, really glad you could join. To be honest, I don't have much on the agenda today, but if you have any discussion you want to bring up to make this meeting longer, I'm always up for it. I basically just have, I think, one or two things on my agenda to talk about, and the first one is actually from the past mentees of this mentorship program. I keep getting questions like "do we get certificates for the AsyncAPI mentorship program?", and I was like, I don't know, but I decided to bring the discussion up today.
B: Are we talking about official certificates? Because we can have some badges; we had some badges anyway, invented by Missy, so it would be a matter of extending them and putting in the name of the person that completed the mentorship. That's doable. With certificates it really depends what they want, because a certificate, from my perspective, already means something super serious and official. A certificate for me is something that somebody signs, with their name and blood. So I'm not sure what they really mean by certificates.
A: So basically, as far as I can remember, the certificate contained the name of the organization I participated in, the project I participated in, and an official signature from Google to show, like, "yeah, I like him". Okay, I don't think it's handwritten; an e-signature, I guess.
B: Because legally, I don't know who could sign it. I mean, I know that your signature has no value, that's what I mean basically. So that's the tricky one: is a signature a must-have?
A: Well, to me, I don't think it's a must-have, the signature.
B: Because like I said, we could work with someone to extend this official mentee logo with some generated paper where we say: congratulations, certificate for completing the 2022 mentorship and working on this and that project with that mentor. We can do it. It's just a matter of getting some simple design in place that we can generate. The critical part is just this.
B: I mean, I know why people ask for it, and it's pretty cool that they can not only publicly state that they completed the mentorship but actually have some sticker or whatever. I know we promised swag, but that's something that is delayed and we're going to deliver it, and a certificate.
B: So we basically have to have a discussion, because we have to involve a few people. We could, for example, check with Barbano, because she's the one that is working on the swag store. The plan is that this swag store will have an item called "mentee package" or whatever, and the mentee will get a voucher from us so they can claim the package; nobody else can buy it.
B: And maybe Barbano can check with this swag company if they could, together with the hoodie, also send a certificate. We would just provide them with the template, and maybe in this swag store we could have additional fields where the mentee would specify their name and the project that they were working on. So that could be done.
B: And then we have this super strong assurance that it's not fake, that people can have such a certificate printed out and handed over only if they really completed, because only people that completed got the voucher.
B: So that can be done. Obviously we didn't promise anything like this for last year, so we would probably have to figure out some simple workaround, but yeah, let's do it. If mentees need it, let's do it, whatever we can for them.
A: Okay, okay, so I'm going to start a discussion around this, unless other people jump on it.
A: Okay, so the second thing on my agenda is this interesting stuff that has a lot of questions in it, so I'm going to share my screen.
A: Do we have it? Okay, very nice. Yeah, you can see my screen.
A: Okay, so as you all know, we are actually trying to implement a contributors page in the new community section we've been working on. The whole idea of this contributors section is to have a page dedicated to the AsyncAPI contributors, which is really nice for me, and an analysis of all the contributor stuff: how much they've contributed, how many issues they've opened, and so on. So this is the result. Well, this is not the total result; we have two results. I just wanted to have your input and see how we can go, so we have two results.
D: Yeah, so see, there are some projects that we own, right? We are the code owners, so we are not necessarily having pull requests going to them; sometimes we directly push some changes. For example, there's a repository I'm maintaining. So here we should have a separate column: if there is any project which is owned by a particular GitHub handle, we can mention that. For example, Souvik has the CLI; he is the code owner there. All of these also count as contributions, right?
D: No, not the master; I'm saying we have our own separate branch, right? Suppose I have created my own branch, say a development branch, and then I create one pull request after having a set of changes done. I hope you are getting my point. Yeah.
D: Does it make sense as well? I think we have only these two parameters, but what about the maintainers who are constantly seeing different things in the project they maintain: checking the number of issues, discussing with the other colleagues?
B: Yeah, so basically there are contributions like comments and reviews, the things that Abir talks about. One thing is that it's cool that you created a pull request, but this request requires a review from several different actors, and that's also a contribution that is super hard to measure. For example, the bounty program we're thinking about cannot cover it, because that's pure maintenance work that is super hard to measure. And the same with issues: sometimes an issue is created and somebody can be credited for creating the issue, but then there's a discussion involved, a lot of comments, and again maintainers and some other contributors are also involved, asking questions, you know. So that's what's missing basically from the list, unless "issue stats" is something different than just issue creation.
B: It must cover it, because it's all about comments, right? GitHub Archive basically holds all the events. As I imagine it, every single event that happens on GitHub goes to GitHub Archive. So there are different event types; it's like with GitHub Actions, where you have an action that reacts on merge, or on pull request create, or review comment create, or comment create. So there are for sure events recorded that contain information about the comment that was created and the author of the comment.
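The event records Lukasz describes make comment counting straightforward. A minimal sketch, assuming GH Archive-style JSON records with a `type` field (e.g. `IssueCommentEvent`, `PullRequestReviewCommentEvent`) and an `actor.login`; the sample events are made up for illustration:

```python
from collections import Counter

# Event types we treat as "comment" contributions. GH Archive records many
# more types (PushEvent, IssuesEvent, ...); this subset is an assumption.
COMMENT_EVENTS = {"IssueCommentEvent", "PullRequestReviewCommentEvent"}

def comment_counts(events):
    """Count comment-type events per GitHub login."""
    return Counter(
        event["actor"]["login"]
        for event in events
        if event["type"] in COMMENT_EVENTS
    )

# Hypothetical sample events, shaped like GH Archive records.
events = [
    {"type": "IssueCommentEvent", "actor": {"login": "ace"}},
    {"type": "PullRequestReviewCommentEvent", "actor": {"login": "lukasz"}},
    {"type": "PushEvent", "actor": {"login": "ace"}},  # not a comment
    {"type": "IssueCommentEvent", "actor": {"login": "ace"}},
]
print(comment_counts(events))  # Counter({'ace': 2, 'lukasz': 1})
```

The same filter-then-count shape works for reviews or any other event type, which is why the archive can cover the maintenance work discussed above.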
A: Okay, so, sorry, I'm no expert in this data stuff, so I'm pretty sure Thulie should be able to give us some insights, because she eventually worked on the whole data stuff. I have a concern: when I saw the issues, I was so skeptical about the number of issues I'm seeing there, because at first I was feeling okay.
A: But now I have a concern. Let's say, for instance, I just come there and I comment "hey, I'm interested in this issue" and I leave. It's basically going to count as a contribution.
D: No, it won't be counted. See, the comments: anyone can do that, right? You have to have a separate distinction between the comments in the pull request and the comments in the issue.
D
So
I'm
asking
that
see
when
you
comment
on
the
political,
it
might
be
a
review
comment
or
something
like
around
it
right.
So
there's
a
bandwidth,
consumed
and
there's
ownership
involved
there,
okay
or
the
one
who
is
maintaining,
but
in
the
normal
issues
people
drop
their
ideas
and
all
these
things
that's
perfectly
fine.
We
are
not
explicitly
saying
that
okay
comment
over
here
and
all
this
right,
but
in
pull
request
we
have
to
That
explicit
approval
before
merging
it,
so
they
are.
The
consumption
of
bandwidth.
B: Also, my contribution, right? I mean, sometimes it can be just "hey, I want to work on it", and sometimes it can be a pretty extensive comment that adds much more value.
B: We've been looking at it for some time already, and it basically sucks, because sentiment analysis doesn't have the context. Sentiment analysis can tell you the sentiment of a short comment, but it's still not able to tell the value; it doesn't know the rest of the conversation. So the sentiment is not really of any value.
E: One thing I just want to mention is that this is not exclusively related to AsyncAPI or open source, this specific problem. This is performance related, and any company has the problem of defining it: how can you measure the performance of a developer, for example? Because there are things you can't measure, that's the reality. The important part is that even with Jira it doesn't matter; you have interpersonal stuff.
E: So it's not a problem native to this. I just want to mention that maybe someone has figured this out in general, something we can take notice of and use as guidance. But I also want to ask: is this related to the bounty program stuff as well, or is it completely unrelated? Okay.
B: It's actually more related to a view that we would like to have on the website in the future, where we show all the contributions, all the contributors. But yeah, I agree with what you say: it's not a new problem that somehow popped up. It's an old issue, how to get numbers to measure contributions.
B
Numbers
should
never
be
treated
one
to
one
so
they're,
just
supporting
and-
and
there
can
be
like
so
so
from
my
experience,
I
can
tell
you
like
Linux
Foundation
provides
us
with
some
basic
GitHub
stats
and
when
I
was
like
recently
when
I
was
publishing
this
blog
post
about
the
summary
of
last
last
year
and
I
used
to
the
there
and
I
use
their
data
in
the
in
the
article
and
the
diagram
was
clear
that
it's
showing
the
country
all
the
different
contributions.
B
So
it's
not
only
code,
it's
it
was
explicitly
in
the
no
don't
remember
exactly
where,
but
it
was
written
like
it's
a
com.
It's
a
review
comments.
It's
interactions
in
the
issues.
Everything
is
counted
because,
like
it's
really
hard
to
say,
if
again
like,
if
a
comment
in
the
issue,
it's
it's
useless
or
not.
Sometimes
even
a
comment
that
is
like
hey
can
I
work
on
it.
It's
it,
it
might
be
useful
actually,
because
it
pings
you,
you
jump
into
the
issue
as
a
maintainer,
you
can
revamp.
B
Oh
actually,
that's
issue
is
not
no
longer
valid,
I'm
closing
it
and
it's
still
a
contribution
like
somebody
contributed
basically
by
waking.
You
up.
Oh
there's
an
issue.
So
so
it's
still
it's
delay
contribution
I,
would
say
so,
but
yeah
in
total
numbers.
You're
gonna
see
like.
If
somebody
has
this
one
tiny
issue
for
hey
I
want
to
work
on
it.
B
Still
it's
just
one
issue
like
it's.
It's
fine
and
and
also
like
when
you
go
like
even
when
you
go
to
your
profile
on
GitHub,
you
can
see
this
I'm,
not
sure.
If
you
have
it
enabled
you
can
see
this
graph.
That
shows
like
shows
your
code
review
contributions,
issues
or
requests.
So
it
shows
how
your
how
you
really
do
open
source.
If
you
just
comment,
comment
commit
or
you
actually.
B
Yeah
yeah,
so
that
that's
the
same
the
same
or
similar
approach
you
should
take
here
like
a
if
you
will
have
only
issues
and
nothing
else,
so
the
graph
is
pretty
flat
and
if
you
will
generate
some
spam
issues
this
and
like,
if
somebody
will
try
to
you,
know
like
hack
the
system
and
start
doing
this
I
want
to
work
on
this.
I
want
to
work
on
this
in
different
issues.
B
Numbers
will
just
confirm
what
we
will
already
notice
on
our
own.
Like
hey,
we
have
some
spammer
here
that
wants
to
just
override
the
system,
and
probably
it
wouldn't.
It
will
end
up
with
blocking
the
GitHub
user.
A: That's interesting, and I think the most interesting aspect of what Thulie did was actually dividing the measures. So, the way GitHub Archive works, sorry, according to her explanation, the way GitHub Archive works: initially we were trying to get data from 2016 to current, right? But we noticed it takes some time for the data to update, and we would be getting different numbers. So what she eventually did was say:
A
Okay,
now
we're
gonna
have
like
overall
data
right
and
the
overall
data
is
going
to
be
coming
from
2016
until
2023,
right
so
from
2016
to
2023,
which
means
just
data
from
2016
till
the
end
of
2022.
So
that
is
the
overall
data,
but
the
correct
data
is
going
to
be
coming
from
1223
January
2023,
the
latest
right
you
might
want,
you
might
be
wondering:
why
is
it?
Can
we
just
like,
have
everything
together?
Okay,
but
the
way
you
get
a
bucket
works
is
when
you
call
data
from
2026,
2020,
sorry
2016
to
2023.
A
It
doesn't
give
you
the
data
for
2023.,
so
it
only
gives
you
the
data
from
2016
till
the
end
of
2022,
so
the
yellow,
colon
or
the
yellow
query
only
runs
at
the
end
of
whatever
year
you
called
so,
which
is
why
we
have
like
two
separate
details.
So
if
you're
calling
from
2020
2016
to
2023,
you
are
only
going
to
get
data
from
2016
to
the
end
of
2022.,
so
it
won't
include
data
from
2023.
I,
don't
know!
Why
can
you
share
some
light.
F: Okay, so the data is basically stored into tables. We have tables for each month, we have tables for each week, we have tables for each day, and we have tables for each year.
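On BigQuery, GH Archive exposes these slices as separate tables named by timestamp (per gharchive.org: `githubarchive.year.YYYY`, `githubarchive.month.YYYYMM`, `githubarchive.day.YYYYMMDD`; treat the exact dataset names as an assumption to verify). A small helper that builds the table reference a query would target:

```python
def gharchive_table(granularity: str, stamp: str) -> str:
    """Build a GH Archive BigQuery table reference for one time slice.

    granularity: "day", "month", or "year"
    stamp:       "YYYYMMDD", "YYYYMM", or "YYYY" respectively
    """
    widths = {"day": 8, "month": 6, "year": 4}
    if granularity not in widths:
        raise ValueError(f"unknown granularity: {granularity!r}")
    if len(stamp) != widths[granularity] or not stamp.isdigit():
        raise ValueError(f"{granularity} needs a {widths[granularity]}-digit stamp")
    return f"githubarchive.{granularity}.{stamp}"

# The static yearly table vs. a month of the still-open current year:
print(gharchive_table("year", "2022"))     # githubarchive.year.2022
print(gharchive_table("month", "202301"))  # githubarchive.month.202301
```

This also illustrates the limitation discussed next: a `year.2023` table simply does not exist until the year is over, so the current year has to be queried month by month or day by day.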
F: So if I write a SQL query saying I want data from 2016 to 2023, I get 2016 to 2022, because the only yearly tables available are from 2016 to 2022; the 2023 table hasn't been created yet. It will be created at the end of the year, when they combine the tables for each month. So this is why, for the weekly data...
F: ...they're currently dumping data, and if I say "okay, give me data from January 2023 until present", that works, because the data is dumped each and every day; that's the only data available. But I cannot query "give me from here to there" across the boundary, because for 2023 the tables haven't been created. I'm not sure if I explained it correctly.
B: Okay, but what blocks us from using the other tables? Can't we just say: okay, we grab the total from 2016 to 2022, and then in the case of the latest, so 2023, we grab it monthly?
F: So, frankly, we have archive data from 2016 until the end of 2022, right? So we got the data: the pull requests, the issues that a person has opened.
F: Then, since we don't have the data from this year included, we are querying from January until present. We are running this script every week, because BigQuery also has limits on how much you run (I'm still using the free version). And also, to query data from GitHub Archive, you either have to query by date, by month, or by year. Those are the three methods to get data from GitHub Archive.
F: So if I query data from the year 2016 until the year 2022, I will get my data. Then if I query by month, say from January 2023 until present, I will get my data. And if I want to query, for instance, from the 1st of February until the end of February, I will get my data. I'm not sure if I'm explaining it correctly.
F: The data is being updated frequently. This is why we have two separate tables: one that is current, the one that we're currently refreshing every week, and the table that we already have, which is complete from 2016 to 2022. The one for 2023 we are currently updating every week, and GitHub Archive is currently dumping data every hour or every day, I suppose. This is why we have two separate tables.
B: Yeah, I'm afraid to admit I'm not 100% sure. So we have data which is static, from 2016 to 2022, and then we have data from 2023.
B: That part is not static, I think, because every single day it's refreshed, and we fetch it weekly and dump it to BigQuery, right?
A: So I think it's because it is not possible to run these two queries simultaneously together, right?
A
So
I
think
there's
one
query:
I
I,
like
I'm
no
expert
I'm,
just
like
saying
so
I
think.
Let's
say
this
one
tab.
So
this
particular
tab
only
queries
the
data
from
2016
to
2022,
so
in
that
tab,
I'm,
not
sure
if
it
is
possible
to
also
like
run
the
query
that,
like
generates
data
from
2023
to
current,
you
have
to
like
do
that
on
a
separate
tab
or
something.
A: Yeah, and that is what we'll be doing, because I'll be fetching the data from the Excel stuff and converting it to JSON. Then, after the conversion, I'm going to merge these two datasets together to get the latest stuff. But now, yes, here's the follow-up thought, because according to the discussion I was having with Thulie, if we follow this approach, folks like Lukasz, Jonas, Fran, Maciej and eventually me cannot always be the people at the top of the list.
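The merge step described here, combining the static 2016-2022 totals with the weekly-refreshed current-year dump before rendering, could be sketched like this (the per-login count shape is an assumption about the exported data):

```python
def merge_contributions(total, current):
    """Merge per-contributor counts: a static historical dataset plus the
    regularly refreshed current-year dataset. Both map login -> count."""
    merged = dict(total)
    for login, count in current.items():
        merged[login] = merged.get(login, 0) + count
    return merged

# Hypothetical counts for illustration only.
total_2016_2022 = {"lukasz": 900, "jonas": 400}
current_2023 = {"lukasz": 30, "ace": 25}
print(merge_contributions(total_2016_2022, current_2023))
# {'lukasz': 930, 'jonas': 400, 'ace': 25}
```

The historical side never changes, so it can be computed once and stored; only the small current-year side needs re-fetching.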
B: That's called spamming like a pro. But you're talking about the total, right?
B: It's huge, but shouldn't we just enable some basic, not sure if we should call it filtering, but enable people to see who was top, say, in 2016? Basically, to check...
B: ...to specify your date: hey, I want to see who was the top contributor in 2019. You know what I mean? So we basically generate a lot of different tables: a table per week maybe, per month, per year, and then we merge them also into a total, all time.
B: So the main contributor view is all time, right? Because, I mean, why not, even though I'm on the top, which I'm happy about. We're showing all time, but if somebody wants to see 2018, that's completely fine. I mean, is there some limit so we can't do it, or do you just not like me and don't want to see me?
F: Okay Lukasz, so if I understand correctly: we can have data from 2016 per month, like okay, from November, what's the top contributor, then December, January, blah blah blah until present. Then we have a table for the overall year: top contributor for 2016 per month, then top contributor for the year 2016. Is that how you explain it, yeah?
B: Yeah, so we have several different granularities; the information is on different levels, like week, month, year, and then total. And in the end it's Excel, so we can put it into CSV, read it in the website, and do whatever we want with the data: merge it, show different graphs, whatever views. Basically have the different data; not sure how else I can explain it.
A: So, according to your idea, the question is, if I'm getting you right: that is a lot of queries to run.
F: I can reuse the script, but there are limitations. I cannot get data daily or weekly, especially for the old data; the only data that I can get is by month and by year. Those are the two kinds of tables that I can get for previous years. Then, would...
E: ...the current, because when you fetch the current data, you can still post-process: filter out specific users or specific dates, etc. Shouldn't the website be able to do that on its own?
A: Yeah, I think it is based, sorry, Thulie, I think it is based on the kind of data we get back. It does include pretty much everything, like days, months and stuff, yeah.
F: Okay, like I explained, the only three queries that you can make are by day (a period from this day until this day), then monthly, then yearly. Those are the three options.
E: Yeah, so during the build phase of our website, when you build the website, wouldn't it be possible for the build script to pull down all the data, if it hasn't already, and store it in a JSON file, basically a list, locally? Then, once the UI needs to access it and filter it or whatever, you don't need to make any queries to the GitHub API; you actually have it.
F: Okay, so one thing, to be really honest: the data is large. GitHub has a huge amount of data; for example, this weekly data is already like 138 gigs to query. So I cannot fetch the whole past data. The only thing it does is fetch within tables, so I have to create one BigQuery query at a time, because otherwise the queries run over the whole data.
F: So it's easier for me to add the dates or the months that I want for AsyncAPI, then export this data to an Excel file or a JSON. But there is also a limit to this: it only creates, I think, one gig, Ace. I'm not sure if I explained this to you as well; there's a certain amount of data that you can export to Excel.
F: It only gives you one gig of data, because I'm also using the free version, not a paid version. And even if we were to run it daily, like "okay, give me from 2016 on this particular day" and then store it into tables, I also have limits: I can only use a certain amount of gigs of query data on BigQuery, and after that I need to pay, which is really a lot. So those are the constraints.
F: Then we can have yearly data, like for the whole of 2016, these are the top contributors; and for January 2017 this was the top contributor, then February, and so on. Those tables I can do. But if we are now saying every week or every day, that's a different story that we need to consider as well. I'm not sure if that explains it.
B: Yeah, I mean, in the end we don't have to have everything with the first iteration; even having this monthly, that's already a huge thing. And in the case of the old data, in the end we just have to run the query only once, right? So for the 2016-2022 data we don't have to run the same query again every week; we just run it manually once, store it, and reuse it.
A: So I'm trying to understand what we're going to have in the UI. We have this list of data, and we have this filter dropdown which says, say, "2016 contributors". So we click on 2016 and we get data for 2016; we don't have to filter by month for 2016, we just get data for 2016. Then we can only filter by month if the year is 2023, or whatever the latest year is, the current one.
F: Yes, we can do that. For instance, instead of running this query yearly, run it monthly, like from the 1st of March until the end of March; then we get, okay, these are the top contributors. Or we can run it daily, because what I remember is, if we're running it by month, there are not a lot of bytes that get consumed and there's a lot of data that gets exported. So yeah.
A: Okay, now I have a question, which is: how do you plan on rendering this data? Of course you're going to pass it to an Excel file, but okay, are you going to run 2016 and pass it to an Excel file, so we just have an Excel file for 2016, an Excel file for 2017, an Excel file for 2018?
B: And in the case of costs, to understand again: it's the query that costs, not the data that you export, yes?
B: So, because I assume that your query is already doing the aggregation of the results, like already counting the issues per person: can we, for example, not do that and just have raw data? So basically, let's say Jonas, or actually Ace, had these 10 issues weekly. Instead of a table with "Ace: 10", we get actually 10 rows with Ace, the date of the contribution, and a link to the comment, and then we on our own just have code that aggregates it.
B: You know what I mean: more raw data that you can operate on. So now, when you show this table, there was Ace and issue stats, whatever, 10.
B: So instead of "Ace: 10", can we get an Excel, or whatever table, where instead of "Ace: 10" we have Ace mentioned 10 times, and there are other columns saying it's issue stats, a link to the issue comment, the date when it was created?
B: So we can access them, because later, for the UI, I bet people will say: okay, you say I have 50 issues; I want to check what 50 issues I actually have. And we would have to say: sorry, we just have the number, we don't have links. You know what I mean.
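Lukasz's raw-rows idea, one row per contribution with login, date, and link, aggregated in our own code so the UI can still show the links behind each number, as a sketch (the row field names are hypothetical):

```python
from collections import defaultdict

def aggregate(rows):
    """Group raw contribution rows by login, keeping the links so the UI
    can show which items stand behind a count.

    Each row: {"login": ..., "date": ..., "link": ...} (hypothetical fields).
    Returns login -> {"count": n, "links": [...]}.
    """
    out = defaultdict(lambda: {"count": 0, "links": []})
    for row in rows:
        entry = out[row["login"]]
        entry["count"] += 1
        entry["links"].append(row["link"])
    return dict(out)

# Instead of a pre-aggregated "ace: 2", keep the rows and derive the number:
rows = [
    {"login": "ace", "date": "2023-03-01", "link": "https://github.com/asyncapi/x/issues/1"},
    {"login": "ace", "date": "2023-03-05", "link": "https://github.com/asyncapi/x/issues/2"},
    {"login": "jonas", "date": "2023-03-02", "link": "https://github.com/asyncapi/x/issues/3"},
]
print(aggregate(rows)["ace"]["count"])  # 2
```

The trade-off raised next still applies: raw rows are strictly larger than pre-aggregated counts, so this collides with the export size limits.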
A: But isn't that going to just extend the size of the file? Like I said, there is a limit.
F: I'm using... I don't know which documentation; there's no documentation on how to do this with GitHub Archive. I had to figure it out on my own, honestly. The only thing that I remember was that you can only query by day, by month, by year, and I had to figure out the rest on my own. So on links: the only links that I remember I can get are the links to the actor's profile, not to the issue.
B: ...not to the particular issue that we are counting. So I suggest, and I'll make sure of it, that the Linux Foundation also invites you to the dashboards that we have access to, because they're doing something similar to what you're doing, I bet, and I'm pretty sure that they're using GitHub Archive, because what else would they use? So you'll be able to see what they have access to, and I'll share their forum with you, because they have pretty cool people working there who are super responsive. So maybe they will be able to help you, to guide you on what's the best way, how they solved it, because they were solving it for all the Linux Foundation projects, doing different dashboards, etc.
B: So maybe they have some answers. Also, there's this CHAOSS project from the Linux Foundation that is fully focused on metrics of open source contributions. So maybe they also have someone who can help out, or we can somehow get access to it; maybe the Linux Foundation can even give us access to their raw data as well. But yeah, there's a lot of "maybe", so I'll contact you after the call and make sure that you have all the accesses, and access to people that can help.
A: Yeah, I think that at least, if we have more data to work with, then we should be able to provide more information instead of just putting out the whole stuff we have. So, Thulie, I really hope you get it figured out. So should I wait till you do that, or is it something we can add later on, if you can get this current one, like we all discussed, based on here?
A: No, not with this, because now I have to start working on the first iteration. So I was thinking, if Thulie can just implement the current plan we decided on, which is having the data for 2016, then 2017 till 2022, then having the data for 2023 as current...
A: ...if she can implement that, then I could, because what I want to do is work on the first iteration and enable other contributors to contribute to it. If I just implement the whole thing, they're not going to have a chance to get involved themselves.
B: Yeah, but the problem is you can't really work even on the first iteration, because of whatever we have. Thulie is using some free... big, I don't remember the name, the thing that stores the data and we can query through it. BigQuery, yeah. So we're using some free versions, so even here we have to answer the question of how we want to proceed long term, because it's going to bring some costs.
F: Long term, we need to figure out querying, and we need to figure out storage as well, because we cannot be exporting to Excel all the time. We need a viable place where we can store the data, for instance in the BigQuery cloud; then you can query data directly from BigQuery to the website. So that's another thing to figure out as well. It's not a walk in the park yet.
A: Yeah, and also the old concern we need to have a discussion around, because Lukasz brought up something interesting. Oh sorry, I was wrapping up this meeting now, but before the roundup: you brought up something concerning assigning badges and stuff, right? So we need to also discuss which contributor gets which badges. What criteria are we using to measure what badges someone gets?
B: I'm sorry to bother you, but I have to escape the meeting.
A: Yeah, so many dimensions. Anyway, we will continue discussing that in the next one. And yes, can you imagine, I thought we were going to have a short meeting; that always happens.
A: Okay, okay, thanks for joining this call. Catch you guys later in the next community meeting. Have fun, bye!