Description
For reference to the items discussed, please check out
https://github.com/filecoin-project/notary-governance/issues/325
Content Covered on this call
- Goals and Updates for Q1 2022
- Large Dataset
- FIP 204: DataCap Management
- Holiday Schedule and 2022 Meeting dates
A: All right, thanks, Galen. Hey, good morning, everybody. My name is Kevin Ray; I'm relatively new to the team. This is my first time emceeing this, so bear with us. We're going to walk you through what's going on in the community and get some of your feedback here on December 7th. So let's take a look, starting with the agenda we hope to cover today.

A: At a high level, we're going to walk you through some of the metrics and point out a couple of wins we've had since last time. Next, we're going to recap the 2022 goals. In the last session, Deep went into a lot of detail about the metrics behind these goals, so feel free to go back and watch that recording; we're not going to go through it again in depth, just a high-level recap so it stays on your radar for what we have planned.

A: Then we're going to open up one of the FIPs we've spent a lot of time talking about, to make sure that if there are any other conversations, you have plenty of time to give your feedback on it. After that, a couple of new issues and problems we've found in the system at a high level, and then we'll turn it over to you for anything you want to add. As a reminder, we post this to the GitHub repo, so if you have any topics or agenda items you want included, we'll always blast that out.
B: But first, sorry, Gary, we wanted to address an outstanding issue that has come to our attention before we jump into the metrics, just because I think it's probably on the mind of some of the notaries on this call: the Fil+ registry app is currently experiencing technical difficulties loading any of the data.

B: We think this has to do with some downstream effects of the recent network upgrade; one of the commands that is supposed to return an array no longer does. Various people have come up with workarounds for their work streams, and we're investigating what it's going to take to get the app back online.

B: So we are aware, the development team is aware, it is frustrating, and it is a work in progress. We really hope to have more information by the end of today, so we'll see where we land; if not, we'll post more information in the Slack channel as we get it. I just wanted to pause and call that out.

B: Yeah, Faye, I also commented on your issue about your address change. When I checked that address and verified it, it still shows that there is DataCap, so I'm not sure where the report of the address no longer being verified comes from. I don't know if anyone else has evidence or experience of their address suddenly not being verified; if so, let us know so we can see if there's a pattern there. Okay, kicking it back to K-Ray for metrics.
A: Hey, thanks, Galen. So we show this slide every week; I just wanted to highlight one big change we noticed between the last call on the 23rd and today, and that's that for the first time we've actually got the average time to DataCap below two days. So we're going to take a moment and smile a little at this milestone, that we're now below that two-day mark. The other thing to highlight,

A: as we look at what we did for 2021 and where we see that growth, is that it'll be a fun project as we come on board. So with that: Galen, Deep, anybody in the community, do you have any questions or additions on the metrics? Sweet. All right, now let's take a look at what's happening on our repo boards. Just as a check-in, right now we have 27 open issues, which is great.

A: That seems to be about average for the last few months, and in the last week we've had 11 new applications. Again, this is right on average, which is nice to see; the holidays really aren't changing that much as far as the community goes. In the last, excuse me, seven days, we've had 15 new applications, which brings our total up to 273.
A: If I had to draw your attention to anything on this slide, it's that we have seen no dramatic shift. There wasn't a drop-off, there wasn't a sharp increase; this is pretty much par for the course as we go forward. With that I'll pause again in case there are any questions about this screen.

A: Wonderful. All right, let's take a look at our goals for 2022. Again, we talked about this in a lot of detail; as Caitlin mentioned in the chat, feel free to go back and watch that YouTube recording if you want to see how we're going to measure these and the real in-depth walkthrough Deep provided. But just to keep this top of mind before we go into the holidays and come into 2022, these five things are where we plan to focus our high-level goals.

A: One is to get that volume up, to around 500 petabytes. Two is the time-to-onboarding speed; we just talked about going from two days to one day, so let's see how much lower we can get that kickoff time. Then there's risk mitigation: making sure we have on-chain analysis of who's actually using it, so we don't have any shady things going on.
A: All right, let's take a look at our next issue, which was 204 from last week. Out of all the time we dedicated to the call, this 204, when I went back and watched the video, accounted for maybe 50% of it, and I realized that when other people are talking you might not get a chance to speak.

A: So consider this the return, the part two, the continuation of the open discussion on 204 for whatever you might want to bring up. I think, if we had to put a ribbon on it, based on the feedback we got on the calls and on the FIP itself, the moving-forward plan for taking back a DataCap allocation will be two notaries and one root key holder.

A: So I wanted to follow up and make sure there were no concerns, or maybe just get that friendly plus-one. This would be a great time to unmute, raise a hand, or comment in the chat and say "yes, love this plan," or write back "no strong feelings," whatever you want to do, or feel free to say "this is the silliest plan I've ever seen in my life" and give some input. So I'll pause here and look for feedback and thoughts.
D: Gut reaction: it sounds like it would cause more problems than it would solve, just because that notary might be offline. So if somebody is abusing their DataCap and that notary is offline for a week, we just leave them abusing it for another week. If the point of this is to react quickly to mistakes with DataCap allotment, I don't think that would be helpful.

B: I also agree. I like the idea of having a cultural, ideal process where we try to ask the notary who assigned it to sign this message, but if they are not available, I don't think it should be a blocker. One thing that has been discussed is this idea of a multisig of multisigs, where we would basically create, you know, two from column A and two from column B,

B: a mechanism where two root key holders could override the decision if the message isn't getting approved. But I don't think we're at that point, and historically what we have seen is that getting root key holders to sign is slower than notaries overall, so I don't think that necessarily solves it.
B: So it sounds like we're not hearing any strong dissenting opinions. Again, this FIP is available for people to comment on and share. I think our next step here is:

B: we'll probably talk about it again this afternoon and see, but if we don't hear anything in the Slack channels, on the FIP, or in the afternoon call, then we'll move forward with making a recommendation. The content of that would be that we think the specific mechanism should be two notaries, ideally one of them being the notary who initially awarded the DataCap, though not as a hard-coded requirement, plus one root key holder as the threshold design.
A: Sweet. Consider this just friendly community housekeeping; we put this on the last call and are going to keep carrying it forward. I'm going to update the calendar when it gets a little closer, but on January 4th we're going to take a break. That's time to focus on getting everything set up on your side, hug your friends and family, and just take that moment; there will be no calls in either the 8 a.m. or 4 p.m. Pacific slots. With that, the first call of 2022 will be January 11th, so you'll see a calendar update for that shortly.

A: So, just a friendly reminder, you'll see some calendar updates. Another thing, just to keep you in the loop from a community standpoint: we've been talking about the idea of some type of scoreboard, a leaderboard where you can actually see visually how you're doing as a notary and how you rank against everyone else. How long is your time? What's your response rate? What's your participation? We're just finalizing the drafts for the RFP, and we're going to put it out to the community; as soon as that RFP is public, we'll put it in the repo.

A: So if there's an engineering or development firm that you're aware of or want to recommend, please pass that along. That bid will be going out probably this week, and then we hope to get the work started right around the holidays. The goal is to get that notary scoreboard, leaderboard, whatever Atari or Nintendo reference you want, up as early as possible in 2022 and on board. So consider that a friendly update, just keeping you in the loop on one of those other projects.
B: Yeah, so I posted a discussion here, discussion topic 327, sort of asking the question:

B: when we approved issue 217 for this new LDN process, we wrote it saying this was going to be an experiment to see how well the process worked. Just as a reminder of the changes: rather than waiting for seven notaries to jump in and say yes, then creating the LDN and creating a multisig of seven with a threshold of four,

B: what we did instead was this: once the client submits a valid application, the governance team quickly approves it, the root key holders create it, and all of the notaries are on that multisig with a threshold of two. So we increased the automation and decreased the threshold, the hope being: can we increase speed and efficiency for these large clients?
B: So the question here, checking in with everyone: are there any strong concerns, arguments, or ideas for changes to this process that need to happen in the immediate future before we continue moving forward?

B: There are definitely things we are still working on, various improvements, but really the question is whether we feel strongly that we need to make a change this month, something that has become urgent, or whether these are changes to the process that require more investigation and a proposal. Really, this is about whether we shut off the LDN process because we hit the hundred-pebibyte limit, or whether we keep it running.

F: LDN has been a great help to me in sending out deals. I've been able to send out around 30 to 50 terabytes per day. It's been great for helping community members build their power quickly; I can get lots of testimonials on that.

F: I think the only issue I'm having is that the renewal process comes around fairly quickly. It's every 200, so you can finish that in a few days and then have to chase down notaries to sign it again. If it could be increased, that would be more helpful.

B: You know, to go from a hundred tebibytes a week to 200 a week, which would mean that 200 percent of your weekly rate would be 400 tebibytes of DataCap per allocation, right? So it is possible to increase that weekly rate, which is not necessarily connected to the changes proposed in 217.
B: One of the questions that came out of the discussion around 217 was: should we change how those allocations are calculated entirely? This proposal didn't really take those stepwise allocation changes into scope, so I think that's a longer, broader conversation: do we want to get rid of that allocation plan entirely? Do we want to restructure it? Do we want it to be,

B: you know, every LDN just gets a hundred tebibytes, once they've used 75% they get another 100, and it's the same for every client no matter how quickly they use it? There are a lot of ways we could do that; those were not covered in this issue, but I appreciate you bringing up what's working and what's not. Specifically, we can change the weekly amount, and that will change your allocation size. Andrew, you have some ideas, opinions, and concerns here?
D: Yeah, so we have an allotment, and we've gone through the process of getting the renewal a few times. We just hit some pain points: if the notaries hadn't been super helpful about working on weekends and everything, it would have been worse. Actually, we did stop the production system for a period of time, but it would have been worse, and for any sort of customer-facing, end-user-facing system,

D: that's not an option; you can't shut down an app because your storage system is not working. So anyway, I just felt like we could improve the allotments to be much more buffered, so that a user like myself will basically never hit the bottom of that barrel, and never even come close to hitting the bottom of that barrel, as long as they're still applying.
B: Yeah, so on this one, this is just where the readme is out of date, because it should be when they have used 75 percent.

B: But what it sounds like is that what you are using in a five-day week is greater than what is being reported as the weekly usage. Or is that not correct?
D: That's not correct; this is just the math. If you have a 100% allotment for a week right now, and the renewal triggers at 75% of the 200% that covers two weeks, that puts me at less than three days, basically, before I see the bottom, and our notary response time is about that long. So that means a lot of projects will hit the bottom, so either...
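A back-of-the-envelope sketch of the arithmetic being described here: the 200% tranche size and 75% renewal trigger are the figures discussed on the call, while the weekly rate and the burn speed are hypothetical example numbers.

```python
# Back-of-the-envelope runway math for an LDN tranche (hypothetical usage figures).
WEEKLY_RATE_TIB = 100       # client's stated weekly DataCap rate (example value only)
TRANCHE_MULTIPLIER = 2.0    # a tranche is 200% of the weekly rate, per the current rules
RENEWAL_TRIGGER = 0.75      # the next tranche is requested once 75% of this one is used

tranche_tib = WEEKLY_RATE_TIB * TRANCHE_MULTIPLIER            # 200 TiB
remaining_at_trigger = tranche_tib * (1 - RENEWAL_TRIGGER)    # 50 TiB left when renewal starts
daily_usage_tib = tranche_tib / 10                            # e.g. the client burns a tranche in ~10 days

runway_days = remaining_at_trigger / daily_usage_tib
print(f"Buffer left while waiting on notary signatures: {runway_days:.1f} days")  # -> 2.5
```

With roughly two to three business days of notary response time, that buffer can disappear entirely, which is the point being made above.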
B: Yeah, so it sounds like both you and Faye are really advocating around the same issue, which is the allocation calculation plan, and not around the 23 notaries on a multisig with a threshold of two. You're talking about the allocation plan, which is great; I want to talk about those things, and it sounds like that is a pain point for people using this process. That was fundamentally not in scope for 217.

B: So I just want to pull this back a step and say that when we wrote 217, we said we would check in at 100 pebibytes, so we're checking in. When we wrote 217, I would argue that the large dataset process was not working, for a variety of reasons, including how long it took to get seven notaries, the thresholds, and all that. So it sounds like the changes we made in 217 are helping.

B: The next changes we need to make are around the allocation calculation scale. But from what I'm hearing right now, we don't need to change the way we create the multisig or the threshold that is required, and we are comfortable continuing to use this process.
D: Good for me. I do have a comment down below; you're going to have to judge whether this one is relevant or not. Oh, maybe I didn't move it and fire it off, or, no, there it is. So basically I'd argue even more strongly that we should lean further into this process: I think it is working, and we need it even more as the default.

B: Yes, again, love these improvement directions and want to keep making improvements, but it also sounds like we're at a place where we are happy to keep iterating, because fundamentally we think the improvements made in 217 helped, and Faye has some questions about allocations. Okay, so that ties a bow on discussion topic 327.
B: Julian, did you... oh, okay, sorry. All right, we already mentioned this: plus.fil.org is having a login issue where the app says something went wrong and then doesn't load any of the arrays.

B: My concern here, my fear with saying the first allocation is 100% of weekly, is that people can just, you know, blow up, get a first allocation of 200 tebibytes, and then disappear. How does that impact our metrics and the network, and do we feel that's safe enough to try? That's part of the reason the first allocation is low.
C: We could also not sacrifice the first half-week and still get to 400 percent, right? Because right now we taper off at the third successful allocation; we could just taper off at the fourth successful allocation instead. But that still doesn't address Faye's point that maybe the first one is too small, so I think that's still open.

C: I just don't want to conflate the two things, I guess, because there's one part of this, which is supporting four weeks of runway, because two weeks is too little for a production workload, which I think is very reasonable; and then there's this other angle, which is that even just getting started might be too slow a ramp curve. So how do we want to address that? Are we in support or not in support, claiming it's necessary or not necessary, that kind of thing?
B: Yeah, pausing here to see if there are any other thoughts; we'll wait another minute and then kick it over to Pluskit to share their project. So, any other thoughts or opinions on the large dataset allocation?

E: Okay, I'm Boris from Pluskit, and we're very happy to share our project, Pluskit, with all of you. Could I share my screen?
E: Pluskit is a Fil+ ecological tools platform. It is committed to providing ecosystem participants with functional, operational, and rich types of tools, to reduce the difficulty for users to participate in the ecosystem, enhance convenience for developers and other users, and thereby increase the activity and prosperity of the ecosystem.

E: Okay, this is our website. As you can see, at the search entry it provides...

E: Yes, I opened, I hope, our website in the browser, but I'm not sure whether you can see it. Yeah? No?

B: Here's the browser view, so I'm going to...

E: Okay, I will stop sharing from the browser and only share from the PowerPoint. Okay.
G: Hi, Doris, I think I can help you with the sharing problem. Your mother tongue is Chinese, is that right? Yes.

E: I'll stop. Can you hear me?

E: I decided to stop sharing the website from the browser and only use the PowerPoint. Okay, okay, I will continue. On our homepage there are three main parts, namely the search, the dashboard, and the ranking. The search entry provides lookup of a notary, a storage provider, and a client, and results can be filtered with the classified search. The second part is the dashboard.
E: In this part there are many statistics, including the number of global notaries, the total amount of DataCap, and so on. These data are updated every 24 hours. The third part is the ranking, and it includes notaries, storage providers, and clients. Each has default metrics, and in addition you can get more detailed information by checking certain areas.

E: First of all, for ecosystem participants, Pluskit can help you take an overview of both DataCap and the notaries, including global distribution status and allocation progress. Regarding DataCap, it includes the number and amount of data allocations in the past seven days, the proportion of allocated versus unallocated DataCap, and the regional distribution of global DataCap.

E: In this way, private transactions can be effectively avoided. You can see the flow direction of each initiated deal, showing where data clients store their data on each miner.

E: Okay, that's all for my sharing. May I get your feedback about our product?
E: Sorry, next, the Slack channel?

E: Yes, we have a Slack channel, and I will show you. You can get in touch with me, with us, through these three channels. Yes, these are the Slack channels, and this is Twitter.

B: I think the only feedback I have right now would be putting those on the website, your Slack and your Twitter handle, somewhere clickable like an About or a Contact page, so that if other notaries or community members have questions, they can contact you.
C: I think in the previous call Julian actually had a question about how the average time, or the efficiency, was being calculated. I don't know if you got a response to that, Julian, or if we should ask them to clarify today.

C: I think maybe it's worth chatting about. Yeah, Sally, if you don't mind: you have this number for each notary, the average time. How do you measure the start and the finish for that?
A: Yeah, one other thing I'd add to that: love the presentation, love the site. I just found you on Slack; I'm going to send you a DM, maybe looping in Galen, and we know you have a microgrant coming, so we may have some additional questions. I'd love to pick your brain about that leaderboard you were talking about, so you might hear from us; you definitely will be hearing from us on Slack. Thank you.

B: Awesome, eight minutes before time. Were there other questions or presentations?
A: Yeah, I'd love to. Thanks, Caitlin. Well, hello Meg, Jonathan, Charles, and anybody watching this recording. My name's Kevin Ray with the Foundation, and today is the December 7th notary governance meeting. So let's take a high-level look at what we have planned for this call.

B: Yeah, so it's been brought to our attention that the Fil+ registry app is currently experiencing some technical issues. When you try to sign in as a notary or root key holder with your Ledger wallet, you'll get an error that says something went wrong, and then it'll just keep spinning as it tries to count approved notaries, and the tables below don't populate. The dev team is aware, and we're looking into it.
B: We think it has something to do with the most recent network upgrade and a change to how a certain command returns an array, which it now does not. So we're aware, we're digging in, and hopefully we'll have better news soon. We have been posting and tracking that in the notaries discussion channel, and you are still able to send messages directly through Lotus to approve and award DataCap.

A: Let's take a quick look at metrics. We showed this slide two weeks ago; I have boxes around two things just to draw your eye to them. The first is the average time to DataCap. This is a big one, everyone pat yourself on the back, because we just hit under two days, so we're starting to reach toward that milestone. Galen is literally doing it right now, thank you, and yeah, congratulations.
A: We fell under that two-day mark, and one of our goals is to get that down even more. The second is the total amount of DataCap used by clients, another milestone: we went over the pebibyte mark, so the program is doing great. I think we've had a good 2021, and this really sets the stage for what we're looking at in 2022. So raise any questions by voice or feel free to drop them in Slack.

A: So what we have right here is a summary, sorry, jumping ahead a little bit, of what's open in our GitHub issues. If I were to draw any kind of summary from what you're looking at right now, it's that we're in line with what's been the average for the last couple of weeks. We have 27 current issues; last week was around the same number. We have 11 new applications.

A: The reason I'm calling this out is that in the States it was Thanksgiving, and we have the holidays coming up with Christmas and everything everybody celebrates, so I'm just noting that we're still seeing this momentum carry forward, which is a great thing. We're watching to make sure this holds tight and keeping an eye on it through the holidays.
A: Now on to the goals. We have these big five focuses that we're really looking at as we launch into 2022, and on the last call Deep walked us through how he saw the milestones and how he was going to weight them. So if you're curious for a recap, the YouTube link from the last notary governance call on the 23rd is posted. This is the high level, these are the five. Number one: get that volume up. Right now we talked about being over 100;

A: now we're looking at that 500 mark. That's a big jump, and we're looking for that volume to go through the network. The second is how fast we can make that onboarding speed. We just talked about how our metric is right below two days; the goal for 2022 is to get that below one day, so we're going to work really hard on that turnaround speed. The next is really mitigating the risk that we have in the network.

A: One of the terms I had never heard before joining was "sus," which I guess is short for "suspect" if you're really busy, and Galen is always a fan of pointing these out: "this looks super sus." So finding more of those sus cases as they come into the network is going to be a big part of 2022. And then number four is: how do we make this process

A: super, super simple? So thinking about onboarding documentation, really writing this down so anybody who's new to the program doesn't have to come into it with a really deep level of understanding; we're really looking at building this out and documenting it. The fifth and final is the governance process that we're all taking part in right now, this ownership by the community: more elections, more feedback, more scalability, and a more mature network. So those are the big five, just a quick recap.
A: If there's something you'd like to see that isn't on this list, that's why we're having this call. Let us know if there's something that should be here; this is our project, so let's all voice it. Put any questions in the chat or ask live, and we'll take a look at what we have coming up.

A: This is one last opportunity to, one, voice the feedback we've collected so far and make sure you're tracking it, and two, offer you a final chance for any thoughts you have before we take this recommendation back to the FIP. If you remember, this was the issue about what we do if we're going to revoke DataCap, and how we manage DataCap once it has been allocated to anybody on the notary network.

A: Now, there were a couple of questions that came up in the morning call, like what if that notary was part of it, and this is the part where it's probably good to have a conversation. So I'm going to hand this over to Galen to talk about the intricacies at the level of detail he has that I don't. So, Galen, do you want to elaborate on this a little bit?
B: Yeah, I think one of the things that has been brought up a few times is this question:

B: is there a responsibility for the notary who awarded the DataCap in the first place to be involved in revoking it? I think where we're landing is that we don't want that to be a blocker on the ability to revoke DataCap. If that were the requirement and, for example, that notary just happened to be away, unreachable, or unresponsive, even for a number of days, we don't want a situation where we have identified,

B: you know, fraudulent behavior, but we can't do anything about it because we're waiting for a response from one specific notary. So instead, what we are leaning towards as a community proposal is that we would like to encourage the inclusion of that notary.

B: So in this process, when it is brought up, on whatever the specific platform is, probably GitHub for the discussion, we would try to invite that notary to be one of the two, but it wouldn't be a requirement. That would allow us to still draw on the pool of all the active notaries, plus one root key holder, to push through a message to revoke DataCap from someone. I see a hand from Meg.
H: Hi again. Yeah, my question is: how are we going to detect it? Is it going to be a bot that detects unused DataCap? Because isn't this connected to the function we're meant to perform, which is that we approve it and therefore we've got to check that it's been used appropriately? So who's picking it up, if it's not one of the notaries that allocated it?

B: This isn't necessarily about all of the specific reasons why you would remove DataCap, which is more what you're asking. One of the things that has been talked about is: why would we remove DataCap? One example is that it's not being used, it has become stale, and that may be a combination of a bot that looks at when the DataCap was awarded. There are still other questions that would have to be worked out, to the effect of:
B: right now, when DataCap is allocated, let's say there is a client and they get an allocation over here from notary A for five tebibytes. They get a second allocation over here from notary B for another five tebibytes, and then they go and make two deals of two and a half tebibytes each.

B: So if they make two deals of two and a half tebibytes each, they have five tebibytes of DataCap left. Something happens and we need to revoke it. We can't accurately say which notary their remaining five tebibytes came from if those two allocations came in at around the same time. This is all just a complicated way of saying

B: that we don't necessarily connect the actual amount of DataCap used in a deal all the way back to the notary who allocated it. So what we would need to do is write rules like: if a client address receives DataCap, does not receive more DataCap, and does not make deals for a certain amount of time, then we determine that allocation has gone stale. That might be one way we write that kind of rule.
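A minimal sketch of the kind of staleness rule being described, assuming a hypothetical 90-day window and hypothetical field names; the call deliberately leaves the actual threshold and data sources open.

```python
from datetime import datetime, timedelta

# Hypothetical staleness rule: an address's DataCap is considered stale if it has
# received no new allocation and made no deals for longer than a grace window.
GRACE_WINDOW = timedelta(days=90)  # example value only; no period was agreed on the call

def datacap_is_stale(last_allocation: datetime,
                     last_deal: datetime | None,
                     now: datetime) -> bool:
    """Return True if the address has been inactive for longer than the grace window."""
    activity = [last_allocation] + ([last_deal] if last_deal else [])
    return now - max(activity) > GRACE_WINDOW
```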
H: Understood. I think my question was who's policing it, who's detecting it, but it really should be automated; if that's where we're headed, that'd be great.

B: Yeah, we would want to automate as much as possible the detection of the behaviors that would require us to remove the DataCap in the first place. Those might be fraudulent behavior, and how we automate flagging fraudulent behavior is one of the questions we're working on. They might be stale DataCap, and they might be...

B: I think Deep will actually be authoring parts of this FIP, and once that's available we'll be broadcasting it in Slack and people can signal-boost it there. We'll just wait a second.

B: Okay, cool, okay.
A: Yeah, file all this under a housekeeping slide and update. The first item is that we're removing the notary governance call scheduled for January 4th; we're treating that as a holiday block, so spend time with your friends and family, catch up on all the GitHub repos you've been behind on, and just take that time to get set, because on January 11th the new schedule for 2022 kicks off. We'll be updating the calendar and sending you some direct pings

A: if you come to these; we're really looking forward to 2022 kicking off. If you're like me and you live and die by your calendar, you will see that change, so no need to write it on a sticky note; the calendar will simply update going forward. The second thing is that we're looking at incorporating all the work that you do into what we're calling a scoreboard. We mentioned this scope on the last call, and we're wrapping up the RFP drafts.

A: What's your communication like on GitHub? Are you coming to these meetings? All of these play a part in how we're serving the community, and we want a way to demonstrate and show that going forward, so I'm really looking forward to having some updates come through on this one. If you want to watch the sync from this morning, we had a guest join us and give a demo of another display they're working on, so we might see a lot come from the community.

A: We might see this RFP specifically address it, but I'm just keeping you in the loop, because you might see this come out, and it's going to be a really fun way to keep score. So before we go on, if there are any questions about the schedule or this upcoming scoreboard, we'd love to hear them; otherwise we'll slide right into the discussions.
B: When we pushed through proposal 217 many moons ago, part of it was that we said we would check in when this process hit a hundred pebibytes and see how we're feeling about the experiment. So, just as a quick recap of the scope of this proposal and the question we're asking here:

B: we put all notaries on the multisig with a threshold of two, which was a major change, along with some other small changes around increasing bot activity and kicking off subsequent allocations at 75% sealed into deals. So we're not really talking here about what other ideas and changes we want to make going forward. The question is not so much what is still not working, because we have ideas around that and we've heard some great feedback.

B: Andrew Hill talked this morning about some of them specifically, and Faye Yan, who has a couple of LDNs, also advocated very strongly for continuing the process while bringing up some of his pain points. Both of those individuals, as clients, had pain points around the size of the allocations: how can we get the allocation size higher so they have more runway for situations like what we're seeing today, where the app is not functioning and their business relies on this DataCap, right?
B: Is it still a functioning experiment, as designed, or do we need to pull an emergency brake and stop the LDN process because we have identified that the changes we made are not functioning and not safe? So really it's just: are there any strong concerns, arguments, or ideas for changes that we need to make right now, this week? Pausing here to see how everyone on the call feels about that.

H: So I think, Galen, I think these changes have been effective, but as you said, there's a whole heap of other discussions and issues, and I think it's probably time to prioritize those.

B: Yep, we know there are a number of other things that need to keep being iterated on, which we're excited to do, and Deep, K-Ray, and I have some really exciting work happening on some of those fronts. This is mostly just making sure that no one in the community thinks this process has gone off the rails and somehow created a threat vector to the network that is a blind spot I'm not seeing because I'm too close to it, or something like that.

B: With that, the other thing we put here on the slide, which we mentioned at the top of the call: there's a login issue with plus.fil.org, and we're working towards a resolution, really bummed about it. As K-Ray mentioned earlier in the call, Sally from Pluskit came and presented their dashboard. We showed this dashboard on a call two weeks ago, and they talked through how they're getting some of their metrics and what they're working on, so it's very exciting.
B: Additionally, this kind of spurred the idea for us to stand up a new public Slack channel, so that's fil-plus-dashboards.

B: So if you have questions about how we're doing data analytics, or you want to be more involved in those dashboard conversations: now that we're at four different community dashboards, we want a forum where anyone can ask, "You're reporting this metric, how exactly did you calculate that percentage? Because over here we're seeing something different." We just want a place where that conversation can happen. It's a public channel, it's open to everybody.

C: Thank you, great. Yeah, adding to the channel point, I just want to say that it is public, and that means anybody who's interested in asking questions about how the dashboards work, engaging those teams, building your own stuff, or consuming their APIs for other interesting use cases should join it and ask those questions. It should function more as our data analytics and dashboards working group for Fil+, so I'm excited to see more people in there.
B: Link to that, Gary. I think that was all of the agenda we had, so at this point we can turn it over to open discussion. Other questions, comments, or concerns from the community on what's happening in the state of Fil+?

H: I guess my question is...

C: Deep, oh, she's going to flag that. I think we spent a bit of time in general, with Andrew, talking about efficiencies in the process, LDNs in general, and support for larger-scale LDNs in a way that won't impact things.

C: I think the real scenario is actually deal brokers, Meg, so cases like Textile's bidbot or Estuary, which are themselves a massive funnel for clients and effectively need bigger tranches. Today, 200 percent of the weekly allocation rate is the theoretical maximum of DataCap

C: you can get at any point in time. But the point being made was that for a production-grade deal broker service, two weeks of runway is not enough to make an enterprise bet; we should be talking about four weeks, eight weeks, much bigger numbers than two weeks. So how do we increase that upper maximum?
C: That means the client is pretty much guaranteed to run out of DataCap several times in their first few weeks of trying to use this process. In general, as a community, we're trying to move towards a more frictionless DataCap experience, with the number of days people wait around going closer and closer to zero, ideally, because that reduces the general onboarding UX pain for a client. Obviously that's at odds with the data we have, which shows that it takes a few days to get DataCap.

C: So by giving a client a DataCap runway of only a few days, even mathematically you reach the point where we're setting ourselves up for failure, even with the two-week example and the 75 percent threshold.

C: That's two and a half business days for notaries to sign the next tranche, which only happens in the best-case scenario. So I think we should do a bit of thinking on how this scales. It's a good blueprint, a good initial system to iterate on, but it definitely requires iteration at the entry-point stage and also at the top end of the spectrum. Once a client has shown themselves to be trustworthy,

C: how do we continue issuing DataCap to them in a friction-free way? That was, I think, one of the bigger, more time-consuming discussions; I don't think we resolved it, but it's the one we spent the largest amount of time on in the previous governance call.
B: The other thing most of the governance call was spent on was the presentation from Pluskit, with them showing their dashboard and talking through it.

B: So that was the majority of it, but you're also hearing from Faye on the large dataset side and what has been working. I think the big flag here from my side is that we're still seeing disproportionate amounts of diligence happening on these large datasets compared to direct notary activity: there are definitely situations where clients are being asked more questions when working directly with a notary for a significantly smaller amount of DataCap than they're asked in the large dataset process. So my major concern is with changing the very first, upfront allocation size to be much larger.

B: You know, we've done a lot to increase the automation and have more tooling to get out of the way so notaries can ask those questions, but for whatever reason a lot of those questions are not being asked at all. So I'm just a little, to use K-Ray's favorite, I'm a little suspicious, a little sus, about upping that very first allocation given the amount of automation we've put in place.
B: You know, maybe an even better pathway is having all enterprise clients go through a direct notary first and then put in a large dataset application, and maybe that's part of the diligence process. There are a lot of ways we can slice it. I think the main point here is that the easiest, safest lift would be increasing the later allocation maximums, so it still ramps up. Right now we've only written rules for the first, second, and third allocations, capping at 200%; it would be very easy to say,

B: there may be growing pains in onboarding them onto the network, and perhaps that warrants a bigger discussion, but I think it would be a safe enough proposal to bring to community governance and ask what it looks like to add two more allocation calculation steps. From a technical investment standpoint, as far as I know, that would be a relatively low-lift technical challenge.
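As a rough illustration of what a stepwise schedule with two extra steps could look like: the idea of a small first tranche ramping to 200% of weekly comes from the discussion above, but the specific multipliers and the 400%/800% continuation below are purely hypothetical examples, not figures agreed on the call.

```python
# Hypothetical stepwise allocation schedule: each tranche is a multiple of the
# client's weekly DataCap rate, ramping up as earlier tranches are used.
# The multipliers are illustrative only.
TRANCHE_MULTIPLIERS = [0.5, 1.0, 2.0, 4.0, 8.0]  # two steps added beyond today's 200% cap

def next_tranche_tib(weekly_rate_tib: float, tranches_completed: int) -> float:
    """Size of the next allocation, given how many tranches the client has already used."""
    step = min(tranches_completed, len(TRANCHE_MULTIPLIERS) - 1)
    return weekly_rate_tib * TRANCHE_MULTIPLIERS[step]

# Example: a client with a 100 TiB/week rate would see tranches of
# 50, 100, 200, 400, then 800 TiB for every tranche after that.
print([next_tranche_tib(100, n) for n in range(6)])  # [50.0, 100.0, 200.0, 400.0, 800.0, 800.0]
```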
H: Okay, and I just had one last question: what happens to issue 203? Will that come up in next year's backlog?

B: Yeah, it looks like you asked some questions.

B: I think the author of this has been pretty fast to reply. I also think this thread has gotten pretty far away from the initial proposal and into a lot of interesting brainstorming and conversation, but I don't know that it still stands as a valid proposal. So from a process standpoint, that's more of a Caitlin conversation, as our governance lead, as to whether we just keep this open and keep having the conversation here, or move it from an open FIP into some other forum.

B: So I'm not really sure. I think for now it definitely seems like there's good back-and-forth happening; we should just keep the issue open and keep discussing.
H: Awesome. Maybe just one other thing: is there a universal time, so that you don't have to do two of these and we can all join the one conversation? I know it's really hard to find one, but we don't mind doing it early. I...

B: I don't know a way to get the whole world into one slot.

H: It's just, I know, yeah, just mumbling, but we don't mind doing it early; early is probably better than late for us. If it means a reasonable afternoon or evening time for others, we could do early.

B: You know, four hours earlier is still the middle of the night for most of our teams in Asia.
B: Happy to hear a proposed time that would capture everyone's time zone.

C: That's something we can actually survey on, because our attendance has been dropping in this session but increasing in the other session, and I wonder if that's partly because of daylight savings time; that might be influencing it. So it might be worth tracking that a little to see, and if you're watching this recording and couldn't make it to one of the sessions, tell us why on Slack.

B: There's also this: we create this issue in the repo in advance of these calls, so it's another place for agenda items and proposed time changes. Again, to Deep's point, if you're currently watching this recording, head over to issue 325 and let us know what time in UTC would work better.

B: Well, if there's nothing else, I'll go ahead and stop the recording and we can sally forth. Any other questions or comments from the community?