From YouTube: libp2p weekly sync - January 13, 2020
A: Alright, hello everybody, and welcome to the libp2p weekly sync. The pad is open; please go ahead and add your name to the attendees list and fill out your async updates, and we will get to those as we go. It looks like we do have one agenda item so far, so we'll hit that up after the updates. If you have anything else that you...
B: For memory, I'm landing back from holiday. It was great; I definitely needed that time. We embarked straight away into the ResNetLab intensive workshop here. I'm going to be traveling back to Tel Aviv tomorrow, but yeah, it was a super productive workshop. We discussed a lot: there are a lot of ideas for fixing content routing and content providing, to fix a number of things that are failing. We also discussed ideas for telemetry, OpenTelemetry, and a bunch of other things.
Steb and I are putting together a technical plan to deliver, and a bunch of you are probably going to get pulled in, in different capacities, to help with the initiative. Fixing content routing is going to be a major, major theme for IPFS and libp2p over the next weeks and months: definitely Q1, and potentially leaning over into Q2. So yeah, some people are going to get pulled in.
We are actually planning this over time; sorry, it's going to be a two-year execution plan, so yeah, that's going to be a super productive and super focused effort. In line with that, Steb and I have been working on itemizing the technical plan for the immediate fixes that we want to land, and slating them over time as well, and so on.
Another thing that I worked on was defining a widget for NearForm to implement within the frame of their project. We had originally discussed that, as part of the scope, we would be delivering a widget for the DHT to visualize DHT queries, but we pivoted that to visualize the routing table state. So I pasted a link to the definition of that widget, in case you're interested in taking a peek at it. And yeah, then: we have a new person coming in to the team in two days.
His name is Aarsh; you've probably heard of him, and several of you have participated, maybe, in the hiring process and other exchanges. So yeah, he's coming in in two days and I'm defining the work. This is super exciting; he's initially going to be focused on instrumentation, introspection and observability in go-libp2p. Next, I'm going to be migrating all issues from individual libp2p repos to the top-level repo (super excited for this) and applying the new labeling taxonomy.
This is just the start, and I'll talk about this in a few minutes in the next agenda item, after the updates. I'm going to be continuing the work with Steb, also defining the technical execution plan for unblocking the go-ipfs 0.5 release from the perspective of Testground: so, what is necessary from Testground to actually unlock that release?
I really, really want to get to reviewing the signed routing records. I ended up doing, like, piecemeal reviews, but I really want to do a holistic review, because it does touch on many components at once. And we are starting performance review cycles, so some of my time is going to go into that. So that's it for me regarding status updates.
B: Totally, totally. So what we're doing is really, like, taking a bunch of proposals, mostly Steb's proposal for content resolution from group three, and we are enhancing it with a bunch of other thoughts from other groups, to create a unified view of the plan that we want to work on, and exactly then creating, for each element, kind of like a technical design proposal, so that people can go off and implement it.
B: So I'm not sure if it's a great idea for the research side of the pipeline to, like, deliver the same. I don't think you have sufficient view into the design of the code right now to be able to go into this level of detail; I think this is more on the engineering side of the pipeline, right. But definitely, once we're done with this, making it widely available everywhere. Yeah.
C: Yeah, sure. So what I'm planning to do is, from all the proposals, first of all possibly put them together, or take bits and pieces from several proposals, and have a more holistic approach to some, and then rank them according to how much effort is needed, what is suggested to come first or second, what the impact can be, etc. And then you, an IPFS... I mean the teams, the engineering...
B: Finally, taking these things and, like, landing them into complete designs for implementation. It's kind of like the work that Steb and I are doing on the incremental side of things. I think where the research can really help is on the rethinking and the larger ideas, right, so...
E: Everyone, so last week, by the end of it, I was a little bit sick, so I didn't progress much in general. So basically, I addressed some of Jacob's reviews, mostly on the js-libp2p examples. Then today I was debugging a regression issue that already appeared last week regarding parallel dials; now I think I got to the base of the issue. Also, I was talking with Jacob before the call about it.
Then I also started working on stardust, which is basically a js-libp2p contribution, work done, I think, like a year ago, and we were thinking about using it to replace websocket-star to improve our browser support. I've been doing the first step of refactoring it to use the async iterator API, and now I need to do a bunch of other stuff.
Basically, for this week, I want eventually to get to it, but I need to update multistream so that it uses protobufs, and also switch from the previous connection interface to the new one, and also to streaming iterators instead of callbacks. Besides that, for this week, I want to get all the other PRs that I have open landing in js-libp2p, once Jacob has time to review them, and help with other stuff that appears. And finally, I will be out on leave from Friday for the next two weeks.
A: That's super close. We've fixed a few minor bugs in js-ipfs as part of that integration, and so we're working on finishing off those final bugs, and then we'll target a release candidate, a final release candidate, which should be this week. So we should be launching in the next week or two, and I think js-ipfs will also be able to launch in the next couple weeks. So super excited to wrap that up, and then we'll start flushing out the rest of our technical plans for the next quarter.
D: Everyone, I'm back; I'm back from the dead. Yeah, I don't have much of an update because I've only been back at work for two hours or so, but basically, ya know, I took about a month off and just really disconnected from everything, and I feel refreshed now and really excited for a good 2020. So I'll just be catching up, probably, the next few days, most of this week, and it looks like the first thing I'll be doing is orchestrating the whole perf review process for libp2p.
F: I did some stuff last week on the routing records PR; I kind of pulled in all that feedback. There are some changes that I still haven't pushed for go-libp2p, but it's nothing major; it's just the name changes that took place in the core PR. So again, there's like one place where go-libp2p was relying on some interface methods and I'll have to do a typecast. I just have to clean that up a little bit and push it; otherwise it's pretty chill, and I think Vyzo had a comment.
I'll get that in, so we can try and actually use it in production, which I'm talking about on Friday. So that should be pretty easy, and then, yeah, I'll just try to close the loop on the routing records; I think that's the other thing for this week. There's some other Noise-related stuff; I just want to go through the issues and make sure I'm not leaving anything easy on the table.
The next Noise thing in my mind, once I'm fairly confident that it will work within go, is to try and set up the JavaScript interop tests, because those guys are making very good progress and they have enough to test with now. I'll also try to make progress on that, though it might not happen for the next... like, it might be a few days off.
Yeah, this is my first time actually doing it. So far it's been pretty good: I sort of copied the placebo plan, and then I stole a bunch of, like, helper code from the bitswap plan, things like getting the addresses of the remote peers and just interacting with the sync service and stuff. So it wasn't bad. I wonder, maybe I should try to write some notes about the things that were a little non-obvious while they're fresh in my head, yeah.
Let's see, so just the sync service itself, like, wasn't super obvious, like how it's... I was sort of expecting that the runtime environment would just give me a list of all my remote peers or whatever, but instead each peer registers themselves and gets information back that way. And it looks like I can do more with that sync service, like register your own subtree and stuff like that, but I haven't looked into that at all. So, like, what...
F: And then, like, synchronizing, I guess: in the bitswap test it just, like, throws out a ready state and waits for that, and then it throws another state and waits there again, so I'm just doing that as well. But, like, I don't know if it's the correct way to do it, I guess, as long... yeah.
G: The only thing that is not clear is how to deliver the events for monitoring the internals of the mesh state into the application-level scoring function; that's the only thing that I haven't solved yet. So I'm waiting for the pubsub PRs to be merged so that I can finish up the pending PR and then get to work on the scoring function. That's my measly update, okay.
B: Let's do one thing: I promise, and you can hold me accountable and, like, kick me if I don't do it, to review the PR. So, I'm going to be traveling tomorrow and I have a really early wake-up tomorrow, so either I get to reviewing this on the plane tomorrow, or I review it on Wednesday, and that will be that; it's blocking on me right now. So Wednesday is the latest that you'll get a review.
C: Mm, yeah, I didn't say: I was at the research intensive workshop as well, helping to organize it and, you know, being there. So the main thing was to try and identify the main problems that hold back IPFS and libp2p from scaling up, becoming faster, becoming very good, much better than they are today, and looking into the future. So for this, we reviewed a bunch of papers and first identified...
B: The whole thing that this spec introduces is a normalized taxonomy for issue labeling: initially for go-libp2p, but with the intention to propagate it to all libp2p repos, including those that are not driven by Protocol Labs, so that we can discover, surface and index issues, work demands, feature requests and so on, according to a normalized model. That will allow us to then classify, plan and organize the work that we want to do by different dimensions: by epics, features, components, complexity, size, topics, impact and so on.
So this is not just labeling; I want to make clear that just labeling issues is not the end goal here, right. Labeling issues is important as an initial step to help us navigate the forest of everything that needs to be done in libp2p one way or another, whether it's fixing a bug, whether it's working on a feature request, whether it's responding to a question, right.
I'm planning to act on this this week. So basically, the idea of this labeling taxonomy is to serve as a building block to allow us to organize work much, much better. So, what we're doing here: given the fact that right now issues are inherently organized by the component where they belong, this classification is very useful.
So it captures things like what functional area an issue talks about: whether it's the connection manager, connection bootstrapping, NAT traversal, peer routing, content routing, discovery. Essentially, the functional area corresponds to an abstraction of something in the stack, right, a particular area; it does not talk about specific components. That is a separate label.
It is perfectly fine for an issue to have only a functional area, and for the analysis process behind that issue to later come in and assign labels that link that issue to a particular component. A component could be floodsub, gossipsub, Kad DHT, Noise, QUIC, SECIO, or anything that is actually a repo right now, in most cases, right.
So, if you think about what the mapping is with the current architecture of issues, then the area would mostly map to something that exists in go-libp2p-core, right, and the component actually relates to the actual repo where a particular change or issue is taking place. Then another dimension for classifying issues is the difficulty of the issue: an issue can be trivial, easy, medium, hard or expert level.
I've tried to define a rubric. I'm pretty sure that, you know, we need to calibrate, and this rubric is going to change over time: as we deploy it and we start classifying issues, people will have different understandings of difficulty and, as a team, we will need to calibrate and converge into what we expect, you know, is going to be kind of like a normalized definition. I did try to add some heuristics here. So, for example, a trivial issue is something that can confidently be tackled by a newcomer who is wildly unfamiliar with libp2p.
There are, like, some unfinished aspects, as you see here, but, for example, an expert-level issue is something that requires extensive knowledge of the history, implications and ramifications of the issue, as well as deep knowledge of the libp2p stack. So stuff related to, for example, multiselect 2.0 will probably be expert level, and something that can be done by anybody in the community will probably be a trivial difficulty level.
Then there is another dimension, which is size: basically, the amount of work that an issue entails. The kind of the issue is also important to capture; that basically describes the nature of the issue. An issue could be a bug, an improvement; it could be a tracking issue. For example, there's an issue in Kad DHT which is the critical path towards DHT efficiency, right, and this is an umbrella issue under which there are many other issues.
We can capture that here, and I'm pretty sure that, as we start classifying issues, we'll expand this list pretty rapidly. And then I also wanted to model the fact that we sometimes want to group issues based on larger stories that we're working on, or larger initiatives that we're working on, and I model that by using the topic keyword. So the topic is a particular theme that is attached to that issue.
So if an issue has to do with docs, or it has to do with interoperability, then it can get classified in that manner. One thing that I want to mention is that the split of labels between descriptive labels and execution labels, which I'm going to go into right now, is deliberate, in the sense that the descriptive labels can, in many cases, be assigned by the author of the issue itself.
So in the future we may want to have a bot, such that when somebody opens an issue, it pings them on GitHub. Kubernetes has this, and I think Rust has this as well; it tells them: hey, write a comment, mention me, the bot, and tell me which labels you want me to apply. Because the author of an issue, if they're not a contributor to the repo, cannot apply labels; only contributors with triage or write access can actually apply labels to issues.
So we can circumvent this limitation by using a bot, which is what large projects do out there, if we want to invert the control of applying descriptive labels. Now, there's another category of labels, which are execution labels. These labels have to do with the workflow: so, now that we have an issue, how are we going to work on it?
If we're going to work on it, right. So things like the priority and the status of a particular issue are things that are workflow-relevant, right. And finally, I wanted to model as well what I'm calling hints, advisories for issues. So things like: hey, this is a good first issue. If you're a newcomer, then this is a hint, right, that this is a good first issue for you to work on. Or things like needs-decision, needs-triaging, needs-analysis.
Needs author input, needs team input, right. I think we'll play it by ear; this is an initial proposal. All of this, especially the execution side of things, relies on some kind of workflow, right. So, as I said before, having a labeling and classification taxonomy is not the end goal. It is a building block that allows us to create workflows, to group work, and to tackle work by understanding what that work is and categorizing it, so that we can create workflows on top of it, right.
This is just the start of bringing structure into the go-libp2p project, into the libp2p project, as we discussed in Costa Rica and so on. We are all craving better organization and better structure for the work that we do, and better clarity, and this is just the initial step to allow us to navigate the forest of everything that we need to do.
That has been open against our repo. So, the plan to implement this: we're going to be using GitHub Actions, or that's the proposal here, rather, and I want to get a sense of what everybody here thinks about this. I'd like us to use a bot, sorry, a GitHub Action, that basically allows us to declare the labels, and the descriptions of the labels, and the color of the labels, and all these things, in a YAML file or a JSON file.
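As a rough sketch of what such a declarative label file could look like (the label names follow the taxonomy discussed above, but the exact schema depends on which action is chosen, and the colors here are illustrative):

```yaml
# labels.yml - declarative source of truth for the taxonomy.
# A GitHub Action syncs these entries onto the repo's label set.
- name: area/content-routing
  color: "0052cc"
  description: Functional area - content routing
- name: kind/bug
  color: "d73a4a"
  description: Kind - something is not working
- name: difficulty/trivial
  color: "c2e0c6"
  description: Can confidently be tackled by an unfamiliar newcomer
- name: topic/docs
  color: "fbca04"
  description: Topic - documentation
- name: hint/good-first-issue
  color: "7057ff"
  description: Advisory - good first issue for newcomers
```

Keeping the file in the repo means label changes go through normal review, and the same file can be propagated to every libp2p repo.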
So we'll need to schedule this work, we'll need to distribute this work, we'll need to figure out a way to do this as a team, right. So that's what I wanted to bring to the table today: just to give you a walkthrough, a presentation, of all the thinking that has gone into this and how we plan to move forward. So yeah, I just want to open it up for questions; I'm mindful that we're over time.
A: Overall, yeah, I think it looks good, and it's gonna enable us to use some stuff also in ZenHub to cleanly coordinate issues, because we talked in November about reorganizing and getting away from OKRs, and getting a prioritized backlog for people to jump into, and moving stuff around with labels and making filtering a lot easier. Yeah, that'll be great. I'm gonna plan on going through this fully tomorrow and then adding comments in GitHub, but overall this looks good to me.
B: So, there's two options here, and there might be others that I may have not thought about. The simplest option is to disable the issue tracker, which will basically make the tab disappear entirely, and it might leave people a bit puzzled, because I would say most people on GitHub expect to find the issues tab on repos. That's...
We forward them to the right place, but still, we automate it: whenever somebody opens an issue, we automate the transfer to the top-level repo. And by the way, when we transfer issues the old links are still preserved, so GitHub redirects the old link to the new location of the issue. So broken links are not what worries me; it's more...
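The transfer automation described here could be sketched with today's GitHub tooling roughly as follows. This is purely illustrative: the `gh issue transfer` command and this workflow syntax postdate the meeting, and the workflow filename, secret name, and destination repo are placeholders.

```yaml
# .github/workflows/transfer-issues.yml - hypothetical sketch
name: Transfer new issues to the top-level repo
on:
  issues:
    types: [opened]
jobs:
  transfer:
    runs-on: ubuntu-latest
    steps:
      # Move the freshly opened issue to the top-level repo; GitHub
      # leaves a redirect behind, so existing links keep working.
      - run: gh issue transfer "${{ github.event.issue.html_url }}" libp2p/go-libp2p
        env:
          GH_TOKEN: ${{ secrets.TRANSFER_TOKEN }}
```

A step like this could also post a comment first, telling the author where their issue went.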
So, do you prefer... if you prefer to close down the issue tracker and add a note in the readme of all the repos, then do a 1, and if you prefer to leave the issue tracker open, with a big notice at the top, and have a GitHub Action that transfers issues to go-libp2p, then do a 2, right. So, okay, on a count of 3: 1, 2, 3... okay, so I think we have, like, unanimity.
B: That's great. Alright, so if everybody on this call can go through the taxonomy tomorrow, give it a thumbs up or leave comments. There are some dangling sentences, I realized; I won't get to them tonight, but just be mindful that, yeah, I'm aware of that. Besides that, just give a thumbs up, or a thumbs down, or add any comments there. And by the way, I think some of these labels we will adopt immediately.
Some of these labels we will adopt as we mature our workflow, so don't feel daunted by "we have so many things that we need to apply right now". We're not going to start applying all of them immediately, right. The idea is to apply the functional area, component and kind, and to start applying complexity and difficulty as we start inspecting issues and triaging.