From YouTube: CHAOSS.D&I.July.22.2020
A
You and I can; it's always kind of a tag team. I will say, for the people on here, for Asta, Tola, and Iran: this is a great working group. If you want to practice your skills leading a meeting for 30 minutes to an hour, you have a really nice, accepting group of folks here who help you along. So it's really good practice in that regard. All right, so thanks, Elizabeth.
B
Yeah, actually, Matt and Asta, if you want to jump in and let us know how things are going.
D
We think it went well, actually. We got to know a lot about how applicants and reviewers react to the documentation they're required to go through, and how it can be restructured to be more intuitive.
D
Independent of the path they take to get to the process, they should actually understand what they're trying to do and what they're trying to achieve. Maddie pointed out that for reviewers, the qualitative and quantitative divisions would not be that useful in practical terms. So I got rid of the qualitative part, because reviewers are less likely to edit the actual markdown. It would be faster if they just went through the tick marks on the checklist and gave their feedback in the comments below.
E
I don't have much to add to that; it covers it pretty well. There are some documents we're working on, mostly Asta is working on them, that show a revised system. I'm not going to share them here, but I will tell you that she has been working very hard on revising them and getting them into a more usable and friendly format.

One of the big holes we realized we can turn into metrics, thanks to Elizabeth's and Matt's feedback and other people's feedback as well, is that we don't really have a lot of accessibility-related aspects in our event metrics right now. A lot of it is based on promoting diversity, and inclusion is not found there as much.

So that's going to be a big focus for us too, and something we might want to work on in the D&I working group as well: the next set of metrics might be event and project accessibility metrics. We've got a lot in the works. We got a lot from this pilot testing, and we want to thank everybody that was part of that, too.
E
B
A quick question; you might not be able to answer this yet, Matt and Asta, but I was just curious. When you said you were looking at revising the levels, or the different categories of the badges: are you adding more in, or are you just changing the requirements for each level?
D
We're taking the old ones and making them more metric-independent. The previous system was somewhat based on how many metric deliverables you need, so I think it could be difficult to scale up if more metrics are added to the system.
E
Yeah, I just shared the gist in the working group meeting minutes. I'm sorry; I hope that's okay.
A
It does. And I think, is this addressing the comment that Shoya and I had when we were doing the review, Asta?
A
I think it was to show Shoya, or myself, what "first set", "second set", "third set", "fourth set" meant. I didn't know; we weren't sure what to do with it. So, if I'm following, I think what Asta put in there, or what Matt put in the chat and what Asta's been working on, is the one just above, in here. Let me just share my screen.
E
Right. There were two concerns I had going into this system. It was the best we had at the time, but the two concerns were: one, if we add more metrics, then we have to rescale the passing, silver, and gold thresholds in some way or another; and the other concern we had to address was the fact that the event criteria are a hierarchy.

It wasn't ideal for the situation we have now. It addresses events by one dimension at a time, which is just size, when there are lots of other dimensions you have to see them by. So it was a working set that let us get a review system going, but I think this alternative approach is excellent.
A
So, on comment two, I think you had two issues in there. One was with respect to a growing set of D&I event metrics.

Then this table becomes different, because if we add something like wheelchair accessibility, say, as a new metric, the maximum number of metrics becomes six, I think; and if we add another metric, the max number becomes seven, and so on and so forth. And we have to readjust what it means to be gold, and what it means to be silver and passing.
A
Yeah, so did people have thoughts on that? That's a totally fair point, and something that we will necessarily have to address.
B
More scalable; definitely agree with that. And then, I guess, I don't know if that's something we want to keep track of, or if that in itself would be unwieldy: if somebody was curious about what an event had going for it, is there a way for someone to go back in history and say, oh, they got a silver because of what they had, or didn't have?
E
Yeah, I had an idea that sprung out of that just now. We could have something like a certain percentage; see the criteria at the bottom, the alternative approach. Austin, you can also put comments on this too. I was thinking: if basic, intermediate, and ideal are separate categories of metrics, categorized by how hard it is to make that metric come true for the event or project, as I understand it, then it would work like this.

If a certain percentage of checkboxes is checked, maybe sixty percent of basic, seventy percent of intermediate, and eighty percent of ideal and advanced, and you have to meet every criterion at the previous level plus the criteria above it, then you get silver or gold, just to still have some kind of levels of badges. But we can come back with a more concrete idea of that too.
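The percentage-threshold idea sketched above could look something like the following. This is a minimal illustrative sketch, not a decided design: the category names, the 60/70/80 thresholds, and the rule that each badge level requires its category plus all previous ones to pass are assumptions drawn from the discussion.

```python
# Hypothetical sketch of the percentage-based badge levels discussed above.
# Thresholds and level rules are illustrative assumptions, not a final spec.

THRESHOLDS = {"basic": 0.60, "intermediate": 0.70, "ideal": 0.80}
LEVELS = ["passing", "silver", "gold"]  # one more category required per level

def badge_level(checked):
    """checked maps category -> (boxes ticked, boxes total).

    Returns the highest level earned, or None if even 'basic' falls short.
    """
    level = None
    # A level is earned only if its category AND all previous categories
    # meet their thresholds, mirroring "meet every criterion at the
    # previous level" from the discussion.
    for lvl, cat in zip(LEVELS, THRESHOLDS):
        ticked, total = checked[cat]
        if total and ticked / total >= THRESHOLDS[cat]:
            level = lvl
        else:
            break
    return level

# Example: meets basic (3/5 = 60%) and intermediate (4/5 = 80%) thresholds,
# but not ideal (2/5 = 40%), so the event earns silver.
print(badge_level({"basic": (3, 5), "intermediate": (4, 5), "ideal": (2, 5)}))
```

One property worth noting: because each level only depends on percentages, adding new metrics to a category changes the denominator but not the rule, which addresses the scaling concern raised earlier in the meeting.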
A
Yeah, percentages may not be bad, because I had picked up on something when Elizabeth was talking: if we have 15 event metrics, that's a lot to ask folks to address. It might be correct to ask them to address all of those to receive a gold, but we're trying to balance a meaningful process with a lightweight process. And also, because diversity and inclusion is hard to be perfect at, for sure, I think gold should still be attainable without perfection.
A
All right.

B
I was just going to say: are we still thinking about making people reapply, like every year or something? Because that would also keep people from getting in at the ground level, where there are five metrics, hitting them all, and getting a gold; and then, as the metrics grow, they don't grow their event at all, but they still look like a gold-level event.
B
It's kind of just a way to keep it going, yeah, because we do want them to improve over time. I think of an event that I hosted 10 years ago: it would look very different from one I would host today, just because there's so much more knowledge and so much more accessibility available to an event organizer. It's so much easier now to find, for instance, volunteers who will be guides for visually impaired attendees, whereas 10 years ago that was such a scary thing, and people were probably very afraid to even approach it. So I think, as time goes on, it will also get easier for event organizers to fulfill some of these.
A
Yeah, and listening to you talk to that point: when you as an event are in year one, the five metrics that are appropriate are meaningful; and then in year two you're already doing those five anyway. So really, what we're asking is: tell us that you're continuing to do the five you did last year, and here are a few more we'd like you to attend to if you're still interested in the gold badge; or you could just stay at silver.
A
Okay. And then, Matt, your second point. It's been a while, so I may not remember both your points, but your second point, and I think Elizabeth alluded to this too, is about when they receive a badge.
A
So let's say Elizabeth submitted an event, one called Liz Fest, and it did everything right, so there's a gold. She wants to put this gold badge on her site; she took the time to do this and wants to represent it. So how do we give that badge to her and indicate that it was attained via the metrics released in, say, January 2020? That those were the criteria she went under.
A
Yeah, and then do we care that somebody might be able to take the badge and just put it on their site, even though they didn't go through the process? I think we talked about this last time and decided we don't have the capacity to police that.
E
Saul has talked about having a Cloudflare instance that does some kind of DNS redirection and then makes sure that the site posting the SVG is the right site. I think it's also okay to make a linking SVG that goes to the pull request that was accepted for the badge; people have talked about that before. Without having an instance, I think that's the best we can do.
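A linking badge of the kind described could be embedded in an event's README roughly like this. This is a hypothetical sketch: the badge image URL and the pull request URL are placeholders, not real CHAOSS endpoints, and the point is only that the badge image itself links back to the accepted review so anyone can verify it.

```markdown
[![D&I Event Badge: Gold](https://example.org/badges/event-gold.svg)](https://github.com/example-org/badging/pull/123)
```

Clicking the badge then lands on the public review thread, which serves as the verification record in lieu of a hosted verification instance.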
A
I think it's a good idea, because as a reviewer I would like to see the workflow again, and I think you got enough feedback that it's going to create changes on both the applicant and the reviewer side of things.
E
Okay, I agree, that's a good idea; something I hadn't thought about before, like Austin said too. Okay.
A
As people work through this process, there are the ways we track successful applications, there are ways the SVG gets distributed to people, and there are ways we think about metrics releases moving into the future. So there are a number of moving parts that we're thinking about, and I think they're the right set to be thinking about.
A
So this is a question for everybody: at what point do we go live with this and say some of these issues will be sorted out as we go? Because, as all of you who have been involved in projects know, if we try to solve everything right here and right now, we're going to grind this to a halt, and all of our efforts to solve everything will be fruitless, because something will break anyway when it goes live.
E
I know we're looking at the end of August; that's what we talked about at OSSNA too. So that's kind of the public date.
E
It's hard to say we'll wait until we have a system we consider robust and that's when it goes live, because then we'll never consider it robust. But we have to think about that one. Asta, you probably have something to say about that too, right?
E
Yeah, so we actually have a meeting right after this where we're going to be wrapping up the findings of the pilot testing and putting together something small. We're also working actively on the documentation changes, and on the changes in the checklist and everything like that. So we'll figure out when we want to start the next pilot testing, and after that, I think those final changes are enough to go.
B
It might also make sense, because of the state of the world, that the event part is probably not even really applicable yet. So, just an idea: you might want to open just the project part of it and then do the event part later, because it seems like the event side had more metrics that need to be reworked, reviewed, and released, whereas a project might not have as much work to be done. Does that make sense?
A
I like that idea. We'd probably want to do a pilot on the projects, because I think the last pilot was just aimed at events, wasn't it?
A
I think it's better to have a minimally small system that people can understand than a comprehensive system that nobody gets. All right, cool, great. Any other comments or questions on the D&I badging stuff?
A
Very cool. The next items are just for your radar. Please keep in mind the metrics release: the current candidates are under review, and there are three metrics from D&I.
A
So if you could take a look at the ones currently under review, and if you have any comments, even just "it looks good to me" is helpful on the issues, that would be really helpful. It should take everybody maybe 15 minutes to get through all three of them.
A
What is the window again? It's the end of the month, so that will be the 29th, next Wednesday; then we can close the issues. Actually, the public comment period goes through then, and then we as a group have a little bit of flexibility to close those things out. So I think next week and the following week is when we'd probably do it.
A
All right, and then the last thing is something I just want to bring up. It looks like we're going to have an opportunity to work with the Ford Foundation to take an internal look at CHAOSS as a project from a D&I perspective. Elizabeth, Sean, and I have been talking with one of the program officers at the Ford Foundation about, over the course of a year, taking a look at how the CHAOSS project operates, with an eye on diversity, equity, and inclusion.
A
Are there things that the CHAOSS project is doing well in this regard, and are there things we can improve on? Over that year, recommendations would come from this work to provide improvements for the project: design guidelines, as I called them; things we can do to help improve diversity, equity, and inclusion in the project. So this would be an internal look at the project.
A
And then the hope is that over the course of the year, from this self-reflection, we can not only develop metrics that projects should attend to with respect to D&I, but also incorporate some of those into our badging process, or into ways that we can help other projects think about their own D&I from a design perspective. What are the design characteristics that help improve these things?
A
So, how do you actually build a project to best support diversity, equity, and inclusion? This is really just an FYI; I hope it's interesting to people, and I don't know if people have comments on this at all.
A
And last week, I think it was last week, there were a couple of design guideline discussions; do you remember that discussion? From my own perspective, and we don't have to do it now, I'm trying to think about how the design discussions from last week with respect to D&I may or may not tie into this potential support from the Ford Foundation. All right, cool. Any other issues that folks have for D&I at the moment?
A
It's good to hear your voice so well, now that we've all said hi to Salah.
A
I think we can close this meeting. Again, from a work perspective: if you could take a look and comment on any of the metrics that are under review right now over the course of the next week, that would be super helpful.