From YouTube: CHAOSS Metrics Models Working Group 2/15/22
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A: Agreed — all right. Well, hi everybody, I'd like to welcome you to the metrics models meeting. It's good to see friends here again, so why don't we go ahead and get started? We do have one open PR. It's really not worth taking a look at too deeply, Sean — I tagged you on that. It's just, it's—
A: Thank you very much. So, all right — a couple of things I want to talk about today. Elizabeth had put together a toolkit, okay, so — and I think it's straight now. So anyway, we have the metrics models, which are these, right? And the metrics models are a collection of metrics that are meaningful in some way. So whatever that focus area is, we all know that.

We have associated metrics that help define this metrics model. I think we need to fix this a little bit, just to align with the toolkit. And then I think we have two implementations at this point — and Kevin and Elizabeth and Sean were on the community call today. I think this is kind of where we are landing with this: that we have ways these metrics models can be implemented, and in this particular case, in the case of the DEI event badging program.
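A minimal sketch of the structure being described here — a metrics model as a focus area with its associated metrics and one or more implementations. The field names and the example metric list are illustrative assumptions, not CHAOSS definitions; only the DEI event badging program and the Code of Conduct at Event metric are mentioned on this call.

```python
# Illustrative sketch only (not a CHAOSS definition): a metrics model groups
# related metrics under a focus area and records the implementations that
# exist for it. Field names and example entries are assumptions.
from dataclasses import dataclass, field


@dataclass
class MetricsModel:
    name: str
    focus_area: str
    metrics: list[str] = field(default_factory=list)          # associated metric names
    implementations: list[str] = field(default_factory=list)  # e.g. a toolkit or badging program


dei_event_badging = MetricsModel(
    name="DEI Event Badging",
    focus_area="Diversity, equity, and inclusion at events",
    metrics=["Code of Conduct at Event", "Diversity Access Tickets", "Family Friendliness"],
    implementations=["DEI event badging program", "Event DEI toolkit"],
)
```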
A: Yes, exactly, yep. So I think this finally resonated with me after the talk today, and then just kind of thinking about it this afternoon: in our metrics model — oops, sorry — in our metrics model we have this implementation section, and Sean, this would be where, if it's not something that can necessarily be implemented from trace data, we can create a toolkit.
B: For those that haven't been on the community calls: Daniel Scardo from GrimoireLab, and I, and a few other folks are working on a CHAOSS-wide software contribution team, to try to strengthen our software development community — our software contributor community — around these metrics models, using both GrimoireLab and Augur to implement metrics models.
B: I don't think it has to be a CHAOSS tool. We would welcome the inclusion of that tool in the CHAOSS project, but I don't think we're snooty that way.
E: I lost my mute button — there, I found it. Yes, it resonates. I love the idea of toolkits, and of having a way to point users to this document that provides utility for them. Yeah.
B: Yeah, here's how I would frame it. I would say that we've built metrics that we've decided, when they're put together, create a model that provides something useful. However, that metrics model is not intuitively implementable in a consistent way, so the metric definitions alone aren't sufficient to implement the metrics model. It's that word that Matt used about what Elizabeth did.
B: I think the toolkit is what gives the metrics model utility, especially for these things that can't be measured with trace data, because then you have something concrete that you can look at and understand. It's like layers of an onion, in some ways. Yes — I think without these toolkits it would be hard to realize these metrics models in anything approximating consistency.
B: Yeah — I don't think it's unhealthy. I think what I'm articulating is that one of the things we can do as a community to keep CHAOSS itself sustainable is help people walk the whole path. Yes — and this toolkit is a really big contribution to that process, I think.
F: One question. So this toolkit is for each of the single metrics — they have a way to implement it, maybe provided as a toolkit or some other way to implement it. Do we have any comprehensive way to combine those metrics in this model, to represent it as a value or a number — to say, okay, this is the final score of this metrics model? Like using a number to describe that.
F: Yeah, we don't have to, you know, use a normalized way to say, okay, all those metrics models get the one number. But for each single metrics model, we'd get one number to represent it. If we have such a way to do that, I would recommend doing so — showing how to combine those metrics, as a metrics model, represented as a score.
I: I think the scoring is based on — we are not assigning weightage to any particular metric within a model. The scoring is: either you check-mark yes, you have implemented that metric, or you have assigned some weighting to it. For example, if you look at this DEI badging, we have a Code of Conduct at Event metric. We are not assigning any weighting or any scoring to this metric; the scoring comes from either yes, it is implemented, or it is not implemented, within that toolkit.
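In other words, a rough sketch of the checklist-style evaluation being described: each item in the toolkit is simply marked implemented or not, with no per-metric weighting. The item names and the helper function below are illustrative assumptions, not the actual DEI event badging toolkit.

```python
# Rough sketch of the checklist approach described above: every item is a plain
# yes/no with no weighting, and the result is just which items are done.
# Item names and this helper are illustrative, not the real toolkit contents.
def summarize_checklist(checklist: dict[str, bool]) -> str:
    done = [name for name, implemented in checklist.items() if implemented]
    missing = [name for name, implemented in checklist.items() if not implemented]
    return (f"{len(done)}/{len(checklist)} items implemented"
            + (f"; missing: {', '.join(missing)}" if missing else ""))


event_checklist = {
    "Code of conduct exists for the event": True,
    "Code of conduct is publicly available": True,
    "Code of conduct is easy to find": False,
}
print(summarize_checklist(event_checklist))
# -> 2/3 items implemented; missing: Code of conduct is easy to find
```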
F: Yeah, this is nice. It could say, okay, it's already implemented — and also, for example, for the Code of Conduct at Event: how much has it achieved of this goal? Like, we could use a zero-to-ten scale, where ten means, okay, this metric has been measured and it's achieved the full score.
G: I mean, we do also ask things like: can people find it? Is it publicly available? Does it include XYZ parts in the code of conduct? So we do kind of drill down a little, but at the end of the day it's still just checking boxes, I guess.
F: I mean, I agree, and I think we don't have to do this in such a hurry. Like Matt mentioned, we can go step by step: first we introduce this toolkit for each single metrics model, and then we will see if we need a score or number to represent the final result. And for now, I think the checklist in this DEI event badging metrics model is good enough.
E: I think we do want to be careful about us assigning the scoring to it — more, use language where the person who is adopting this model can determine what weighting and what scoring would be important for them. Yeah, I think that—
C: There are situations where needing to assign a checklist-like boolean would create complications, and it depends on the specific metric. So, for example, criticality is designed around a number, whereas secure design principles — another metric — is naturally a boolean. So I think it depends on the metric.
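As a small sketch of the two points just made — some metrics are naturally numeric, some naturally boolean, and any weighting is left to the adopting project rather than fixed by CHAOSS — here is one way an adopter might combine them. The normalization, names, and weights below are assumptions for illustration only.

```python
# Illustrative only: an adopter combining a naturally numeric metric (already
# normalized to 0..1) with a naturally boolean one, using weights that the
# adopting project chooses for itself. None of this is a CHAOSS standard.
def combine(values: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of metric values that are already normalized to 0..1."""
    total_weight = sum(weights[name] for name in values)
    return sum(values[name] * weights[name] for name in values) / total_weight


observed = {
    "criticality_score": 0.62,        # numeric metric, assumed pre-normalized
    "secure_design_principles": 1.0,  # boolean metric: followed -> 1.0, not -> 0.0
}
adopter_weights = {"criticality_score": 2.0, "secure_design_principles": 1.0}

print(round(combine(observed, adopter_weights), 2))  # -> 0.75
```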
I: So, if you go to the metrics model tab, we have an implementation section. If you go to a metrics model — like the DEI model — we have an implementation section where we have this toolkit. So in this implementation we also provide, like, an implementation in Augur or GrimoireLab.
E: Even if they can be derived from trace data, I think there's still value in creating the toolkit, so that we're not dependent on—
B: —CHAOSS software, for example. Well, if it can be derived from trace data, there's some presumption that the metric definitions, regardless of the tool, are defining how the model is implemented. Yeah.
F: I think it's a very common thing, because some communities already have platforms or tools that exist to measure community health, so they have to find a way to implement it. If we provide these toolkits, they know how to do that — and even if we already have solutions in GrimoireLab or Augur, we can do a similar thing, following the guideline of the toolkit.
A: Yeah, all right, this is great. So, Elizabeth, thank you so much for taking the time on that toolkit. I think it was really helpful in the metrics model discussion as a whole, so that's great — because this is something that we're going to run into, or come up against, at some point, and it's nice to start having this discussion now and see it in practice. And I always like to see things kind of on paper, so to speak. You know what I mean.
D: I have a simple question: do we have to push our results of this DEI badging to the official CHAOSS website?
B: Badging is something we've created, and it's something that can be accomplished in a standard, peer-reviewed way for an event. I think some of the metrics models that can't be measured with trace data for a project, that are not event-based — those are different things. Those would be laborious to badge, and I think it's sufficient to let the projects determine, through these — the word I cannot remember tonight — toolkits, you know, just to say that we followed the toolkit from CHAOSS.
G: To Jun's point — I'm sorry to interrupt — to Jun's point, I completely understand what you're saying, because if someone's using this toolkit and their results are different, or it took them way longer than two hours, or they had some feedback — I would really like, eventually, when we publish these, to have some sort of feedback mechanism, so that people can add to these toolkits or, you know, comment on them, or something to make it a little more collaborative. Because, quite honestly, a lot of these were just flat-out guesses on my part; I don't actually know if it's going to take one to two hours — that's my best guess.

So, to Jun's point, I think it would be awesome if people did kind of make it more collaborative, and even if they don't post the results, if they want to, you know, engage with us on that, I think that would be great.
A
This
is
a
great
conversation,
so
could
we
I'm
gonna
move
us
to
the
next
one,
so
I
think
we've
made
it
through
here.
A: We want to just talk a little bit about the release process, because there are actually a number of metrics models that are, honestly, kind of ready for release. We've gone through and answered the questions of why you should care; we've thought about some of the metrics that help define each metrics model. I mean, we've been doing quite a bit of the work in terms of getting these released. So — Kevin.
E: Yeah, I suppose I can talk about this. Okay, so I did create this document to kind of outline — to allow us to compare the metrics release process with a proposed metrics model release process. And I will say this: the metrics release process — the way we treat the metrics in CHAOSS — is kind of like we're creating a standard, a standard around community health metrics. So each metric gets its own markdown page, and we go through a 30-day review period.

Any changes to the metrics require us to go back through that release process again, and there's a whole checklist that we go through when we create these documents. The reason for that is to create rigor and validity in how we're defining and presenting these metrics.

We release it as a PDF, which is kind of a static snapshot in time of all of the metrics, and then we also keep a kind of running, continuous release of metrics on the website. That continuous release allows us to continue to edit — to make changes to the metrics — in between the six-month release cycles.

So for the metrics model release process, I am proposing something less rigorous and more informal.

I personally don't believe that these models have the same kind of permanence as the metric standards do. I think we have to be a little more flexible with these documents and treat them as kind of an ongoing conversation.

So I'm proposing that we have no set cadence for the release: we release these when they're complete, and we go through an informal review process. And by informal, what I would envision is discussions in this group, collaborative editing, and maybe presentations of the model to relevant working groups, and—
E: Yeah — additionally, an additional problem that we're going to run into as we start creating more and more models is that some of these models are going to be very similar to each other.

There are probably four or five models that I can think of that are going to be very similar to "how welcoming is a project," for example, and when we're creating these models, I think it's going to be interesting to explore each of those different models, even though they are very similar.

So if we're really, you know, rigorous about the structure of these releases, and we treat these as kind of static or permanent documents, I think we lose out on some of these conversations that we can have. So I guess what I'm saying is: these documents are a little more impermanent, right?

We shouldn't be afraid to create a new model that's similar to an old model, and we shouldn't be afraid to abandon a model if it's not interesting to us anymore.
I: So how is it different? Like, we are reviewing the already-released metrics, we are iterating on them, we are changing them, and then we are re-releasing them. It would be the same thing with the models: if we feel any change is needed, we can review them again, iterate on them, and then re-release them — the same as the continuous release we have for the metrics. I'm trying to draw out the differences.
E: Yeah, and we can iterate on them and edit them and they can grow, but a metrics model is not the definitive way to understand this one thing that we're trying to answer, right? So, "how welcoming is a project" — there could be multiple different models that let us explore how welcoming a project is, and we don't need to create a definitive model for it, is what I'm saying.
B: Yeah — I mean, the way I see these being applied in practice by OSPOs and tech companies is: the metrics model is an inspiration. It provides a place to start, and they may add and remove things that are more applicable for their context, but we're helping them not start from zero when it comes to creating useful products from the collection of metrics.
E: So in this process, I think there are two questions that we kind of need to discuss and iron out a little bit further. The first is: what are the artifacts that belong in the metrics model release? Is it just that metrics model document, or — how do we decide which metrics need a toolkit? Is this a conversation we need to have now, or can we just say we're doing the metrics model document and we can have the toolkit conversation later?

So that's one question I have; then I'll drop the other question that I have, and then I will be quiet, because I've talked a lot.
A: I think we should release it. The way I perceive it — or maybe we can implement it in the future — is: these are the ways we are hearing from the community that they need these things. They are not implemented yet, but these are the thought processes of the community.
C: Differing a bit from other people, I think that the value of the toolkit is high enough that it might be incorporated into the model itself, since they are roughly one to one. I don't know if I would hold back on releasing the model before finishing the toolkit, but I don't think I would consider the model to be useful until I'd considered the actions in the toolkit and had a feeling for how practical it was to actually run through these metrics.
A: So I had two questions on that, Lucas. One: are you suggesting that, for example, this toolkit right here — I don't know if you can see my screen — okay, so, this toolkit: are you suggesting that this text would simply be added to this document? Because right now it's two documents, right?

C: Yes, I am.

A: Okay. And then you are also agreeing that it's okay to release a metrics model without having these, but that's something we should probably strive for — if we're going to be releasing metrics models, to at some point have a deployment in some form or fashion.
C: I think that's a good paraphrase. I think that before releasing a metrics model, we want to really believe that it's practical.

A: Yeah, that's fair.
B: A sanity check — yeah, that's insanely wise.
A: You know — ten metrics models and eight implementations, and then at the end of year one we have 25 metrics models and 11 implementations, and at the end of year two we have 100 metrics models and 12 implementations. I don't want that gap to get really wide over time; those two should be, I think, fairly in sync with one another.
E: So, if I'm understanding that correctly, you're wanting to have a requirement for every model that we create to include a toolkit — is that correct?
C: I think, depending on the model, a toolkit may be necessary — but it depends on the model — and I think that before we consider a model finished, we should think about what's in the toolkit and be sure that it's practical. I'm not certain I have a clear enough definition of a toolkit in mind to be really confident about that; I want to think about a toolkit for every one of the models, and I believe what it's about, fundamentally, is getting real about applying the model. Would you agree?

I would say that, for the purposes of my own goals, I would feel unsatisfied with what I had delivered if I didn't include a toolkit.
G: If it helps at all, that whole document that I wrote took me about an hour and a half. Making a toolkit is not that hard, I don't think, especially if all the pieces are there and you're just kind of pulling them together in one place, and adding and filling in details about, like, what you think the time will be, and, you know, breaking it down. So it could take longer, but it was not unwieldy.
E: In the last meeting we did also have a discussion about limiting the number of metrics that we include in these metrics models — trying to keep the conversation down to a manageable number of metrics, rather than every single metric we can think of that could be applied. Maybe just pick four or five metrics; that's much more manageable for a toolkit than 20.
B: Model size, and — at the end of the day — the graph of models, sounds exciting. Yeah.
A: Okay, this is good, thank you. Personally, I'm all for releasing metrics models even if they don't have an implementation, but, to Lucas's point, I think we need to think about: is this implementable? I just want to get these metrics models in front of people, but I think we as a group need to have a very conscious effort towards implementation — whether it's through a toolkit, or through a Jupyter notebook or a Grafana dashboard, or something like that — so that we don't get those two too far away from each other.
G: Yeah, I just wanted to add: because, at the end of the day, that's why people come to CHAOSS, right? Not just to see all the ways that they can measure stuff, but how do they do it, and what does it mean — like, the deeper conversations. That's what they want. You know, you can go to GitHub and see how many issues you have open right now, and a lot of that stuff is readily available.
A: Right on. All right, cool. All right — Kevin, did you want to talk? You had a second question, too.

E: Yes, yeah, and—
E: I'll just — so what I'm hearing is that the toolkit is not mandatory for the release process. So the release process that we're talking about is really just about that metrics model document, minus the toolkit, and we can add toolkits in at a later date, either into the document or as a second document.
C: That could sort of stand in for the process that usually accompanies a metric, where there's a pretty good amount of checking that goes into a new metric. In this case we would say it's not as strict, but it definitely does exist — we do still review.
E: I think we should hold off on that until we have decided on how we're going to present the models — which leads me into that second question that I had, and that is: how are we going to present these models? My thought on it is actually building on what both Elizabeth and Emma have mentioned in the past, and that is a knowledge base on the website.

So, over the summer, when we do our website revamp, I would like to create a project where we basically create a metrics knowledge base on the website, where we can interact with these models and link to the toolkits. And, if possible, I would like that knowledge base to have some ability for others to engage with it.

I'm not sure what that would look like — either the ability to edit, or like a wiki. Yeah, maybe a wiki — I was kidding.

I know, I love Wikipedia, but the design of what this knowledge base would look like would be a completely ongoing discussion. In general, though, I don't think it would be helpful to create a big PDF of all of these models together and all of the toolkits together.

I think the way that people would use these is more in kind of a searchable knowledge base, and maybe you have the ability to print off a PDF from that knowledge base page. But, matching the previous conversation, this is less—

But this would be printable from the website, rather than "let's create a PDF to capture the information," right? So the individual user could go to the website — that's fine — search the knowledge base, and then print the PDF content that they would like. So it's not a PDF that we've created and stored in advance. Yeah.
C: I wonder about actually using the wiki. I think that it fails the goal of having things look really great, right — a wiki always looks pretty choppy — but it is editable and fluid, and we can post what we have sooner rather than later.
E: To do this, also, we could just design the web pages around markdown pages as well — so, simple — and that would be a little similar to the way we do the metrics currently. And honestly, the knowledge base and the markdown pages may be compatible; I wouldn't know until I got in to look at those, but—
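As a very rough sketch of that "markdown pages behind a searchable knowledge base" idea — the directory layout, filenames, and indexing approach below are assumptions for illustration, not an agreed CHAOSS website design:

```python
# Rough sketch only: walk a directory of per-model markdown pages and build a
# tiny in-memory index for keyword search. The directory layout and filenames
# are assumptions, not an agreed design.
from pathlib import Path


def build_index(models_dir: str) -> dict[str, str]:
    """Map each markdown page's first heading (or filename) to its full text."""
    index = {}
    for page in Path(models_dir).glob("*.md"):
        text = page.read_text(encoding="utf-8")
        first_heading = next(
            (line.lstrip("# ").strip() for line in text.splitlines() if line.startswith("#")),
            page.stem,
        )
        index[first_heading] = text
    return index


def search(index: dict[str, str], term: str) -> list[str]:
    """Return the titles of pages whose text mentions the term."""
    return [title for title, text in index.items() if term.lower() in text.lower()]


# Example usage (hypothetical directory of metrics model pages):
# index = build_index("metrics-models/")
# print(search(index, "welcoming"))
```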
A: Hire a designer! So — okay, we're out of time.
A: Thank you, yeah. This is good. Thank you, Kevin, for kind of walking us through that process. So if you're there and you can hear us, know that we're saying thank you. Yes, thank you. Thank you.