From YouTube: CHAOSS Common Working Group 4-15-21
A: We have a few things on the agenda today. Bot activity is a common metric; this is something that came up in the D&I working group, so we'll talk about that. We can go through a review of open issues and PRs, and then we can talk about progress on current metrics. We'll probably skip the time-to-interaction item, since Sean just told me that he can't make this event, so we'll probably bump that one to next time.
A: Cool, the common user story. Do we want to add that before we talk about progress on current metrics? Because those...
A: ...get long. I'm totally fine with that, I'll just move that off a bit. Cool. Is there anything else we need to talk about? I missed the last meeting, so I kind of looked through the agenda. It looked like the only action item was for Sean, and he's not here.
B: Yeah, so to me at least it's fairly self-explanatory. We had been talking about the obviously high presence of bots in projects, and you had pointed out the level of engagement that bots can have inside of a repository. It's probably something we need to be explicit about, just because there's so much filtering.
A: Yeah, agreed, and the extent to which some of these projects use bots, particularly in the cloud native space, where I do a lot of work. In Kubernetes you issue commands to the bots and the bots actually do all the things. A human being doesn't merge a pull request; it...
A: ...gets merged by the bot after it gets an /lgtm and a /approve from someone who's in a file that says they're allowed to approve. Then the bot sees that these commands have been met and all the tests pass, and the thing gets merged.
B: Even Matt Snell in the badging program does something similar. When the badging process is over, he issues an end command and everything gets wrapped up based on that command. On the other end, the bot is always the first to respond to the submitters, just saying thanks. So we have a couple of different scenarios, even in badging.
A: Yeah, the bots are super useful, and I think they're becoming a lot more common lately, because we have tooling that does this a lot better than it used to. I know that when I do metrics for VMware, one of the first things I do on time to first response is filter out as many of the bots as I can possibly find.
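The filtering described here can be sketched roughly as follows. This is a minimal illustration, not any project's actual pipeline; the account names and the curated bot list are invented for the example.

```python
# Sketch: exclude bot accounts before computing time to first response.
# The account names and KNOWN_BOTS list are made up for illustration;
# a real analysis would maintain a curated list per project.
from datetime import datetime

KNOWN_BOTS = {"k8s-ci-robot", "dependabot[bot]", "cfcr"}  # hand-curated

def looks_like_bot(login: str) -> bool:
    """Catch obvious bots by name; the curated list catches the rest."""
    return login in KNOWN_BOTS or login.lower().endswith(("bot", "[bot]"))

def time_to_first_human_response(opened_at, comments):
    """comments: list of (author_login, timestamp). Returns seconds or None."""
    human = [ts for author, ts in comments if not looks_like_bot(author)]
    return (min(human) - opened_at).total_seconds() if human else None

opened = datetime(2021, 4, 1, 12, 0)
comments = [
    ("k8s-ci-robot", datetime(2021, 4, 1, 12, 0, 5)),  # bot replies instantly
    ("alice", datetime(2021, 4, 1, 14, 0)),            # first human response
]
print(time_to_first_human_response(opened, comments))  # 7200.0
```

Without the filter, the instant bot reply would make the metric look two hours better than it really is.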
A: I don't know, because we haven't written this metric yet. I think we probably need to. I'm not sure whether this is a single metric or multiple metrics. There's certainly a metric around the volume of bot activity, which is kind of what's been documented here, but I do think that bot activity is something that would be a filter on a lot of other metrics.
A: Yeah, Eric, how does the GrimoireLab tool set handle bots? You have an option to filter those out, right?
A: Them, yeah.
A: That might be a filter or something, yeah. That's a really good point. That's something Matt and I were exchanging some emails about earlier this week, because we talked about the bot activity, and then I was issuing a PR in the Knative repository. If you add brackets and WIP in front of it, it adds a do-not-merge tag, or do-not-merge label, to the pull request.
A: That's actually a huge challenge, because a lot of the bots aren't named "bot". One thing I've found in the VMware data is there's one bot that's called cfcr or something, and we have CI/CD bots that are named strange things, and automated accounts that are not intuitively named. But once you start to dig into the data and look at it, you're like, wow.
A: That's got to be a bot. And you go out and look at it, and yep, that's a bot, I've got to filter that one out. But it's not obvious. I think that's what Gary was saying: you kind of have to identify some of the accounts that are bots in order to be able to reliably filter them out.
A: Okay, cool. So can I give you that action item, to create issues and docs for these? Yes.
C: Are you thinking of these two as two separate metrics, or as a filter within a metric? Like, bot activity is the metric, and then as a filter it can be volume, or a ratio of human to bot activity?
B: Yeah, I think they both could. If I look at, say, new issues, or new contributors, I could be filtering on just the volume of bot activity in that narrow window, or on closed issues, or closed merge requests.
B: It's funny when you're taking a look at who's doing the merging. In that scenario the human is pushed somewhere in the chain where we might not otherwise look, just because we're looking at huge volumes of data.
D: So, Matt, in your ratio example, would it be something like: the human issues the command, so that's one interaction, but then the bot does like five things to get it merged, deletes the branch, closes the issue, and does all the other things. So that would be like five, or whatever. Is that kind of what you're thinking, or all the regular comments and stuff like that?
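The ratio being discussed could be sketched like this. The event stream is invented purely for illustration of the one-human-command, five-bot-events scenario described above.

```python
# Sketch of the bot-to-human activity ratio idea: one human command can
# trigger several bot follow-up events (merge, branch delete, issue close...).
# The event list below is invented for illustration.
events = [
    {"actor": "alice", "action": "/lgtm"},       # 1 human interaction
    {"actor": "bot", "action": "merge"},         # bot follow-ups
    {"actor": "bot", "action": "delete-branch"},
    {"actor": "bot", "action": "close-issue"},
    {"actor": "bot", "action": "update-labels"},
    {"actor": "bot", "action": "comment"},
]

def bot_to_human_ratio(events, bots=frozenset({"bot"})):
    """Count of bot events per human event in the window."""
    bot_n = sum(e["actor"] in bots for e in events)
    human_n = len(events) - bot_n
    return bot_n / human_n if human_n else float("inf")

print(bot_to_human_ratio(events))  # 5.0 -> five bot events per human command
```

The same counting works per pull request or over a whole time window, which is the "narrow window" filtering mentioned earlier.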
F: Yeah, that is correct. I've looked at Kubernetes bot activity, and in that particular case it actually improves transparency in the project and distributes some of those admin rights to people who otherwise wouldn't have them.
A: In order to issue a lot of these commands to the bot, you do in a lot of cases have to be what we call an org member, which requires, I think, two people to say, yeah, that person's legit. So there's a contributor ladder, and there are certain steps you have to take in order to get there, to be able to issue certain things to the bots.
B: Would there ever be a scenario where, totally making this up, an issue might be closed by a bot because it's old, but an issue could still be closed by a human just because the issue is done? Because with what you described, there's really no ratio question; it's just where the human event occurs.
B: ...the bot to do it.
F: And the bot itself may have admin privileges, but that doesn't mean that other people don't have admin privileges either. I would imagine that anything a bot can do, a maintainer or admin can do as well, and they probably still do to some degree, although maybe not as much.
A: Yeah, in Kubernetes it's pretty rare, because we really want people to interact with the bots. But if something just went sideways and got really messed up, maintainers and admins for the repository could step in and fix things.
A: Yeah, you can kind of see this: somebody issues a PR, they talk about what it does, and then the bots add things like the CLA check and the size label. It adds an area and a SIG, and then a person came in and said, hey, here are some things that I think you need for this.
A: Oh, maybe. I mean, this is all... it's Prow as the robot, I think.
A: Oh okay, yeah. No, I haven't seen the Alexa bot; I assume that was probably merged into the Prow functionality. But yeah, it's some combination of guessing and some combination of knowing, based on where in the code base the changes are. So it adds these: the CLA, like our CLA bot or our DCO bot does, and the size is automatically determined based on the number of lines or something like that.
F: In looking at the bot data, or in looking at Kubernetes bot activity, that was sometimes a problem for me: trying to figure out how the bot made the decision to approve it. In this case, for example, was it done out of band? Was it...
A
No
okay,
so
I
know
what
happened
here.
The
there's
two
lgtms
here
and
a
bot
needs
a
slash,
lgtm
and
a
slash
approved.
But
if
one
of
these
people
is
in
the
owner's
file
and
they're
allowed
to
approve
things
if
they
put
the
lgtm
command
on,
it's
treated
like
a
slash,
approved.
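The merge gate described here could be sketched roughly as follows. This is a simplified illustration of the rule as explained in the meeting, not the actual Prow implementation, and the user names and OWNERS contents are hypothetical.

```python
# Simplified sketch of the merge gate described above: the bot merges once it
# sees /lgtm and /approve and tests pass, and a /lgtm from someone in the
# OWNERS approvers list counts as an approval. Real Prow logic is more involved.
OWNERS_APPROVERS = {"carol"}  # hypothetical OWNERS file contents

def can_merge(commands, tests_pass):
    """commands: list of (user, command) pairs like ('alice', '/lgtm')."""
    has_lgtm = any(cmd == "/lgtm" for _, cmd in commands)
    has_approve = any(
        cmd == "/approve" or (cmd == "/lgtm" and user in OWNERS_APPROVERS)
        for user, cmd in commands
    )
    return has_lgtm and has_approve and tests_pass

# Two /lgtm commands, one from an approver: treated like /approve, so it merges.
print(can_merge([("alice", "/lgtm"), ("carol", "/lgtm")], tests_pass=True))
```

This is why the PR in question merged with two LGTMs and no explicit /approve: one of the two reviewers was in the OWNERS file.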
A: It is absolutely fascinating. I guess, like Matt, you just end up down this rabbit hole: how did this happen, and where did this come from?
B: So in terms of the two metrics, then: the ratio makes a ton of sense to me at this point, kind of what you had just shown. Does the volume make sense, just the pure volume of bot activity?
E: Go ahead, Georg. I just captured it in here, and my thought was a concerted effort to go through all of the metrics we have and issue pull requests to just add it where it makes sense. I don't think having a separate document describing a metric, only saying, hey, bot could be a filter on all these others; no, I don't think that makes sense.
C: Oh, you can hear me now? Yeah, okay. So I have a list of two tasks before resolving this issue. The first one would be to add the working group repository template to the community handbook, thanks for creating that new page. And the second, wait a second, let me search, yeah, the second one is to remove this template directory from the working group common repository and replicate this in the three places; the first one is in the community handbook itself.
A: I was on mute and talking to myself. So, to make sure I understand what we're doing with this: we're basically taking the templates out of the individual working group repositories and creating a standardized template place. This is great, thank you so much.
F: This might be one of those things where, if we have a standard structure for the README in each repository, perhaps we have a pointer that points to the template, the same way that we would have a pointer that points to the code of conduct. So maybe we need to think about including that in the standardized working group README.
A
We
have
another
another
issue,
names
and
definitions
of
people
looks
like
this.
Is
yours
georg?
Where
are
we
on
this
one.
E
I
need
to
refresh
my
memory:
what's
the
title
again.
E
Oh
yeah,
so
in
our
metrics
we
refer
to
people
based
on
the
different
roles
that
they
play
in
the
different
actions
and
activities
they
do
in
open
source
communities.
E
So
contributors,
members,
people,
submitters
authors,
reviewers
observers
and
sometimes
they're
used
interchangeably,
sometimes
even
use
them
interchangeably
within
the
same
metric,
and
I
I
thought
hey
would
be
nice
to
define
clearly
what
each
one
of
those
roles
are
and
when
to
use
them
to
have
a
more
clear
metric
definition
or
all
of
the
definitions
would
benefit
from
having
clearly
defined
okay.
Here
we
are
talking
about
reviewers
only
and
it's
always
the
same
across
all
the
metrics.
C
I
guess
this
is
coming
across
I'm
hearing
this
third
time
like
it
was
in
the
journal
meeting.
It
was
last
time
discussed
in
the
evolution.
Now
we
are
discussing
that
we
need
to
define
some
common
terms
like
in
in
the
risk.
It
was
more
on
the
dependency
side.
Here
we
are
looking
at
the
contributors,
that's
where
I
propose
the
glossary
terms
that
we
define
different
terminologies.
C
We
use
within
the
metric
that
and
then
kevin
again
suggested
on
that
that
we
define
those
as
a
matrix,
but
like
now
the
point
is:
if
we
define
contributor
as
a
metric,
then
we
don't
need
the
glossary,
but
if
we
are
not
defining
contributor
as
a
metric,
but
as
a
term
we
are
using,
then
we
need
a
glossary
terms
to
define
things.
That's
my
two
cents
on
this
so
like
my
idea
is
to
have
a
glossary
of
the
terms
those
which
are
not
maybe
metric
but
used
within
those
metrics
as
a
different
terminologies.
A
I
am,
I
am
super
guilty
of
this,
it's
sort
of
a
I
don't
know.
I
think
it's
just
kind
of
my
background
and
blogging
is
like
you.
Try
to
you,
try
to
change
up
the
words
you
don't
want
to
use
the
exact
same
word
in
every
single
sentence,
but
that
is
super
imprecise
and
not
probably
what
we
should
be
doing
when
we're
defining
metrics,
and
so
I
know
I'm
really
guilty
of
this.
F
Yeah,
the
the
to
the
to
what
vanadium
said
my
my
point
was
that
creating
a
glossary
kind
of
creates
duplication
and
then
it's
it's
also
another
document
that
we
have
to
maintain
and
do
we
maintain
that
in
multiple
places,
whereas
our
the
key
purpose
of
our
group
is
to
define
metrics
and
and
metrics
are
basically
our
glossary
terms
right,
so
a
contributor
should
be
defined
and
we
should
include
in
that
definition
synonyms
and
at
the
base
level.
D: I agree with that, but also, in case you all didn't see, Georg posted a link to our terminology page that's in our handbook. I did not even know that was there, so thank you, because I think this is exactly what we're talking about: a terminology guide of what we mean when we say these things.
D: Well, perhaps it could just be an extra section then. I totally understand what you're saying, Kevin, but I also think that, as a person coming in who doesn't know anything about CHAOSS, it might be helpful to have just a high-level...
D
Quick,
you
know
definition
list
of
this
is
what
we
mean
so
like
if
you,
when
we
say
maintainer.
This
is
what
we
mean
and
also
here's
the
metric
that
we
use
to
measure
this,
because
if
you
know
I
just
think
of
like
somebody
who's
just
looking
for
that,
you
know
introduction
to
chaos
and
they're
not
sure
to
make
them
read
through
the
whole
metric
just
to
say
what
do
I
mean
by
maintainer,
I
think,
might
be
a
little
heavy
heavy-handed.
So
I
I
don't
know.
C: The discussion here is focused on the terms we use when we are defining a particular metric. Within that metric we use certain terms which are not themselves a metric somewhere, but we are using those terms, and having a clear definition of those terms is important. That is the discussion, I think.
D: I think a word like community, which we use a lot in our metrics, can mean different things, either in individual metrics or just in general. Do we mean people who use the software, who contribute to the software, who follow the project on Twitter, who have interacted with it once? I think a term like that would be helpful to define, and maybe that is a metric, like how many members are in your community.
D
So
that
probably
is
a
metric
and
there's
different
filters
on
how
you
measure
that
but
yeah,
I
think
I
think
there
is
meridin
in
the
discussion
for
sure.
A
Okay,
all
right,
so,
sadly,
we
we
lost
garrett
halfway
through
that
that
discussion
so,
but
we
have
the
we
have
the
notes
and
come
back
to
us.
This
is
just
the
issues
review.
That
was
a
great
discussion.
Thank
you.
I
feel
like,
I
feel
like
it's
something
we
need,
but
I
do
think
that
I
think
you're
right
kevin.
We
need
to
think
very
carefully
about
how
we
do
this
so
that
we
don't
end
up
with
definitions
all
over
the
place
and
duplication.
F: No, go ahead and finish. I was just going to say, well, that's the issue we have with all of our metrics, right? We define what it is, and then we provide just kind of a general, this is one way that you might look at it. Every metric can be viewed as a count, for the most part. If it's change requests, we define change requests, but then we offer some way of looking at change requests.
F
So
you
know
what
we've
defined
this
change
request.
It's
a
thing:
hey,
let's
count
them
so
everything's
kind
of
a
count,
they're
not
all
counts,.
F
But
often
times
there
can
be
multiple
measurements
right.
So
when
you
get
the
purpose
of
the
change
request
metric
in
general
is
to
define
what
a
change
request
is,
and
it's
it's
less
important
that
the
when
we
look
at
it
we
we
offer
ways
that
you
could
look
at
it,
and
one
of
them
is
account
right
or
some
metrics
are
very
specific
ways
of
looking
at
this
metric
term.
F
I
don't
think
the
inclusion
of
that
measurement
means
that
we're
not
defining
the
terms
right.
We
are
defining
those
terms
as
metrics,
so
the
the
change
request
metric
is.
It
defines
what
a
change
request
is.
A
Yeah,
I
agree
with
kevin,
with
the
caveat
that
we
should
get
a
little
bit
more
rigorous
about
making
sure
that
we
do
indeed
define
some
of
these
terms.
So
when
you,
when
you
create
the
contributor
metric,
you
know
having
a
really
clear
definition
of
by
contributor
we
mean
this
and
and
then,
if
we
have
you
know,
things
are
subcategories
of
other
things.
So
maybe
a
maintainer
is
a
type
of
contributor.
A
Onto
I
think
the
others
are
new
metrics
and
we
don't
have.
I
don't
see
daniel
on
the
call.
I
don't
know
that
we
have
we're
about
at
the
end
of
the
time,
so
we
probably
don't
have
time
to
talk
about
the.
So,
let's,
let's
talk
about
sorry
matt,
let's
talk
about
com,
I'm
all
over
the
place.
Today,
let's
talk
about
common
user
story.
B
No
problem
so
in
the
weekly
call,
I
had
brought
up
the
idea
of
what
I'm
at
this
point
calling
user
stories.
I
wouldn't
I
don't
know
that
story
is
necessarily
something
that
is
the
name
anymore,
but
I
I
posted
one
yesterday
in
dni
and
I'm
just
trying
to
post
one
here.
B
So
the
idea
is
to
to
just
try
to
help
people
understand
how
metrics
could
be
brought
together
and
not
just
metrics
like
as
identified
in
a
focus
area
but
metrics
that
might
be
in
part
from
common
metrics
that
might
be
in
part
from
evolution
metrics.
That
might
be
in
part
like
how
do
we
aggregate
these
to
answer
questions
or
help
people
orient
themselves,
and
so
here's
one?
B
We
have
new
issues,
closed
issues,
change
requests,
so
it's
kind
of
like
these
are
linked
anyway
through
filters,
but
this
would
just
be
a
way
of
kind
of
drawing
it
forward
to
somebody,
and
then
the
end
of
this
is
you
know,
just
by
understanding
these
five
things
together
helps
me,
get
a
better
understanding
of
how
events
affect
project
activities
and,
if
I'm
not
seeing
changes
in
project
activities.
Maybe
I
want
to
change,
and
I
wanted
to
do
that
with
an
event.
B: I have to change how I present at the event, or how or why we run the event, or something like that. If I'm not seeing any change, then I probably need to do something about it. So this is just a user story; we just put this out in front of people: here are metrics that you might want to take a look at, and this is what, with respect...
A: Typically I've seen these in kind of this format, and it's usually something that, if you think about a traditional product environment, the product or user experience people come up with, and it's something you do within the project.
A: What you're building is usually based on research, usually user experience research; you put together these stories based on the research, and it helps you build products. But that feels a little bit different than what Matt was describing, so I'm curious if I'm on the wrong track.
D: Oh sorry. I think this is an awesome way to highlight, for potential...
D: ...if we were a business, potential customers, how they can use our metrics. I think that's what Matt was trying to get at: just have some examples of, oh, that applies to me. As a user, I want to understand the impact that my event is going to have on my traffic; how can I measure that?
C: I was thinking of it in two ways, as Elizabeth explained. First we create some scenarios and define them: okay, I want to assess the impact of a D&I program or something. Then we give it to the users, and then we get feedback on whether they are using it in a similar way or in another way that we can add. So we initially come up with some scenarios that we assume can be helpful to the user, and then we assess how they are adopting it.
B
So
I
think,
dawn
to
your
point
about
whether
or
not
what
you
wrote
like
kind
of
matches
with
what
what
is
here-
and
I
think
the
answer
is
yeah-
I
was
gonna
make
a
so
as
a
whomever
that
person
might
be.
I
want
to
perform.
You
know
I
want
to
get
a
better
understanding
of
the
impact
that
my
events
have
on
traffic,
so
that
I
can,
I
would
have
to
add
to
it.
So
you
know
like
in
order.
A: This may or may not be helpful for you, because the way these usually work is that there are a number of user personas. We had these at Puppet: one was a sysadmin, one was a developer, and so on. We had these defined personas, but it feels like what you're talking about is more like use cases, which I feel is different from what I just put there as a template.
A
Okay,
with
that,
we
are
one
minute
over
any
any
other,
quick
things
before
we
wrap
it
up.