From YouTube: Kubernetes SIG Testing 2017-09-26
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: This is publicly recorded and will be posted to YouTube shortly. On the agenda today: first off, you know, we've started this nice trend of having sort of a daily stand-up that is available to folks outside of Google, which I think we've missed today due to a scheduling snafu. So if anybody has any quick announcements, we'll go over that, sort of...
A: For 1.8, I have some draft slides about an update that I'm going to try giving to the community meeting tomorrow; when the release... yeah, Thursday, after the release has gone successfully out the door tomorrow. And Tim will speak a little bit to updates and CNCF conformance stuff. So first off, stand-up: anybody doing anything super urgent that we need to collaborate on today?
C: So, first of all, thank you for getting the stand-ups going, because it has been tremendously helpful, and I'm hoping that during burndown periods we continue that trend, and maybe even when we're not in burndown. But this document that I just shared; I'll share my screen as well, so I'll go through it quickly... let me find it.
C: This is it, okay. So essentially, I was paired with Ben Elder to go through this: the idea of what it would look like to actually have a policy around response to outages, and how we can set expectations and visibility with the community, so that people have an idea of what's going on with the testing status; so we're not just hitting retest-all and flooding the poor infrastructure with even more requests.
C: So essentially, what we did was come up with an overview of the problem; basically a problem statement: with as much growth as we're having, it's pretty much time for us to proclaim status on test-infra and testing issues, so people have some awareness of that. And outcomes: essentially, we want to provide real-time messaging to the community.
C: So, ideally, as people become more familiar with this testing infrastructure, and we make it more fungible across cloud environments, this will provide an opportunity for more and more people to become involved and also participate, because it is simply not fair that Google has to bear the burden of this on their own. So, you know: many hands make light work. Those are the guiding principles of this particular effort.
C
To
do
this,
where,
essentially,
we
want
to
make
this
easy
for
the
person
on
call
so
part
of
what
we're
doing
is
to
add
an
additional
layer
of
support,
which
is
a
communications
role
that
helps
the
person
doing
the
tech
work
not
have
to
also
be
the
person.
That's
updating
the
status
page
or
communicating
with
the
community
in
terms
of
questions.
C: I've seen this model work extremely well. It also means the person who's doing the shadow work for communication is in the underlying processes and the triage process, so that in the future they may be more equipped to be a primary on-call. We want to do this as minimally as possible; there's some debate about whether having a pair is minimal or not, and I'd invite anybody with opinions on that to weigh in. We want to hone this process over time, so we just said: we'll do an initial cut of this, but ideally over time it will expand.
C: It will expand capabilities in terms of integration with the community, and with chatbots and whatever else we can do to make this easy and lightweight. So there's a narrative here: this process in a nutshell. Basically it goes through what it would look like to have this process fully operational, and there are some things that still need to be determined, like: what are the thresholds for when an incident is actually declared? What is the resolution time?
C: What is the cadence for communications back to the community? All those things we'll figure out, but at the moment the sort of base structure of all this is in place. So I welcome comments and would love to get more feedback on this. Ideally this will get drafted up into a proposal, put on the community page, and hopefully ratified, and I'd love to see that happen in the next few weeks. Any questions?
C: Much like other SIG-level things, this is a strategy run as a SIG implementation, not an SLA for the project as a whole. So this is just codifying existing practices, because right now the test on-call is already established; what they don't have is a way for the community to get good visibility into their existing processes. So this is really providing that.

D: Thanks, that helps clarify a lot.
E: Good. For some teams, they've found that they do a secondary/primary rotation, where the secondary shadows for one week and does comms, and perhaps the next week they're primary, generally. That way, when you do the shift change, you already have exposure to what was going on. It sounds like you're talking more about just having, like, a comms person that's not going to be primary.
C: But that puts another large burden on your organization, and I don't think that's fair! So what I'm trying to do is come up with something we can do, even if it's trivial, just to try and help out and get some community support. Again, I feel that there's a serious inequity in terms of how much work it is for all of you to do that.
C: And just FYI, I'm still pounding the pavement to get Microsoft to pony up some Azure stuff to be able to do some of this. It's a process, and I feel like this is a great place to start, so we can start getting those practices in place. And then, yeah, eventually I would love to do primary/secondary, because just sitting next to somebody, virtually or otherwise, and watching what they do is incredibly informative.
F: There are specifics, like how quickly we want to escalate, and some more discussion of how we want to actually do the status page. I've been kind of informally discussing that with some people, and we haven't really reached complete consensus on how it should be done. It seems like we're leaning towards just something static, posted somewhere, like a JSON doc with the data and a link to a GitHub search for the issues; but we should actually write that up somewhere, yeah.
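For illustration, a minimal sketch of what that static status doc could look like. The schema, field names, and the incident label in the search URL are all hypothetical here; as noted above, the group had not settled on a format at this point:

```go
// A hypothetical shape for the static "status.json" discussed above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// TestInfraStatus is an invented schema for the status blob the comms
// person would update during an incident.
type TestInfraStatus struct {
	State     string    `json:"state"`     // e.g. "ok", "degraded", "outage"
	Message   string    `json:"message"`   // short human-readable summary
	UpdatedAt time.Time `json:"updatedAt"` // when this was last edited
	// Link to a GitHub search for the open incident issues, as suggested above.
	IssuesURL string `json:"issuesUrl"`
}

func main() {
	s := TestInfraStatus{
		State:     "degraded",
		Message:   "PR jobs backed up; please hold off on retest-all until resolved",
		UpdatedAt: time.Now().UTC(),
		IssuesURL: "https://github.com/kubernetes/test-infra/issues?q=is%3Aopen+label%3Aincident",
	}
	out, _ := json.MarshalIndent(s, "", "  ")
	fmt.Println(string(out))
}
```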
C: I think we do the high-level policy first, and then we talk about how to implement it, because there are definitely some nuances in how we do it; and I definitely don't want it to be yet another maintenance burden to actually keep the thing up. So yeah, let's try to make it as easy as possible.
A: These are the tests that are failing, and we know that they're failing and we're going to ignore it; these are the tests we don't care so much about because they take forever; so on and so forth, as I went through this exercise. So first off, just some annotations. For release-1.8-blocking, our intent is that all of these tests should basically pass three times in a row before we cut a release. That's mostly true, except for a couple of caveats.
A: Of the flaky suites: this one and the kubelet suite seem to have a bunch of flakes that are spread around enough that it's going to be difficult to get them to pass three times in a row. The serial suites take a long time, so if we really wanted to wait for three consecutive passes it'd be 15 hours. Many of the upgrade tests have a lot of known failures that we're just ignoring; so, you know, they fail, and in our read we're still going to be happy with those.
A: Many of them we're not going to wait the full three runs for; some of them take 12 hours, so it would take, you know, a day and a half to get three consecutive passes, and that's assuming it doesn't flake. And then finally, we've had some issues with scalability and API latency with large clusters, and so we're looking at two tests that are in the master-blocking dashboard. We're not blocking on them; we're just using them as information to see whether this issue has been solved.
A: You can notice a lot of what I'm doing in the issue here is crossing out test cases once we notice they're not a problem. I'm attempting to link to issues and pull requests, to sort of give people a heads-up on what the status of this is. If we have an explanation, if we're explaining a failure away as humans, saying this failure is acceptable: what is that explanation? Who made it? Why? That sort of tribal knowledge I feel has been lost in the past, in calling out the flaky tests and stuff.
A: ...knowing where in the code base to modify to skip those tests would be helpful, I think. Maybe we have the tools for that stuff in place, but something has just sort of stopped us from really adhering to the dumb, stupid, simple rule: only cut the builds when all of the boxes are green. I hope we find that day soon, and I'm sure... yeah, I just wanted to say that I recognize that, as a human, I'm duplicating some of the effort of the tools we have available to us. Oh, there was one other one I wanted to call out: triage.
A: The triage board has been tremendously helpful. We found that one of the scheduler predicates tests was failing, and we were like: that's weird, we wonder if it's limited to the downgrade tests. And so I was able to just take the text, paste it into triage, and see that it's actually hitting a number of jobs, not just the downgrade tests. In fact, it might just be that all of the serial jobs are affected, and the downgrade job happens to be one of those serial things.
E: I just wanted to note, for Testgrid in particular: we have an internal feature where you can add annotations for things like "this test is known broken" or "the suite is known broken". It's not on the external side yet; I think we're looking at doing that. But, like, people want to be able to have alerts on the different tabs in different ways. So, yeah.
D: At the end of this cycle, it would be really nice to have a post-mortem on testing that basically summarizes: these are the known biggest offenders. I know that Eric did a great job for a while where he was doing summaries; those were fantastic. And we actually did a lot of work in 1.8 to push certain end-to-end tests out of the mainline suite, or to prevent them from being, you know, the critical blockers that they were in the past.
A: Well, it's almost like you knew one of the things I was going to present; I really want to hammer that home to the community when I present it. And also, I think topics like this would be great for the retrospective. I'm sure there's some internal things; I'm gonna just try and, like, you know, consume the tools, and the feedback I have I'll provide to the retro, and we can pull it into testing planning. So.
D: I would just like a canonical issue that we create and we own, and I'd be happy to execute on it, and put resources to execute on it, to fix the end-to-end tests that were the biggest troublemakers. We did that last cycle, and we'll do it again this cycle. So if there are known ones, I'm happy to put resources on it. Okay.
A: So I'm gonna flip through these here. I'm not gonna try and present everything, but I'm gonna try and squash some of this down to like 10 to 15 minutes for the community meeting. Hopefully y'all can see the screen; it says "SIG Testing update". Okay, cool. So, like I said, I wrote this September 14th; much of it might be out of date right now, so please call me on that. Our Slack channel is friggin' amazing; this is what life in SIG Testing is like, kind of, on a daily basis.
A: I want to give credit for all this stuff, because everybody here has done some amazing work. So, Boskos: Boskos is this awesome thing that basically allows you to... at the moment we're using it to lease GCP projects for a given job, instead of having to maintain a single GCP project per job that we're running; it maintains a pool of infrastructure. Adoption of this sort of kicked up pretty quickly over the summer.
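The leasing flow described here looks roughly like the sketch below: a job acquires a resource from a pool before running and releases it afterward. The server address, resource names, and exact endpoint parameters are illustrative of the general acquire/release pattern, not a verbatim copy of the Boskos API:

```go
// A rough sketch of pool-based project leasing, with invented endpoints.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

const server = "http://boskos.example" // hypothetical server address

// acquire asks the pool for a free resource of the given type and marks it busy.
func acquire(rtype, owner string) error {
	q := url.Values{"type": {rtype}, "state": {"free"}, "dest": {"busy"}, "owner": {owner}}
	resp, err := http.Post(server+"/acquire?"+q.Encode(), "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("acquire failed: %s", resp.Status)
	}
	return nil // the response body would carry the leased project's name
}

// release returns the resource to the pool so another job can lease it.
func release(name, owner string) error {
	q := url.Values{"name": {name}, "dest": {"dirty"}, "owner": {owner}}
	resp, err := http.Post(server+"/release?"+q.Encode(), "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := acquire("gce-project", "pull-kubernetes-e2e"); err != nil {
		fmt.Println(err)
	}
	_ = release("k8s-jkns-e2e-gce-1", "pull-kubernetes-e2e")
}
```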
A: One of the things I liked about it out of the box was that there's sort of a dashboard where you can pretty quickly see how it's doing, and generally whether or not it's healthy. If I zoom out a little bit: there have been some times where Boskos has had some issues. So generally, if I start to see a bunch of issues in PRs where, like, the cluster has trouble coming up, I come here and check whether the graphs are sort of spiking up or down in an odd direction.
A: That's a sign that something is off. This was something we had a presentation on in SIG Testing a little while ago; links, slides, design doc, all that good stuff. Gubernator: super awesome. As an example of Gubernator at work, I'll just sort of walk through some of the new features here. So, hey, there's a test failure; if I want to know more about how that test failure happened, I can click...
A: ...this link here, to see standard out and standard error from the test itself, instead of having to click through to the raw build log and sift through all sorts of stuff. I can also tell that in this particular test run, 333 tests passed and 2 failed. If I want to know which tests passed specifically, I can go down to the folded lists here and see which tests were skipped and which passed.
A: So we're working on making this more reusable; if I have time, I'm actually going to try and demo this stuff. Cole created a robot called the Issue Creator, which is basically replacing three of what are called "mungers"; mungers are things that sweep through GitHub and do stuff. So we basically removed a couple of mungers, and this is the thing that's responsible, for example, for creating issues for clusters of failures from that triage dashboard I linked a little while ago.
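For readers unfamiliar with the munger pattern, here is a toy sketch of the idea: a robot that periodically sweeps GitHub state and acts on it. Everything here (the Issue type, the fetch and act callbacks) is invented for illustration; the real mungers lived in kubernetes/test-infra and kubernetes/contrib with far more machinery:

```go
// A toy model of a "munger": sweep GitHub state, act on each item.
package main

import "fmt"

// Issue is a stand-in for the GitHub issue data a munger would fetch via the API.
type Issue struct {
	Number int
	Title  string
}

// sweep fetches the current state and decides what to do with each item.
func sweep(fetch func() []Issue, act func(Issue)) {
	for _, issue := range fetch() {
		act(issue)
	}
}

func main() {
	fetch := func() []Issue {
		// In reality this would page through the GitHub search API.
		return []Issue{{Number: 42, Title: "e2e flake: Job should run a job to completion"}}
	}
	act := func(i Issue) { fmt.Printf("munging issue #%d: %s\n", i.Number, i.Title) }
	sweep(fetch, act) // the real mungers run this in a loop on a timer
}
```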
A: kubetest is basically the canonical way of running Kubernetes tests. It's the thing that lets you stand up a cluster, test the cluster, tear the cluster down, and get results from it. For those of you who are familiar with hack/e2e, this is sort of the replacement for that. It's coded in a way where you can have different deployers for different cloud providers: we have a kops deployer, we have a GKE deployer, we've also recently added support for triggering the node e2e tests through this, and there's also been support added to stand up multiple kubernetes-anywhere clusters, in an effort to use this as the canonical way of doing end-to-end testing for Federation.
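The pluggable-deployer idea is easy to picture in code. The sketch below approximates the lifecycle kubetest drove around this time (Up/IsUp/TestSetup/Down); treat the exact method set and run flow as an illustration, and check kubernetes/test-infra/kubetest for the authoritative definition:

```go
// Sketch of kubetest's pluggable deployer lifecycle.
package main

import "fmt"

// deployer abstracts "how do I get a cluster" so one driver can run
// kops, GKE, node-e2e, kubernetes-anywhere, etc. through one lifecycle.
type deployer interface {
	Up() error        // create the cluster
	IsUp() error      // verify the cluster is reachable
	TestSetup() error // anything tests need before running (kubeconfig, etc.)
	Down() error      // tear the cluster down
}

// fakeDeployer is a stand-in showing the lifecycle the driver walks through.
type fakeDeployer struct{ name string }

func (d fakeDeployer) Up() error        { fmt.Println(d.name, "up"); return nil }
func (d fakeDeployer) IsUp() error      { fmt.Println(d.name, "is up"); return nil }
func (d fakeDeployer) TestSetup() error { fmt.Println(d.name, "test setup"); return nil }
func (d fakeDeployer) Down() error      { fmt.Println(d.name, "down"); return nil }

// run mirrors the up/test/down flow: stand up, test, always tear down.
func run(d deployer, test func() error) error {
	if err := d.Up(); err != nil {
		return err
	}
	defer d.Down()
	if err := d.IsUp(); err != nil {
		return err
	}
	if err := d.TestSetup(); err != nil {
		return err
	}
	return test()
}

func main() {
	_ = run(fakeDeployer{name: "kops"}, func() error { fmt.Println("running e2e"); return nil })
}
```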
A: The thing to sync labels is getting passed. Planter is a really cool thing: eventually we're going to live in this brave new world where Bazel is used to build everything for Kubernetes, and it will be the thing that manages all of the dependencies, instead of having to make all the dependencies inside of a Docker container. But right now we're still in an in-between place with Bazel.
A: We did some awesome stuff with metrics; gotta thank Cole for this. He basically went through and recomputed how we look at flakiness, on a per-commit basis and a per-job basis. One of the things I really want to draw your attention to is this table here, which shows the flakiest PR jobs over the past week. So I can tell right now that the e2e GCE etcd3 job is the flakiest job, and the test in it that flaked the most is "Job should run a job to completion" something-or-other.
A: But, oh cool, look: there's a synopsis right in the test name. So right now, if I want to be the most helpful developer and fix the flakiest test in the flakiest job, I would want to go start looking at this particular test; if I wanted to work on the second flakiest test of the flakiest job, I'd probably want to take a look at this kubectl client test case.
A: Okay, so this is pretty cool. You can also sort of see that right now, as of today or recently (the stats don't look quite right here, unless our jobs are really that awesome), but, you know, for example: a single commit has a 12.3 percent chance of hitting a random job failure that has nothing to do with that commit, and a 2.6 percent chance for any single job.
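The arithmetic behind those two numbers is worth spelling out: if each required presubmit job flakes independently with probability p_i, the chance a commit hits at least one spurious failure is 1 minus the product of (1 - p_i). The sketch below uses made-up per-job rates to show how roughly 2.6% per job, across five jobs, compounds to roughly 12.3% per commit:

```go
// Compounding independent per-job flake rates into a per-commit rate.
package main

import "fmt"

func main() {
	// Hypothetical flake probabilities for five required presubmit jobs.
	perJob := []float64{0.026, 0.026, 0.026, 0.026, 0.026}
	allPass := 1.0
	for _, p := range perJob {
		allPass *= 1 - p // probability this job does NOT flake
	}
	fmt.Printf("chance a commit hits at least one flake: %.1f%%\n", (1-allPass)*100)
	// Output: chance a commit hits at least one flake: 12.3%
}
```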
A: I want to give it up for this one: we are deprecating mungegithub! No more mungegithub, only Prow, please! Well, alright; we are still working a little bit on making sure that it's operationally friendly, since it is the basis of our merge automation. So, thank you. Right now we've sort of expanded its ability to export metrics. We recently split the mungers up; well, not that recently, I guess, back at the beginning of the month we split the mungers up, so there's one that's responsible for running just the submit queue, and another one...
A: ...that's responsible for running, like, the approve handler and the release-note thing and all the other things that we're looking to eventually replace. And we have a bunch of GitHub metrics you can look at; this is sort of the dashboard that we can use to make sure, for example, our GitHub API token usage is okay. You can see, sort of: this is what it looks like when you come out of code freeze, this event here.
A: It can go put comments on the specific lines of code that are failing lint checks. You can /hold a pull request, and the bot will automatically apply a do-not-merge label; you can also just automatically trigger this, if you'd like, with the work-in-progress thing. The mungegithub plugins have gotta go away; I want to make sure that's documented.
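For a feel of how a comment-triggered plugin like /hold works, here is a condensed sketch: match a command in a new comment, then add or remove a label. The regex and the exact label name are illustrative; the real plugin, in kubernetes/test-infra's Prow plugins, has considerably more plumbing:

```go
// Minimal model of a /hold-style comment command handler.
package main

import (
	"fmt"
	"regexp"
)

var holdRe = regexp.MustCompile(`(?m)^/hold(\s+cancel)?\s*$`)

const holdLabel = "do-not-merge/hold" // label name may differ in the real plugin

// handleComment decides what label change a comment implies, if any.
func handleComment(body string) (action, label string) {
	m := holdRe.FindStringSubmatch(body)
	if m == nil {
		return "", ""
	}
	if m[1] != "" { // matched "/hold cancel"
		return "remove", holdLabel
	}
	return "add", holdLabel
}

func main() {
	for _, c := range []string{"/hold", "/hold cancel", "lgtm"} {
		action, label := handleComment(c)
		if action == "" {
			fmt.Printf("%q -> no label change\n", c)
			continue
		}
		fmt.Printf("%q -> %s label %s\n", c, action, label)
	}
}
```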
A: We've also started to turn on Slack events. The one that happened when I wrote this was that we start pinging Slack whenever somebody manually merges a commit; so hopefully lavalamp was the build cop, or had a really good reason to do this. But if it looks strange and out of the ordinary, now we know who to go talk to, because ideally everything is getting merged by just one bot. Some other cool stuff: Deck has a nice log URI that's pretty easy to guess and super discoverable, and we're getting to the point where you can stream logs from Deck.
A: I think that's actually something I want to remove, because we're working more on it from a 1.9 basis. You don't have to hard-code bot names anymore when you're launching a Prow job, and, yeah, all sorts of stuff. I'm running way past time here, but I would like to clean this up. And Prow is ready for you to use too, because we actually have clean docs and clean plugins; it's basically common sense.
A: The architectural boundaries seem to be working out pretty well, and it's just become a much better experience. And then the Testgrid thing; I think most of the community knows about this now. There's this presubmit dashboard that I'm sort of looking at, instead of Gubernator now, to see whether or not things are going okay, and they are, for the most part. This is how I noticed that the kops AWS job wasn't working when I turned it on this morning; I was like, oh look...
A: It was working here and failing here, and oh look, that bottom box there is the thing that changed. And so it turned out it was the test-infra change. Good troubleshooting tool. And the summary tab is really great for just a quick glance: are all of the presubmits okay, without me having to look across a ton of different pull requests? And if I'm tired of looking at that in table form, I can also look at it in little-box form and see that, hey, it's mostly green. That's cool.
A: Let's see, triage is cool. I probably don't have enough data to see the thing anymore... oh, it's there occasionally; that's nice. Triage is awesome. Who owns conformance? We've had this question come up a little bit. I lurk around in the CNCF Conformance Working Group, and the idea is that SIG Architecture owns conformance, because conformance defines what a Kubernetes is; it's going to be used to define what a certified Kubernetes is. So with that, I'm gonna hand it over to Tim St. Clair.
D: I won't keep it long; I'll just give a TL;DR. The CNCF conformance group is moving ahead with getting a certification process in place around the current conformance tests, and NEC have actually done a very good job of doing detection of API coverage of the conformance area: what APIs are being covered, and what APIs are not being covered. And, believe it or not, we actually have really low API coverage; a lot lower than you'd expect. Shocker.
D: So what you'll probably see in the coming months ahead is a bunch of churn on the actual tests themselves. We'll try to manage the churn in a meaningful way, and make sure that we do it in a slow, mindful, and judicious manner, but I highly suspect that the e2e conformance suites will be beefed up; for the most part, they'll have more and more and more tests added to them.
D: But we should also do that in a mindful manner, because there are a bunch of known gotchas that can exist in a test suite for a long time, such as waiting on hard-coded variables, or hard timing dependencies, and all kinds of other issues. So that's the brief synopsis; I'll pause for questions real quick.
D: It was in chat; a talk with a guy from NEC. It was in chat, and I'll post a link to it in the notes. That would be great, yeah; it was like 20% or something, yeah. It's really super long, and there's also a bunch of information with regard to the CNCF group wanting to put metadata inside of the tests so that the suites are readable, because a lot of the e2e tests are kind of like "Job does stuff".
D: You know, it's hard to decipher unless you've actually written the test, or you go and grok through the code. So they're putting some machinery in place to annotate the tests, and another tool which will be able to rip through the annotations into a human-readable, parsable thing about what the test does and why it does what it does.
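To make the annotation idea concrete, the sketch below shows one plausible shape for such metadata: a structured comment next to each conformance test that a tool greps out and renders for humans. The field names and regex here are invented for illustration; the actual format is whatever the PR mentioned next specifies:

```go
// Extracting hypothetical structured comments from test source.
package main

import (
	"fmt"
	"regexp"
)

// A test file would carry a block like this next to each conformance test
// (field names are illustrative only):
const src = `
// Testname: pod-lifecycle
// Description: A Pod is created and expected to reach the Running phase.
`

var re = regexp.MustCompile(`// (\w+): (.+)`)

func main() {
	// The annotation tool just scans the source for these comments and emits
	// a readable summary, so nobody has to grok the test code itself.
	for _, m := range re.FindAllStringSubmatch(src, -1) {
		fmt.Printf("%s = %s\n", m[1], m[2])
	}
}
```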
D: There's a PR; I can link the PR. It actually was in the chat channel, so I put a cross-link in and CC'd the group. So if you're in SIG Testing, there is a message mentioning this PR, which is linked above.

A: Cool, thanks. That issue outlines the metadata format that they want to use. So if you see stuff, please @-mention the SIG; that bot's really helpful for notifying other folks in the SIG. I don't know who turned that on by default, but I like it.
A: It's burndown week right now, but I'll put it on the agenda for us to discuss next week. If anybody has any feedback on things I wildly missed, misstated, or focused too much on during that brief walk through those slides, please ping me. Like I said, assuming the release goes well tomorrow, I'm gonna try and spend some time polishing this up and walking through some actual examples of, like, the /hold and work-in-progress functionality, and some of the things that Eric suggested folks try out. I might be tempted to...