From YouTube: 20190804 sig testing commons
C: So I have a topic I had just failed to bring up. I want to have a way to get progress reporting from the framework, since runs can be multiple hours long, especially if you run more than just the conformance tests. The way I have envisioned doing that is via a webhook. Ginkgo already has this idea of custom reporters, which get invoked before a test runs, after a test runs, all that sort of stuff, and it has a ton of information that a human would really like to know.
C: So I want at least to be able to fire off this information to a webhook that the user can configure, probably via something like an environment variable. My personal intent is to use that for Sonobuoy, but the webhook implementation will be generic, so even if you just want to run a local web server that intercepts those events and prints the data, that would be even more useful than a lot of the log output.
A: I'm just thinking it seems like a product of Ginkgo, right, because only Ginkgo would know the full suite of tests that you're running; the framework would have no idea. So passing in a webhook sounds fair. Right now I have it, but I always knew that doing the custom reporter would be the best way to get progress. Yeah.
C: I mean, that was one thing: right now we just have logs as the default, and even though logs are good to have, at least as far as what's being printed out to standard out, I don't know if that should be the default. That's, I think, a bigger and separate change, if that's what you were meaning.
A: If there was some type of metadata in the log output that actually gave you status, because that's a problem we currently don't solve. The problem with progress is that it's going to be relative: you're going to see that you're executing X number of tests and you're X tests done. It'll give you no idea of how long it'll take, right?
C: Sure, sure, right, but it's a step in the right direction. And even if you only had one test left, there's no way to know objectively: is it a five-minute test? Is it a scale test that's going to take an hour? So I think the benchmark needs to be just the number of tests done, that sort of stuff. How many are we targeting? How many have we finished?
C: How many have passed and failed. And I'm fine with also emitting that in the default reporter, or sending this sort of summary information to standard out or the logs; I can integrate that too. But one way or the other, we have to have that custom reporter, custom logic, to print what we want. Yeah, agreed. Okay, so you'd send it not just over the webhook, but also in the logs.
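A minimal sketch of what the progress webhook being discussed could look like, assuming a hypothetical JSON payload (the `ProgressSummary` fields and the `PROGRESS_WEBHOOK_URL` variable name are illustrative, not Sonobuoy's or Ginkgo's actual schema). A custom reporter would POST a summary after each spec completes; a local HTTP server, standing in for the user's configured endpoint, just decodes and prints each report:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
)

// ProgressSummary is a hypothetical payload; the real schema would be
// whatever the framework's custom reporter chooses to emit.
type ProgressSummary struct {
	Total    int `json:"total"`    // tests targeted in this run
	Finished int `json:"finished"` // tests completed so far
	Passed   int `json:"passed"`
	Failed   int `json:"failed"`
}

// format renders the counters a human actually wants to see.
func format(s ProgressSummary) string {
	return fmt.Sprintf("progress: %d/%d done, %d passed, %d failed",
		s.Finished, s.Total, s.Passed, s.Failed)
}

// post sends the current summary to the webhook URL, typically read
// from an environment variable so the user can configure it.
func post(url string, s ProgressSummary) error {
	body, err := json.Marshal(s)
	if err != nil {
		return err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// A local "interceptor" server: decode and print each progress report.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var s ProgressSummary
		if err := json.NewDecoder(r.Body).Decode(&s); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Println(format(s))
	}))
	defer srv.Close()

	url := os.Getenv("PROGRESS_WEBHOOK_URL") // hypothetical variable name
	if url == "" {
		url = srv.URL
	}
	// In a real reporter these posts would fire as each spec completes.
	_ = post(url, ProgressSummary{Total: 3, Finished: 1, Passed: 1, Failed: 0})
	_ = post(url, ProgressSummary{Total: 3, Finished: 3, Passed: 2, Failed: 1})
}
```

In Ginkgo itself, the natural place to trigger such a post would be a custom reporter's per-spec completion hook, with the suite totals coming from the suite summary.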
D: Because it also tracks transitive dependencies, which is why it's so easy to lose track. So it looks like a lot, but from my experience, what ends up happening is that there's usually one import somewhere that pulls in ten different things. So if you pull that one import out, you can usually go through that list pretty fast, or faster than you'd expect.
A: He kind of does things on his own time, against whatever agenda he decides, which is a little frustrating at times, because we get PRs that are unrelated to the work that we're doing. I don't know if anyone's talked with him at all; because he's in Japan, the Europeans would have a better chance to talk with him, maybe.
A: What he's doing is adding a lot of restrictions on style inside of the PR-blocking jobs. I don't necessarily think that's a requirement, because we may want to burn all this to the ground at some point, and if you have 12,000 style checkers and you burn it to the ground, well, of course it's going to break all over the place. He's going to spend more time updating all the different associated style checkers than you will actually making the code refactor.
C: Jordan Liggitt kind of took over because he was farther along on an implementation. It seems like he had a separate set of conversations and had resolved to just say we will remove most of that logic and only error if we have things like these out-of-memory taints and those sorts of known bad-state issues, and it seemed like you disagreed with that. But now I feel like I'm playing middleman between the two of you, so I'm not sure what it's going to take for us to agree on what we...
C: And then there was also some question: Jordan Liggitt had suggested we add some sort of flag to say, even if, let's say, somebody is having this problem, we should have a way to flip off this behavior entirely and just start up and run the tests, not caring about anything at all, just screw that, okay? Because, you know, it was more difficult than that to implement.
C: Because it can make sense at a test-initialization step to say, okay, we'll just start the test, don't worry about any of this waiting logic. But there's a bunch of tests which will do something destructive and then call into that code path that says wait until I'm in a testable state again, and so if they provide this flag that ignores the startup initialization stuff, it makes, I think, an unknown number of tests really awkward. Or maybe I don't...
C: It's the same one; I mean, it's just the pull request. Yeah, I think the discussion is all there, and I mean I had an implementation that already had that. But a lot of it was from offline conversations, because it was just an awkward thing: I started it, he took over, it fell to his backburner, so I took over, and so there were all these Slack handoff notes. Okay.
A: Well, let me take a look at it, and if there's more history here than this issue, please link it and then I'll take a look. But I do think we need to get this in place, because this is a problem for a number of deployment scenarios where people deploy in fully tainted environments from the beginning. A good example, and a common use case too, is cloud providers as well; by default they configure everything to be tainted on entry, and I...
G: Quick question: we're going through and trying to generate metadata on all of the tests, and one of the things we found is, well, two things. If there are tests that use string interpolation at the It level, we cannot promote them to conformance. It's particularly used in patterns where they have a variable tree and, on the innermost loop, interpolate those variables into the end of the string. So, two things there: we're trying to find ways to walk the code tree and get as much metadata as possible.
G: Some things we don't have yet are the nested Describes and Expects inside the Ginkgo language, so that we can clearly identify the full string that we use inside of TestGrid and APISnoop to identify a test, to be able to link it to a specific test within the source code. Does anybody have some experience looking through the AST, or finding ways where we can generate more metadata than what we have in the comments field? I think, I think this name...
C: Yeah, so, unfortunately it doesn't provide you with the code block for it. However, you could still leverage that to at least get the call site and the full descriptor, and then use that in combination with a custom script to go parse the AST and get the comments. That's the best thing I could think of. And with the string interpolation, I know the frustration you're having, because even walking the AST you have to handle kind of an infinite number of, like...
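As a sketch of the AST-walking approach being discussed (the source snippet and function names here are illustrative): Go's standard `go/parser` and `go/ast` packages can find Ginkgo-style `Describe`/`It` call sites, record their positions, and distinguish plain string-literal descriptors from interpolated ones (e.g. built with `fmt.Sprintf`), which are the ones that block promotion to conformance:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// src stands in for a test file; in practice you would parse the real
// e2e test sources from disk.
const src = `package e2e

func init() {
	Describe("Pods", func() {
		It("should be created", func() {})
		It(fmt.Sprintf("should run on %s", nodeName), func() {})
	})
}`

// descriptor extracts the first argument of a Describe/It call: the
// literal text if it is a plain string, or ok=false if interpolated.
func descriptor(call *ast.CallExpr) (text string, literal bool) {
	if len(call.Args) == 0 {
		return "", false
	}
	if lit, ok := call.Args[0].(*ast.BasicLit); ok && lit.Kind == token.STRING {
		return lit.Value, true
	}
	return "", false
}

// collect walks the parsed file and reports every Describe/It call
// site with its descriptor (or a marker when it is interpolated).
func collect(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "e2e_test.go", src, 0)
	if err != nil {
		panic(err)
	}
	var out []string
	ast.Inspect(f, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		id, ok := call.Fun.(*ast.Ident)
		if !ok || (id.Name != "Describe" && id.Name != "It") {
			return true
		}
		if text, literal := descriptor(call); literal {
			out = append(out, fmt.Sprintf("%s %s at %s", id.Name, text, fset.Position(call.Pos())))
		} else {
			out = append(out, fmt.Sprintf("%s <interpolated> at %s", id.Name, fset.Position(call.Pos())))
		}
		return true
	})
	return out
}

func main() {
	for _, line := range collect(src) {
		fmt.Println(line)
	}
}
```

Building the full nested name that TestGrid uses would mean maintaining a stack of enclosing `Describe` descriptors while walking; and, as noted in the discussion, this only gets you the call site and descriptor, so comments would still need a second pass with `parser.ParseComments`.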
A: So, in the early days it would take a while to get PRs merged; Onsi is not very responsive because he's pretty busy doing stuff, and he's pretty much the sole owner and maintainer of that repo. Now, he does merge the PRs eventually, or address them; it just takes a while. So you've got to factor that into the equation.
A: One thing I want to do is maybe table this and talk about, take a look at, Hippie's document, and maybe we should evaluate, you know, if we could have a pony, what the pony would look like: what's the horn look like, what's the mane look like. Because I think, in the ideal stage, instead of trying to make Ginkgo do all the things, maybe we can abstract Ginkgo away to the point where we could eventually use the framework to call into some other suite.
G: I've got a prototype PR, that's very ugly, that I think is a little bit of the opposite. It still uses a test binary, but it adds a godog flag, so that instead of running our current suite it runs the godog suite, and I rewrote some of the loading functions so that, when running godog, it only initializes the non-Ginkgo-specific parts of our framework. But I ran into some pain points. I'm...
A: If anything wraps the framework, that's the best mentality; I mean, the framework was written to wrap Ginkgo behaviors for the most part, and if there are things where we can wrap it and make it a little bit cleaner, that's primarily the goal with this group anyways: we're trying to abstract away the dependencies currently, just so we can actually vendor it in a clean way.
A
That
is
a
that
is
a
it's
a
huge
problems
that
the
community
needs
and
wants
to
have
solved.
The
community
needs
a
framework
that
they
can
leverage
that
basically
knows
kubernetes
and
without
fat.
They
basically
recreate
the
universe
in
a
terrible
way
or
they
vendor
in
everything
and
which
is
also
just
as
bad.
So
this
is
needed
by
both
the
cane
communities
developers,
but
also
the
ecosystem
and
developers.
Well,.
G: It might be useful for me, I know we're talking about the document that describes our pony, our favorite pony world, but to know what it is that our current framework provides, the framework, not Ginkgo, that we want to keep, and what I mean is what it provides to the people writing the tests, what they expect; like, there's the namespace deletion. That is another issue.
G: I want to see if the namespace deletion can be sped up by using etcd 3, and just knowing that we provide namespace deletion would be great. But I don't know if there's a place where that's documented for the community; is it worthwhile creating another document to make sure that information is gathered by people who understand it and have written that framework? Well...
A
You
kind
of
conflated
two
separate
things.
The
namespace
deletion
is
a
separate
problem
which
we
leverage
inside
of
the
test
framework
as
a
way
to
get
to
reduce
collisions
in
the
test
and
have
parallelization
so,
but
that's
a
super
useful
thing
that
we
should
fix.
Anyways
deserve
right,
you
haven't.
Do
you
have
that
issue
logged
because
Clinton
said
he
was
gonna.
Take
a
look
at
her
resources,
I.
G: On the largest of clusters, if the namespace still has things in it, it will consistently take around 20 to 25 seconds, and I think there was something in etcd 3 where, instead of looking at the objects that are contained within it, it just ensures they will be deleted and returns instantly, or fairly instantly, that yes, the namespace will go away.
G: There were two options: one was to queue up and delay the broken namespace deletions until later, and the other one was to go fix the namespace deletion itself so it doesn't have the... I didn't, I don't think we created the one for actually going and fixing the namespace deletion; the one you can go for is "please fix the testing framework to delay deletions and clean up at the end of the entire run." Not that, that's just, I mean.
C
You
keep
changing
the
way
namespaces
are
deleted,
definitely
seems
out
of
the
mission
statement
of
this
group
so
like
who
we
need
to
go
talk
to
because
I'm
sure
there's
some
trade-off.
If
we
change
something,
it's
gonna
speed
it
up,
but
there's
gonna
be
some
downside
right
or,
if
not
like.
Why
don't
we
just
kick
the
issue
to
that
group
and
fix
it.
A
Yeah,
because
it's
API
machinery-
and
is
this
oldest
time
it
is
it's
unnecessarily
complicated
as
all
hell.
So
that's
the
problem,
and
so
we
kicking
things
to
another
group.
Just
basically
means
like
yeah,
you
put
another
rock
on
a
rock
pile,
we'll
get
to
it.
You
know
when
the
heat
death
of
the
universe
happens
so.
A: Well, there's two parts: part of the testing framework blocks forever, even if the namespace is in the deleting state, and part of it should be that, after a period, like when the suite has ended, there shouldn't be 500 namespaces hanging around saying "deleting," right? One would hope that by the time the suite has ended, you would force-delete all of them. Mm-hmm.
G: No, we were talking about delaying looking at what was in the document until people get a chance at it. It's not too long at this point; might it not be nice just to get the feedback on what's there now, so that as I'm tuning and looking at framework options I have the feedback from this meeting? But that's okay. It's...
G: And could it be component config, something that we've started using on new features so that we know whether they're enabled or not? Because part of this, a pain point for us, is discovering whether a particular endpoint, parameter, or property is available to be tested, because it's not... it's like, enabled, yeah.
A
The
Kaukauna
config
should
tell
you
if
it's
enable
or
not,
but
it's
not
it's
not
uniformly.
It's
not
done
it's
a
very,
very
great
alpha
where
it's
a
alpha
work
in
progress
and
we're
constantly
changing.
So
that
falls
underneath
the
auspices
of
sequester
lifecycle
and
folks
are
actively
working
on
it.
There's
a
separate
working
group
to
address
that
particular
set
of
problems,
but
the
I
wouldn't
I,
don't
think
we
can
rely
on
it.
Yet
it's
not
there.
G: And that's currently all defined, as far as the test goes, in just that string. But we've started adding other metadata inside the comments, and I'm just trying to figure out which ones we actually want out of a greenfield, and, mapping those existing things that we currently depend on, how they would map to... I...
C: Like, you would actually have to run the test, right? Yeah, yeah, that's the thing. To analyze the entire test suite, you can't really do it, because some things are just going to be broken; they're not going to run, the precheck is going to fail, so it's never going to run it. So, yeah, for two thousand tests it's just impossible; with three thousand tests it's kind of impossible to do. And then, like you said, you hit all those other AST-walking problems, with deeply nested things or things with code or string interpolation.
A: A lot of these things come down to, like, having well-defined suites. The next one is selection based on metadata; that, to me, sounds just like a suite, right? Yeah. Yours sounds kind of like a suite too, because you could more quickly organize things if you actually had a suite structure that made sense. Well...
G: I've been looking at that a bit closer and getting very intimate with all of what we have available in the swagger JSON to query in any form, and we do have some parameters, but we don't have sequences of operations tying things together to do those sequences. Those are things that we're going to... that are...
G: It is primarily focusing on Aaron's request of: I want a file, I want one group of people to promote the behaviors, and other people promoting the tests, and I'm reviewing in between, because some of that's not explicit in a BDD framework. It's definitely not the way that we have it currently, I...
C: I've only seen a few versions. There's been one that I saw that was actually put in as a serial test; it seemed almost like it was to be safe, like, we're doing a lot of an operation and it's flaky if it's run in parallel, but again, as things got better at scale, I said that probably wasn't necessary. And then the other ones, I mean, are just all the destructive tests, right? Those have to be run serially.
G: We've got this "what we would want out of our framework." It might be useful, in at least some instances, to point to several frameworks that do it in a similar way to what we're describing, just so that, since we're creating a new vocabulary, I make sure we're talking about the same thing.
A: Yeah, I think we should come up with a lexicon in this document. Usually, if you look at some other specs, they have a glossary or a lexicon listed at the top. You kind of created your own document format here, whereas if you look at the examples in the enhancements repo, there are plenty of ones that outline a well-defined structure for how we do proposals within the project, and a lot of them actually refer to a section where they have a lexicon at the top.
A: So I've got a bunch of action items to file issues on. Let me just list them out here: one is to take a look at John's issue there with whitelists, or the PR; folks on this call need to review Hippie's doc, and we'll have a more informed discussion. And I need to create an issue with regards to namespace deletion, an umbrella issue with namespace...