From YouTube: 20190702 sig arch conformance
A: So the only agenda item I had listed here was to discuss John's proposal. Before I get into that: we met a couple of weeks ago, right after the last session. We had a grooming session where we went through the backlog and did a lot of refinement, and I did another pass recently of things that overlapped with this and cleared out a bunch of those items.
B: I don't seem to have the capacity of imagination to understand, logistically, how it's going to work. I'm kind of hoping maybe some time away from the computer will help me imagine the logistics better. But my only interest in this is that hopefully the use of more structured metadata, as represented by YAML, will allow us to break up this process. That's right, right!
B: Right now, the same person has to simultaneously decide whether or not this is a behavior that should be conformant, and then also whether or not the test as written exercises this behavior with the appropriate hygiene we expect of conformance tests. We want to split those up, but I don't entirely understand the mechanics of how this will help us split that up. Like, I can see that, okay, great, we'll have a YAML file that enumerates a bunch of behaviors, and we can, off over here, go through the churn of getting the right people to identify if those are the right behaviors. I'm still having difficulty sort of tying the tooling together to make sure that we still have, like, a good gate in place to ensure, or enforce, that when a given test... you know, the behaviors are maybe being tagged by IDs.
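(A minimal sketch of what such a behavior file and a tooling-side representation of it could look like. The schema, field names, and IDs below are hypothetical illustrations, not the format from John's proposal.)

```go
package behaviors

import (
	"fmt"

	"gopkg.in/yaml.v2" // any YAML parser would do; this one is assumed for illustration
)

// Behavior is a hypothetical record for one conformance behavior.
type Behavior struct {
	ID          string `yaml:"id"`          // stable ID that tests would reference
	Description string `yaml:"description"` // human-readable statement of the behavior
}

// Suite groups behaviors for one API area.
type Suite struct {
	Suite     string     `yaml:"suite"`
	Behaviors []Behavior `yaml:"behaviors"`
}

// example is illustrative only; the IDs and wording are made up.
const example = `
suite: pod
behaviors:
  - id: pod-basic-create
    description: A Pod can be created and reaches the Running phase.
  - id: pod-basic-delete
    description: A deleted Pod is removed from the API within its grace period.
`

// Load parses the example behavior file.
func Load() (*Suite, error) {
	var s Suite
	if err := yaml.Unmarshal([]byte(example), &s); err != nil {
		return nil, fmt.Errorf("parsing behavior file: %v", err)
	}
	return &s, nil
}
```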
B: I think of it in the same way where, like, we kind of have... API changes kind of need to go off to a special set of people, even though those changes could live sort of anywhere in our code base. I'm trying to imagine how we would do something similar for conformance tests. You know, one thought is we suggest that people move their tests out of the SIG-owned packages over to, like, a conformance directory, and we get approval that way, but that just doesn't feel right off the top.

B: I think, in an ideal world, being able to throw a file at a cluster, wait a bit, and then see behaviors afterwards sounds great. I don't know if that's really how the majority of our conformance tests work right now; maybe John has done a better survey of all of this than I have. So I'm kind of interested in how... I want to see us try that. I would love for my skepticism to be proven wrong.
A: So there's a piece that's missing for me that I wanted. We have a bunch of stuff inside of API machinery, inside of our types.go, whatever they are, across the different types, right? And what I envisioned was some type of annotation, a well-formed annotation, such that when you do some type of "generate", whatever that may be, it would actually go through and create some output, or set of output artifacts, and this would be one of them... you know, where one of them could be that behavior.
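(As an illustration of the annotation idea: Kubernetes types.go files already carry machine-readable markers such as +optional. The +conformance:behavior marker below is a hypothetical convention invented for this sketch, not something from the proposal.)

```go
package v1

// PodSpec here is a trimmed-down stand-in for a real API type. The
// +conformance:behavior markers are hypothetical annotations that a
// code generator could scan to emit behavior entries as one of its
// output artifacts.
type PodSpec struct {
	// RestartPolicy for all containers within the pod.
	// +optional
	// +conformance:behavior=pod-restart-policy
	RestartPolicy string `json:"restartPolicy,omitempty"`

	// ActiveDeadlineSeconds bounds the pod's lifetime.
	// +optional
	// +conformance:behavior=pod-active-deadline
	ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty"`
}
```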
A: So this is a comment I made in the doc, and his comment was, regarding the metadata, that we could actually get pretty far without it. And I'm like, isn't the purpose of this to be automation, for us to be able to do that? And so I was a little... I don't quite know the answer to this, and hopefully we can send John the recording and he can help illuminate.
B: ...looks pretty sane. Or we lock a bunch of humans in a room and shake them until the YAML comes out of their brains, and then we sort of understand that, oh yeah, that's sort of a good corpus of behaviors that we should write tests to. Either way, we have this pile of YAML; I then want to understand the logistics of tying all of our... you know, what tests do you need to write to satisfy these behaviors, that sort of thing. I feel like maybe you're approaching it from...
D: Look, I have a question too, and sorry if this was answered in the minute or two I was late, but there were kind of two sets of tests. There were the kind that were just, like, the CRUD-based API stuff, which seems like you just consume the existing API to say all mutable fields can be edited. But then there was the extra layer, which was... he just sort of, I thought, hand-waved and said there will be more complex things and we'll maybe have to do those manually, and I...
A: I think you'd have to, somewhere in the pod API, enumerate the list of behaviors that you'd expect, and, like, either within the APIs themselves you'd have to say these are the guarantees of the behaviors, and annotate that inside of the types.go, or wherever the APIs are, right: these are the behavioral expectations of the APIs as well.

A: The way he mocks the test would allow anybody to write a test that has these behavior characteristics, but it wouldn't be feed-forward. So I think what he means is, like, you can get pretty far today just by mocking the existing test, and you might be able to go backwards from there, back to that generated YAML as well, if you have the specified mock, right? Because he's basically adding tags in a specific way to the tests that allow you to say: this is the behavior.
B: I didn't see it in a large part of the proposal; I just saw it in the portion called coverage tooling, which talked about how we would tie back, or signify, that... like, today, as written, using all of the Ginkgo stuff that we use, when we say ConformanceIt, how does it need to know that that ConformanceIt, that one, implements this behavior with the ID, whatever? And he was, like, let's just slap another tag with the ID. Right, yeah, that's...
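(For context: conformance tests today carry their metadata inside the Ginkgo description string, so a behavior-ID tag would be one more bracketed token. A minimal sketch follows; the behavior ID is a made-up illustration.)

```go
package e2e

import "github.com/onsi/ginkgo"

var _ = ginkgo.Describe("[sig-node] Pods", func() {
	// Today the description carries tags like [Conformance]; the idea
	// discussed here would append one more bracketed token per behavior,
	// e.g. [Behavior:pod-basic-create] (a hypothetical ID).
	ginkgo.It("should be submitted and removed [Conformance] [Behavior:pod-basic-create]", func() {
		// ... test body elided ...
	})
})
```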
B: And I just... I want us to stop leaning on tags, to the point where, like, we'd have to rename the test. Because apparently nobody here actually cares or bothers with the fact that, as intended by Ginkgo, this is supposed to be a human-readable description, right? We describe the thing and then we sort of say the different behaviors that it should do. And so we could change just the text of the It entirely to correspond to one of those IDs or something, but I think attempting to shove more metadata into tags is just bad news; it's bending this past the point of breaking. And that's just because tags are generally used to sort of categorize a whole bunch of things across a disparate set of tests, whereas here I think a behavior is going to be exercised by exactly one test. It'll just be a one-to-one mapping, and I don't want to create a couple hundred tags all over the place.

B: Then those fields have to be Release, Testname, and Description, and so Testname could be our stand-in for, like, the behavior ID or whatever. This was all ostensibly used so that we could generate docs of all of the conformance things, and again, I feel like that should probably just tie right back to these behaviors.
C: In addition to... we're talking about using the DSL that we currently have around Describe, and then Context, and It. We tend to put all of our test structure in the same place as those nested descriptions and then tagging information, rather than having a place where there are test functions that we utilize within those descriptions. And when we're talking about generating the test scaffolding, it's in a way where people can easily pick it up, and, based on the test description, the behavior will have an automated set of "please fill in the blanks here".
A: All right. So if I take a step back and try to summarize the feedback so far in some actionable way, which will be a total strawman: my particular thing that I don't like is... I really want a way to specify the auto-generation, with metadata from the types.go, that is used in all the tests, or just on, like, the CRUD tests. I want to be able to annotate and just have the generation do its business, right now.

A: Right, then from there some of the scaffolding could be auto-generated, but we don't have to use it. It could basically be stubs that we could use to fill in the details of what we needed to do to create it, wrapping this conformance It behavior. So I'd like actionable feedback that actually makes sense to people.
B: Yes, I think that makes sense, and maybe I'm just sounding like a broken record. It sounds like you're really interested in the automation piece that auto-generates as much as possible of the behavior files, and then the tests that are driven by those behavior files, whereas I'm interested in just sort of figuring out what the format of those behavior files should be and where they live, so that I can start to get to the business of tying our existing corpus of tests to those behavior files.

B: I'm thinking that these problems can sort of be solved in parallel. If we can agree on the specifics, on, like, the written format of this stuff and how to tie back to it, then we can have people going off and, you know, brainstorming a bunch of behaviors and manually creating those files, or we can figure out how to automatically get the right annotations in the right places and generate all of those files. But it's that decoupling that I'm interested in, again, right: the decoupling of "is this the right list of behaviors" versus...

B: ..."are these tests written the right way". And I view this as really moving us toward that, if we can... I just haven't quite figured out the mechanics, and hopefully I'll be able to brainstorm it, on topic, on the PR: how we could use the YAML file to get the pass of highly bottlenecked people to agree that, yeah, these are behaviors that we want, and they're all sort of, like, TBD, and then we can go off and have people identify tests that hit those TBDs.
B: You know, I feel like a lot of our generation approaches require a pretty significant commitment of time up front, and maintenance down the road. So just don't underestimate it. It's not intractable, for sure, and it has advantages, but you're trying to enforce things that we can all talk about in text but that are hard to do any other way.

B: Like, maybe help me understand. I have this skepticism, where I've looked at things in the past that attempt to, like, look at an OpenAPI spec and then try to auto-generate a bunch of CRUD tests or something based on that. That's the best that we can verify, like: are we sort of, you know, fuzzing all the appropriate fields and all the appropriate values?

B: Because, yeah, so CRUD against, sort of, the validity of the data didn't seem like it was going to be of that much value for this group, because we are sort of more interested in the behaviors. So I'm kind of... and maybe, Tim, you have some ideas too: like, how could we... how are we supposed to annotate these fields so we know what they do?
A: That was the missing piece for me. I didn't know the answer, so that was the one that I asked about on the KEP, and I think that's the one that's the blocker for me. It's like: how do I annotate the existing API types so that I can actually auto-generate the data that I want in a meaningful way? Because without auto-generation, without actually wrapping some portions of the API, somebody just needs to go through with a fine-tooth comb and do all this work.

B: Yes, and I'm trying to advocate that I think this approach allows both of those to happen in parallel, whereas the system we have in place does not, and that's why I want us to sort of try and land the specification and the tying of this stuff together: so that we could take a crack at auto-generating the stuff, and also we could go get the papyrus out with a fine-tooth comb and have a working session with the right people or something.
D: Yeah, I mean, when we talked about this, I think that makes sense to me, Aaron, in the sense that he said the CRUD tests for this group maybe don't matter that much, because nothing is getting to the conformance level with, like, the basic CRUD functionality not working. So how do you explain the more complex things? And in the document he just says you'd have a series of manifests and a series of conditions, and my question there was: that just sounds like a human-written test, and even a subset of our tests.

D: It's just a very simple "set something up, test something else", whereas we have a lot more complex things, with, like, tearing down the API server and then checking something, or then reinstating it; it's multiple steps. Sometimes they're actions, sometimes they're tests; sometimes they're API-driven, sometimes they're environment-driven. And I would worry that we're going to effectively kind of move the burden: we're not writing these tests, but we're effectively writing all those functions and then aliasing those in some sort of YAML document, and it's just sort of... it works.

D: It's still the same thing. We're still left with, now, the YAML document: how do we ensure coverage? How do we ensure it's doing what we want it to, and doesn't...? And so I felt like we were just kind of shifting where we were confused. But I do agree that the idea of having a separate document of "these are the behaviors" is just useful in its own right, and it sounds like that's what you're saying, and that you would find decoupling that from the tests useful, yeah.
B: The API coverage at a very chunky level, just path and verb, is one sort of map of the territory, and then we could make that map more granular by, sort of, figuring out, like, what are the different fields that we're setting in all these tests, to figure out how much of that we cover. But the actual territory itself is all of the behaviors, yes.
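(A small sketch of that path-and-verb "map" idea: counting which known endpoints are exercised by at least one test. The types and inputs are hypothetical bookkeeping; the real coverage tooling discussed in the proposal differs.)

```go
package coverage

// Endpoint identifies one API operation at the "chunky" path-and-verb
// granularity discussed above.
type Endpoint struct {
	Path string // e.g. "/api/v1/namespaces/{namespace}/pods"
	Verb string // e.g. "GET", "POST", "DELETE"
}

// Ratio reports what fraction of known endpoints are hit by at least one
// test. Both inputs are assumed to come from elsewhere (say, the OpenAPI
// spec and audit logs); this only shows the bookkeeping.
func Ratio(known []Endpoint, hit map[Endpoint]bool) float64 {
	if len(known) == 0 {
		return 0
	}
	covered := 0
	for _, e := range known {
		if hit[e] {
			covered++
		}
	}
	return float64(covered) / float64(len(known))
}
```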
A: What I'm looking for is more of a spec of how you define the behaviors. Like, you see how Ginkgo itself is a behavior-driven testing infrastructure; and if we can map out a way to declaratively define behaviors, where you know what the expectations are for the end state, right, where you define what is the behavior I'm expecting, what's the end goal, the end state that I'm trying to achieve, in some standard fashion, then we can really auto-generate pieces of it. And with... oh?
C: One of the things that I was exploring is how different behavior-driven development libraries do separate these concerns in a precise manner. A couple of the other ones I saw were based on Gherkin, which comes from the Cucumber and RSpec world, and one of the things that seems to apply well is that the directory structure and the format actually match the areas and suites that we listed, that are defined in the KEP currently. And what I really dig is that you can have these files defining:

C: Given these steps (and it's step, step, step, step), When we have these steps (step, step, step, step), Then there's an expectation at the end. And there's no code in that, and it can still be auto-generated; it's still a file format, right? But then we can reuse the sections over and over again, the test code gets to where we can reuse it, it's much simpler, and we completely separate the definition of done and the behavior definition into files that are actually part of the test. So the definition of done is part of the Go test run.
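(A minimal sketch of the Given/When/Then shape being described, embedded as a Go string purely for illustration; the scenario wording is made up. In a Gherkin-based tool such as Cucumber's Go port, each step line is matched to a reusable step function kept separately in the test code.)

```go
package behaviors

// podDeletionFeature is an illustrative Gherkin scenario; the wording is
// hypothetical, not taken from John's proposal. Note that the file itself
// contains no code: each Given/When/Then line would be bound to a reusable
// step implementation that lives in the Go test suite.
const podDeletionFeature = `
Feature: Pod deletion
  Scenario: A deleted pod is removed from the API
    Given a running pod in a fresh namespace
    When the pod is deleted with a 30 second grace period
    Then the pod is no longer returned by the API within the grace period
`
```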
C: Might it be worth exploring? Because it does provide a lot of the code generation. Because I think one of our objectives here was: let's get some test scaffolding, let's make sure that we can generate and validate the behaviors, and have some stuff to say if our conformance-test behavior coverage has increased. That's our end goal, and at least some exploration into existing frameworks that really target that well might be worth some time. I...
A: It was because it was from earlier days, and behavior-driven testing in Go didn't exist, and a lot of people came from the Ruby Cucumber land, where it's much more about defining... I did a quick Google search, and I just pasted it in chat: somebody's created a library around this, and it might be useful, before we go too far in the spec, to enumerate what state space exists for doing behavior-driven, auto-generated suites.
B: And Hippie's comment on John's KEP enumerates that, and a bunch of articles that describe how to use that library, as well as two other libraries in addition to Ginkgo. Because, yeah, that's going to be my other comment: I'm not super well steeped in the universe of... the land of behavior-driven development, but it could be that we're trying to reinvent a wheel here, and it would be worth exploring.
D: Maybe somebody who's more versed can... I've played, like, with all these things with keywords, and looked at Cucumber, and I've used something called the Robot Framework, and they try to say that they're simplifying things by, like, in their expression, decoupling the code from the behaviors, but it effectively just becomes a proxy, and I...

D: I don't think you get any simplification. If you say, like, "when a user logs in", and that's a keyword (like, it's called a keyword instead of a function), somewhere down the line it still ties to a function. So I never really felt like I got simplification from it; it's just like we're taking the code in the test and creating keywords so that we can put it in the YAML, but we're still just, like, coding it in the YAML, effectively; it's just with fancier keywords instead of function names. What's the difference?
B: It feels like it's sort of a similar scenario, right, where I think, like, the hope and dream of typical behavior-driven development stuff is: you get the business people, who are super busy and off doing other things, to write in plain English what the business logic is supposed to be, and then you hand that to the developers, and the developers go and do it. And we have a similar situation, except instead of business people we have people like Clayton, who are perpetually busy, and so we can't get a lot of their time.
A: One of the fundamental premises of John's proposal is that he tries to retrofit what we have today. Like, we never... it's hard for me to say, because I know full well how much effort it would be: we've never actually said, like, in an ideal world, if we burned it all to the ground, what are the features and ideas that we would want to have, right?

A: You know, we tried doing this early on, but, like, no one... we were too busy building the system, you know, building the race car while driving it, that we never actually stopped to say, okay, what is it that we really want from this apparatus? And we've tried several times and they've failed. I don't know if what we're doing right now is it, or if we ought to be trying to say: in an ideal world,

A: what is it that we really want? And if we're gonna do that, I would say we'd even take it a step further and say: let's start speccing, from a very high level, the features that we want. I don't know if that's a tenable thing to do; it's an amount of work. When you say features, are you talking...
B: We don't have structured metadata around the tests; we don't fit well into, you know, broader types of "I want to go run this". Like, I think actually a key thing, Tim, to bring up is: when we started the e2e test suites, we were addressing it as an e2e test on our own cluster. We had a little bit of behavior at that time, but we didn't have external client libraries, we didn't have a lot of the tools, and so it was always much more of a half-integration,

B: half-e2e test, and I think some of those assumptions underpin conformance too, as we did retroactively take e2e tests written in this fashion that are somewhat closely coupled. We don't have a lot of description about what the desired behavior is anywhere near the tests, other than maybe in the code and some comments within them.

B: So, like, I think it's more productive for us to specify our ideal for... sorry, I'm using too many words: to specify, like, what the behavior spec ought to be, and then, second, look at awesome tooling that could consume that, or write some crappy tooling that could, like, tie that to, retrofit onto, our existing stuff or whatever. But I think if we try and reinvent the testing-framework universe first, I feel like that will slow us down significantly, yeah.
B: Are we doing a good job of articulating concrete requirements that are feeding into some channel of people? Because that part of what Tim says I agree with: like, I struggle daily on dealing with e2e and trying to think about things that reduce complexity for execution and editors and humans and all that. I think this group has done a good job of articulating the connection between the test code and what it actually is trying to do. So there are some small things.

B: I mean, we're not going to stop asking people to write e2e tests in the next six months, so the problem's getting worse. I think maybe some of your concern, Aaron, is that the busy beavers are weaving in a direction, and we'll always need to put things underneath them that kind of steer them, no matter what ideal outcome we have. Don't get me wrong: I'm super okay with the concept of, like, somebody's gonna go off and write a whole bunch of conformance tests, or a whole bunch of tests that exercise behaviors.
C: Um, and looking at the way that the various testing frameworks are connected: I think if we spend a little bit of time exploring their low-level connectivity, we can combine the two approaches, finding one that uses this separated behavior-and-testing, and continuing to use the existing ones, so that we don't lose momentum. That's worth trying as an experiment.

C: I'd like to see what we have on the generation side. I know that Clayton was saying that the API, using the API machinery, would be hard to change, and the modifications there for the auto-generation... But maybe we spend just a little bit of time exploring what we have that's lower-hanging, as far as what gets generated into the OpenAPI spec, because there is OpenAPI v3, which, I think, adds a few more things, and finding those hooks would allow us to add this little bit of metadata.

C: And how can we pick that up in the toolchain? And I'd love to maybe get a little more explanation of what the difficulties are. I know that it sounds really complex; API machinery is amazing, and I don't have the mind to grasp the specifics of it, but I would love to hear what the things are that can be... As we're trying to define the space, I'm putting my hands over all the edges, and that's one thing I can't grasp, Clayton.
B: It's a deep conversation. I might say it's just not structured for someone to go... I don't think it hits the core audience, because a lot of this is, like, with clear language... and, like, I think this gets to what we've been saying: with clear language and clear description, it makes it easier to understand what tests need to be written. Writing the tests is not a huge chunk of the problem.
C: I really dig the Gherkin definitions in the KEPs. I don't know if you've seen this, but when you throw code blocks into Markdown and you write the word "feature", it color-syntax-highlights the feature sets in Gherkin. So we could theoretically go through (and we'd have to have a little more styling around that; our style of conformance tests needs to say, given a starting-points table):

C: this is what we're wanting it to do. And they don't have to understand the specifics, but we're throwing it up to the KEP level and saying: your KEP needs to describe the behavior, and then it won't be accepted until the tests are written that match it, until, literally, the tests matching that behavior pass and it's signed off by people who are experts in that area. I don't know; it's something to think about, but I really dig having the definition of done in the KEP in a language that we use to verify it.
B: Those are just some of the fundamentals here: there's supposed to be something in there, you've got a test plan. And I've seen decent test plans, and I've seen people write decent tests. I do think (and I haven't looked at the core tests in a while) we actually have pretty good feature testing in general; I'm less worried about new features

B: getting added these days. It's the middle ground before that, that came in early, where there are gaps. But again, for those, "those tests need to be validated for conformance" is probably sufficient in terms of the work that's already been happening there. It's the oldest and most entangled features that have some of the weakest tests, it seems.
A: How can we wrangle this into a set of finite action items? Besides, like, you know, burning the world and starting from scratch, what is some finite set of action items we could go after? I think his specification requires a lot of extra data, and I know that we've... if everyone else could take a read of that spec and maybe provide some useful feedback... But maybe, in parallel, I think defining the requirements of an ideal world is not a bad thing to do either, right?

A: If we actually started to write down what it is that we really want, then maybe slowly, over time, we can take what we have into that state space. But we've never actually done that: in an ideal world, how would we do this, ignoring everything else that we currently have? What would be the requirements? What would be the details? And then over time we can map what we have and try to push it in that direction.
D: I don't think that's an awful choice either, considering the fact that we only have, what is it, about 200 conformance tests; that's not intractable. And then for every existing test, even if we wanted to promote it to conformance and, like, tie it to some ID, there's still a manual human step. So we're kind of trading out, theoretically, possibly, you know, moving code from one set of tests to another set, or tying it with an ID. You know, there's a step either way here, like this.
A: The primary goal right now... because, if we're trying to do this separate-repository business, which we've said... I don't know, what year is this, 2019? Did we start saying this in, like, 2016? I lost track; I think it was 2016, at the off-site, and we were talking about this and Daniel was prognosticating the future, and it died on the vine. But if we actually want to do this, we need to make sure that the libraries that we create are actually importable and don't drag in

A: the entire universe of Kubernetes. The testing infrastructure requires us to import the world, and we've been breaking that apart slowly but surely, so that, as we get, like, cloud-provider integration (CCM in particular), they want to be able to test their stuff using a lot of the good pieces that we have.
B: The review work and the agreement have been the hard part up till now; somebody just needs to have the review time. That is a very good goal, though, because that coupling is a painful coupling. So it sounds like it is still worth investing significant effort in taking the thing that we have and making it better.
B: Okay, yeah, I don't know; maybe this is the group to dream big. It just seems like, if we already have the staff part-time dedicated to, like, incrementally improving the existing thing, maybe that's the momentum, the inertia path, for us. SIG Testing is always happy to host anybody who is willing to create some magical new testing framework that solves all of our problems, or refactors our existing testing framework stuff; any and all are welcome.
A: Well, I don't think these things should be overinflated. I think describing, in a document, an ideal set of scenarios is a tenable thing that won't require refactoring the universe, right; and seeing how that maps into John's proposal, and how you get a migration path eventually towards the state you want to be in, even if it takes a long time, I don't think is an untenable thing either, if people have the time to do it. Me, I don't have time to do it. Just don't, yeah.
A: Hopefully some of this was fruitful. Let's see if we can follow up on the action items and get moving. In the meantime, I'm going to poke on the conformance channel in the next couple of weeks for people who are interested in helping to do reviews, so we can move some of these things through the chain in the meantime. So, if you are able and willing, or if you work for VMware, I can voluntold you.
B: We should probably, like, speed up namespace deletion and make that easy anyway by default. But, like, do we want to revisit whether it's possible to use other mechanisms to specify conformance? Or you can bypass creating a namespace as well, but these tests all kind of need one; it gets weird when you share them, and I feel like that's more complex. You know, the interesting thing about the namespace one is: most of those tests can get arbitrarily close to being instant if we fix the async. It may just not be that bad.
B: Like, yeah, I can drive noodling on that a little bit more, because I felt like it was a really well-written test; like, it flowed pretty well. It sort of did set up and moved from one thing to the other, but it wasn't complicated. I felt like each step sort of verified stuff, and it kind of built on some of the things beforehand, and it gave a pretty good description of what I would expect a LimitRange to do with pods. And to, like, break that up into a bunch of more hermetic things...
A: I think that's just a framework wrapping problem. I think you could probably just have a second implementation that just doesn't do the namespace iteration. But also, to get back on my old horse from a long time ago: namespace deletion, the only reason it's still a problem, is because of that legacy code from the etcd2 days, when the data model was a strict hierarchy, versus the way it is in etcd3, yeah.
B: Well, the async protection really is just because we were really paranoid. I think we're at the point where, if the API server accepts the deletion and it doesn't... to clean up at the end... like, Ginkgo is really bad at this, but, like, I would absolutely... I can go help drive this, because this probably is the biggest performance win we can make to e2e suites today. This probably could reduce e2e time by a significant chunk, and so, just for that reason alone, doing the bare minimum, I'm happy to go, like, push on that.
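(A minimal sketch of the async idea being discussed: ask the API server to delete the test namespace and return as soon as the delete is accepted, instead of polling until termination finishes. The client-go call below assumes a recent client-go; the real e2e framework's cleanup helpers differ.)

```go
package e2ecleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAsync requests deletion of a test namespace and returns
// once the API server accepts the delete, rather than blocking until the
// namespace is fully gone. This trades strict cleanup verification for
// suite wall-clock time, per the discussion above.
func deleteNamespaceAsync(ctx context.Context, c kubernetes.Interface, name string) error {
	return c.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{})
}
```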
B: I should have done this before somebody... and I'll bring other people in if necessary to get over the hump. One other thing, I feel like, sort of an update worth talking about in the time we have left: one of our P1s, about sort of the master manifest list for the conformance test images (shout-out to you and John): I think we've consolidated a lot down to the agnhost image. I am currently incapable, or do not have the sufficient things, to, like, build

B: the Windows variants of these images going forward. I think it would be useful... I think there's some work out there, like having Cloud Build eat some of this image pushing for us, which I would like to investigate using instead, so we don't have to rely on Googlers pushing buttons, but rather, when a PR merges, the thing just automatically gets built and pushed. It would break things up into sort of a two-stage process where you push the PR...
D: That's actually one of my action items for the Testing Commons as well: just to reassess that ticket holistically and see where we wanted to draw the finish line and how far away we are from there. Because I know Claudiu has, I think, folded like 12 of those images into the single agnhost image, and I think, for sure... oh hey, almost 30. Almost 13? Okay.

D: We'll run... I had done some sort of alteration on the end-to-end suite so that I could track which images the conformance tests specifically used. I'll rerun that code and see how many images we're down to on that, because that was the one this group was concerned about reducing the most.
A: All right, we are at time. Hippie, when you go through to try and distill down this conversation, could you please put action items into the doc, so as to keep us honest, and tag us, so we can actually go back and address them? We're at time, so we should call it. Thanks, everybody. Okay, happy Tuesday.