From YouTube: 20190308 sig testing commons office hours
B
For us to do this, it became untenable. So that was the whole premise behind it. But what's happened in recent years is that it's become unmaintained for the most part, so a lot of the core people that actually wrote it don't work on that piece of code anymore, myself included; I only kind of tried to leave it alone. But now at VMware we've decided that we are going to plunk down and help to make things better. I know that Patrick has been involved in some of this stuff for a long time.
B
So having him — you know, he's been pushing the ball along. But I think what I want to do is get some sense of the state of where we're at today, and then talk about some potentially tangible things we might want to try to accomplish in the 1.15 timeframe. But I think, from today's perspective, the first thing we want to start to talk about is: what are some of the core systemic problems that we see today?
C
Well, when I started at the beginning of last year, I was basically looking for a test suite that I could use outside of Kubernetes — basically something that helps me write end-to-end tests — and I noticed that inside Kubernetes the e2e framework was kind of okay, but it was completely unusable outside of Kubernetes because of a huge dependency tree.
C
Support code is now in separate packages that are optional, so someone can take the core framework, leave out the vendor part, and still have something that works against a generic Kubernetes cluster. So that's, as far as I'm concerned, a solved problem, conceptually. I think the point that you are trying to push is that vendor-specific code shouldn't even be in Kubernetes, and now cloud providers could take the generic framework and actually —
C
— take their own code into another repository, build their own end-to-end test suite, run their own testing, and it no longer has to be in Kubernetes. So that historic development, which pushed more and more cloud-provider-specific code into the framework, could be reversed now. For me personally, that's not the main motivation to make that change now, because, as I said, my problem has been solved, but I think for Kubernetes —
C
Just the other day, someone was starting to use the test/utils/image package, which is a package that I didn't even know about, and it handles things like air-gapped setups — changing where images are pulled from. So it's for testing in an air-gapped cluster, but for some reason — probably for no good reason — it sits outside of the framework, and now the framework depends on it. So someone might also want to look at: what different utility packages do we have? Are they designed consistently? Things like that would be useful.
C
So right now we have left the framework inside, in the traditional place, test/e2e/framework, and people who want to use it need to pull from github.com/kubernetes/kubernetes, which is frowned upon, because that repository basically keeps it as an implementation detail without stable API promises. Oh, that's another potential work item.
B
So I want to go around the audience to see who's all on the call and have their kvetching session, and I'm sure there will be echoes and sentiments that are similar to Patrick's. But what I want to try to do is take from that kvetching a finite number of things that we can address in the near term, and try to plug them in, prioritize them, and execute on them one at a time. So — Dims?
A
We definitely have a spaghetti of Python calling scripts, calling Go, calling shell scripts — you know, the whole spaghetti, right? So we need to untangle that mess, and, you know, kubetest and things like that will help. That is one aspect of it. The other aspect of it is — I mean, people should be able to understand this stuff and, right now, it's very difficult to understand.
A
So there is a problem with discovery. There is a problem with, you know, patching stuff. There are flags in there which were once useful but are no longer useful; we haven't cleaned those up because we don't know who is using them, for example, right? So that is one set of problems: how do we make sure that people who are debugging stuff find it easy to navigate through the code base to figure out where they have to make a change?
A
So they can help themselves and not wait for other people, who have the knowledge, to help them out, right? So self-help discovery is one of the things that I want to push for. The other one I want to push for is: whatever comes out of here, we should be able to use it, for example, to stand up an end-to-end test suite for external cloud providers — for example, where they don't have to vendor the k/k repository to actually write the end-to-end test code.
A
So, for example, one thing we could do is take the stuff that Patrick has, you know, separated out in terms of the framework: we can move that to the staging repository, publish it in a separate repository — just like we do for apimachinery, component-base and all that — and then get, for example, the external cloud providers to write test cases against that, using that.
B
One of the problems that exists currently inside of the end-to-end testing framework — whether or not people are really aware of the details — is that it uses the internal client versus the external client, and that was only because it grew organically, right? And I do think that, in order for us to be able to vendor it externally, we'd have to shift the code to use the external clients, as a composable consumable. And I totally agree with you. You'll find a common theme that exists across every single SIG
B
that I work on or lead, which is that we need to create tools that are composable, that do one job well, right? And right now the end-to-end testing framework is very much a monolith, and it grew very organically, so it hasn't been decomposed or teased apart, and you'll find that there were fits and starts in the past for us to disentangle this. But there was never a —
B
There was never a driving set of producers and consumers, such as this group, to really make traction. So I think there are definitely user stories that exist now that weren't as prolific in the past but are prolific now — such as a user wanting to create an end-to-end testing framework for external controllers that they create, your operators, and wanting to reuse the best practices that kind of come for free.
B
— if you actually use a well-defined, factored framework, right? Just like in Patrick's case, or even in the case of cloud providers being able to test their integrations externally, using the same framework that Kubernetes uses. That's always been my goal, but I think what happened is it was too soon and there weren't enough people. Now I work for a big company again, and we have horsepower, so we can actually throw down and try to address those problems again.
C
There's another angle to that: sharing the framework is one thing; sharing the existing tests is another. In Kubernetes — in SIG Storage — we're doing that right now already, with the tests that are under test/e2e/storage/testsuites. It's a package that uses the framework to implement certain tests, and those tests can be instantiated for different storage drivers, and that's —
C
Actually, it's well separated. The problem still is that these shared tests are in the kubernetes repository. So, from a code perspective, it would be better to have that separated out alongside the framework somewhere — or, I don't know where exactly; we were discussing that a bit, whether that is something that stays in Kubernetes or also needs a separate repository. I just want to remind everyone that it's not only the framework that's shared, but also test code, right?
C
The current storage tests, the ones that I've talked about, just use standard APIs, but there are other tests — I think for networking, for example. Even the test bits I just mentioned for the in-tree storage drivers depend on additional APIs. But I think, for the networking tests that could be shared between providers, those tests are ones that need some way of calling cloud-provider-specific functionality: creating something custom in a certain stage or in a certain setup, and then running a test against it.
A
Right. So, ideally, we would have the test framework that could be imported; we would have a set of common tests which do not depend on a specific cloud provider; and then we would have test suites for individual cloud providers, maybe in the external cloud provider repos, right? So, that combination. But then we'll have the tests in multiple places, so it will become important that, you know, we publish results to the same spot, so we can go check what we break, right?
B
I want to make sure we get around to other people. Like, Steve has pretty much fresh eyes on the framework — and both the glory and the horror, 'cause there's both, right? Let's be fair. So, Steve, what are some of the key things that you see from the outside looking in?
E
One of the things that I had written down — I think it's been talked about before — is just, you know, knowing what's there. Like, I did a PR to help with that num-nodes argument last week, and John, who I work with, suggested one method, and then I found another one which did it better, and then, in the discussion, we weren't sure which one we should use or not use. So I think a lot of that is just understanding —
E
— you know, what is there and what's not there, which I think comes up a bunch of times. And I know we walked through a thing once and found some places where, like, the client is getting generated, but if we're using parallelization there could be race conditions, where two different parallel invocations could get two different clients and stuff. So some of that just getting cleaned up — and I'm happy to give feedback and I'm happy to come help with some list of things.
B
Yeah, it grew very organically, again, out of necessity, and what's kind of a conundrum that exists with a lot of this stuff, especially in the end-to-end testing framework space, is a lot of the people in the very beginning of the project — you should have seen it in the very beginning. It was all bash at the beginning, and it was a nightmare. No one could run it.
B
Nobody understood it, and then we had to refactor it into sort of a behavioral-level test that used sort of the internal client of the API. But it grew very organically and it never got decomposed and factored properly. So I think, you know, all of Patrick's points are totally valid, but what I want to try to do is still go around to other people who might have other interesting observations and requirements or requests, and then try to come up with, you know —
F
I'm all over the place as far as the components that I'm looking at. I think it's been difficult for me to get a lay of the land, and I have been bitten in all the different areas, albeit limited to a few components. I think, from a usage perspective on my side: trying to have more points of introspection, because I've got people asking for documentation, but I tend to do a lot of generating my own flow from looking at how the tools work, and — usually that's worked well for me.
F
This is actually complex enough that I haven't been aided by looking for patterns. The things I'm really seeing are resonating with everybody else here: I can feel the organic growth and the not understanding the whys, and even when I go back into the source code, or try to track down the tickets from when these things were added, it's often still a bit fuzzy for me.
F
I think one of the nice touching points, if we wanted to get some documentation, is around features, because we're a pretty heavy user of Ginkgo. I'm finding — I think, between all of us, we should probably look at patterns that are useful for upstreaming to Ginkgo, or for writing really small components of features we've added around Ginkgo as standalone libraries: trying to pull those out and not even have them inside Kubernetes, or even in our test framework, but trying to get others to use them.
A
Hippie Hacker, I wanted to tease apart something that you were doing — and are probably still doing — which is: how would what we want to do here help with figuring out, like, the API coverage, the feature coverage, and things like that? I remember you were trying to inject, like, HTTP headers and, you know, all that stuff you were trying to do just to get a handle on what we are actually testing versus what is the full lay of the land that we should be testing.
F
To follow up on that: just last night, my time, I was hanging out with some folks in India, and I went through a little bit of their flow and what their pain points are. So I think the thing that's needed — because I would love to have maybe weekly pair sessions, where we go through and pair with folks who are writing tests, look at their flow, and suggest tools back and forth, so we learn from each other. And one thing they had noticed — they're wanting to expand —
F
There are new APIs they're hitting quite easily, and we talked about going through and setting up audit logging. I'm a little sad that dynamic audit sinks weren't stable enough to make it to beta before 1.14, and I'd love us to give a little bit of a look at: can we get that to beta for 1.15, for sure? And what that would allow is that folks who are running any clusters would, by default, have a command —
F
If I want to go to that endpoint: here's the list of applications; if they've submitted audit logs and they're loaded in, they can click on these other applications and see how they're using that endpoint. That's maybe going a bit farther than you're asking, but as far as the test framework itself: can we get it to show us the steps?
F
Can we get it to possibly link back to — I've been trying to find ways to link it back to the line number, so that when it comes in, you could just pop it up in your editor, back at the point where that got hit. There have been some ideas around using the audit ID, so that, as we send that request out, we log where it is and look at the audit object coming back. That's one way, but —
B
A common theme I always heard with the end-to-end tests is that it's really hard for people to understand what's failing, right? Like, the most common thing is you need somebody who can actually go backwards and sort of spelunk through the tests to understand how we got to this point — what the test is actually really doing. I don't necessarily think that that is a problem with the framework per se, as much —
B
If you wanted to swap it out with something else, that should be totally doable if the context and the data for how you describe tests is abstracted into something that's generally reusable, okay? So that, if we ever decided to switch frameworks over time, we've pushed it into a shape where we can get at that data, it becomes exposed to the user, and there's enough metadata in the context of individual tests to be able to output the information people need to actually debug these things. That exists — again, I'm totally talking about —
B
— an "if I had a pony" world. I do realize that what we have today and what I want are two very, very different things, and we're going to need to figure out how to get to that shiny city on the hill in an incremental fashion, because the tests are so important for the release of Kubernetes, right?
D
I usually commute back from work when you have the call, but today I'm not driving, so I can talk freely. So, about tests: I'm not an expert in this area, but my impression, when I get to the end-to-end tests, is that it is too much to take in. It is difficult to get started, and I think that we should provide something that allows people to run quick tests, to run simple ones.
D
I think a good example of how to make tests easy to use is Sonobuoy, because running tests with Sonobuoy is like doing anything else in Kubernetes. So it is really simple for people that don't want — or don't have time — to get into all the details to start the e2e tests. But the problem is that not everyone wants to run all the tests. Maybe I want to run a quick one; I don't want to wait 40 minutes, or one hour, to get my test results.
D
We recently ran a kubeadm survey, and we discovered that very few people run conformance on anything but their test clusters, and I think that this is important. We have to get people running conformance tests also on production clusters, to check that they are healthy after an upgrade, or whenever something changes in the cluster. So, in my opinion, usability is paramount.
B
So, yeah, the purpose of Sonobuoy was to make it easy for people to diagnose — much easier from a user-experience perspective — because you couldn't just take — there was no conformance image originally, right? It didn't even exist. There was kubekins, and you couldn't use kubekins outside of Google's infrastructure, because it was baked around the test-infra. People factored it out over time so you could use it, but then there's a bunch of other details you need — like part of what Sonobuoy does today.
B
If you look at some of the scripts that exist around the end-to-end test framework upstream, they're kind of pathological. You know, it's literally SSH-ing into boxes and doing crazy stuff, right? When we already have patterns in Kubernetes to solve these problems. We solved this problem in Sonobuoy with plugins, right? So, like, you have a plugin which basically goes and does a data collection, all right? And I think the end-to-end testing framework, over time, can be pushed into a shape where, you know —
B
— it allows the extensibility for doing these things in a very simplified fashion. But I think that particular piece of the tests needs to occur further down the road, because of where we're at. I think currently decomposing what we have into a set of composable libraries is probably the first order of business, just because trying to put a bow on the outside would be like — you know, your entire house is on fire, but I got a nice paint job on the outside of the house.
I
You know, I'll start thinking about how to remove that framework as well in the end-to-end tests, and it doesn't seem like it'll be easy, just by the way the tests are set up. So I just want to take a few months, just listen and see what the problems are, and see where I can help in this space.
C
I don't think we deleted any provider — or at least I didn't. What I did was create an interface, and that interface is what the framework now depends on in those cases where it traditionally used to have cloud-provider-specific code. The implementation of that interface is elsewhere, in the provider packages, and those then depend on whatever that code needs to get its work done. So I don't think anything was lost in the process; it just got moved around. Okay?
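The interface-plus-optional-provider-packages split Patrick describes can be sketched roughly as follows. This is an illustrative model, not the actual test/e2e/framework API: the names `ProviderInterface`, `RegisterProvider`, and `SetupProvider` are assumptions made for the example. The key idea is that the framework depends only on the interface, while each provider package registers its own implementation (and its own dependencies) separately, with a generic "skeleton" fallback for clusters with no cloud integration.

```go
package main

import "fmt"

// ProviderInterface is a hypothetical stand-in for the interface the
// framework depends on instead of cloud-provider-specific code.
type ProviderInterface interface {
	Name() string
}

// registry maps provider names to factories. Optional provider packages
// would fill this in (typically from their init functions), so leaving a
// provider package out of the build simply leaves its entry unregistered.
var registry = map[string]func() (ProviderInterface, error){}

// RegisterProvider is called by each provider package to make itself available.
func RegisterProvider(name string, factory func() (ProviderInterface, error)) {
	registry[name] = factory
}

// skeletonProvider is the generic implementation that works against any cluster.
type skeletonProvider struct{}

func (skeletonProvider) Name() string { return "skeleton" }

// SetupProvider looks up the requested provider; the framework only ever
// sees the interface, never the provider-specific code behind it.
func SetupProvider(name string) (ProviderInterface, error) {
	if factory, ok := registry[name]; ok {
		return factory()
	}
	return nil, fmt.Errorf("provider %q not registered", name)
}

func main() {
	RegisterProvider("skeleton", func() (ProviderInterface, error) {
		return skeletonProvider{}, nil
	})
	p, err := SetupProvider("skeleton")
	if err != nil {
		panic(err)
	}
	fmt.Println("running against provider:", p.Name())
}
```

An external cloud provider could then keep its implementation in its own repository and register it the same way, which is exactly the "nothing was lost, it just got moved around" property described above.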
I
Yeah, and especially as we're introducing new providers to the ecosystem, we need to have a story around, like: this is how you can run conformance, or non-conformance tests. So, you know, if we have a test suite that we want providers to run, we need to have some reasonable process that they can follow to run those tests against their clusters without them having to, like, fork Kubernetes or do something that's really inconvenient for them, I think.
C
For conformance, the main point is that those tests are limited to what the Kubernetes API itself provides. So, in theory, you should be able to take a Kubernetes end-to-end test suite binary, set the KUBECONFIG environment variable to point at your cluster, and run the test suite — and that's already doable now.
A
All right. And did you also think about, like, a sample application that can be installed, where the application will use the cloud-provider-specific features, and then you can write tests to see if those features are working properly? Like, it needs a volume, it needs Ingress, this and that, right? And then we test whether those were set up and working. Have you thought about that way of testing also?

C
I have not.
B
One of the topics that comes up pretty often, over the history of working on this stuff: in the beginning we wanted behavioral-level tests, right? And there weren't a lot of behavioral-level testing frameworks that existed at the time. You know, Ginkgo was by far the most mature that existed at the time. There's a lot of shade that gets thrown at Ginkgo, but I don't necessarily know if it's Ginkgo's fault, to be honest.
B
So do folks have, like, any insight or thoughts around that? It's got to be a behavioral-level testing framework; it can't be straight Go tests, from the way it's kind of structured. You could write a framework of your own to do that, but I think, given what we have today, we'd have to do this in a layered model and iterate to get to that approach.
A
Yeah, the one thing that I've been trying to do for a while is nuke the flake attempts, because that's very confusing. I can understand why it was added: basically, it was added for flaky tests, so you run once; if it fails, try again; if it fails again, then you mark it as a failure. But we are not limiting the flake retry to just the tests that are known to be flaky — we are doing it for every test.
A
So, basically, what happens is: if there's a failure, the test runs again, and you will see two sets of output, and you scratch your head — why was the thing run twice, and was there a mistake made? And you will have two sets of logs, and you don't know which one is from the failed run, other than going through the timestamps and changed segments. Okay.
C
Another usability issue that I have with Ginkgo at the moment is how we mark up tests that need to run versus tests that need to be ignored because they depend on alpha features. This feature tag interacts poorly with people trying to invoke the tests with a focus expression.
C
For example, if I have a test suite for my one CSI driver and I use ginkgo.focus with a word that selects tests belonging to that driver, I accidentally also enable all of the feature tests that are normally not run, because they have a feature tag. So I have to come up with exactly the right combination of ginkgo.focus and/or ginkgo.skip to avoid running too many tests, and I just find that awkward.
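The awkwardness comes from focus/skip selection being plain regular-expression matching on the test name: a focus on the driver name matches its `[Feature:...]`-tagged tests too, so a skip expression has to be layered on top. A rough model of that selection logic (an illustrative sketch of the focus/skip semantics, not Ginkgo's actual implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// selected models focus/skip semantics: a test runs if it matches the
// focus pattern (when one is given) and does not match the skip pattern.
func selected(name, focus, skip string) bool {
	if focus != "" && !regexp.MustCompile(focus).MatchString(name) {
		return false
	}
	if skip != "" && regexp.MustCompile(skip).MatchString(name) {
		return false
	}
	return true
}

func main() {
	tests := []string{
		"[sig-storage] mydriver basic provisioning",
		"[sig-storage] mydriver snapshots [Feature:VolumeSnapshot]",
		"[sig-network] services",
	}
	// Focusing on the driver alone also drags in the feature-gated test:
	for _, t := range tests {
		fmt.Printf("focus only:      %-60s %v\n", t, selected(t, "mydriver", ""))
	}
	// A skip for feature tags has to be combined with the focus to exclude it:
	for _, t := range tests {
		fmt.Printf("focus plus skip: %-60s %v\n", t, selected(t, "mydriver", `\[Feature:`))
	}
}
```

The complaint above is exactly that the second, combined incantation has to be worked out by hand for each suite, rather than the suite declaring which tagged tests are in its default set.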
B
One thing that Ginkgo lacks, which I really wish we had — and this kind of dovetails with Fabrizio's comment — is that Ginkgo is very flat: it's not hierarchical, right? We have hierarchy inside of the framework that we've created, but it's not an explicit hierarchy; it's very implicit. With a lot of other testing frameworks you'd be able to say, like, "I want to run this suite of tests," right? We've created implicit hierarchy, designed through tags, versus explicit hierarchy through suites, right? And that's kind of a problem with Ginkgo itself.
B
Ginkgo doesn't have support for suites — at least when we first used it, it didn't, and I don't think it's been widely updated in recent memory. But if you look at things like JUnit, for example: JUnit is very sophisticated and has multiple incantations and a whole explicit hierarchy that you can use.
C
Well, even inside the same suite we currently have some tests that can run and some tests that can't, and it would be nice to say — you know, you might be right; it might just be that we want to say: run this suite with the default tests that need to run, excluding the features. But yeah.
B
I think that's a totally legit thing we can do, and that's the way I'm trying to approach the problem: one, list out all the problems people see, and then try to put them into buckets, and those buckets then become, like, high-level requirements. But I don't want to own all of the spec work on this; I would like to team up with this group and have people work together on a spec. But I think before I even get to that — which we can probably do asynchronously —
B
We have a couple more things on the agenda, which are, like — you know, we should also be the clearinghouse for execution that's currently ongoing. And one of the agenda items — or one of the action items I need help with, or I'd like somebody to take, which is going to be not me — is: we need better auto-labeling for discovery, and we also need to triage the backlog.
B
I can do the dirty work of triaging some of the backlog, but the label for discovery should go into the OWNERS files. Currently, the only label that exists is sig/testing, and I personally hate that, because it gets overloaded when people do /sig testing. What I'd prefer is something like an area label, like area/testing. And if you look at how we actually fixed kubeadm, to be able to manage and triage things in a meaningful way —
B
We added area/kubeadm, so every PR that was inbound we could actually look through in the SIG as needed, so that we knew about things even if they happened asynchronously. In a large community, we'd have people sending PRs from randos all over the place who use pieces of the puzzle and want to make it better, and we should —
B
We need to help guide that in a good way. Before, OWNERS files were not sufficient, because sometimes they would assign certain people, or people who had gone on vacation — all kinds of other problems. So, as an action item, I would like somebody to look through the test area and see if we can apply a better label for triaging as we iterate on the future. I personally like area/testing; I think that's the cleanest one.
E
Something came up where folks using Sonobuoy were trying to run conformance tests, and one or more nodes have taints on them, which means that they can't deploy the right images to all the nodes, so then things just kind of hang. John had gone through a bunch of discussion about it, and I recommend we create a design doc, 'cause we kind of go back and forth in GitHub. So it probably makes sense just to walk through that document at the bottom, and then people can comment.
B
I think what we can do for this one — because you're not gonna get anything into 1.14, no matter how hard you try right now — is that, as an action item, as a group, we can try to take a review of this, and I will put it as the first thing on the agenda for the next call, for us to go through the document proposals and try to figure out how we could address this. Does that seem like a reasonable approach? Okay.
E
This is another thing that came up — again, this is from John's list; he couldn't be here today — about supporting air-gapped environments, so places where they can't, you know, pull images directly from the internet. We found some more places where the images that are referenced are not in that test/utils/image package, but are spread about. So, if you scroll down, there's a link to where John actually found — yeah, that line there — where they're not getting centralized, I guess.
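Centralizing image references in one table, as the test/utils/image package discussed earlier does, is what makes air-gapped overrides a single-point change; the stray hard-coded references John found defeat that. Schematically (illustrative types and names, not the real package's API):

```go
package main

import "fmt"

// Config identifies one test image. Keeping every reference in a single
// table means an air-gapped cluster can swap the registry in one place,
// instead of hunting for hard-coded image strings across the test code.
type Config struct {
	Registry string
	Name     string
	Tag      string
}

func (c Config) String() string {
	return fmt.Sprintf("%s/%s:%s", c.Registry, c.Name, c.Tag)
}

var defaultRegistry = "k8s.gcr.io"

// imageConfigs is the single source of truth for test images
// (entries here are illustrative).
var imageConfigs = map[string]Config{
	"pause":   {defaultRegistry, "pause", "3.1"},
	"busybox": {defaultRegistry, "busybox", "1.29"},
}

// SetRegistry rewrites all image references to point at a private mirror,
// e.g. for a cluster with no internet access.
func SetRegistry(registry string) {
	for key, cfg := range imageConfigs {
		cfg.Registry = registry
		imageConfigs[key] = cfg
	}
}

func main() {
	fmt.Println("default: ", imageConfigs["pause"])
	SetRegistry("registry.corp.example.com") // hypothetical private mirror
	fmt.Println("override:", imageConfigs["pause"])
}
```

Any image referenced outside the table, as in the cases found above, silently bypasses the override and breaks in an air-gapped run.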
B
That one — there's a meta-problem that exists, which should be part of the broader set of requirements, which is to minimize the number of dependent images that we have and to make them more generic and accessible. We've had a problem in the past where only Googlers actually owned the test images, while a far larger audience of people actually uses the tests — especially since we opened up the test suite to the wild. So we need to get —
E
They're pretty large, too, if you sum them all together. One of the field engineering folks at VMware was doing an offline install — or an offline run — of conformance. I think the bundle he ended up with was, like, gigabytes of files he had to load into Docker, and that was skipping all of the GPU testing stuff. So that's something else we could look at: how can we reduce the size of the downloads needed just to run a test?
B
What I think I should do, as an action item, is break down some of these things into buckets and start an initial formulation of what could be a spec, or a KEP. But I want the community to kind of own this and help work on the pieces. I don't want to own all of the, you know, shiny beacon on the hill or whatever; I want it to be owned by everybody who's working on this stuff.
B
Right. So we have a couple of action items. One is for me to work on the beginnings of the spec, for us to actually break down some of the pieces. Also, if folks can take a look at the current taints issue that exists, that would be helpful. And, you know, we're going to update the labels. I think that's everything. Is there anything else that folks want to discuss?
B
So my goal — hopefully by next time, two weeks from now — would be to actually have a concrete set of items we can work on. So hopefully, if we have a high-level spec and we look at what we have today, we can probably translate some of that into an actual backlog that we can start to work on.
B
Well, if you do get the talk accepted, feel free to let us know, and we will try to do uplift for you — so, community outreach and retweets and all the other stuff that's associated with that — just to kind of build up the inertia. And I'm gonna try to do that, actually, after this meeting, once I post to YouTube, to get broader awareness that, you know, this is an effort that affects a lot of people, and I think it can be very interesting — and, you know, come join us. Yep.