From YouTube: 20191028 sig testing commons
A
Here we are. As we've been working through things over time, we have the larger-level picture of the world that we've been slowly trucking away against. George has, thankfully, kept us honest by routing issues our way. So if there's a set of PRs that you need to look at, we pretty much just go through them. Are there any other agenda topics or things that we want to discuss today?
A
The agenda is at the point where we need to start adding a little more jurisprudence and figuring out the right layering model for where these calls are used from: whether they need to be in the framework, whether they need to be in a sub-module of the framework, or whether they should live in the specific test suite that leverages them. I think that will probably be a case-by-case basis; I don't think there's a magic number.
C
I think, and since you're on the call you can speak to this, the intention was to divvy up some of those for new contributors, or for the community to take, so having a concrete guideline might be useful. But I agree that we have to judge it on a case-by-case basis.
D
Okay, yeah. Thank you for picking up this topic. I'd like to see a review of the parts and whether we have consensus on moving forward in this direction. My concern is that we could create another unintended dependency between packages through these issues, so I'd rather we handle this kind of change carefully within this work.
C
Yeah, I think when we initially started this refactoring work, the framework was in a pretty bad state, so we said: just shuffle stuff around as-is, just to get us into a decent state. But I agree that we've done enough work at this point that maybe the default shouldn't be "shuffle code"; it should be "shuffle and improve where possible". Yes, I agree.
G
It works most of the time, I would say 98 or 99 percent of the time, and I'm trying to find this last bit. If you scroll up just a little, this is my comment on identifying exactly which audit events are generating the problem. If you click on, or just look at, the comment from four days ago, the second one: I suspect it's related to the e2e framework not updating the user agent. That's the code, and it's in the commit referenced above.
G
I haven't gotten a whole lot of feedback on this other than "it sounds like a good idea", but now that it's not quite working as expected, I thought I'd put it in front of the framework folks and ask: what is the best way to do that? Is this a good approach, or were there other approaches we could have taken to link a particular API call to an audit log entry?
G
When we're hitting this function, it's right when the BeforeEach is called, and if the client set is not available yet, we're creating a Kubernetes client for this particular call. Maybe my check of the client set on line 155 is wrong. We go through, and on line 159 we see if the component contexts exist; if they do, we concatenate them and append them to the user agent string.
G
The crux is that the client is getting called without the context being set at all. If you go back and look at the issue that I reported, or the issue that he reported and linked in, and look at the actual events, if you scroll down a bit in the queries, you can see that the user agent shows the e2e test binary hitting it, but it doesn't have the test name appended.
A
Versus, say, string matching against the output of the conformance files, because you have the entire text of all the conformance tests, which is basically produced through a dump of the definitions, since they're explicitly written out through a separate file. So this is appending one way. There's a bug here for sure that we should fix, but I don't think this is the only way you could do it; I think there are many ways you could do it.
G
That one has the string and the filename, but the string is not complete: there are multiple strings that are exactly the same, so they're not unique. There was another effort, or we talked about, adding a unique identifier or something, so that we wouldn't have to go this other route. They're unique per file, but they're not unique across the data set, and we don't know which source code it's coming through.
A
Now, that said, you're fixing the other problem. If we were to look at it in more detail: why is BeforeEach not being called every single time? One thing for sure, since this is the BeforeEach for everything: I would definitely have locking on this call, which it currently does not have, because you can't execute these things in parallel safely. I can even see an escape hatch. The framework is global, but you could get race conditions here.
G
Well, what's confusing is that the client set shouldn't exist yet, and any time it does get called to create it, it still should be the same one. It's as if we're making a client call within the code and it's not using this client, the one created in BeforeEach; it's as if those calls are going through another client that is somehow available. I'm not quite sure how that would
A
work. That's what I said: there's no locking on this call. If your suite is being run in parallel, you have multiple BeforeEach invocations being called, because that exists all over the place; either parallelization was handled before, or probably this code was added after parallelization was originally in. Then there should be a lock up here, and this lock basically says: before you even do this check, you hold a lock on the BeforeEach that serializes just this portion. That's totally fine, because the actual execution of the tests will still all be in parallel, so I'm totally fine with the BeforeEach being locked and then having the different lock. But the way this is written, it's totally possible for you to end up with multiple clients.
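The locking being proposed can be sketched as a mutex guarding the lazy creation of the shared client set. This is a minimal illustration, assuming a lazily-initialized global client; none of these type or function names come from the actual e2e framework.

```go
package main

import (
	"fmt"
	"sync"
)

// clientSet stands in for the framework's shared Kubernetes client;
// all names here are hypothetical, for illustration only.
type clientSet struct{ userAgent string }

var (
	mu     sync.Mutex
	client *clientSet
)

// getClient lazily creates the shared client. Without the mutex,
// parallel BeforeEach invocations could each observe client == nil
// and create their own instance (the race discussed above).
func getClient(userAgent string) *clientSet {
	mu.Lock()
	defer mu.Unlock()
	if client == nil {
		client = &clientSet{userAgent: userAgent}
	}
	return client
}

func main() {
	var wg sync.WaitGroup
	results := make([]*clientSet, 8)
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = getClient("e2e-test")
		}(i)
	}
	wg.Wait()
	// Every goroutine should get the same instance.
	for _, c := range results[1:] {
		if c != results[0] {
			fmt.Println("race: multiple clients created")
			return
		}
	}
	fmt.Println("single client instance")
}
```

With the mutex in place, parallel BeforeEach invocations serialize only the nil check and the creation; the test bodies themselves still run in parallel, which is the point made above.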
A
How could you not have this added? Well, then there's a bug somewhere else, right? What I'm trying to say is you could bisect: if you put the locking here and you still have the bug, one, it's better to have the locking anyway, and two, we'll have eliminated that set of possibilities, like, okay, the problem is deeper down.
A
At some point. So this piece here adds a context so that everything going through a given client set has this preamble, right? That means every call will have this preamble. APISnoop is gating on that preamble, using it to consume the data, but some conformance calls are missing the preamble, and the question is why. Is that a fair statement? Yeah.
G
We think it's the test; it's in the issue. If you scroll down, you can see the "--" that gets added in the other test for the other endpoint. If you scroll to, I think, the last selection, you can see these are audit IDs and the operations. Actually, scroll up a bit; I guess we didn't include one that has the second selection, the list API extensions resource definition. It has the e2e test coming in, and kube-controller-manager.
G
Those are the user agents reported by those binaries when they hit the API server, and for the e2e test we're trying to append a suffix; there's usually a "--" at the end, and we're appending that here. In this part here, you see, we're filtering to find where the user agent has "API machinery" in the test name. And so, if you scroll to the right in that second code block, you'll see the other endpoints.
G
Sorry, the second code block: that's where the "--" gets added, and then the entire string is the context name, to identify which test is currently speaking to the API server. And some of them are missing it, yep. Because then we look at that other endpoint we select, and we see: well, these are the endpoints we know it is hitting, and there are e2e test binary hits that don't include the test string. It's weird, because it's happening later, yeah.
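The convention being described, appending the current test's name to the e2e binary's user agent behind a separator, can be sketched as a pair of helpers. The exact separator and the helper names are assumptions for illustration, not the framework's real API.

```go
package main

import (
	"fmt"
	"strings"
)

// withTestName appends the current test's name to a base user agent,
// separated by " -- ", roughly the convention described above
// (the real framework's separator may differ).
func withTestName(base, testName string) string {
	return base + " -- " + testName
}

// testNameFrom recovers the test name from an audit-log user agent,
// returning "" when the suffix is missing (the gap being debugged).
func testNameFrom(userAgent string) string {
	parts := strings.SplitN(userAgent, " -- ", 2)
	if len(parts) < 2 {
		return ""
	}
	return parts[1]
}

func main() {
	ua := withTestName("e2e.test/v1.17", "[sig-api-machinery] CustomResourceDefinition")
	fmt.Println(ua)
	fmt.Println(testNameFrom(ua))
	// A controller-manager hit carries no suffix, so no test name comes back.
	fmt.Println(testNameFrom("kube-controller-manager/v1.17") == "")
}
```

The filtering described in the meeting is then a matter of checking whether an audit entry's user agent yields a non-empty test name.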
A
The first thing when I looked at this: in my head, I know that you can have parallelization in the test framework, so it's possible to have multiple client sets. So you can get to the point where you get a weird race condition where each one comes through, or something. Now, if you just eliminate that, then you have the second question, which is:
G
It's hard for me to know. If I do this, I could go through the query of all the audit entries and select everything where we're hitting the API and the user agent isn't set correctly, and give you a percentage, but I don't know what it's tied to. It's hard; I'd have to do everything sequentially and then look, yeah.
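The measurement described here, scanning the audit entries and reporting what fraction lack the appended test name, might look roughly like the following. The `auditEntry` struct and the `" -- "` separator are simplifications for illustration, not the real audit event schema.

```go
package main

import (
	"fmt"
	"strings"
)

// auditEntry is a stripped-down stand-in for an audit-log event;
// the field name is illustrative, not the real audit schema.
type auditEntry struct{ UserAgent string }

// percentMissingTestName reports what share of entries lack the
// appended test-name suffix, the rough percentage mentioned above.
func percentMissingTestName(entries []auditEntry) float64 {
	if len(entries) == 0 {
		return 0
	}
	missing := 0
	for _, e := range entries {
		if !strings.Contains(e.UserAgent, " -- ") {
			missing++
		}
	}
	return 100 * float64(missing) / float64(len(entries))
}

func main() {
	entries := []auditEntry{
		{"e2e.test/v1.17 -- [sig-apps] Deployment"},
		{"e2e.test/v1.17"},
		{"e2e.test/v1.17 -- [sig-node] Pods"},
		{"e2e.test/v1.17"},
	}
	fmt.Printf("%.1f%% missing\n", percentMissingTestName(entries))
}
```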
G
We don't find them until we're looking at them really closely, and the way we see this is that we're missing coverage for promotion: why are we not hitting these endpoints? It's very specific calls, the same calls every time, which is why a little bisecting didn't get rid of the lock question. But I'm also interested in other ways that are very precise at tying an API call to the audit log, because it allows us to do other things, like analyzing clients that aren't the e2e test.
A
You could plumb unique identifiers through the calls and then have a verbose flag that you could use to get the data out. You should be able to get that data out today: if you run at verbosity 10 or something like that, you probably get the entirety of the context of the calls all the way through. But that's inside of API machinery; that's not in the testing framework, right?
G
The problem right now is that most of our analysis comes from what's already in the audit logs, whatever is in the bucket from a resulting job. So if we change what we're analyzing, then we have to add more stuff to the jobs, and for new information I can't go back and look at previous executions.
A
I think there's a couple of approaches we can take. Doing the first one, trying to figure out why the config doesn't seem proper, because the init should have initialized it, and debugging that one first, makes a ton of sense. Then, based upon the data you get back, taking the second approach of trying to get that data out of the client, seeing where it is coming from, and then figuring out if we need to plumb through more logs or not.
E
That one is actually kind of blocked; I'm still worried. Some time ago we added an import-restrictions file into the end-to-end test framework, and it turns out the import restrictions are not actually being enforced. Currently, it's not going to work if we just start using it, so I'll have to go back and fix it first and then turn it on, and this one is blocked on that work.
E
The other one is an interesting one. The instrumentation people deleted a lot of files for setting up Prometheus in the cluster directory, and to go along with that they removed all the mentions of Prometheus from the end-to-end test framework. So the big question is: are the only people using these the people from SIG Instrumentation?
A
I'm okay with them removing dependencies specifically for this deployment of Prometheus. There's a bunch of other details inside of the code itself, as well as the daemons that export Prometheus metrics, but this is basically just the deployment of Prometheus. So I'm totally fine: if they want to remove it, that's their business, they own it, so go for it.
A
Lustre file systems are only used for hyperscale, big deployments; they're only for specialized use cases. No one would dare do this for a non-bare-metal use case, and if they did, they're crazy. And this is being recorded, so if you're out there in the world listening to this recording: this is a crazy use case. But I like your take on it. If they want to go into the wilderness, let's let them, without plumbing it through for the common use case.
A
Everything cascades: my GitHub cascades to my Twitter and everything else. Tim St. Clair at Google originally had "eckles" ahead of me on a bunch of things; he changed his name, so, damn it. Originally he had "eckles" everywhere, so I found a handle not colliding at all, one that would exist across all the systems, back in 2009.
C
I don't remember; I know Patrick, and Patrick popped up. And then, George, there was another issue that you raised earlier, but isn't there some work around getting rid of e2e common? Or, in what scenario do you use e2e common versus throwing it in an actual test package? Well...
A
She is six months, actually, this week, so she gets paid this week.
A
Right now, only a limited subset of the tests you can see here, like container probe and runtime class, are actually using these dependencies. But the problem is that the dependency graph of vendoring the framework is everything in Kubernetes. So step one is to minimize the dependency graph of vendoring the framework, so people can use it outside of the core, and that's going to be the end state, because people do vendor it outside of the core. But if you actually see what they do...
G
On the client for polling or watch: in looking at it, it seems like some of the logic for iterating over that has some conversation around it. I need to find the ticket real quick, but it could use some thoughts, since it seems like trying to iterate through all of them doesn't provide much value, apparently.
A
Broadly increasing test coverage for watch behavior is reasonable. What happens is that inside of the API server you have the scatter code for every single resource type, and then you have a gather which goes back to the storage layer. So verifying that every single resource type independently actually goes through the similar code path, and that the behavior is correct, is a reasonable set of tests, one that actually does more than just increase the numbers, right?
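One way to structure the per-resource-type verification described above is a table-driven test that runs the same independent lifecycle check for each resource. Everything here is a stand-in sketch; the real tests would go through a live API server and the apimachinery watch types.

```go
package main

import "fmt"

// event is a minimal stand-in for a watch event; the real framework
// uses the apimachinery watch.Event type. Names are illustrative.
type event struct {
	Type     string // "ADDED", "MODIFIED", "DELETED"
	Resource string
}

// fakeWatch simulates the add/modify/delete lifecycle for one
// resource type, standing in for a real API-server round trip.
func fakeWatch(resource string) []event {
	return []event{
		{"ADDED", resource},
		{"MODIFIED", resource},
		{"DELETED", resource},
	}
}

// verifyLifecycle checks that the event stream for a resource follows
// the expected add/modify/delete order: the per-resource-type
// verification discussed above.
func verifyLifecycle(events []event) bool {
	want := []string{"ADDED", "MODIFIED", "DELETED"}
	if len(events) != len(want) {
		return false
	}
	for i, e := range events {
		if e.Type != want[i] {
			return false
		}
	}
	return true
}

func main() {
	// Table-driven: each resource type gets the same independent check.
	for _, r := range []string{"pods", "configmaps", "secrets"} {
		fmt.Printf("%s: %v\n", r, verifyLifecycle(fakeWatch(r)))
	}
}
```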
G
I guess I was trying to get to that, but there was some feedback, if I remember right; there wasn't a...
C
The biggest hurdle for me is the feedback loop. If I add a test, it's really hard for me to check that my test works, and it's oftentimes easier to open the PR to see if it works. And a lot of times I need to push commits three or four times for the CI system to verify.
A
I totally open PRs to verify things, and I use that as my actual bot versus me actually doing it. Everything I do is like this: imagine I had 12 washing-machine lines. You put your load in over here and you can work on something else; you put a load in over there; then I come back over here and just see if it actually failed or not. Then I open it up and go "oh crap", and so, okay, we...
G
We do the iteration pretty quickly here, and we have some documentation we walk through for each test. They just have a living document on the left, they execute a section of that, and it runs the e2e test, allowing them to click on the line numbers and go through. I'd love to help find some way to do more pairing, or onboarding people to writing tests. Maybe we can collaborate on that, post it, and beat it up, for sure.
C
Maybe we can add a script in hack/ or something that will easily run all the tests in a package, something that's more easily consumable, because I'm always looking up how to run the tests based on the context names or the It blocks. You know how we have tags in tests? I'm always looking up what the regex is for that.
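On always looking up the regex: the tags contain regex metacharacters (the square brackets), which is usually what bites. A tiny helper, hypothetical and not part of any existing hack/ script, could escape a literal tag for use in a focus expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// focusFor builds a focus regex from a literal test tag, escaping
// characters like the brackets in "[sig-api-machinery]" that are
// otherwise regex metacharacters (the common stumbling block
// mentioned above).
func focusFor(tag string) string {
	return regexp.QuoteMeta(tag)
}

func main() {
	fmt.Println(focusFor("[sig-api-machinery]"))
}
```

The output can then be pasted into a focus argument such as Ginkgo's `--ginkgo.focus` flag, which treats its value as a regular expression.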