From YouTube: Kubernetes SIG Testing 2017-08-08
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: This is SIG Testing's weekly meeting. Today I just wanted to talk briefly about OWNERS files, and then I plan on handing off to Eric, who has a proposal for getting the community to help police flakiness, and then hand off to Tim St. Clair to talk about Sonobuoy, the project Heptio just announced. So I will just share my screen real quick for the OWNERS thing, but y'all are welcome to click the link in the meeting notes.
This isn't super relevant to SIG Testing; if anything, I tried to ignore the testing itself and focus on the workflow, describing how it works. The only bit that is related to SIG Testing is that while I was going through this, I noticed there is some history left over where some of those files have the word "assignees" in them, so I've been going through trying to remove any usage of assignees.
Doing that also picked up the fact that there are some OWNERS files in vendor directories, which reminded me that we have an issue open about what to do with those; we want to alter the submit queue to ignore them. I believe Stephen and a couple of other folks who've been working on mungegithub lately have run into that. So the current state may not necessarily be ideal for how this process can most effectively and efficiently be used.
But I tried to at least document the way the process works today, including some behaviors that seemed a little quirky, and even to deep link into some of the actual code that implements it. I plan on updating the relevant community-facing documentation to tie back into this. This is all in service of trying to improve the health of our OWNERS files.
I think a lot of the problems we have are symptomatic of too few reviewers and not very effective policing of OWNERS files. Today I tripped and fell over the fact that there's a test failing in contrib that's blocking any contrib-related PRs from being merged, because although it's super awesome that we have the submit queue looking at contrib, the tests that are failing are owned by somebody who is no longer on the project. So identifying these files and either removing them, if nobody's going to pick up ownership, or campaigning for people to take ownership, is something that would be wonderful to automate, but in the meantime I'm just trying, as a human, to knock this stuff out. With that I'll stop sharing. If anybody's interested in hearing more about this work, I'm going to try and carry it forward under the contributor experience SIG, but I thought it might be relevant here.
B: So I emailed out a proposal. There was a bit of a thread last week about flaky tests, and the flake rate is actually pretty high: something like 30% of commits experience flakiness, and that's probably a floor, because we don't necessarily retest every commit. So it's at least thirty percent.
Basically, people are having a bad time, and what I want to try to do is empower the community to do something about it, so that they can do something for themselves. I don't want our SIG to become the enforcer, the build cop that does everything, with the community assuming that we will solve the problem for them, because we might not.
We may not even have good ideas; we don't have visibility into other SIGs and which SIGs are having problems. It could be that one SIG is totally happy with their flake rate while another one is unhappy, so I would like the SIGs to talk to each other rather than just have us decide things for them. So I wrote a doc that tries to set that up.
The basic mechanic is to encourage people, if they are having problems with flakiness, to go check out our metrics, which should highlight which tests are problematic these days and show which SIG owns them. After that, you can change the test's describe string to tag it [Flaky], which takes it out of the PR job that we validate passes before merging code and puts it into the flaky suite, which runs after merging. So the test still runs.
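(As a minimal sketch of the tagging mechanic Eric describes: in a Ginkgo-style e2e test the marker lives in the test's description string, and the job configuration filters on it. The test and SIG names below are invented for illustration, not taken from the meeting.)

```go
package e2e

import (
	. "github.com/onsi/ginkgo"
)

// Hypothetical example: the "[Flaky]" marker in the description is what lets
// the merge-blocking PR job skip this test while the post-merge flaky suite
// still runs it.
var _ = Describe("[sig-example] Widget controller", func() {
	It("should eventually reconcile widgets [Flaky]", func() {
		// ... test body elided ...
	})
})
```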
The team that owns that test and values it can continue looking at it and hopefully de-flake it, and the rest of the community doesn't have to be dragged down by it. So that's kind of the proposal. I mostly wanted to bring it up in this SIG so that we could talk about it, so that it seems more like a recommendation our community, our SIG, is giving to everybody else, as opposed to me
just arbitrarily sending email. I haven't actually received a whole lot of comments on the doc, so I don't know if that's because everybody more or less feels thumbs up, or people haven't read it, or what's going on. But if anybody has any feedback here, I would love to hear it.
A: Well, speaking for myself anyway, I think it's a step in the right direction. You didn't see any comments from me because I haven't gotten around to it; too many other squirrels. But I think it's super important, and I'd like for us to be able to talk about this at the community meeting on Thursday and have a broader email sent out to kubernetes-dev. Looking at this from a release-facing perspective, my question is this:
what does this imply for the frequency of these flaky tests, like how often are people going to get data points on whether or not they still flake? This feels less like fixing flaky tests and more like triaging them out of the critical path, which I think is, yes, empowering. The question is whether we're basically just shoving all the flaky stuff off to the side where nobody ever looks at it. I don't actually think the flaky tests are part of the critical path for cutting a release.
If I look at the testgrid dashboard for release-master-blocking, there are some GCE slow and GKE slow jobs, but I think those avoid all of the flaky and disruptive tests. So it could be that what we're saying to a SIG is: by triaging the test out, you're effectively no longer guaranteeing that that functionality works for any future release of Kubernetes, until you address it, until you make that test work or rewrite it so it's not flaky, or whatever.
But I think, at the very least, if we can start accurately describing the state of the world, stamping a [Flaky] tag on tests that are flaky would be helpful, and drawing more attention to those metrics would also be helpful. So I'm largely in favor of this. I personally intend to drop some comments and maybe wordsmith it a bit, but I really think having something ready for the next...
B: Yeah, that's actually a good point. I'm not sure if the slow suite only runs the slow tests or if it runs the entire suite; I should look into that. Part of my expectation is that what I want this to do is: if there is a SIG Foo who is unhappy with SIG Bar's tests,
B
I
would
like
them
to
one
make
sure
that
sig
vu
is
communicating
to
sig
bar
in
one
way,
which
is
why
I'm
suggesting
opening
and
issue
and
second
have
them
be
involved
in
the
PR.
So
I
would
hope
that
you
know
the
only
solution
is
not
just
to
you
know.
Have
us
immediately
kicked
a
job
out
of
the
parallel
Suites,
but
to
sort
of
I
think
it
could
actually
just
be
that
maybe
the
other
team
isn't
aware
that
this
is
impacting
people.
My hope is that part of this is that it starts a discussion, and then SIG Bar, the SIG whose test it is, comes to the discussion and says they're going to work on it this week and it will get fixed, and so we don't actually need to kick it out of the queue. I would much rather have that. And maybe this proposal won't work at all and we'll have to be back here in a month, or in 1.9, coming up with a different idea.
A: No, I fully agree. We do have a dashboard; I personally have never put the Velodrome dashboard, which plots the consistency and flakiness stuff over time, in front of the community. I feel like maybe we could display that data better, or make it look a little scarier when flakiness is getting too high or consistency is dropping too low. And just the other thing to throw out there:
I think I took an action item during SIG Release around the fact that not all of the tests have a SIG name appended to them. There was a lot of really great organic effort during the recent fix-it, but I think ballpark thirty percent of the tests still don't have a SIG attached to them. I just heard that somebody was going to try and chase some people down on that, and I'm going to try and come up with the list of tests that don't have SIGs.
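(As a rough illustration of that action item, not something shown in the meeting: a few lines of Go can scan a list of e2e test names and report the ones missing a [sig-...] tag. The test names below are invented.)

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Invented example names; in practice these would come from testgrid or
	// the generated list of e2e test descriptions.
	testNames := []string{
		"[sig-storage] PersistentVolumes should bind",
		"Kubectl client should create a deployment",
		"[sig-network] Services should serve endpoints",
	}
	for _, name := range testNames {
		if !strings.Contains(name, "[sig-") {
			fmt.Printf("missing SIG tag: %s\n", name)
		}
	}
}
```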
Yes, and maybe see if we can call attention to that, because this is all predicated on having those SIG names inside the test names. Again, it's a really silly, lo-fi thing, but once you start seeing the SIG names in the metrics JSON file that calls out which are the flakiest tests, it almost looks really actionable, and it starts coming close to those carefully slow-roasted, handcrafted emails you were sending out a little while ago about the worst offending jobs that SIGs could fix; those at least seemed to get some attention.
I hope I'm not the only one talking here, and that I'm at least articulating the opinion of the group that this is the path forward.
D: Eric, I had one question. Something we noticed in Origin is that it's always easier for a developer on a PR, when a failure doesn't look like something they touched, to hit retest and forget about it. So I'm wondering about opening the issue: is the added value of that, on top of the automated things like the dashboards that say "this is the flakiest test", that, as you were saying, it puts a human behind it and makes it obvious that there are people being affected by it?
Am I understanding that correctly? Because I guess with just the issue process, I don't know, it's easy to bypass it, hit retest, forget about it, and hope somebody else does something about it. I feel like when nobody feels like they need to do it, they all just hope somebody else will.
B: I think that's a huge cultural challenge. At one point we actually required, when you would say "test this" or "retest", that you link to an issue, but there was an out. Just like we're having issues with the approval flow now, where when you say /approve it has to reference an issue and not everybody likes that, the regex would actually accept "retest #IGNORE", and we basically found that pretty much a hundred percent of the retest commands were #IGNORE.
also
we
have
automatically
filed
issues
whenever
there
was
a
flake
before
which
we
turned
off,
because
that
just
created
you
know
hundreds
of
issues
that
no
one
was
actually
looking
at
a
day
and
so
right
now
actually
Cole
did
some
work
where
we're
taking
the
top
failure,
classes
and
I
think
maybe
the
top
three
flakes
or
something
like
a
cheat
day.
B
We're
looking
at
the
top
three
failures
and
filing
issues
for
those
I'm,
not
even
really
sure
if
those
are
getting
a
huge
amount
of
traction,
so
yeah
I
totally
agree
with
you
that,
like
what
we
really
need
is
for
you
know
when
there
is
a
flake
or
someone
for
human
to
start
like
acting
on
it,
but
sort
of
you
know
figuring
out
a
way
to
do
that.
That
isn't
like
super
draining
is
like
you
are,
you
know,
hey
you,
you
are
now.
D: And I guess the other question I had: we tried to implement a very similar thing, you know, before we were using Prow, linking to an issue, and people found out that if they just deleted the comments, nobody would know that they had hit flaky tests. And the other side is what people have asked us for: they'd love to have a list that says, as a developer, I want to invest in the test that's flaking the most.
D
The
most
I
want
to
make
sure
that
the
time
I'm
spending
deflating
this
is
going
to
impact
more
people,
and
we
try
to
provide
that
and
I
think
to
some
extent
is
successful,
but
I
think
you
had
actually
a
graph
or
somebody
posted.
A
graph
of
like
x-axis
is
how
many
number
of
tests
ran
and
then
y-axis
probability
of
that
number
of
tests
running
and
actually
the
overall
result
being
successful,
and
it
just
plummeted
right
and
I.
I think at some point there are so many of these rarely-flaking tests that you're likely to hit at least one. And I guess the messaging struggle that we had then was that no one test is bringing down the ship, but they all leak through, and convincing people that even those are valuable to fix, I'm not sure we found a good way to do that.
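(That compounding effect is worth stating explicitly, as an illustrative aside: if a run executes $n$ independent tests and test $i$ flakes with probability $p_i$, then

$$P(\text{run passes}) = \prod_{i=1}^{n}(1 - p_i) \approx (1-p)^n,$$

so, with made-up numbers, 500 tests that each flake only 0.1% of the time still give $0.999^{500} \approx 0.61$, meaning roughly four in ten runs fail for reasons unrelated to the change.)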
A: I think there's a broader messaging thing here, where I agree with that point. You need to prove to people that even though 80 percent consistency sounds like a high percentage, it actually implies that almost nobody gets a PR through the first time. I think we need to impress that upon the community as a whole, but then you need to combine it with a call to action, and I think Eric's "at least triage off the worst offenders" approach, combined with the list
I posted in chat, which shows, at least as of right now, the worst offending test cases (it looks like api machinery and storage are the ones being called out), gets us there. And I don't know: if we were to do this on a weekly basis, would it actually make a difference over, you know, ten weeks, or are we just constantly chasing another three bottlenecks? I don't know that for sure, but it does seem at least like an iterative step in the right direction.
And I think you combine that with the fact that we are measuring consistency and flakes over time, so this should hopefully be one of those cases where we can look at the data before, make a decision, look at the data after, and see if what we're doing is helping or not. Yeah, that's a big one for me.
It provides us more data, right: more pull requests are having the test run, and so if a pull request that failed the first time now passes, we've identified that the inconsistency is in the tests. That's exactly the data we need to be able to determine, more effectively, which test cases are worth paying attention to.
A: Yeah, I agree. I mean, we're going to find out pretty quickly whether or not we're at a sustainable flake rate when it comes time to do burndown and actually cut the release, I guarantee that. So our window for some sort of actionable decision or useful metrics is something like three or four weeks before code freeze comes up, and then three or four weeks before the release actually gets cut. So I was ballparking...
For me, anything that consumes this data and presents it in a more actionable, more user-facing manner would be helpful. It's just that I personally don't have the bandwidth to do anything more than say: this JSON exists, and this data is pointing us in the right direction. Something that pulls it together and broadcasts it sounds fantastic.
B: Oh yeah, that might actually be a good one; there are always Help Wanted issues. I mean, we have those JSON objects, and if someone wants to create an HTML page that pulls that data and presents it in a nice shiny box that says "expect, whatever, a forty percent probability of experiencing a flake", I think that could potentially be really useful.
E: I think these are also really great ideas for someone else to do, but I don't hear anyone saying that they want to do it. I do like Aaron's suggestion, though (not volunteering you, by the way), of saying: I'm going to look at this, I'm going to ping the SIGs over the next six weeks, attribute everything, and say hey.
F: I think we could do it; it would be pretty easy to do. I already wrote the on-call pages that do something similar, I just don't know what people look at the most. We could put this on the submit queue page, we could put it on the on-call page, we could make it its own page.
A: I guarantee you, if it's something that's actually useful to people, it will spread like wildfire, and I will promote the heck out of it. All right, I feel like that messaging will work, but yeah, if it's helpful, if it works, we can definitely assist with the messaging. I don't want to spend too much time bikeshedding and eat into Tim's chance to present, but just one other thing to throw out there: I've seen a lot of people reacting to the bot comments down there.
C: So recently we open sourced and announced a project, Sonobuoy. We did it as part of the CNCF conformance effort, but the scope of potential use cases goes far beyond that. Sonobuoy basically wraps the execution of the end-to-end tests, as well as providing pluggability and extensibility for data collection. The idea is that Sonobuoy can be plunked down in a cluster with a predefined configuration file, slurp up a bunch of data, and produce a single artifact; that artifact is just a tar.gz whose directory structure is well defined.
It can collect as little as nothing, or it can collect as much as you want. This is all specified via a JSON configuration file: you can specify the resources that you want to collect, you can specify the location where you want to upload the results, and everything will be aggregated. You can apply filters to workloads and namespaces, so you can select given areas of interest, because you don't want to create a report, give it to the world, and have it contain all your detailed information and secrets and whatnot.
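(Purely as an illustration of the kinds of knobs being described, here is a small Go sketch of such a configuration; the field names are assumptions made for this example, not Sonobuoy's actual schema.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config is a hypothetical stand-in for the JSON configuration described
// above: which resources to collect, where to put results, and which
// namespaces to include. Field names are illustrative only.
type Config struct {
	Resources  []string `json:"resources"`
	ResultsDir string   `json:"resultsDir"`
	Namespaces []string `json:"namespaces"`
}

func main() {
	raw := `{"resources":["Pods","Nodes"],"resultsDir":"/tmp/results","namespaces":["kube-system"]}`
	var cfg Config
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```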
What I'm showing here is the internal configuration for Sonobuoy. By default it opens up an aggregation server internally; Sonobuoy has a master/worker execution model where it basically spawns a bunch of workers that can go out to the different nodes and collect data on those individual nodes, if you decide to use it that way. It's completely pluggable, so you can choose what you want to plug in.
You could plug in an existing collection tool, or your own data collection mechanisms; that's totally doable, and it will basically just stuff the output into the report as further data that you collect. A use case for this actually originated from SIG Scale a long time ago, where we wanted a systematic way of generating a report that says: I did this experiment, with these results, with this configuration. That's pretty much what Sonobuoy does. But the primary driver for us releasing it now,
as part of the CNCF conformance effort, was to have a unified way of producing a conformance profile, which can then be slurped in by another tool to allow you to do validation. Taking a quick gander: the directory structure and layout is something that we're actually going to PR on pretty soon; what we have is the current layout. So I just opened up the results: there's a whole bunch of information here, and it's basically ripping through all of the cluster's resources in a hierarchical structure.
If I go under resources, there are the non-namespaced and the namespaced resources. Under non-namespaced, these are the cluster-bound resources, such as the nodes, persistent volumes, component statuses, cluster roles; this is everything that's running on the cluster. You can also go into the individual namespaces, where you can see there's a bunch of stuff running in different namespaces, and you can dive into the individual pods, and the pod logs are there.
I think the one thing that Sonobuoy provides is a unified way of collecting everything, and that's pretty much it in a nutshell. I know we ran out of time, and I've got t-minus thirty seconds here, so hopefully you'll find it useful. If other folks want to talk about it, or about other applications of it, I don't necessarily know whether or not it applies directly to this SIG, but it is highly useful for folks in the field as part of the post-deployment step; it's also super useful there.
E: Oh, I'm sorry, am I cutting in too early? I apologize. I was just going to say: fantastic work, I'm really excited about this project. We talked about doing something like this months ago but never did anything with it, and I very much look forward to using it. I have to drop off in a minute, but fantastic work; I look forward to running it on my clusters.
C: And there actually is a Sonobuoy channel that Sarah let us create, so you can ask questions there. We're working with the conformance group to further define the specifications around what people want; there will be issues opened with regards to the format, and we'll probably have PRs up soon. Cool.