A
Good morning, yes, good morning, and thank you all for the slight scramble; hopefully we didn't lose too many people moving over to the new URL. We've learned something about Zoom, which is that you can't put two groups of people in the same place at the same time, essentially. Okay, so, welcome.
A
So let me say thank you to Saad and Philip for thinking of ways to try and help us all get our issues debugged this year. So I'm going to kick this off to them. Philip, I will request being a host from you and see if I can do that, because that way I can help with the muting of people. Great, all right, I have added Philip and Saad.
D
So we're getting down to the last few weeks of the 1.5 release process, and during this process we want to squash as many bugs as possible. We want to make sure that we ship a stable release. We have over a hundred bugs in the 1.5 milestone, and we need the community's help in order to squash these bugs. But a lot of folks don't know where to begin.
D
If they have a bug assigned to them, or even if they don't have a bug assigned to them, which bug to pick up and how to get started debugging it. Talking to Phil and Sarah, we thought it would be a great idea to have a session where somebody could walk people through how to start debugging, basically, and that's what Phil volunteered to do today. Phil, okay.
C
Great. Sorry, I just got pinged in Slack asking where we are, so I'm going to send that link into the Slack channel real quick.
C
Paris, it's not... you want to look at the invite and just make sure people can still get in, or maybe address that. Okay, great. All right, so I guess the way I intended to do this was, instead of giving an overview of how to debug Kubernetes generally, which I think there have been a couple of talks on already, and I think those have been recorded, so hopefully those help, but...
C
I want to just help people who have things assigned to them get started, and start pointing people in the right direction and help you get over whatever issues you might get stuck on. And so I guess there are two kinds of points you might be at: one is that you have an issue assigned to you and you're not really sure how to get started debugging it, and the other is that you don't have an issue assigned to you and you're not really sure how to get a good issue to get started on.
C
Given the size of the group, I guess, it might not work to just go through everyone one by one, so we may have to change plans here. Maybe instead a better first step is to ask people what kind of help they're looking for and what they're looking to get out of this.
F
I, for one, would like to be able to point, or possibly signal, the correct SIG to look at a flaky test bug. Looking at the log message, at least have some kind of idea.
C
So that was: figure out which SIG should be responsible for a bug. Yep, okay. But maybe the best thing to do then, since it also sounds like people want to learn how to debug, is probably to walk through diving into a bug, rather than the one I actually have lined up. Does anyone have one that they'd like help with, and we can just start walking through that bug? Maybe start pulling it up.
D
These can't be run in parallel because of environment issues and various other things, so this suite basically runs them all in serial, and these are end-to-end tests. This particular test that fails is a density test, and I know nothing about this area, so this would be perfect to get started on. When this test first failed, the Kubernetes merge bot automatically created a bug for it, pointing back to the first occurrence. Each occurrence has the logs associated with that test run.
D
Add a comment to the issue itself. If you have the permissions to assign yourself, assign yourself; otherwise, just post a comment saying I'm going to start working on this bug. Okay.
D
All right, switch back to this tab. So for this particular test run, we were looking at the build-log, so the tests are actually run as a Jenkins job. In this case, they're run serially. There are a bunch of different tests that are run; in this particular case, the "create a sequence of pods" test. When Jenkins encounters this job, it starts executing a bunch of different commands, whatever that particular test is, and we can look at the test in Kubernetes to see what exactly it does.
D
So if we go into Kubernetes and we search for this string, we should be able to find it. There it is, the exact test and what it does. So basically, this code is executed against a remote cluster, a remote GCE cluster, and it's going to be executing one of two things: either kubectl commands on the command line against the cluster, or REST commands directly against the API server using the client library, and then parsing the responses and expecting, for example, a pod to be created within a certain time.
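Concretely, finding the test usually just means grepping the Kubernetes tree for the spec name that appears in the failure. A minimal sketch (the quoted name is only an example; use whatever string the build-log reports):

```bash
# From the root of a kubernetes/kubernetes checkout: locate the e2e spec
# whose name shows up in the failing build-log.
cd kubernetes
grep -rn "create a sequence of pods" test/e2e/
```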
D
In all cases, okay, that makes sense. So, to clarify, this first log that we looked at, the build-log text, is basically the Jenkins log. This is Jenkins saying here's what I executed, here are the results. And if we click into artifacts, this is a folder that contains files that were pulled down from the cluster that the test was executed against, after the test completed; of course, the build-log is there as well.
D
This is the file that contains all the Jenkins logs. Artifacts contains different folders, including a folder for each of the nodes that make up the cluster. So in this case we've got three different nodes, and if we click into any one of those nodes, we can see the logs from that particular node during that test.
D
Usually what we're really interested in is the kubelet log. The kubelet is the binary that runs on the node, the Kubernetes binary, and if you're running tests against a particular node, you are probably interested in figuring out what's going on with the kubelet at that particular moment. This kubelet.log basically captures everything that happened on that node with the kubelet.
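Once the artifacts for a run are on disk, digging into a node usually amounts to grepping its kubelet log for the objects the test created. A rough sketch, with purely illustrative names and paths (the real layout depends on the job):

```bash
# Illustrative only: search one node's kubelet log for the pod the test created.
grep -i "my-test-pod" artifacts/nodes/node-1/kubelet.log
```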
H
Okay, thank you. Let me just back you up; I'm still on the earlier point, wait. So again, sometimes what I work on is kube-up, and I mean kube-up fails. So again, I'd like to be able to go all the way back to the definition of the Jenkins job and see all the steps from the beginning. I've seen flakes regarding just fetching the Kubernetes code, right, so things can fail arbitrarily early. So in some cases you need to see all the steps, right?
A
So we've had a couple of points in the chat about sticking with what we need to know to get started. I know, Mike, that you have some really deep use cases and an awesome interest in debugging a lot of stuff, but if we go too deep into each of these, then we're not going to get any sort of broader overview on getting started. Fair?
I
So this particular problem, the performance one or whatever, shouldn't be a release blocker, because that latency and all those kinds of things could be impacted by the API server and the control plane, and it could also be caused by the node itself; we have experienced that ourselves, or maybe there's a certain problem there. Also, one issue is about a Docker 1.9 cluster, and we have now dropped Docker 1.9 support completely. So these particular things shouldn't be 1.5 release blockers, but we monitor them.
I
This is very close. So one thing I have talked about is separating those Jenkins jobs out, so those kinds of things still have test coverage but don't block the submit queue and don't block the release; that's a separate issue. I just want to clarify the first thing here: when you look at debugging a node e2e test, you should look at the OS image first. So I think, Saad, when you click that way...
I
You can see that in the file, the first link; if you click in an issue, the first link, actually the JUnit output, we actually indicate which OS image it is. Yes, you just click anything, yeah, you can see, right, there's the link, you know, it's bound to Docker 1.9, that's kind of indicated. You can identify which OS image the test is running on, yes, and so forth.
I
You can even reproduce the problem this way. Because this particular problem is kind of performance related, you cannot really do much, but there is some specific node e2e functionality for it: you can even run the node e2e tests remotely against the same machine type with the same image, because the image is public, stored in a public image project. So you can start your own instance and reproduce the problem from there.
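For reference, a sketch of what a remote node e2e run looks like. The make target is real, but the variable names and values here are from memory and only placeholders; check the node e2e documentation in the tree for the exact spelling:

```bash
# Sketch: run the node e2e suite remotely against a GCE instance built from a
# public image. IMAGE_PROJECT/IMAGES/FOCUS values below are placeholders.
make test-e2e-node REMOTE=true \
  IMAGE_PROJECT=<public-image-project> IMAGES=<image-name> \
  FOCUS="<test name regex>"
```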
A
I don't think we expected 110 people here. We thought this might be more like professorial office hours, where four or five people appeared and said, so, I have a cool thing and I want to know how to do it. So forgive us as we figure out something that is broadly accessible for all of you. This is awesome to see, and clearly there's pent-up demand, so I think we're going to have to make a recurring event of adventures in debugging, yeah.
C
Well, you can use... I think the default setup is for GCE or GCP; that's how the test scripts are configured right now. You might be able to run some tests locally.
H
I should be a little more explicit. I know the tests run automatically as part of the regular pipeline. I'm not sure I understood correctly; it seemed to me like she was maybe suggesting I could explicitly trigger, for my own purposes, some other testing besides what's part of the normal automation. Yes?
C
You can run the e2e tests yourself, and actually, even if you're trying to debug a flake, for instance, you can scope the e2e run to say only run this test that I'm trying to debug, and you can also say keep running that test over and over again until it fails, and so that's a pretty good way.
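A minimal sketch of what that scoping looks like on the command line. The exact flag spellings have shifted between releases, so treat these as placeholders and check hack/ginkgo-e2e.sh and hack/e2e.go in your checkout:

```bash
# Sketch: run only one spec, and keep re-running it until it fails.
# The focus regex is a placeholder for the failing test's name.
export KUBERNETES_PROVIDER=gce
GINKGO_UNTIL_IT_FAILS=true \
  go run hack/e2e.go -v --test \
  --test_args="--ginkgo.focus=Pod\sDisks.*another\shost"
```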
C
One of the typical strategies for debugging an e2e test is looking at the logs, looking at the test, and figuring out what you don't know, like, I really wish I knew the state of X, Y, or Z; putting in additional debug messages to print out the state of the system that is missing; and then running the test scoped to just that one test, like overnight, just saying run it until it fails, if it's a fairly rare flake or something like that.
C
Well, it could be in the context of a PR. Typically it's not; you could do that, but that's not the workflow most people use. For the PR workflow, most people just assume the test is going to work, push it, and let the automation infrastructure run the tests for them, because it's a little bit easier. The context that most developers use this for is that you have a flake that may be related to a test...
C
...that you wrote. Maybe it isn't, but it's assigned to you or you're trying to track it down, and it happens; maybe it didn't happen on your PR, but it does happen one out of 20 times, and now it's happening way too much, right? And so now you're tracking down that one-in-20 failure.
C
Not on a particular branch, necessarily; it could be, if it's on the release branch. Definitely not on a PR, that's not what I'm referring to. The workflow would be, well, you could, you could absolutely do that, but in that case you would probably run all the e2e tests, right?
H
It's not clear; I'm really lost here, okay. So I'm just trying to figure out: you're telling me I can ask for a particular test to be repeated, but repeated on what, sorry? How do I say that? And is this documented in the hunting-flakes page; am I just being redundant here?
C
I need to pull up the hunting-flakes page, too, to see exactly what's there; I'll take that as an aside. I can say the problem we're trying to solve is that a flake is occurring, and we're seeing it not because a human is observing it, but because the robot is observing that when other people are creating PRs, this test seems to be failing or blocking the submit queue. If we run...
I
I just posted two links: one is how to run the node e2e tests, and another link is, actually, backing up what Phillip said, how to run the cluster e2e tests and how to repeat a single test case, for the cases where you try to reproduce the problem. Please take a look at that document; it has all the other detail, and there is a script in that document.
B
He's gone. Can we go through something that I was doing last night? I can send you the first URL; can you click on that, Saad, and share your screen? And then, between the three of us, we can walk through that. Sure, yeah, yeah, click on that link that I pasted in the chat.
B
Right, and then if you scroll to the right, you will see that from the time this test was added, it has never passed, right? So you can fix the size, set it to super compact, to super compact, yeah. So this test was recently added, and since the test was added it has been failing. So then what we did was we went to find when this test was added, what else was added, and what the test actually does.
B
We can go through the build log there, so there is build-log and there is artifacts, right? If you go to artifacts, you can see the build-log link as well, so you can see the build-log link there, and then from here the next step is to get the name of the test that failed, exactly like Saad showed you before. So this e2e test talks to the API server, and then you can go look in either GitHub or in your editor and search for it.
B
You can find it in the path; that was one clue. And then I went back to the pod JSON to see, you know, where it is picking up the host path, and it's hard-coded to /usr/bin/kubectl, and guess what, /usr/bin/kubectl is probably not available in the Jenkins environment. So then the question was, what would be the replacement in the Jenkins environment? That was the next question.
C
Can I just say something here? Yeah, this is running in a pod; we're not talking about the Jenkins environment, right, we're talking about the node.
B
So then the question was: how do I figure out what the correct host path is for the pod? And then, going through the logs, I was able to figure out that there is a path that is being used for running kubectl, and if you scroll, look at the commit message, yeah. So on the GKE environment, kubectl isn't at that specific path. Then the question was: do I hard-code that path, or do I find it at run time? And then I figured out that there is framework.TestContext.KubectlPath.
B
Right, and the bot doesn't... well, like I said, some of these tests are not run by the bots, so it's very difficult to figure out how to run these, and this specific end-to-end test doesn't show up in any of the PRs, so you can't trigger it there if there is no hook for it. Give me one sec, let me find it for you, basically.
D
Not all end-to-end tests are run on every single PR; only some subset of tests is run. If the subset of tests that run fails, you will get a message from the bot saying that it failed and how to rerun that test suite. In this particular case, the test that Dims fixed wasn't part of the test suite that gets run against every PR, so in order to verify that you fixed it, you have to run it locally.
K
Just to chime in on the reason not all tests are run, just so we all understand: tests that are run against PRs must be fast, and they must be capable of being run in parallel, for maximum and quickest feedback. There are some e2e tests that are very slow and take an extremely long time, on the order of hours, to finish. That's why they're not all run against the PRs.
D
Yeah, that's a great idea. So this particular test was testing GCE persistent disks. Again, this is an issue that was filed automatically by the Kubernetes bot because a continuous integration test was flaking or failing. This is the report generated when the test failed; it's linked from the issue, and looking at it, it's the test "Pod Disks should schedule a pod with a read/write persistent disk, ungracefully remove it, then schedule it on another host." Anybody that doesn't work in storage probably has no idea what that means.
D
So the first thing would be, let's take a look at the test grid. If you click on recent runs, it shows you the recent runs for this particular test suite, and you can see how often this test suite has failed. And if you click on testgrid history for the job, you can actually see all the tests that are run, and what we're interested in is this particular test.
D
If you click on options, you can filter by regular expression, basically the name of the test. So what I want to do is just filter down to that particular test, and let's shrink the size so we can get a better feel for it. We can see that every now and then this test does flake, and the error is identical, so the next step is trying to figure out why.
D
So basically, what this test is trying to verify is that when a particular pod that is using a GCE persistent disk is scheduled to one node, deleted from that node, and then moved to a different node, when it comes back up, the data that was written from the first host should be visible on the second host, because they're using the same persistent disk. If it's failing, that indicates data corruption, so that's actually pretty bad, and in this particular case this test is failing. Let's take a look at the build log.
D
The build log is from Jenkins. So the test starts: it creates a persistent disk; it submits the pod to host 0; it writes a file into the container, so, "I wrote this particular value"; then it deletes the pod from host 0; then it moves the pod to host 1 and it tries to read the value. And when it reads the value, that's when things go wrong: we expected the value to be "17 blah blah blah," but what was read back was the empty string.
D
So that means that something went wrong along this path, and we need to figure out what happened. After this, these are just the cleanup steps, the defer that you saw in the code; these are just teardown steps that get executed, and once the teardown steps are completed, then the test fails out and says that the expected value didn't match. So now we need to figure out what happened here, and in order to do this, what I did was I ran this test locally against...
D
I'm not going to go into the details of all of that, but basically what we found was that the data actually still existed on the disk. What was happening was that the kubectl command that was being used to read the data from the pod, basically executing a command against the pod, let's say a command that cats out the contents of a file, that kubectl exec command was actually returning an empty string, even though the data was still there. Let me see if I can find that particular bug.
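In other words, the test does something like the following; the pod names and file path here are placeholders, not the actual test's identifiers:

```bash
# Write a known value through the first pod, then read it back through the
# second pod after the disk has moved to the other node.
kubectl exec pd-test-pod-0 -- sh -c 'echo "expected-value" > /testpd/data'
kubectl exec pd-test-pod-1 -- cat /testpd/data   # intermittently came back empty
```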
D
So basically, what was happening was that the kubectl exec being used by the test case was returning the wrong data. The good news here was that this wasn't data corruption. The bad news is that there was a bug somewhere in the path between kubectl exec and the container itself. So I opened a bug. I wasn't sure where to go from here, but this was a good place to start.
D
And ncdc actually was able to isolate this issue even more: remove all the gunk from the Kubernetes end-to-end test and just do docker exec back to back to back, and he basically had a repro where docker exec would sometimes return an empty string when it shouldn't. So right now, I believe this has been fixed in a newer Docker version.
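The spirit of that minimal repro is just to hammer docker exec and watch for an empty read; a sketch with placeholder names:

```bash
# CONTAINER and the file path are placeholders for the real repro.
CONTAINER=<container-id>
for i in $(seq 1 500); do
  out="$(docker exec "${CONTAINER}" cat /testpd/data)"
  [ -z "${out}" ] && echo "attempt ${i}: got empty output"
done
```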
D
In the meantime, what we ended up doing in Kubernetes is adding retry logic around this particular code, so that if we see unexpected responses from kubectl exec, we're going to retry a few times to ensure that it is in fact a real failure and not a fake failure. So we worked around the issue until we're able to pick up a new version of Docker.
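The shape of that workaround, shown as a shell sketch rather than the actual Go change that went into the test framework:

```bash
# Retry the exec a few times before treating an empty response as a failure.
for attempt in 1 2 3; do
  out="$(kubectl exec pd-test-pod-1 -- cat /testpd/data)"
  [ -n "${out}" ] && break
  sleep 2
done
[ -z "${out}" ] && echo "still empty after retries: treat as a real failure"
```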
C
I actually remember this issue, and a couple of things I think we looked at. We originally thought it might be part of kubectl, and so one technique I think we tried was trying to eliminate, or identify, what area the bug was in, right? So kubectl talks to the API server, which talks to the kubelet, right, and so, trying to reproduce it: we can reproduce it with kubectl; then, can you reproduce it just using the API directly, without kubectl, and eliminate that? Okay...
C
...this isn't a kubectl bug. Okay, then, can you reproduce it just running this on the node, right? And then eventually we got it down to, okay, can we reproduce this just using Docker, right? And then once we were able to isolate it, the problem became much better scoped. And the other thing is...
C
If we ran the same thing three times, it might pass and it might fail, right? And that will allow you to start: if it's part of the cluster, you can start looking at the state of the cluster and how everything is set up, and potentially even leave the cluster in its current state, or leave a running cluster up, and then try to reproduce it on the running cluster. So we're...
A
We're at time, which is not a good problem to have. So thank you, and thank you all for joining us. What I'm seeing is we have sort of two different types of questions, and correct me if I'm wrong, or send me emails, because you all do: one is that we need more information about the testing suite and how to debug tests, but then there's also interest in debugging the Kubernetes code.
A
So we will schedule another one of these. We did this around 1.3; we missed 1.4. We can talk, and we can also talk on the Slack channel, as everyone says. So we will do this again, and I will try to schedule these more regularly. But that also means that we need people to show up with things they want help debugging. So if you can send me those, and send me times that you're interested in, we will see what we can do about scheduling more. Thank you all, and until next time.