From YouTube: 20200407 SIG Arch Conformance
A: We'll make sure we do that every conformance meeting. Then there's also a discussion that may come up around failure modes for watches. We talked about it a bit, but I don't know that we had super clear consensus; the answers we had were about whether this is okay for this test, but I think it may have changed over the course of the week, so I'll go ahead and turn it over to John.
B: Our understanding of things has changed, and so we want to document some of that, document some of the user stories as we see them, in order to get a little bit clearer picture of the requirements. Then there'll be an upcoming PR that talks about the solution and how it's changing from what we have today in response to that. So basically just further refinements on this existing KEP, and, you know, we're looking for comments out here. There's still probably quite a bit more...
B: ...we can do on the user stories, but we're also going to start filling in the changes in the way we're doing things. The initial version of the KEP really focused on developers and how we could try to automate some of the generation of behaviors and some of the generation of tests from the API spec. But we're sort of de-emphasizing that. We did a little bit of work around that and it's not as clean as we'd like. We have some tooling to help generate behaviors, but it still requires a lot of manual curation. So rather than focusing on that, which doesn't seem to be as fruitful as we'd like, we are focusing on some of the use cases around the tooling: what individual, you know, what CI jobs would need from the tooling in order to say what they need, things like that. So that's what this talks about. Please review it, and there's a little bit of discussion as well.
C: So what do we view as next steps here that we collectively agree on? I think we want to get to the point where, like, we don't have any of these unresolved things in here. Yes, that's it: collectively agree on the use cases. So let's say we've done all of that; what would be the next step after that?
B: The uncontroversial next step would be to take some of our ideas around how to solve those use cases and document them more clearly. We have some of this sort of in flight, but it's been sort of just on PRs, as opposed to being documented as, like, there's a plan. So it's to document the plan in the proposed solution section and update that to talk about it.
B: Maybe I should have put it in the list too: getting all the existing tests converted over to the new regime. That's starting; there's a PR from Jefftree that I haven't looked at closely yet. Anyway, that's sort of the next thing: the PR with this KEP updated with the specific tooling we're going to build, the conversion of those tests, and actually building the tooling.
B: So if there's nothing else on that, we can go to the next PR. The next PR is the profiles concept, which we've discussed here in the past and, yes, in the working group as well. There's a Google Doc out there somewhere that describes it; I think I'll link to it in here. A couple of meetings ago we talked about how we really want to move this forward so that we can have things like separate profiles for cluster admins versus ordinary user workloads and ordinary users.
B: So I put down a sort of summary of the motivation and the goals and non-goals sections of this KEP to get us started, and I would love some eyes on that to make sure that we're all on the same page. Then, you know, we'll probably break down some use cases for this, similar to what we did before, and then propose a solution for how we're going to capture profiles, how we categorize tests, and the tasks that need to be done around that.
A: The next portion is the Q1 OKRs we're exploring, and I've got it kind of set out in two ways of viewing it. If you click on this first link, it will actually bring up the rendered markdown file, which might be easier to read. I think at the end it said: let's merge if we agree on scoring. That may just be the PR itself.
A: I'm gonna scroll down a little bit. If I even just zoom in to the top, we don't even have to look at the details too much for Q1, and I'm saying Q1 is Jan to March, the same as everyone. The score is lower at the moment than I would like it to be; it's 0.2, and that's because our key result metric was getting 27 new conformant endpoints. That one I will dig into a bit in our score calculation.
A: There's been a lot going on in the world, so we didn't quite get to our 1.0 at the precise end of March, but this lists the promotions and stuff that's still needed. We can look at the board for that later, but here are the specifics. If we jump down to KR2, it's just another way of writing the percentage increase. We didn't get to six percent, we're still at point three, but that can get to 1.0 if we let that soak and wash off.
A: We needed to move off of... we had our own Google Cloud account that I was billing CNCF for, and we were asked to, you know, move stuff. We're using Prow for most of our stuff, but we also had EKS, with Amazon donating, and so I worked hard to try to get everything off of the, if you will, personal Google account into things either at Packet or Amazon or Prow. So our key result one was everything being created on Prow.
C: Sorry we didn't catch this yesterday: we're gonna be making a move with prow.k8s.io to kind of shed any project that is not part of Kubernetes. Right now, for example, it's supporting things like TensorFlow and Bazel build and Knative on Google Cloud Platform, and none of those are actually part of Kubernetes. It's the Kubernetes project's Prow...
C: ...paid for by the CNCF, and likewise, so are Helm charts. We don't think it is appropriate for the Kubernetes project to be spending its money on things that are not part of the Kubernetes project. So we need to either talk about, conceptually, how you would make APISnoop a subproject, and which SIG it would inherit.
A: Just giving you a heads up. Donating APISnoop to SIG Arch, which I think is lovely, or looking at alternatives; my vote is for this one. We'll go back, thank you for that, and to the roadmap... where's the nice pretty link? Here we go. We moved everything off of non-Prow, that is, because everything else is Prow, and now it's either on Prow or it's on EKS or Packet, and that's a success. We did it. So that's 0.25; I'd love to get that higher, and I...
A: ...think if we can get that release out, where everything... let me push a tag, Prow creates our images, so that you can just go `kubectl apply` and bring up APISnoop to do test writing, and the same for the site, then we'll have a success there. We don't yet have all the merges going through the bot, so I put kind of a zero here for us not doing any manual merging and adopting the Kubernetes bot flow, and I might even need to research that.
A: We can't get the contributors of it yet, so it's the URLs, but I'm pairing weekly with Winabi and Molly, and I think I spelled the name wrong there, to ensure that the workflow, the way that we're writing stuff, is accessible. If they like it over, you know, the next few weeks, then we'll bring the mentoring session to others in turn; that's why I'm pairing with them. And Caleb is directly mentoring Zach and Steven, who are two other test writers; Zach actually does most of our new GUI work. And Zach and Steven are...
A
That
was
one
of
our
goals,
the
other
people
than
me,
teaching,
test,
writing
and
so
Zach
and
Steven
are
teaching
test-riding
tyrion.
Who
is
our
newest
test
writer
who
comes
from
south
africa
and
did
sold
his
house
stole
this
car
and
then
locked
down
happen?
And
so
he's
started
this
week
and
that
agree
on
maybe
on
the
column,
I'm
glad
you're
here
so
I
I
think
we
got
a
point
five
out
of
those.
A
So
assuming
that
we
got
26
I'd
like
to
get
us
to
40
new
conformant
endpoints
over
Q
in
q2,
and
instead
of
doing
a
6%
increase,
I
want
to
do
a
9%
increase
and
it's
so
close
to
50.
It's
a
stretch.
I
would
be
so
happy
if
we
could
actually
get
50,
but
I'm
gonna
definitely
put
care.
You
know
that
the
50
percent
is
a
stretch.
B: One of the user stories and use cases for the behaviors KEP that we put in was the CNCF conformance program, and we want to make sure that what we do meets its needs. So if you can review that... What I'd really like to see there is that the logs contain enough machine-readable information about what tests have been run and whether they succeeded.
B: Then we can map that back to the list of behaviors and show that it ran successfully. Because one of the things you said is that we want to make sure all the tests were run. Well, if we know which tests were run, we can map that back, via how those tests map to the behaviors, and we can show that all the behaviors were covered, which is, you know, running successfully. So rather than checking that all the tests were run...
B
What
I'd
like
to
do
you
know
in
that
tool
in
that
cab
is
make
sure
all
the
behaviors
are
covered.
They
leaves
open
the
possibility,
we're
not
sure
we
need
it
yet,
but
it
leaves
open
the
possibility
that
when
we
get
to
profiles
and
there's
optional
functionality,
that
and
some
of
those
tests,
but
for
optional
functionality
may
be
implemented
differently
and
different
providers.
They
can
still
have
the
same
behaviors.
We're
not
sure
we
need
that
yet
we'd
like
to
avoid
it,
but
it
may
be
necessary
in
some
cases,
I.
C: I guess I agree with that, although, for what it's worth, my perspective is I would view this more as re-scoping the work on gating based on endpoint coverage. I think we would maybe prefer to see gating based on behavior coverage, but that does kind of depend on whether or not we can lock in place an authoritative list of behaviors, as opposed to a situation where, you know, the number of behaviors continually increases over time and coverage goes down, even as we celebrate that.
B: Yes, and that's how we've done it today. What I'm saying is that we can take that list of tests, right, and we can map that back to the behaviors. So that as we move forward with profiles, and as we move forward with optional functionality, where they won't always be running all the tests, this way we can make sure that the tests they do run cover a complete profile, which is defined in terms of behaviors; and that also holds when they do run all those tests.
B: There may also be cases, take Ingress for example, where the core API is the same but there may be some differences in the annotations you use between vendors to accomplish the same behavior. And so there may be two different tests; I mean, it gets a little fuzzy because it's not fully portable then, but two different tests which count for the same behavior in that case. In which case it becomes complicated to track that they ran all the tests they needed to run without doing that join back to the behaviors.
B: So we could either do it at the time the tests run, where there could be some tooling built into whatever they run, or they could just deliver us the logs, the list of files, the list of tests they ran and the results, and then we can do the join during the evaluation process. So those are kind of two different solutions to that problem.
B: When we go look at the use case, to me, the use case for the CNCF conformance program is: yes, I need to make sure, speaking in terms of a harness, that all the behaviors that belong to this profile were tested and those tests ran successfully. Now, how do I actually evaluate that? Well, what are my inputs? My inputs are gonna be the tests that were run and whether they were successful.
C: The PR you were referencing earlier in the agenda, to port existing tests over to the new regime, is an example of how we're trying to link behaviors to tests. So currently the proposal is that we enumerate behaviors in a YAML file, and we need your help in exercising this working theory to see if that's the mechanism that makes the most sense.
C
Want
to
trust
but
verify
that
so
I
know
you've
been
including
the
number
of
endpoints
which
you
think
you're
adding
to
the
title
of
the
PRS.
But
what
I
don't
think
I'm
saying
is
runs
of
API
knew
before
and
after
that
show
us
and
like
it's
cool,
if
you
do
it
like
at
the
end
of
the
quarter
or
whatever,
to
show
that
in
aggregate
you
finally
bumped
up
27
endpoints
but
I
see
lines
going
up
into
the
right
tonight.
Can
we
talk
about
to
confirm.
C: The reason I bring that up is because, and we can get more in depth in the PR review itself, but there was one PR that was talking about the fact that we are patching the scale and status subresources of ReplicationController. That's kind of racing, right, and we're seeing some flaky... some conflicts there. And so one workaround is to just not bother patching those things. But if we don't patch those things, then we're not covering those endpoints, and so the number of endpoints that that PR would cover needs to be, you know, changed.
B
The
issue
is
finding
something
that
we
can
patch
that
we're
not
going
to
fight
with
the
controller
over
which,
like
patching
ready,
patching
the
number
of
ready
container
replicas
is
definitely
something
we're
gonna
fight
with
a
controller
or
I.
Don't
know
if
all
the
statuses
there
is
something
that
we're
not
gonna
find
with
the
controller
over
because,
but
you
know,
statuses
also,
we
might
be
able
to
just
add
crap
to
it.
I'm,
not
sure
yeah.
C
So
if
we
want
to
dive
into
the
specifics
on
that,
you
know
right,
I,
I
found
prior
art
in
some
of
the
pod
disruption
budget
tests
that
do
turn
to
retry
on
conflict.
So
I'd
like
to
see
us
do
that
and
see
if
it
actually
works.
Yeah.
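[Editor's note: the retry-on-conflict pattern referenced here has the same shape as client-go's `retry.RetryOnConflict`: re-read and re-apply the update whenever the server rejects it with a conflict. Below is a minimal self-contained sketch, with the server side faked so it runs without a cluster; the names are illustrative, not the real test code.]

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("conflict: object has been modified")

// retryOnConflict retries an update whenever it fails with a conflict,
// returning immediately on success or on any other error.
func retryOnConflict(maxRetries int, update func() error) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = update(); !errors.Is(err, errConflict) {
			return err // success, or a non-conflict error worth surfacing
		}
		// conflict: the caller's update func is expected to re-read
		// the object before building its next attempt
	}
	return err
}

func main() {
	conflicts := 2 // simulate the controller winning the first two races
	err := retryOnConflict(5, func() error {
		if conflicts > 0 {
			conflicts--
			return errConflict
		}
		return nil // patch finally landed
	})
	fmt.Println("patched:", err == nil)
}
```

In the real tests the update func would GET the ReplicationController, mutate it, and PUT/PATCH it back, so each retry starts from the controller's latest resourceVersion.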
A: Like this one here, for new conformant endpoints: we have the issue, and we have the create... the test and endpoint. So you can see, like, this doesn't quite match up. I need to know whether it's a 405 or a 404, and so... Caleb may have just had a typo there, but this is where we define it, by running this mock test down here, but I...
A: Ideally it's part of the gate: we go back and comment on the ticket that says you're adding this number of coverage, and if it's negative, we block, with specifically, here's the list. So it's really clear in the PR, from the bot.
B: You still have to deal with optimistic concurrency regardless of what field you're changing. So maybe that's, okay, an irrelevant point, except that if you're expecting to patch it and then read it and see the value, that's not necessarily gonna happen if you're fighting with the controller, and it's something that the controller really reconciles and actively owns, not you, and...
B: ...if the field is that way. In which case we have another option, which is just handling the optimistic concurrency, dealing with that, and not worrying so much about what we read out of it, if the version is bumped. And two is status: I'm not sure whether status may be fungible enough that we can actually add something to status that's not defined in the Go types. I'm not sure.
B: Probably the most expedient thing we can do would be to deal with the optimistic concurrency like you're suggesting, like the PodDisruptionBudget tests, and you probably can't expect your writes to survive for very long. So don't expect to be able to just read it right afterward and have it say the same thing you wrote.
F: But what I have noticed is that if we don't get the, in this case, the ReplicationController status from a separate request, if we just get it from what the patch returns, it's probably okay. But I would also like to make sure that what I'm saying is correct by running through the tests myself. It does seem to have completed for this PR, though, I think.
B
That's
yeah
I'm,
not
sure
the
exact
mechanics
of
it,
the
API,
so
everything,
but
it
that's-
probably
that's,
probably
true
and
I,
think
you
were
watching.
So
you
should
see
your
event.
I've
been
relation
to
your
event,
come
through
the
watch,
it's
possible
and
it
could
compact
or
whatever,
but
so
mean.
This
is
a
race
condition.
So
it's
gonna
be
like
that's
what
makes
it
flaky
I
said:
I
can
work
99.9
percent
of
the
time
and
then
just
flake
out.
C: Maybe it's me, but this PR seems to add a whole bunch of watches, and it's not clear to me what that's doing, like how that helps. But this last one right here is the very last thing in the test, and it's watching for a Deleted event. Number one, it's not clear to me whether we're guaranteed to see that Deleted event, and number two, I started thinking about...
C
Well
what,
if
the
watch
times
out
before
we
get
here,
will
this
test
pass
or
fail?
It
sounds
like
McCaleb
was
saying:
okay,
this
this
pattern
shows
up
any
number
of
tests
where
it's
like
it's
the
test
doesn't
make
it
all
the
way
through
before
the
wash
bills.
Does
the
test
fail
and
it
sounds
like
it
passes.
It.
E: So if the watch fails, you're either gonna get closed, which would be like a normal disconnection or disruption, or you're gonna get an error event back on the channel. So in general, I think, and maybe we didn't clarify this last time: anything that's watching the events come off the channel has to be inside a loop that reopens the watch.
E
If
it
gets
closed
at
the
last
resource
version
that
we
got
and
in
any
normal
test
scenario
that
I
can
possibly
imagine
you
will
still
get
deleted
vents
when
you
don't
get
events
a
dilly,
defense
or
edit
events,
it
doesn't
matter
what
won't
happen
would
be
if
you
get
delayed
for
like
five
minutes
during
a
test,
and
then
you
know,
like
your
process,
hangs
and
then
you
get
you
retry.
Yes,
events
may
not
be
available.
That
means
that
something
like
your
VM
got
paused
for
five
minutes
is
not
really
a
valid
test.
E
Case
scenario
like
we
don't
pause
the
test
processes
for
five
minutes
and
then
verify
that
they
work.
So
all
watches
have
to
be
in
sport
inside
some
loop,
which
says
like
the
for
loop
on
the
outside
is
like
open.
The
watch
at
the
resource
version
every
time
you
get
an
event
off
the
channel,
update
the
resource
version.
If
the
watch
closes,
you
got
to
go
back
to
your
outer
for
loop,
reopen
it
and
keep
testing.
E
Let's
come
tricky
to
write,
which
is
generally
another
reason
why
it's
discouraged
doesn't
mean
that
the
tests
or
them
the
process
is
wrong.
But
another
way
to
do
it
would
be
like
you
define
your
end
state
or
what
you're
waiting
for,
and
you
do
that
for
loop,
where
you
just
take
all
the
events
off
the
channel,
put
them
in
an
array
and
then
validate
the
array.
That's
probably
what
I
would
recommend
for
the
vast
majority
of
people
who
actually
can
benefit
from
testing
like
this?
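[Editor's note: the simpler collect-then-validate alternative E recommends might look like this. Again a self-contained sketch with hypothetical names; a real test would drain a watch.Interface's result channel rather than a plain string channel.]

```go
package main

import "fmt"

// drainAndValidate reads every event off the channel into a slice
// until the channel is closed, then validates the whole sequence at
// once, instead of asserting on events one at a time mid-stream.
func drainAndValidate(ch <-chan string, want []string) bool {
	var got []string
	for e := range ch {
		got = append(got, e)
	}
	if len(got) != len(want) {
		return false
	}
	for i := range want {
		if got[i] != want[i] {
			return false
		}
	}
	return true
}

func main() {
	ch := make(chan string, 3)
	for _, e := range []string{"Added", "Modified", "Deleted"} {
		ch <- e
	}
	close(ch)
	fmt.Println(drainAndValidate(ch, []string{"Added", "Modified", "Deleted"}))
}
```

Validating the collected array at the end sidesteps most of the reconnect bookkeeping, at the cost of only checking the timeline after the fact.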
E: The only reason this doesn't exist is because the people who went and did the first round of this were very opinionated and didn't like when people do this, which is basically saying they don't like a behavior of Kubernetes, and so they were a little too zealous in the "you shouldn't use this because it'll hurt you". I think we missed an opportunity to say, well, let's add a third method which shows you how to use watch events in a way that you can see a timeline of changes as the server sees them.
C: I've bounced around enough that I don't have time to get deep into the apimachinery stuff to understand what the best way of doing things is today, so that'll be really helpful. Because the problem I have right now is I can, like, eyeball the test and say I think it looks okay, and then, you know, you let it merge, and then we just empirically verify whether it's flaky, which I can do, it just takes a lot longer; it comes out in the wash.
E: This is the first week that I haven't gone through before the meeting and just looked at whichever ones... So yeah, I think the scheduler preemption test is fine, so I think this is a good one once it gets a "looks good to me" from somebody who's looked at it, which I think it... it doesn't have a "looks good to me" on it yet, right?
C: Yeah, and for what it's worth, the other PR that's sitting in the needs-approval column that is approved: it was discovered, I don't know how many months ago, that there were some conformance tests that relied on the kubelet's API in order to pass. We decided that was a bad idea and, you know, changed our policies to say nothing should use that. So I went through and modified the existing conformance tests that break that rule to stop doing that. So, as of v1...
C: If you can dig that up, that would be helpful. I pinged you for the node test, which was about pods being submitted and terminated gracefully, because they had the most history, whether they said it looks good, and then I think somebody from SIG Scheduling for the predicate test. Both of these tests were verifying the kubelet's view of what pods were on the kubelet, instead of the control plane's view. Right, okay.
A: Right. There's a UI glitch: if you click on the endpoint and then you reload the page (notice the URL is shareable), you get the right numbers; there's 157. And then if we click over here, we'll scroll down and see ours, and then reload the page, you can see it's 165. Okay, so there's where our 11 done comes from.
A: We can, if somebody else... I mean, yeah, I don't have all the automation in place, so it seems to be a bit manual still. And I think in the meeting, the discussion we were gonna have: we kind of discussed failure modes up here, and these were the notes we had before, so I think we did well. Any other comments before we give everybody three minutes back?