From YouTube: Kubernetes SIG Cluster Lifecycle 20180710
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.py47inrndi23
Highlights:
- Brief discussion of SIG Charter
- Registering for SIG sessions at KubeCon China and KubeCon NA
- kubeadm API transition (v1alpha3 or v1beta1)
- Control plane component timeouts are an issue on slower devices when using kubeadm
- Discussion around what the SIG plans on supporting for different container runtimes
A: Hello and welcome to the SIG Cluster Lifecycle meeting for Tuesday, July 10th, 2018. Today I put a couple of quick FYI items at the beginning of the agenda. The first is that the steering committee has been poking folks to get their SIG charters in. I looked at the spreadsheet and we do not have a PR open yet for a SIG charter. We do have two reviewers assigned from the steering committee, Brian Grant and Joe Beda. So, unless anyone else wants to do it, I was going to go ahead and take the SIG mission statement that Justin Santa Barbara wrote up a little while ago, recast that into the SIG charter template, and start a PR. At that point I'd love for people on the call to take a look and make sure it looks okay, and then we'll assign it to our steering committee reviewers.
B: Can you wait on that, maybe for like a day? I have to go through it this afternoon — Phil, myself, Aaron, and Eric. We're reducing the template, because we have seen a bunch of issues with it, basically repetitiveness in the roles and responsibilities. So we're trying to see what we would like and what we don't like, and trying to get that fixed up. I can probably give you concrete and better feedback in one day.
A: That would be great, yeah. It looked like we were supposed to be doing this, and they were sort of pestering people that hadn't done it yet, so I thought we should go ahead and do it. But if it makes more sense to wait a little while and let other people do the first couple of them, then we can certainly wait — even a couple of weeks, if it's not urgent.
B: Why don't we just punt for a week and talk about it next week?

A: Sounds great to me. Awesome. The second item is: we got an email from the CNCF asking if people wanted to sign up for either SIG introduction or SIG deep-dive sessions for the two upcoming KubeCons, the first being in Shanghai, China, and the second being in Seattle for North America. I was going to respond and grab spots for up to all four of these sessions in the two different locations.
A: Okay, so I'll definitely sign up for the two slots for Seattle — I think that'll be pretty easy for us to staff. I'll hold off on the Shanghai one, although the due date for that one is much sooner. So if anyone is interested in doing that, please poke me or Tim in Slack and we can get that spot reserved.
A: At this point it doesn't seem like we're going to have enough people there to warrant signing up for sessions. All right, the next item: Fabrizio, who said he wouldn't be able to make it today, linked to a tracking issue with the latest contributions from Lucas. He was here last week — is Lucas officially out? I saw on Twitter that he was about to join the military, so I wasn't sure what his status was.
B: Yes, as of this week. I'm aware of the details, because I was on review for a lot of that stuff. It's basically the transformation from v1alpha2 to either v1alpha3 or a proposed beta. We still need to determine whether or not we can actually get all the way to a proposed beta, because of some of the dependencies on other component config items. I will happily be involved in this work, and I will loop Liz in on that stuff as well.
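As context for readers, the v1alpha2-to-v1alpha3 transition discussed above shows up as a change in the `apiVersion` (and `kind`) of the kubeadm configuration file. A rough sketch of the before/after headers, assuming the v1alpha3 naming that was being proposed at the time (fields shown are illustrative, not a complete config):

```yaml
# Pre-transition kubeadm configuration header:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
---
# Proposed post-transition header (v1alpha3, or v1beta1
# if the beta graduation discussed here lands):
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
```

In the v1alpha3 proposal the single MasterConfiguration object was also being split into multiple kinds (e.g. InitConfiguration plus a cluster-wide object), which is part of why the component-config dependencies mentioned above gated the beta graduation.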
B: That one's pretty straightforward — I agree with all of the details that are there. I think we just need to be able to run through and execute on some of them. I know he's already done portions of it, so I'd have to verify from the document and the tracking issue what has been done and what still needs to be done.
B: Liz and I were actually talking this morning, and this afternoon we were going to go through some of the backlog and triage some of the issues. That's our plan later on this afternoon; if folks would like to join us, we're happy to loop them in. Please just send me a message on Slack and we can walk through some of the backlog together, as a broader subsection of the SIG that's interested in executing on pieces of this.
C: Is it going to be a Zoom meeting?

B: Yes.

C: What time, approximately, from now?
B: It's up to the people who want to contribute and work on this, to be honest. I originally triaged and looked at it, and from a company perspective at Heptio, we don't actually have any Raspberry Pi targets — I don't, and most of the other folks don't. Some people do, but as their own personal things, and it's not something that we are aggressively or actively pushing. So from a community perspective —
C: So I guess the question here is — I can take care of this; it's not that my company wants me to contribute. I had basically seen the demand for this feature, so I can work on it, but I don't know where to put it. Is it going to be a flag? Is it going to be a command-line option, or an environment variable? Basically, we need some decision-making on how to solve this.
B: You can always put forth an idea or proposal there. There are like three proposals in that issue, if I recall correctly, and it's mostly about delaying the liveness checks, increasing the timeouts, or having a variable that you can set as part of it — I think there were a couple of different ones. But from my perspective, I can't even see the behavior, because I don't have a super slow Raspberry Pi; this is an issue with the Raspberry Pi 3.
C: I mean, I'm not sure this is a good option, but potentially something like that could be added in the alpha and backported to a previous version, so that people can use this feature for a while until we are ready in the config. Getting it into the config right now is kind of difficult, because we are refactoring the config, and I was also thinking about an environment variable, but that's not really a good option either.
B: I would say read through the details, because it's a long issue — read through it in detail. What they basically did was carve out sections of the manifest to allow an option override. Ideally, what they want is a config override knob for this particular type of behavior, but I don't know — I haven't tested whether or not you can actually get that indirectly already, or whether you need to explicitly add this to portions of kubeadm.
C: Yeah, I don't think it's possible right now — the manifests are hard-coded, and you cannot adjust these timeouts for the manifests. So Lucas said that there should be a waiting flag or something: you'd pass a coefficient as a parameter to kubeadm and it would basically add a value to all the timeouts in there.
A: I mean, one thing I would say, looking at the code where we set these values: it seems like these values were copied over from the bash scripts, and I don't know if we have any notion of why they were set to these values in the first place. Somebody on the comment thread suggested just increasing them globally — and would that actually be bad? I don't know that it would be bad.
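For reference, the hard-coded values under discussion live in the `livenessProbe` block of the static pod manifests that kubeadm writes under `/etc/kubernetes/manifests`. A representative kube-apiserver probe of that era looked roughly like the sketch below; the exact numbers are illustrative of kubeadm's defaults at the time, not authoritative:

```yaml
# Excerpt from a kubeadm-generated kube-apiserver static pod manifest.
# These values were fixed in kubeadm's code; on slow hardware (e.g. a
# Raspberry Pi) startup could exceed them, so the kubelet would kill
# and restart the API server before it ever became healthy.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443
    scheme: HTTPS
  initialDelaySeconds: 15   # wait before the first probe
  timeoutSeconds: 15        # per-probe timeout
  failureThreshold: 8       # consecutive failures before a restart
```

The proposals in the tracking issue amounted to either raising these numbers globally or letting users scale them, which is the trade-off debated below.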
A: We might want to check with the API Machinery SIG and ask what the health check for the API server should actually look like, and whether it should be changing over time. People in the comment thread said it used to work on 1.8 and now it doesn't work on 1.10, and it's quite possible that the behavior of the API server — especially its startup time — changes as they add new functionality, and then having static values in perpetuity is actually the wrong decision anyway. So, Tim —
A: You also said that this doesn't matter because you folks don't have ARM as a target. It's also possible that the old values we have in there are incorrect for Google or AWS as well — maybe they should be smaller, so that we can detect failures more quickly. I think a lot of the time these probes just seem to sort of work, and so we don't look too closely at what the magic values should be.
B: I would really like verification of this for 1.11, because with the split of the config that we have now, there shouldn't be — I always thought that if the time delay is multiple seconds for a health check on the API server, just for the server starting, we're already past that point. I can understand the delay on the actual image pull, but we have that as a separate step.
A: I guess what I'm saying is, it doesn't seem like this is a place where we should necessarily add parameterization and make it so everybody can pass custom values. We should figure out if there is a correct prober config or, if there's something fundamentally wrong with the startup of the API server, that needs to be fixed instead. This is pointing to a problem, but the solution isn't necessarily to add more knobs for the user.
C: Okay, I'm going to ask the appropriate people to see if there's an actual problem, and I agree that not exposing such arbitrary parameters to users is a good idea.
A: Yeah, I mean, I could see an argument that if we actually come back from the API Machinery SIG and they say this should probably be configurable based on your platform, then maybe we need to create some parameters. But to date, other than these sort of older Raspberry Pi machines running newer Kubernetes — so you're ending up with a larger time skew — it doesn't seem like this is an issue.
A: Right, so I think the reasonable path forward here is that we should talk to the API Machinery folks. It might be reasonable to just set the defaults higher — maybe the initial delay is higher — so that it works in more cases, if we don't think that's going to be detrimental to the places where it is working correctly today. The downside of making them larger is —
A: Well, the only thing that a large value here will do is make it take longer for the kubelet to kill the API server if it's not coming up healthy. But if you have an automated system that's using kubeadm, it's probably also waiting for the API server to become healthy itself, right? Like the Cluster API right now: when it creates a cluster, it polls the API server to wait until it's up and running. And I would expect other automated systems do the same.
B: As I mentioned, the backlog is mostly groomed. I know we did a first pass through it last week, and Lubomir added a lot of the things that we did from planning. I'm going to go through the stuff that Lubomir added to try and see if we need to assign some other folks that we know can dedicate time to it. We're going to do that this afternoon, as I mentioned earlier, so I think we should be mostly ready to go for 1.12. There are broader, higher-level discussions, as I see —
B: Ben is here, so — okay. There are broader-level discussions about what we plan on supporting for the CRI, and what that even means not just for kubeadm but for Kubernetes, because there's a lot of marketing and there's not a lot of meat behind some of the statements that are made. Typically, Docker 17.03 is the most validated version of the CRI that we verify across the entire space, from kubeadm deployment all the way through to the actual Docker instance. For that we have a large swath of tests that are run.
B: We have the upgrade tests that I run, and people can independently run scale verifications against them. But none of that rigging exists for kubeadm deployments of different CRIs, and I don't exactly know how we are going to, quote-unquote, pontificate that a CRI is supported by Kubernetes without us having a lot of this basic plumbing and rigging in place. I don't know how we can actually make some of those statements.
B: Well, the question is: there's this weird mismatch between the marketing that exists on the Kubernetes blog, which was created by other parties, and what actually exists. We get issues filed against kubeadm for which we have no test signal. So there's this weird mismatch of expectations and reality because of the marketing that's been published.
B: And, you know, Lucas did his level best in the changes he made to the config to allow that type of support for a mixed-CRI type of environment — the ability to publish the runtime back into the node metadata, so that when you do upgrades, that can fix some of the problems. But I don't know who is on the hook to make this thing go.
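To make the "publish the runtime back into the node metadata" point concrete: kubeadm of this era let you select a non-Docker runtime by pointing it at a CRI socket in the configuration, and recorded that choice on the node so upgrades could find it again. A hedged sketch under the v1alpha2 API (the field names are from that era and shown for illustration):

```yaml
# Selecting a non-Docker runtime (here containerd) in a kubeadm
# v1alpha2 config by pointing kubeadm at its CRI socket.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
```

The complementary change discussed here was kubeadm annotating the Node object with the CRI socket it used (an annotation along the lines of `kubeadm.alpha.kubernetes.io/cri-socket`), so that `kubeadm upgrade` on a mixed-runtime cluster could discover per-node which runtime to talk to rather than assuming Docker everywhere.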
C: I think the demand here comes from companies and individuals who want to use something other than Docker. I would suggest that these folks invest in contributing through cri-tools and basically help us provide support for all the container runtimes, because right now we are kind of tight, and this is open source — if people feel they want to push something forward, they should start contributing to the whole ecosystem, and not only to kubeadm.
B: There's a fundamental problem: the marketing was mismatched and was published without other stakeholders being involved as part of that publishing. It makes it sound like containerd is first-class — and for better or worse, it's a ways away — and that CRI-O is first-class; I know Red Hat has the apparatus to verify that in place already, because they ship OpenShift with it, but that doesn't mean that we have the apparatus for kubeadm deployments all the way through.
B: So this marketing mismatch is a source of frustration and of the sorts of issues where I don't know who is on the hook, or what buttons we can press other than going to SIG Architecture and other people to try and make a forcing function somewhere — or who is actually actively in the community trying to make this go.
G: I'm not sure about all that, but I am in contact with a few of these folks and discuss testing with them regularly. I know they are contributing to crictl. There are a couple of folks near me that are working on containerd stuff, and I know they have a pretty large dashboard tracking containerd. It sounds like the main thing being asked for here is kubeadm testing specifically.
A: I guess one interesting thing here is that as we get different variations — we have different network overlays, we have different container runtimes — each of these pieces of Kubernetes that we break out into an interface with multiple implementations increases our test space exponentially, unless we can actually test them independently and have some confidence that fitting them together later will work.
A: So if the node team tests each of the CRIs that they want to support themselves, but doesn't add tests for kubeadm, and then we have users that use one of those CRIs — which the node team says should work — with kubeadm and it doesn't work, then it seems like we have a failure in testing somewhere. And adding a whole other test suite for kubeadm for every CRI, and then multiplying that by every network overlay — Calico, Flannel, etc. —
B: Note that they ripped out the Calico tests, but we do go through a validation for Calico every time we do a quick-start release, because we kind of recommend it. So we have signal that we feed back, and we've done updates to the docs periodically over time. From that perspective, there is some signal feeding back into the system, and the same with regard to Weave.
A: Yeah, I mean, in some ways I think we're trying to be vendor-agnostic in kubeadm and saying we'll support whatever you'd like to use. But for our own sanity and supportability, we may need to say: here are the things that we actually test and validate, and the architecture is such that you can use other things, but it's sort of more at your own risk — unless you're willing to step up and provide the test signal yourself.
A: I think if we had clear documentation saying these are the tested, validated configurations, and here are the other configurations that may or may not work, then if somebody filed an issue we could say: oh, you didn't see our support matrix — go check it out; and if you're outside of the range of things we support, you're more on your own, unless you want to contribute to this SIG and create the signal to ensure those things keep working in the future.
A: A lot of times it's about whether it's aligned with the direction the SIG wants to go, or the direction of the companies behind the people in the SIG. And if you, as a user or as a contributor, want to go in a further direction — because it's open source and of the community — you can then push in that direction on your own initiative. I think that's what Tim was referring to.
B: Or just say there is a distinct contributor path — you know, if it's not assigned and it has the Help Wanted flag, I think setting the expectation bar there is clear. But maybe we should be more explicit, like: this issue has been deprioritized by the contributing members because they don't have the time or bandwidth to look at this particular issue — and otherwise, typical open-source etiquette: patches welcome.
A: Okay, so I think the action there: Ben, you're going to talk to the folks on the node side who work on containerd and see if they can help add some test coverage for containerd into kubeadm, because containerd is one of a small number of what we believe to be first-class CRIs that we'd like to have broad support for in our test rig. Is that a good summary?
C: — from, like, Katherine Bernards, and probably Zachary Sarah, who is — I don't know his SIG exactly; he is the maintainer of SIG Docs right now — and I think those people would know. I can ping them, because I'm seeing recent blog posts about CoreDNS and IPVS currently being released, so I'm guessing you're talking about the same type of posts. Yes.
C: Basically, there is an issue there, because we don't have technical reviews for the blog posts. SIG Docs reviews them for typos and makes editorial changes as well, but I don't see a lot of technical people there who can argue about an issue. Today I saw that somebody was talking about C being a language which isn't memory-safe, and I said —
B: I think any blog post perhaps should go through — I mean, we're kind of stepping over our bounds as SIG Cluster Lifecycle here, but I think SIG Architecture is a potential review point. I was just trying to find out who put this in the pipeline, and then I'll go back to the other SIGs — mainly SIG Arch — that need to be in the loop to review some of these blog posts before they get out the door for a release.
A: Well, I think we have that for the main website documentation, right? I do see website doc changes get pinged for technical review — you have a technical sign-off. It sounds like we're just missing that step for the blogs, and we already have the list of people who maintain the website documentation for different features. So if someone's writing a blog post where they talk about kubeadm, they should find a kubeadm maintainer of the website documentation and have those people do the technical review.
G: I think that already exists today — the docs team does a lot of that. I also don't know that all these things are necessarily invalid. I do think there's somewhat of a failure here with kubeadm, but there's a lot of other testing going on, and this was a multi-company post — I'm looking at it now. And to the other point — it's related — being able to write memory-safe code in a language does not make the language itself memory-safe.
B: I do think it needs to bubble up to the point of SIG Arch, but again, we're kind of overstepping boundaries in some of this space. My point of order was that we should definitely start to provide signal back to these other SIGs — including Docs, Arch, and Node — that they kind of pushed hard on the CRI conversation without getting enough feedback and without realizing what just happened across the other SIGs.
G: I have one ask: can you point me to some of the issues that popped up, so that we know what kinds of things are being missed in the test coverage? Then I can also get a better idea — like Robert was talking about with this huge matrix — of which points really need testing and integration. Absolutely, and we should probably file an issue.