From YouTube: 20200616 SIG Arch Conformance
A
Welcome, everybody: this is the SIG Architecture conformance office hours meeting for either Wednesday or Tuesday the 16th, depending on where most of you are; I'm in New Zealand. I am Hippie Hacker, your host. There is a Google Doc that I will share, or if someone could share the link, that would be great; I'm going to share our screen. There's also an attendees area; it would be great if you want to add yourself there, and speak up if you're having trouble adding yourself.
A
One of the first things I want to get to: we have a lot of accomplishments that happen here and throughout the week, and one of the fun areas of the doc is the shout-outs area. Also, if we have anybody new on the call who wants to introduce themselves and say hi, we'll probably take a moment to do that as well.
A
All right, I have a shout out for Liggitt. Any time he promotes new things to GA, he always brings the conformance tests with them, and I super appreciate that. We got fifteen new conformance stable endpoints added that came along with the tests. We had one untested endpoint in 1.18, and now there are 15 total tested endpoints that are conformance endpoints, and that's all thanks to Liggitt.
A
Our first conversation is around our new watch tooling. Caleb, behind me over here (Bobymcbobs), has been doing most of the hard yards on that, and I've been assisting. We have a current PR open, and we've captured most of that stuff from a previous comment, which I can bring up from a while back, and here's the current issue. The summary is what we were looking for: a watch that had... I will go over to the actual PR on the recording screen.
A
This goes back to the 13th of May, and it had been ongoing before that. Each meeting we try to capture our action items and the consensus at that time, and this is what we had set up back in May. We wanted to make sure that if things have changed, it's really clear what those changes are, so we can get a clear definition of done on that.
A
This is the current one, so it's been quite a long conversation through here, and a lot of points have been pinned on this as it's been going through. So I want to make sure that we've addressed all concerns, and that anything we haven't addressed we come to a consensus on, as far as the way forward.
C
I'm not annoyed at any individual in particular, but I am also annoyed with myself. My annoyance comes from the fact that I feel like there's a lot of complexity being introduced by this PR and the resulting tests that use it. Now, it very well could be that I'm part of the reason this complexity has been introduced.
C
Part of my problem is that I have had less and less time to engage with this project over the past couple of weeks, or months if you want to call it that, and so I'm concerned that, with feedback from John and Clayton and myself happening at two-week intervals, you are not getting the appropriate, consistent feedback. And so if this is me doing yet another redirection, I'm not happy, you know, like: just stop. But okay, let me try and restate my concern.
C
The PR is about updating and improving the replication controller test. I had lost track of why we decided this tooling is needed for that test, like how it would improve the flakiness of the test, and the test itself is now a couple hundred lines long. Again, maybe I just chalk that up to the velocity of the work, but it feels like we're trending in the direction of making these tests longer and introducing more complexity, for reasons that I can't quite grasp or understand.
C
Nobody seems to be providing either review or bandwidth but me, and so that's part of my frustration. I kind of wound up, like... I think I'm just rambling and I'm not sure if I'm being very clear, but I want to make sure I unblock you and allow you to move forward. But so...
C
Fine, I guess. I kind of don't really understand this code anymore, and I don't understand why we have it anymore, but I'll rubber-stamp it to allow you to merge. Interrogate what's going to grow out of this, though: as we start writing a bunch of new tests and copy-pasting a bunch of stuff around, are we going to continue to use this pattern, and is it really good?
C
It's a bummer, for me at least, not to comprehend what we're doing and why, and that leads me to feel concerned that there are going to be more tests that are difficult to understand. But hey, again, if I'm the only person here who seems to be dragging his feet, and you are all very clearly aligned on what we should be doing and why, maybe I should stop. You know, stop dragging my feet, or stop letting perfect be the enemy of good.
C
We took that guidance from Liggitt, where he looked at the issue where we proposed how to do this, and he was like: that's cool, but what if there are long pauses or delays? What if we retry this? And so now we're retrying everything. This also kind of simulates the behavior of, thinking about it, flake attempts.
C
So I don't know if y'all remember or are familiar with the fact that Ginkgo, the testing framework that we use for this, way back in the day added a flag upstream for flake attempts. What it would do is, if a given test case failed, it would just retry the test again, and if it succeeded that second time, on that second flake attempt, it wouldn't report a failure.
C
We considered the test passed. It was weird, because we would end up with a JUnit XML showing that a test both failed and passed, so we ultimately decided to stop using that feature around December of 2019, because we thought it was masking real flakes. And it did force us to fix a number of flakes; there was an uptick in flakes in dependencies such as etcd. And so now, I feel like this practice of retrying the entire scenario looks a lot like that.
D
Would you please mind reiterating that? And just to be clear, I think it's real good that you're saying all this; I don't think that you should have any stress, and you shouldn't feel guilty for taking up all that time. But with that said, would you mind saying your first point again? Because there was something in it that I wanted to comment on.
D
I guess one important pattern that I would really like to see in tests is the whole thing of validating that the thing we just did actually did take place, from a separate check. In the case of patching, which I've had quite a bit of issue with in previous tests, you patch something, and then you need to update it or something, and then there's the waiting to make sure that the patch actually happened, without using a sleep, in a real-time situation.
D
So that means, in practice, we patch something in some situation, then we get that object back, and then with that data we maybe modify it and push it back as an update instead of a patch. Without validating in between, that may cause a resource version error or something, and then it can't do the thing that you're wanting to do in the test. That's the kind of thing that has been a developing pattern through the tests that I've written and the other stuff that's been done, so I kind of think...
D
It's a good practice to be able to do that, and that's kind of what these two functions in the watch tooling actually ensure, especially the watch-until-without-retry one; that's the one which actually handles the pattern I've just described. When they're used together, it seems to live out that idea of how tests should be put together, right?
C
Sorry, I mean, I guess, I don't know. I'm stuck sitting here like: all right, I just keep going back to, I should read this and try to comprehend it, if that's the blocker for y'all. My concern is just there: if I'm not taking the time to read this or comprehend it, then what does my LGTM even mean?
E
I want to say something similar. So let's assume that everything in the test directory of kubernetes was core Kubernetes; you know, if we are implementing something in the e2e test framework, let's assume we were actually doing it in the kubelet. I feel like this is how... and, I mean, I'm coming to this a little bit late to the party, because I only spent a couple of hours, okay, looking at the issues and the PRs. But actually, I feel like this kind of change...
E
...means we should have something like a KEP. Not necessarily a KEP, and not the whole process, but an actual document: we are proposing this kind of structure, this kind of change to the e2e tests, this is the behavior that we want, and this is why we want it. And then actually have it documented somewhere, like in SIG Testing, as a pattern for people to use.
C
If putting something like this through a KEP is too much process, then I guess my hope is, and maybe this is me just talking myself through actually merging it even though I don't understand it entirely: while this seems like it's adding a lot more code, I would hope it drives us to a place where we can start to reduce the lines of code necessary to write a given test, so that we can understand it. Yeah.
C
The only way you're going to figure out the fastest, most concise way to write tests is if you get feedback, and without those PRs actually getting merged, and without you seeing all this stuff run in CI, you're not really getting any feedback on whether or not this is working effectively at that scale, right? We might be getting some; there's anecdotal feedback from me as a PR reviewer that, like, hey, I find this kind of difficult to understand, and maybe there's...
C
...a situation... I don't know, I'm kind of having a difficult time deciding that, right. But there's a part of me that's just sort of inherently, like: because this looks like more complex code, I don't trust that it's going to be any less flaky, or that there aren't bugs that CI is missing. And maybe I should just, like, actually let this merge and loop back to the tests. I don't think this is necessarily mandating a way that everybody writes tests going forward.
A
Can I add a little clarification on the sequencing that I haven't heard mentioned yet? On the sequence retry, there were some worries about events happening in the middle that needed to be, I think the word was, squashed, or similar: where there are other events happening in the middle that aren't important, and we're just looking, over the duration of the test, for when it actually completed, ensuring that all of the expected events occurred. And there was not an easy...
A
...way to do that for replication controllers. So yeah, for instance, what Chris is saying: we declared the events that we're going to expect, and only the types are required, because we already know that the object is going to be an unstructured replication controller. We're going through and making sure that there's the Added event for it and the Modified events, this one four times, or...
D
All right, you know, that's this, but this is easily extensible. So if there are any extra parts that we need to add in, then we can just make that change; I don't know, but for now it seems to do the job. And yeah, again, it's extensible, so if we want to add any more checks in there, that should be reasonable.
A
We can leave that particular test open if we want to look at the others, because we've got, I think, 20 endpoints that are ready for review, ready for LGTM and whatnot, that are using this. This is just one example we pulled out, I think because it maybe had more points or something, but I gave you just...
C
So if it turns out these are very reliable and non-flaky tests, that's cool; maybe we could then work on reducing the line counts, and mimic that as we start to see common patterns and express those patterns more concisely. The whole large sequence of them, like watching the delete event thing, kind of weirds me out. For the replication controller tests, for example, I'd like... the specifying, yeah, I'm supposed to find that list of events somehow, but it's not immediately clear why.
C
...or what those events are for. I think it was just explained to me that each of those events corresponds to a call, like a patch call meaning only a Modified event following, or something, and so it seems like it would be clearer inline why those were put in to...
D
The sequence of events is like the API calls, the CRUD that you'll do in a test. That's part of the test itself, which is where the ordering actually comes from. So the events will always come through in order, as in: you create something and then you modify something, so we're just making sure that those things happened in the tests that you've written as well.
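The ordering check being described, where the expected event types must appear in order while unimportant events in between are squashed, reduces to a small pure function. The names and event strings here are illustrative, not the actual tooling's API.

```go
package main

import "fmt"

// containsInOrder reports whether the expected watch event types appear
// in observed in the given order, ignoring unrelated events in between.
// Extra Modified or Bookmark events in the middle do not fail the check;
// only a missing or out-of-order expected event does.
func containsInOrder(observed, expected []string) bool {
	i := 0
	for _, ev := range observed {
		if i < len(expected) && ev == expected[i] {
			i++
		}
	}
	return i == len(expected)
}

func main() {
	observed := []string{"ADDED", "MODIFIED", "MODIFIED", "MODIFIED", "DELETED"}
	expected := []string{"ADDED", "MODIFIED", "DELETED"}
	fmt.Println(containsInOrder(observed, expected)) // true: extras are squashed
}
```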
A
Are you sure the delete happened? And then what about a sequence of events? Making sure that we generally see the sequence of events occur, in order; making sure that all of those things happened in the sequence in which they were expected, and, if not, retrying the scenario. That was the stuff from that guidance, and so when we were going through trying to implement it, this is the flow we came up with.
A
Well, we want to make sure, for this sequence of required things, that we see the watch events, and having the wrapper around the scenario is what gets us that. Getting some options for trying to stick the retry inside the scenario instead seemed a bit difficult, given the way that scenarios work within the Ginkgo framework, and that's where I would love some other options. But I don't see super clear ones that present themselves, and if we do find them, it might be useful.
A
We're doing things on a whole-scenario level, and this was a simple wrapper to let us look at that sequence and ensure that things arrived in order, regardless of restart, regardless of... what was the other one? I don't know that it comes up often, but as far as flakiness goes it would be the watch expiring, or the watch, yeah.
C
I think I'm just sort of trying to mentally map this to the way I remember tests used to be written. So I keep mapping, like, watch-until-without-retry: oh, that's instead of a wait.PollImmediate, where we poll the system for our expected state; here we instead sit there and watch for our expected state. The reason we do that is because we think it will be fast, because we're using a watch. But we are then vulnerable to that watch being closed on us, or we are vulnerable to...
C
Okay, so then the reason we verify that all of the events arrived in the expected order... I guess I'm having a much more difficult time still comprehending why we do that. It seems like at each step of the way we're waiting until we see the event we expect to see, and so at the very end, if watch-until-without-retry is going to wait until it sees the Deleted event, that's fine.
C
Another thing about this watch tooling that feels like an anti-pattern to me: it looks like you're having to pass in a function that cleans up, so the scenario can be re-attempted if it doesn't complete successfully, and that starts to feel an awful lot like BeforeEach and AfterEach for a specific Ginkgo test, or something.
D
Yeah, that was found to be necessary due to the fact that the test, upon a re-run, may still have resources lying around. For instance, if you create a resource, you don't want that same resource with the same name already existing, so that's just the way of doing it. You could just pass an empty function in your test if you have nothing to clean up or don't want to use that aspect of it, I think.
A
What Aaron is pointing out here is that a lot of this is part of the framework of Ginkgo itself, and how Ginkgo has a BeforeEach and AfterEach that already do a lot of that cleanup, and the Kubernetes community's e2e testing framework extends that a bit further by doing some things like cleaning up the namespace and whatnot.
C
I guess there's just a part of me that wants you to let it fail, you know, rather than retry, since AfterEach does that sort of cleanup. Like I said, this seems to mimic that behavior, but we stopped using flakeAttempts, which was: you try it first, and if it fails, try again. This feels an awful lot like, for the most part, retrying a test if the test fails, and so it's cleaning...
C
You would get through the tests faster by not doing all of this retrying, but maybe it will be more resilient in the face of a heavily loaded cluster, and I'm guessing you don't have a way to reliably create a loaded cluster and see how things play out. It isn't even a great deal to do; maybe I should stop being a pain about the PR and see if, happily, it reduces the flakes.
A
For us, we have very specific tests that are using this framework that, hopefully, we can, you know, merge at the same time. They're not going to become conformance tests, but if they're flaky then we'll need to roll those back. I think what Aaron is suggesting is that we go ahead and merge, and then, as we see that they're flaking, we can roll them back, but in addition add some issues that we assign for where those concerns are. Is that right, Aaron?
C
Yeah. It's a lot of code. Like I said, I sympathize and empathize with y'all trying to write these tests. This can't be easy to write, and it's certainly not easy for me to review. I wish we could make everybody's lives a little bit better by more concisely expressing intent, in a non-flaky way, by the way.
A
Thank you for all of the notes, Aaron; lots of good stuff there. We also have some endpoints that are low priority, and I don't know that we have a concrete list of those. We do have a concrete list of endpoints that are not part of conformance, which we put together in an earlier call, and there may not be a formal low-priority list. That's okay, but I just wanted to get clarification, and we in turn brought that up in some of our prioritization efforts.
A
Okay, so there was stuff that we've turned off recently; it was a couple of endpoints. But I didn't know if there was a general direction for stuff that's low priority that we're not looking at currently, even though it is part of conformance, because it was too complex for now. I think proxy was one that we had initially shoved off, anything dealing with proxy, but I might pick that up again; I'm putting it below more low-hanging fruit, apparently.
B
The list I shared with you all, the open issues don't cover it yet. The idea behind this is: if we can create a list that we can share with the diff-tracking team, saying it's okay, for certain reasons, that this endpoint should be low priority, it would help us a lot if we can basically mark them off as "do not look at this for the moment". It's not that we're not going to look at them, just...
B
What do you think about these proposed priorities? Then maybe make a PR and just document that these things, for these reasons, are low priority. And, as I shared in our discussion, we really don't want to go to Excel for this; as tools go, that's what the YAML output of the tooling is for, and it reads easily for everybody to look at. So, yeah.
A
The last two that I want to make sure we get to are the requests to move these from triage into, sort of, backlog. The first one is plus-16 endpoints. This was brought up by Steven, and it's a fairly easy one, I think. We've got to get at the different API groups that are currently not hit by any endpoints. The reference is pretty straightforward: go through the list of APIs...
A
...iterate the list, and hit the get endpoint paths; here's the Go mock-up. Again, it's pretty straightforward; I think it's around 14 lines that result in all of these checks, and it does result in an increase. This is one that's fairly simple, not a lot of complexity to it, and it is there just to make sure that what we retrieve via the list of APIs that are advertised as available actually is available.
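The check described above reduces to a simple set difference. This is a sketch with made-up data, not the real tooling's types: in the actual test, discovery would supply the group/versions and the coverage tooling would supply the set of hit endpoints.

```go
package main

import "fmt"

// untestedGroupVersions returns the group/versions reported by discovery
// that no conformance test has hit yet. Inputs are plain strings here;
// a real test would build them from the discovery client and from the
// endpoint-coverage data.
func untestedGroupVersions(discovered []string, hit map[string]bool) []string {
	var missing []string
	for _, gv := range discovered {
		if !hit[gv] {
			missing = append(missing, gv)
		}
	}
	return missing
}

func main() {
	discovered := []string{"apps/v1", "batch/v1", "node.k8s.io/v1beta1"}
	hit := map[string]bool{"apps/v1": true, "batch/v1": true}
	fmt.Println(untestedGroupVersions(discovered, hit)) // [node.k8s.io/v1beta1]
}
```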
C
That certainly sounds valid; that's my reaction. This sounds like something where, if I were in your shoes, I'd ask SIG API Machinery about this test. Number one, they might have opinions on how to write it, if no such tests exist. And number two, I am a little surprised if API Machinery doesn't have test coverage over discovery. Like, I know it's not good that we haven't had a test for this already, but the fact that they don't would feel surprising.
A
We'll do that follow-up with SIG Architecture. And this last one is one endpoint; it's also from Steven. Thank you for this second one, Steven. This is pretty much just "get the code version". It's an untested endpoint, and the mock test is pretty straightforward: verify that the major and minor versions match what we're looking for. It is currently completely untested, so that will give us at least 17 points if API Machinery is happy. I don't think there's a SIG to hit for "get the version", though; I would...
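A sketch of what that mock test might verify, assuming the JSON shape of the /version response (the `major`, `minor`, and `gitVersion` fields match what the API server returns, but the helper itself is hypothetical).

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// versionInfo mirrors the relevant fields of the /version response.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

// checkVersion verifies that major and minor are present and that the
// gitVersion string is consistent with them (e.g. "v1.18.3" for 1/18).
func checkVersion(raw []byte) error {
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		return err
	}
	if v.Major == "" || v.Minor == "" {
		return fmt.Errorf("missing major/minor in %+v", v)
	}
	// Some builds report a minor like "18+"; strip the suffix before comparing.
	prefix := fmt.Sprintf("v%s.%s", v.Major, strings.TrimSuffix(v.Minor, "+"))
	if !strings.HasPrefix(v.GitVersion, prefix) {
		return fmt.Errorf("gitVersion %q does not match %s", v.GitVersion, prefix)
	}
	return nil
}

func main() {
	raw := []byte(`{"major":"1","minor":"18","gitVersion":"v1.18.3"}`)
	fmt.Println("version ok:", checkVersion(raw) == nil) // version ok: true
}
```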
A
I think it's part of how our golang library has written it, actually. The same thing happens when we're trying to get at other endpoints that are exposed via the API: because of the way the tooling is written, it doesn't hit them, but they are part of the API itself; they're just assumed to exist, or in the exploration process discovery finds them in a different way. Okay, I just realized we're close to time and I wanted to respond; are four more minutes okay?
A
Well, yeah, that sounds good to me. We'll move those from triage into backlog, so we have a nice bit of backlog, and with that, with the replication controller PR and the watch-retry tooling, that's 20 points that should go on the board to be soaking. We'll look for those to be non-flaky, and if they're done flaking in the two weeks, that will be really good, and we'll have quite a few points in the backlog.
A
I'd say about 30 right now, off the top of my head, so we're on track for this. And then, to get another shout out in, let me scroll back up: thank you for all those notes, by the way, and to Liggitt, who did it in his PR as well. I'm looking forward to the bots; we have some work progressing on that, but nothing I want to share here yet. Okay.