From YouTube: Kubernetes SIG Testing 2017-09-19
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A
Okay, hi everybody. Today is Tuesday, September 19th. I'm Aaron Crickenberger. This is the weekly SIG Testing meeting. On the agenda today: I just wanted to give a quick shout-out to a couple of shiny new things; I wanted to have a brief discussion about testing on-call stuff that sort of got raised during SIG Release. Today Srini wants to talk a little bit about documenting conformance tests, and then I believe Erick Fejta has a bunch of shiny ongoing work that he wants to update us on. If I've left anything out, or if we have time at the end, please feel free to add to the agenda, which I will paste in the chat here. Just so shiny.
A
Cron jobs, or Korn jobs I guess. So conceptually they came up with the idea of putting together a real control plane but a bunch of hollow nodes, and the issue has been kind of sitting around for a couple of weeks now awaiting input from SIG Testing. So I asked in the sig-testing Slack channel like a week or two ago and didn't see much traction there, and so they're sort of noticing that, by virtue of lazy consensus, maybe we were totally cool with this. But I did try and remind people that we did have a big discussion in the past about what integration testing really is, and whether or not it made sense to do integration testing against sort of a lower-fidelity cluster. We were talking about doing this with a Docker-in-Docker cluster; this particular pull request is talking about, you know, a real cluster but hollow nodes, sort of the same way Kubemark works, and just whether or not we have any opinion on it. I mean, at this point it seems like we haven't weighed in, so, you know, I'm not inclined to prevent somebody from doing work.
A
Next up, Prow's been having a lot of work done to it recently. The documentation for it is improving quite a bit, so we now have an announcements section.
A
We also have sort of this architecture doc, which started out as the life of a Prow job. And we also have a getting-started doc now that actually teaches you how to spin up your own cluster, what all the Prow components are, and how you might go about setting this up to be pointed at your own GitHub organization and your own repos, however you want.
A
So you can see, for a single commit, the probability that that commit is going to flake on a single job or potentially all of the jobs, meaning all the presubmit jobs that run when you open up a pull request. We've also now expanded the flakiest-jobs tab on the dashboard here to show not just the job name, but also the flakiest test that is causing that to be the flakiest job. So when people ask, like, where can I help? What's the flakiest test, where I can make the biggest impact by dedicating some of my time to fixing flaky tests? You know, we can show them this shiny graph, and we can show hopefully how over time those lines will go down if we're doing a better job of fixing these flakes, and we can use the flakiest-jobs table here to point them at which flake for which job they should go take a look at.
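(As a hedged illustration only, not necessarily how the dashboard computes it: if each presubmit job had an independent per-commit flake probability, the chance that a commit flakes on at least one of its jobs could be aggregated like this.)

```go
// Hypothetical sketch: combine independent per-job flake probabilities
// for a single commit into the probability that the commit flakes on
// at least one of its presubmit jobs: 1 - product(1 - p_i).
func flakeOnAnyJob(perJobFlakeProb []float64) float64 {
	passAll := 1.0
	for _, p := range perJobFlakeProb {
		passAll *= 1 - p
	}
	return 1 - passAll
}
```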
A
So roughly the way I tried to phrase this during the SIG Release meeting was that we're all actively paying a lot more attention to tests right now, and there are a lot more eyes on making sure that pull requests are tested and can get merged, and making sure that the submit queue and all of the test infrastructure is healthy. You know, we're actively looking.
A
You know, yesterday, two days ago, Jenkins was like down, didn't seem to be running any presubmits, and I divined this by looking at, you know, the history page of the submit queue, and the fact that Prow didn't really have any log links for any of the presubmits, and then Erick sort of gradually discovered it.
A
Occasionally look at the last commit time for the last commit to master in kubernetes/kubernetes. And so there's this whole thing where we as a community collectively are trying to figure out whether or not things are okay, because we've sort of lost that big red box that flashes up on the submit queue saying there is a problem. But the other thing is, okay, this just kind of slows us down and causes more context switches. So I'm trying to find a compromise for making sure there's a little more visibility on when there is a problem, when and how the problem is being worked on, and how the community can be helpful here. So an example of the community being counterproductive is if we didn't know that Jenkins is down and we all want to, paraphrasing Jeff here, live the life of a Kubernetes developer and just /retest all day long.
A
That's actually going to cause, you know, the Jenkins queue to build up and up and actually make the problem worse. But if we knew that there was a problem and that /retest-ing was going to make the problem worse, we would have backed off as a community. So it's trying to find a way to bridge that gap.
E
I think that part of the problem here is the existing test-infra on-call rotation, which is all internal to Google. There's sort of a high-bandwidth communication channel that we're used to using amongst ourselves, and I think we need to figure out ways of getting that information out to the rest of the community. So part of that, yeah, maybe, you know, trying out: could we have a daily stand-up? We could try turning that into sort of a community version of that, at least for the short term.
E
Until we get back into a healthier state, that's one thing to try. I think we just need to develop ways of communicating: you know, strengthening the SIG Testing communication channels, developing those, and having us be better practiced about using those channels. So I think one thing is, sort of, you know, there's this meeting, but that's only once a week. I think you have a good idea about having an issue that people can go to; I think we need to develop that practice.
E
I think the problem is you opened that issue yesterday, but nobody, none of us, really updated it. We kind of are just talking to ourselves, which is useful for us but not useful for the community. We need to be posting some updates somewhere that the community knows how to find and that we are good about sending updates to.
E
Yeah, I mean, yeah, you know, and eventually, I don't know if this is where we want to talk about extending the on-call rotation, but something that would be useful for us to know is what information people want to see, and what ideas people have about how we should be communicating that information to people.
A
Yeah, I mean, having clear, unambiguous signals, so you know whether things are working or things are not working, is difficult; I understand, and efforts are being made towards that. I mean, one way of looking at it is: it's not necessarily a new normal. It's a thing that's necessary when we are in a release that seems to be bumpy, for whatever reasons, which is why Jaice was proposing, like, just like we have a daily burndown meeting that's focused all on the release, could you have something?
A
So I think that, I know I said it was a goal for 1.8 and then we all went off and did our own thing, but I'd like to revisit whether or not we can do that for 1.9, especially since there are people who are actively working on Gubernator and the submit queue outside the walls of Google now. I mean, I think if the community of people had access to the logs for Prow and mungegithub, they could more actively triage and diagnose what's going on, rather than having to throw up our hands and say.
F
Just as a commenter, I think it would be really good to frame the question: how do we assess the health of a Prow cluster, or, like, the Prow components on a Kubernetes cluster? I think as an operator of a cluster that sits outside of Google, I have the same questions. Can I very quickly look to see, you know, what was the last time I got a webhook from GitHub? You know, how many of my jobs are currently running, how many am I about to go run because of concurrency limits?
E
You know, the idea that we should have more metrics, like when was the last time we merged something, or when's the last time we got a webhook, or how many pending jobs, you know, how many pending builds there are for each job, sort of metrics like that. Ryan and Clinton are working on adding some of those metrics, and hopefully we'll have that first one soon; the first one we're starting with is how long it's been since we merged something. But yeah, there are definitely lots of others that we want to add.
E
How can we be better communicating with each other, and how can we be better communicating with the community? Like, I mean, I can sort of make up random things, but if people have other opinions about things they want to see, I would be very interested in, you know, incorporating that feedback into what we start doing. Do we have any?
F
A sort of more general answer to that question: if I'm running an application, and let's say I'm publishing some metrics, or let's say, you know, we have some sort of machine-readable format on the health of the system, how do I then publish a status page or something? Maybe something that SIG Apps would know, because these are Kubernetes applications, right? Like, do we have a solution for that question?
B
So there is a SIG monitoring meetup that you can go to, and there is sort of a roadmap-type doc that you can look at. There are plans in that direction, but it's still pretty fluid, and I think a lot of the effort and a lot of the mindshare is leaning towards Prometheus at the moment.
F
Yeah, and I mean that seems reasonable as well. Like, if we could even just have some semi-bare-bones status page that's automatic: you know, we have an alert, Prometheus says, oh, we haven't done X in five minutes, let's put a little yellow triangle in there and everyone can see it. I thought that would be like one part of the message, anyway, I think.
A
Yeah, so I guess, given the half hour we have and other people want to say stuff, I'm not sure this is the place to hash it all out. But I think, if it's cool, that daily on-call stand-up, if it doesn't intrude on people's schedules too much, I would be curious to see how that works for the time being and see what we get.
A
I personally believe that on-call, first and foremost, is a communications job; only secondly is it actually, like, resolving the issue. So, you know, as a community member, if there's anything I can do to help broadcast information or clarify what's happening, I'd be eager to do that. I guess I'm hearing that there's also an interest in allowing the broader community to participate in a technical manner, which we should look towards for 1.9, but maybe in the interim this on-call stand-up will help us with the communications issues.
I
In order to do that: I got this information last week, and I thought about it for a couple of days, and I have a couple of ideas that I wanted to run by you, to see where this tool fits and what the tool does. We can do it basically like a godoc kind of approach, where, you know, a test that is marked conformance has a tag on the test, at the top of the test.
I
We write a comment, a godoc kind of comment, and we can scan through all the Ginkgo tests, see which ones are conformance tests, read this, and publish it. I can share my screen and show you what that means. And the second approach is to inject a function call into the test itself, so that.
I
The description is basically something that we are picking up from the test itself, like a comment, or we can do something where the conformance test will call something like a docgen that will go and write out your file, right, which is much simpler. But there are goods and bads about these two approaches. If you use a tool to generate the documentation, we need to parse the files for godoc kind of comments and see if they are associated with the test, and that is error-prone, compared with the second approach.
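(For illustration only, here is a minimal sketch of what the first, godoc-style approach might look like on a Ginkgo e2e test. The comment format, the test name, and the helper mentioned at the end are assumptions, not a decided convention.)

```go
package e2e

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("Pods", func() {
	// Description: Ensures that a Pod can be created and reaches the
	// Running phase on a conformant cluster. A scanning tool would
	// extract this comment and publish it as conformance documentation.
	It("should create a pod [Conformance]", func() {
		// ... test body ...
	})
})

// The second approach would instead call a (hypothetical) helper such as
// docgen.Describe("...") inside the test body, so the description is only
// written out when the test actually runs.
```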
J
Yeah, I'm on the Kubernetes conformance working group that's run by Dan Kohn from the CNCF and William Denniss from Google, and there's a huge push for this type of stuff from the CNCF, so that we can show that, you know, Kubernetes is interoperable. We have a lot of vendors that are running the conformance tests and passing them, and there's going to be a big splash in the future, probably at the next KubeCon. But one of the things we realized we needed was some documentation that says, hey.
J
These are the conformance tests and here's what they do. If you look at the current state of where we're at, which is: well, if you want to know the conformance tests, you go in and you grep for the conformance tag and you kind of have a sort of vague description. We really do need to get to a little more formal model of the documentation of what it means to have conformance tests and why they're conformance tests. And, well, yeah.
J
You could just go write that document, but everybody was pretty clear that that's just going to get stale very quickly and bit-rot, and the thought was the safest thing to do, as we do with a lot of things, is to engineer it into the code and be able to generate the document. So, you know, the idea is Srini was looking at these approaches to hopefully find one that y'all didn't throw up over.
J
That would give us the ability to, quote unquote, annotate the code like we used to do in the old days with Javadoc, and then he's working on the tooling to go generate, you know, what the conformance tests are. And then, you know, simple folks like me can go in and actually add those annotations to the conformance tests, and we'd actually be in better shape, being able to have a more formal reference for what it means to have conformance tests and why we have them. Sorry, that was a long-winded explanation.
K
We already have something that parses the test code and extracts the names from it; it's in the test listing thing, and it's quite easy to grab the comments as well, just like godoc does. That's how godoc generates documentation files: it parses the code and finds the comment immediately preceding each func declaration, and that way it doesn't get out of date. I find that any time we copy strings, people are likely to forget to update them, unless you have a specific rule ensuring they match up.
K
So you might have a better time if you extend some tool to automatically extract them, and you have a rule that every test tagged conformance has a blurb, also extracted by this tool, about why it's a conformance test and what it does. Using the parsing package it's quite easy; I'd just recommend you have a look into that before you go too far down this path. It's also a little bit easier for people to use, as they're used to it already from godoc. Yeah.
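(As a rough sketch of the kind of extraction being described here, not the actual test-infra tool: Go's standard go/parser and go/ast packages can pull out the comment immediately preceding a declaration, the same mechanism godoc relies on. The file name and the "Conformance" marker below are placeholders.)

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

func main() {
	fset := token.NewFileSet()
	// "pods_test.go" is a placeholder path for an e2e test file.
	f, err := parser.ParseFile(fset, "pods_test.go", nil, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	for _, decl := range f.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Doc == nil {
			continue
		}
		// Only report declarations whose doc comment mentions Conformance.
		if strings.Contains(fn.Doc.Text(), "Conformance") {
			fmt.Printf("%s: %s\n", fn.Name.Name, fn.Doc.Text())
		}
	}
}
```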
J
That's what we were hoping: that, you know, you guys being the experts here, it would click and you'd say, oh yeah, we've kind of done that, and here's how you could extend what we have, and, folks, really, don't go too far down a path, please do it the way that we would recommend, it fits really well with what we're already doing. That's exactly what we're looking for. Srini, does that make sense? Yeah, that makes sense perfectly.
E
I would definitely say that I personally like the idea of the godoc, you know, comment; it sounds a lot easier and faster than putting it inside of the test case, since some of our e2e tests can take, like, you know, 12-plus hours to run. So being able to just do that report by analyzing the code is a lot better than the alternative. Yeah.
J
We've been, you know, working hand in hand with the conformance working group; they quote-unquote own it, but the devil is in the details. We just all met at the Open Source Summit in LA, and the devil in the details was: guys, we want to see something, you know, some pull requests that show these examples and start getting these annotated in the next couple of weeks. And so that kind of fell on my shoulders and Srini's shoulders to hopefully find a way to do this.
J
That was, you know, acceptable and, you know, well engineered with what you already had. And so, you know, hopefully Srini can work with the folks and come up with a nice, simple syntax so that, you know, even dumb guys like me can then start generating the pull requests and going out and adding those annotations to all those tests.
J
Very cool. So yeah, as soon as we get sort of a format and a way to do it that you all and Srini agree to, you know, folks like me, you'll start seeing those pull requests showing up and you'll have to review them, right? Because, you know, maybe I think it's a conformance test because of A, B and C, and you look at me and shake your head and go, oh geez, no, no, dude, it's D and F, and that's cool, right? But that's how we learn.
A
All right, cool. We appear to be over time, but I'm certainly happy to stick around; depends on how you feel, Erick, about giving some updates on what's been going on lately. Yeah, yeah.
C
The other one is some setup that needed to happen so that emails can go externally. I tested that ad hoc by spamming my own email account, and that seems to work, so I'm just doing the PR right now to change the configuration and allow people to email themselves. I'm going to set that up for me again first, make sure that I'm not spamming too often or that there isn't a problem with the update server when I do that. Theoretically, this week should be good, actually.
E
That could have, you know, helped with all these Jenkins things, because it could have noticed that, like, hey, it's been four days since our maintenance jobs have started, so that would be super cool, yeah. You know, there's been a bunch of Jenkins issues; we're trying to deprecate Jenkins, so Ben and maybe Jeff are looking into ways of seeing what jobs, if any, we can get off of Jenkins and onto Prow. We intend to sort of shut down Jenkins next release cycle; we'll see how that goes.
E
There's some work on trying to split the builds: like, right now every PR job does the same build over and over again, and we're trying to work on sharing the builds, which is sort of related, although not explicitly; we could just continue doing the inefficient building, but we obviously want to get things off of Jenkins. Clinton and Ryan are working on... Clinton, do you want to talk about the monitoring stuff? Okay.
B
Can you guys hear me? Yep, cool. So the plan for monitoring is we're just going to pipe metrics from the Prow components into Velodrome's Prometheus stack, where they'll then go to an Alertmanager (it seems to be pretty vanilla stuff), and we'll send an email out to the current on-call and to the Slack channel. Specific things we want to monitor in the immediate future are webhooks coming into Prow and time since last merge. So those are the ones.
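(As a hedged illustration of the kind of metric being described, not the actual Velodrome configuration: the metric name, port, and alert expression below are made up. A Prow component could expose a "last merge" gauge with the Prometheus Go client and let an alert fire on its age.)

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// lastMergeTime is a hypothetical gauge recording when a merge was last
// observed; an alert rule could then fire on something like
// time() - last_merge_timestamp_seconds > 4*3600.
var lastMergeTime = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "last_merge_timestamp_seconds",
	Help: "Unix time of the most recently observed merge.",
})

func main() {
	prometheus.MustRegister(lastMergeTime)
	lastMergeTime.SetToCurrentTime() // call this whenever a merge is seen
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```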
B
Prometheus also provides a push gateway, primarily for two use cases: one, for services behind a firewall that are difficult or awkward to pull from, and two, for ephemeral workloads, for instance batch workloads: things that are not persistent and won't necessarily be there when you go to try and reach them. The way that works in Prometheus is you push those to a push gateway, which Prometheus then goes and pulls.
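(For reference, a minimal sketch of pushing a metric from a short-lived job to a Pushgateway with the Prometheus Go client; the gateway URL, job name, and metric are placeholders, not the real test-infra setup.)

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// Record when this ephemeral job finished.
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "job_last_completion_timestamp_seconds",
		Help: "Unix time when the job last completed.",
	})
	completionTime.SetToCurrentTime()

	// Push to the gateway; Prometheus then scrapes the gateway on its
	// normal pull schedule.
	if err := push.New("http://pushgateway:9091", "ephemeral_job").
		Collector(completionTime).
		Push(); err != nil {
		log.Fatalf("could not push to Pushgateway: %v", err)
	}
}
```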
L
Yeah, I'm going to work on moving the release-note munger over. I've recently just been working on making the plugins able to handle generic comment events, instead of having to register for a bunch of different types of events, anywhere a comment can be left. So now Prow commands should work anywhere, soon even review comments and reviews. I think I'll probably have that done today.
A
There's like an issue that says we want to move them over, but it didn't spell out all of the mungers. So we could turn that into a checklist of all the mungers that have yet to be moved over, and cross off the ones that can't be moved over because of X or Y.
E
One big one that I know Kargakis expressed interest in at some point was the approval handler; that was kind of complicated because you've got to deal with all the file processing. So is that still on your radar, maybe? I wasn't sure of your first name, by the way; I've heard Steve say it a couple of times: Holly?
E
Yeah, and then beyond that, Joe is also working on replacing the submit queue with Tide, and I believe right now he's working on the dashboard frontend. I think he has it dealing with batches, and it's not doing anything yet; I think he's done some experimentation on testing it, but it lacks a UI right now, and so he is going to work on creating that UI.