From YouTube: Kubernetes SIG Testing 2017-06-13
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A
It seems like we've surprisingly had kind of a smoother ride than last release, although maybe that's because we're not yet really drilling down. As far as I know we're still on track to lift code freeze tomorrow. We've had a few submit queue bumps along the way, primarily due to some quota issues related to networking tests, which have since been kicked over to the slow suite. So we had some network quota bumps, and then we had the ingress test, which was failing to delete firewalls.
A
The other issue we've had is with namespaces taking a long time to delete. It seems like the root cause is that the algorithm to delete namespaces is really susceptible to flaking beyond a 30-second boundary it hits, and if there are too many tests concurrently attempting to delete namespaces, it sort of cascades and gets us really close to a timeout threshold.
A
Anyway, I think that's all I have there. Next up, Eric has put together a really awesome swag at a roadmap, per se, for testing, looking at 1.8 and beyond for the rest of 2017. I figured we could just sort of walk through what we've got on that and open it up for discussion if folks have particular questions or comments, so with that I will hand off to you, Eric.
B
And yes, I put a link to it in the chat. Let me also put it into the Slack channel.
B
And join the Google group if you need access to it, which hopefully everyone here is already a part of. Yeah, so I sort of just mostly threw this together; it's not necessarily authoritative, and there's not really anyone assigned to it yet. So the high-level goals I guess I was trying to capture are that we want to be able to use a Kubernetes cluster to set up CI and test Kubernetes, and then we want simple commands to do that testing, and to use different layers.
B
So rather than one super complicated tool, we have multiple layers. Like, Prow's job is to monitor GitHub and start pods; then inside the pod we have a different tool to manage the complexity there. And then there's sort of the idea of making all of our tooling...
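The layering just described, an outer controller that turns GitHub events into pods, with a separate tool inside each pod, can be sketched roughly like this. Every name here is hypothetical for illustration; this is not the real Prow API.

```python
# Hypothetical sketch of the two-layer design: an outer controller maps
# GitHub pull request events to pod specs; the tool inside the pod
# (e.g. a bootstrap image) handles checkout and job execution.
JOBS = {
    # repo -> job names to start for each pull request event (invented)
    "kubernetes/kubernetes": ["unit", "e2e-gce"],
    "kubernetes/test-infra": ["unit"],
}

def pods_for_event(event):
    """Map one GitHub pull request event to the pod specs to start."""
    repo = event["repo"]
    return [
        {
            "name": f"{job}-{event['pr']}",
            "image": "bootstrap:latest",   # the inner-layer tool runs here
            "args": ["--job", job, "--repo", repo, "--pull", str(event["pr"])],
        }
        for job in JOBS.get(repo, [])
    ]
```

The point of the split is that this outer layer knows nothing about how a job runs; it only knows which pods to start.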
B
...such that everyone has access to the same tools, everyone can participate in making them better, and we share and distribute the cost of operating it. So, for Prow I listed a bunch of things here. I think the most important goal, at least for the people working on Prow currently, is to make sure that it can run in a Kubernetes cluster, as opposed to just GKE. Joe has been spending a bunch of effort...
B
...this release trying to remove those dependencies, and I think they are all mostly removed, but we're not entirely sure if that's the case or not. So I suspect there may be some unforeseen things that we need to do in 1.8, and we'd really like to find out.
B
We'd like to have, you know, a second Kubernetes provider. I think various people have expressed interest; I think Red Hat, and then maybe also Microsoft, have sort of expressed some interest in getting it to work on their systems, and so that would be cool.
B
You know, potentially... I mean, we certainly could do that, but I guess that's not a priority from our side, because I feel like we're all pretty productive in the test-infra repo, which is pretty small and fast to get reviews and submit code.
B
If we do actually have others, there's nothing preventing that, but there's nothing really making it obvious that we need to do that work right now, to me. But yeah, I know that you are interested in that. What's the motivation for wanting to do that sooner?
C
Decoupling, so people can just leverage it, and it has a canonical source of documentation. Usually when you go to a repository it's like one item, and your documentation goes from soup to nuts from the main readme page. Right now, when you navigate to test-infra, it's not clear to anyone other than the people working in the project what all the test-infra details are. Usually, if you want something to live on its own, it lives in its own repo, has its own life cycle, et cetera, and is managed separately, I guess.
A
One thing you might have missed is that the readme inside of the prow directory is relatively comprehensive; when we talk about Prow, we usually point people to that subdirectory. The readme inside of the test-infra repo doesn't call out every specific subdirectory, but it calls out a good chunk of them, like what to do if I need to add a new job, for better or worse.
A
The technical reason that comes to mind is I would personally love to see some of the GitHub code potentially shared between the variety of testing utilities that use it. I'm thinking of specific constants, like the k8s bot name, that are kind of shared between mungegithub and Prow and Velodrome and Gubernator and a couple of other things; whether or not it makes sense to have the variety of testing tools all have a common library or something to refer to.
E
Yeah, I was going to make a similar suggestion. Maybe the correct sequence of events is, first of all, break up what's there, which I think is essentially what Eric suggested, into libraries, specifically in the Kubernetes case. And then, once we have nicely separated-out layers of things that are reusable, pull those out as a second pass, which I think is what you're interested in. Yep.
B
Yeah, so I also want to do some alerting, because right now, if something's going slow or whatever, we don't really have anything; I haven't really instrumented Prow yet to do anything. We would like to start finding out about problems based on some sort of monitoring and graphing that we have, and potentially set up alerts around that, as opposed to someone pinging us on Slack saying something isn't working.
B
So the bootstrap library is the thing that checks out the repositories inside the container and uploads logs. One thing we'd really like to do is provide a Bazel bootstrap image, so that if you are using Prow with Bazel there's essentially not a lot of configuration necessary; you can just sort of point at it. Otherwise you need to make your own image, and that can get hairy.
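The bootstrap contract described here, check out the repository inside the container, run the job, and always upload logs, reduces to something like the following sketch. The function names are invented; the real bootstrap in test-infra does far more.

```python
# Minimal sketch of the bootstrap flow: checkout, run, and an upload
# that happens whether or not the job succeeded. All names are invented.
def bootstrap(checkout, run_job, upload):
    """Run one job; `upload` is always called, even on failure."""
    steps = []          # a tiny in-memory "build log", for illustration
    checkout()
    steps.append("checkout")
    try:
        passed = run_job()
        steps.append("job passed" if passed else "job failed")
    except Exception as exc:
        passed = False
        steps.append(f"job crashed: {exc}")
    finally:
        upload(steps)   # in the real system: push build-log.txt to GCS
    return passed
```

The try/finally is the important part: the logs reach storage even when the job blows up.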
B
But
if
we
sort
of
you
know
leverage,
basil
and
prowl,
we
could
easily
get
to
where
you
know
as
long
as
you're.
If
you're
you
know,
if
your
repos
not
doing
that,
then
you
can
of
course
ship
whatever
image
you
want,
because
prowl
just
starts
a
container
doesn't
really
care
what's
happening
inside
your
pods,
but
if
you're
using
basil,
then
you
essentially
have
a
very
simple
configuration
to
just
say:
go
basil
test
all
the
things
in
my
repo
and
we'll
do
all
that.
B
Oh
another
thing
is
right
now
you
know,
especially
as
we
go
to
the
lots
of
different
repos.
We
probably
don't
want
everything
and
tests
infra,
especially
it
like
right
now
the
job
definitions
of
like
what
jobs
do
I
want
to
run
on
this
repo
or
that
repo.
We
probably
want
to
figure
out
some
other
way
to
do
that,
as
opposed
to
storing
everything
in
the
test
and
for
repo,
especially
if
we're
like
getting
people
who
aren't
even
part
of
kubernetes
using
prowl.
B
I think it might be a bit of both. I mean, one thing is, for e2e tests that might span multiple repos, maybe we'd want to put those in their own location, but I'm not exactly sure; I think this needs to have more thought put into it. But bootstrap assumes it's in test-infra, so if someone is wanting to use bootstrap elsewhere, that's sort of strange. It seems to provide a good experience for us, yeah.
B
And then for the scenarios, I think this is the same sort of deal. It's really easy to do testing with Bazel, and so the Kubernetes scenario has a mode where it runs inside a Docker container, which we don't want if we're already running inside of Prow and it's already inside a container. We also have a bunch of stuff we really just want to move.
B
We basically want to move all of the junk in, to where all of our e2e tests are encapsulated inside a single kubetest command. Right now kops does some weird stuff inside its own e2e run script, and we want to finish moving a bunch of that. The pattern right now is that the scenario sets environment variables and then launches the e2e runner, which then translates the environment variables into kubetest flags.
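That translation step, environment variables set by the scenario becoming kubetest flags, might look roughly like this. The variable and flag names below are invented for illustration, not the real runner's mapping.

```python
# Sketch of the env-var-to-flags translation the e2e runner performs.
# Both the variable names and the flag names here are invented.
ENV_TO_FLAG = {
    "E2E_PROVIDER": "--provider",
    "E2E_CLUSTER": "--cluster",
    "E2E_TEST_ARGS": "--test_args",
}

def env_to_kubetest_flags(env):
    """Translate a scenario's environment variables into kubetest flags."""
    flags = []
    for var, flag in sorted(ENV_TO_FLAG.items()):
        if var in env:
            flags.append(f"{flag}={env[var]}")
    return flags
```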
C
Could we potentially change that and flip it: instead of passing flags, can we just have a configuration file? Like what we're doing inside of the repository itself now with component configs: specify a single configuration file which has all of the parameterised values for the tests or testing scenarios in one place.
C
That way, when you're looking at it, it's pretty easy to grok and reproduce too. If you're trying to look through 10,000 lines of logs from whatever Jenkins builder execution, it's difficult to take those logs and recreate and figure out what the parameters were. But if you have a single configuration file, you can say: okay, I can test this on my own really easily; just grab the configuration file and away I go.
B
Yes, that's essentially it: any job we run should be such that if you just clone the test-infra repo and call bootstrap with the job name, it should run the job exactly the same way that we do. And furthermore...
B
Right, yeah, I think that we are wanting to... we sort of essentially have that, where one of the arguments that you pass is these... yep, so that's essentially the goal.
B
The way things are composed right now, they're not super composable. I'd really like to get to where we define things like a slow suite and a parallel suite and maybe even a node suite, a bunch of different suites that are detached from providers, so then we can compose together, like, a kops-on-AWS deployment with the slow suite and maybe the 1.6 release or something.
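That kind of composition, a deployment crossed with a suite and a release, could be expressed as simply as the sketch below. Every suite, deployment, and flag name here is invented; the point is only the shape: suites and deployments defined independently, composed at the end.

```python
# Sketch of detaching suites from providers so they compose freely.
# All names and flags below are invented for illustration.
SUITES = {
    "slow": ["--ginkgo-focus=Slow"],
    "parallel": ["--ginkgo-parallel=30"],
}

DEPLOYMENTS = {
    "kops-aws": ["--deployment=kops", "--provider=aws"],
    "gce": ["--deployment=bash", "--provider=gce"],
}

def compose(deployment, suite, version):
    """Compose a kubetest-style invocation from independent pieces."""
    return (DEPLOYMENTS[deployment]
            + SUITES[suite]
            + [f"--extract=release/stable-{version}"])
```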
B
Right now we would sort of have the ability to do that if we were structuring things that way, but we haven't really structured things that way; we kind of just have, you know, the GKE slow release, and there's not an easy way to take that to some other environment. But we want to make that better: refactor these to make it easier to do your thing.
A
I mean, maybe let me check my understanding, but it does seem we're trying to sort of crawl, walk, run. Getting rid of all the environment variables, so that we don't even allow ourselves e2e-runner.sh and all that stuff, is crawl; and then converting all of the flags that we're using into configuration files, which could be JSON, YAML, whatever, could be the next step, which could maybe be composable. I think you're trying to get rid of all the environment variables first, or am I understanding what we're trying to do?
B
Yes, the crawl step is to get to where we can run a single command with a bunch of flags that duplicates a scenario. So if I have a test failure and I want to rerun it, I can just look at one line in the build log that specifies the kubetest command, and then I can run that kubetest command with the same flags and get the same run.
B
The same scenario run, the same result. Right now we don't actually have that; we have most of it, but there's still a bunch of goo in e2e-runner.sh, so in reality it's like you have to run these fifteen commands in this particular order. We want to simplify that to just running this one command, and then once we have that, we can continue iterating above it, on how the scenario composes the flags that it sends to kubetest; we can continue improving that interface.
E
You know: install Kubernetes version X on platform Y, then install, say, Prometheus at the version built by their CI, run their tests on the cluster, test the components against each other, et cetera; and trying to create some kind of toolkit where it's reasonably straightforward to create those permutations of things, which is, philosophically at least, I think, along the lines of what Eric's been describing. I'm involved in that, and there are actually three parallel, somewhat independent projects by different companies trying to do essentially the same thing.
E
Well, one of them is Huawei, and we actually have a sort of a product that we've been planning to donate to the CNCF that aims to provide that kind of workflow functionality, where you can plug arbitrary nodes into workflow graphs and have them pass data from one phase to the next. And then there are a couple of others. The only reason I mention it is that, rather than rebuilding and redesigning and rethinking all that stuff from scratch...
E
It's
possible
that
this
group
could
either
you
know,
take
take
some
kind
of
hints
from
what's
a
way
to
be
done
with
a
veil.
Actually,
we
use
some
of
the
software,
but
I
mean
mandating
that
they
have
to.
You
know
to
the
same
thing,
but
sounds
like
the
same
problem
with
China
sort
and
very
briefly,
one
of
the
solutions
which
is
the
Huawei
one
is
its.
E
For example, you might build an AWS set of nodes, and then you might build Kubernetes in parallel with that, and then once both of those are complete, they automatically trigger the next step, which is to install Kubernetes on the AWS nodes and then run some arbitrary set of e2e tests, as defined, some of them perhaps serially.
E
That is approximately what's being done. They've built, so far, such a thing for Kubernetes, Prometheus, and CoreDNS, for testing those on top of each other, and the plan is to build that into a proof of concept, show it to people, and see whether that's an avenue worth pursuing further.
E
It's all open source. The underlying workflow platform is called ContainerOps, and that's open sourced, and we've offered to donate that to the CNCF. Then, in addition to that, there's this kind of CNCF-specific work, which is building these workflow nodes and configuring a setup like that to demonstrate using this thing to test multiple CNCF projects in parallel, and that's all open source and work in progress. Okay.
A
My understanding is we're going to get the opportunity to see a demo of at least some of that proof of concept two weeks from today, at the next CNCF CI working group meeting, which I'm happy to forward a link to the SIG Testing group for those who are interested in checking it out. Yeah.
B
So kubetest is sort of our interface for running our e2e tests, and we have a project in test-infra, Boskos, which is sort of a pool of resources that something can lease. The main goal for this is that right now we have a pretty tight coupling between the job and the GCP project in which it runs, which is nice because it provides isolation between jobs, but it's gross in the sense that it makes things a little bit more awkward.
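The resource-pool idea, jobs leasing a project instead of owning one, comes down to bookkeeping like the sketch below. The real Boskos is a service in test-infra with persistence and janitors; this is only the core acquire/release logic, with invented names.

```python
# Minimal sketch of lease/release bookkeeping behind a resource pool:
# jobs borrow a free GCP project instead of having one hard-wired.
class Pool:
    def __init__(self, resources):
        self.free = list(resources)
        self.leased = {}            # resource -> owning job

    def acquire(self, owner):
        """Lease a free resource, or return None if the pool is exhausted."""
        if not self.free:
            return None
        resource = self.free.pop()
        self.leased[resource] = owner
        return resource

    def release(self, resource):
        """Return a resource to the pool (a janitor would clean it first)."""
        del self.leased[resource]
        self.free.append(resource)
```

With this shape, adding capacity for a new release means adding projects to the pool rather than creating a project per job.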
B
We have to create a bunch of new projects for each new release, which we want to get away from, I think. Another thing which would be really helpful is if we could get kubetest to pass conformance tests using a local cluster of some sort. I think that's a frequent SIG Testing question: hey, I launched a local cluster, how do I get the e2e tests to pass on it? I think if we showed people that...
A
I think I'm okay with that, with the exception that a local cluster is sort of impossible on Darwin right now, just because I can't run the server components on my laptop. But if we're talking about, like, a docker-in-docker type cluster, something that could work across all environments would be cool.
B
Okay, yeah, I feel like there are frequently people who are interested in fixing tests, but I suspect they're not wanting to get billed by a cloud provider. For us at Google it's easy to start VMs, since it's our thing, so it'd be nice if there were some way for someone who doesn't want to run cloud VMs to help us. Yep. Um, TestGrid.
B
So, I think the biggest thing is that there are still a couple of pieces that are not open sourced, and we'd really like to get them open sourced so that other people can use them. Like, if someone else wanted to run TestGrid right now, they could check their config into our repo, but if they wanted to run their own TestGrid instance, they don't have a way to do that, and we'd like to make that possible also.
B
That would then make it easier for community members to improve both the updaters, which are what produce the state, and the dashboard, which is what displays the state. Both of those are sort of slow and written in Python, and not accessible to the community; we'd like to make them accessible to the community. And then alerts are something we really want. And then internally, the internal version...
B
Has
this
neat
feature
where
you
can
sort
of
like
if,
like
a
test,
has
a
non
fatal
error,
you
can
sort
of
display
that
by
still
having
to
be
green
but
maybe
put
like
an
e
on
the
cell
and
yeah
so
that
the
grouping
related
tabs
together,
it
might
be
useful
to
you
know
it
will.
B
Actually
one
is
I'd,
really
love
to
make
it
so
that
we,
instead
of
having
like
a
Google,
gke
or
a
gke
dashboard
and
a
GCE
dashboard
in
AWS
dashboard
I'd,
really
like
to
reconstruct
it
around
six,
so
that,
like
sig
sort
of
feel
empowered
to
like
these.
Are
the
tests
I
care
about
and
it's
you
know,
motivation
to
keep
them
green
and
then
once
we
have
that
we
can
soon
we'll
have
the
ability
to
group
related
tabs
together.
So
we
could
say
that,
oh,
these
are
all
the
sig
CLI
one,
seven
tests.
B
The last thing, monitoring: I sort of talked about that a little bit earlier. We have some BigQuery metrics which we're calculating continuously; we want to put those onto a graph on Velodrome somewhere, and then also get to where we are alerting on things, so that if something breaks, or if our test infrastructure is down and we're not even running tests, we have a place to see that and alert on it.
A
I guess the reason I brought up the single cluster in my head was whether or not that would make it any easier to start adding non-Googlers to the test-infra rotation, because I think as an ideal that's really great. I think we'll probably need to spend some time in the coming weeks figuring out how to break that down into actionable tasks, like what documents or tribal knowledge we're lacking.
A
I'm hopeful to participate in some discussions in the upcoming meetings and in the channel to figure out what we can do to help accomplish that. I think that would be a really big force multiplier for potentially some of the rest of this work. There are some areas, some bug fixes, some pieces of code, that we can't fully test or fully review, because we don't know how they're going to affect the actual testing.
B
I think one thing is, if you weren't aware, there's this go.k8s.io/oncall link which says who the build cop and the test-infra on-call person is. If anyone knows of some open-source tool or some convenient way to generate these rotations, as well as display them...
B
That
would
be
useful
right
now
we
sort
of
have
our
internal
rotation
tool
that
this
is
sort
of
the
data
backing
on
which
makes
it
awkward
to
I
feel
that's.
You
know
really.
If
we,
if
we
had
a
way
to
produce
a
rotation
and
display
that
like
there's
nothing,
you
know
if
we
had
eager
members
I
know,
we've
had
at
least
like
I
think
Maru
has
said
that
he'd
be
willing
to,
but
we
don't
actually
have
a
mechanism
right
now
to
make
Maru
on
call
and
yeah.
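The missing mechanism being discussed, generating an on-call rotation, is at its core just a round-robin over a member list keyed by the week. A trivial sketch (a real tool would also render and publish the schedule):

```python
# Sketch of a weekly round-robin on-call rotation generator.
# `members` and the dates below are illustrative, not a real schedule.
def oncall_for(members, week_start, date):
    """Return who is on call on `date`, rotating weekly from `week_start`."""
    weeks = (date - week_start).days // 7
    return members[weeks % len(members)]
```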
A
Okay,
cool:
well,
we
once
again
gone
over
time.
I
think
this
is
an
awesome
list
and
my
proposal
would
be
that
sometimes
that
being
sort
of
leave
this
open
for
discussion,
a
any
follow-up
items
next
week
and
then
two
weeks
from
today,
I'd
like
to
see
us
actually
capture
these
and
the
issues
that
we
actually
that
we
commit
to
a
milestone
so
that
we
have
something
we
can
present
to
the
community
and
say
yeah.
This
is
what
goods
sake
is
committed
to
for
the
upcoming
release.
B
Would we be interested in having... I think Justin was suggesting the idea of having, like, office hours, where we don't necessarily have an agenda, but maybe we're just there to ask questions. I don't know if anybody's interested in this or if we need this; I feel like we're pretty responsive on the Slack channel.
A
Keeping
at
all
this
last
channel
would
certainly
be
my
boat.
Alternatively,
something
that
came
up
during
the
Leadership
Summit
was
a
number
of
other
SIG's
sort
of
alternating
weeks
where
they
have
one
week.
That's
a
little
more
strategic
and
one
week,
that's
a
little
more
tactical,
and
so
we
could
do
that
sort
of
schedule
where
the
tactical
week
is,
you
know,
what's
everybody
working
on
what
specific
issues
do
we
have
office
hours
that
sort
of
stuff
at
the
moment
I've
just
kind
of
been?
You
know
we
were
we
sort
of
play
it
by
ear.
A
We
have
a
slow
week.
We
generally
tend
to
find
things
to
talk
about
actively
speaking,
but
I
want
to
make
sure
we're
open
to
the
broader
community,
so
I
think
the
office
hours
question
is
a
great
thing
to
to
ask
mailing
lists
and
potentially
kubernetes
tests
to
see
what
folks
there
think,
but
I
personally
have
tried
to
be
as
responsive
as
I
can
on
the
slack
channel
and
I.
Think
that
we've
gotten
some
good
questions
answered
there
and
Justin
seemed
to
find
his
questions
answered
there
as
well.