From YouTube: Kubernetes Community Meeting 20160915
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo "K8s & Distributed Trusted Computing in Practice"; SIG Cluster Ops; SIG Autoscaling; 1.5 feature planning; 1.4 release update.
A
So, good local time to everyone — mid-afternoon, or early afternoon from the east coast, where I am live-casting from WeWork on Benton Way; the team hosting me there at lunch will join us in a minute or two, potentially, or they'll join remotely. So, to remind them and everyone: this is a public and recorded meeting, and it is September 15. This morning we start with a demo from Matthew Garrett — at CoreOS, I think — about distributed trusted computing: "K8s & Distributed Trusted Computing in Practice."
B
Fundamentally, no matter how secure we make containers, if the host itself is not secure, then it doesn't matter if we have signed containers, or if we have various other mechanisms for verifying that the container is the container we expect it to be, coming from the source we expect it to come from. If we then run it on a system that is not itself trustworthy, that system can tamper with the container: it can skip image validation, and it can inject new content into the container, even if the image was originally trustworthy.
B
It can even reach directly into a running container and modify the state of running applications. If we want to have faith that containers are trustworthy — that we can put our secrets in containers and apps on those systems — we really need a mechanism for ensuring that the systems we're deploying our containers on are themselves trustworthy.
B
This
goes
back
to
a
technology
that
was
first
introduced
in
the
early
to
mid-2000s.
It
falls
into
a
category
called
trusted.
Confusing
trusting
confusing
has
something
of
a
bad
reputation.
There
was
a
lot
of
mismanagement
of
the
original
presentation
of
its
functionality
as
desirable
pneus,
and
it
mostly
came
across
as
a
mechanism
for
other
people
to
control
your
computers
and
prevent
you
from
doing
what
you
wanted
to
do,
whereas
in
reality
that's
not
a
cheap,
sticky,
straightforward
and
instead
it's
much
more
useful
as
a
technology
for
imposing
restrictions
on
your
own
systems.
B
The base of trusted computing is the idea of creating a root of trust. This extends all the way from the system firmware up to an arbitrarily high level of the stack. It is based on having something called a Trusted Platform Module, or a TPM. That's a small chip on the system motherboard which contains some registers, and each of those can hold hashes of each component of the boot process.
B
When you update the TPM, you don't directly change the contents of those registers; you pass in a hash. So, say, the firmware generates a hash of the bootloader, or a hash of the kernel, and those hashes are passed to the TPM and combined with the existing register values. You end up with something that is only reproducible by performing exactly the same series of writes to the TPM.
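The combining step being described is a hash chain: each register (a PCR) is replaced by a hash of its previous value concatenated with the new measurement. Here is a minimal sketch of that "extend" operation, using SHA-256 for readability, whereas the TPM 1.2 parts current at the time of this talk used SHA-1:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// extend models a TPM PCR extend: the register is never written
// directly; it is replaced by H(old_value || measurement).
func extend(pcr [32]byte, measurement []byte) [32]byte {
	h := sha256.New()
	h.Write(pcr[:])
	h.Write(measurement)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var pcr [32]byte // PCRs start zeroed at boot
	// Each boot stage measures the next one before handing control off.
	for _, stage := range [][]byte{
		[]byte("firmware"), []byte("bootloader"), []byte("kernel"),
	} {
		pcr = extend(pcr, stage)
	}
	// The final value commits to the whole ordered sequence of
	// measurements, so it is only reproducible by replaying exactly
	// the same writes in exactly the same order.
	fmt.Printf("final PCR value: %x\n", pcr)
}
```

Because each hash is computed over the previous register value, the final value depends on the ordered sequence of measurements, not just their set.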
B
So that means that if the TPM tells you "I have these values," you can work out "the values I expected were this, so that's good," or "the values I expected are not this, so that's bad." The TPM can sign those values and provide a signed copy of them to a remote system, which can then look at them, verify them, and make policy decisions based on that. So the question is: how do we tie this into Kubernetes?
B
Obviously, having this all just magically work out of the box is the best possible outcome. Right now we're doing this in a slightly naive but basically functional way, just by introducing an additional admission controller that taints new nodes: any node that joins the cluster is automatically tainted, and as a result the scheduler will not schedule anything on those nodes. So the assumption is that all new nodes are untrusted until proven otherwise.
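The tainting step might look like the following minimal sketch, written against today's typed taint API — note that at the time of this meeting (Kubernetes 1.4) taints were still alpha and expressed as node annotations, and the taint key used here is hypothetical:

```go
package trust

import (
	corev1 "k8s.io/api/core/v1"
)

// untrustedTaintKey is made up for illustration; the talk does not
// name the real key.
const untrustedTaintKey = "trust.example.com/untrusted"

// taintUntrusted marks a newly admitted Node so the scheduler will not
// place pods on it until attestation succeeds.
func taintUntrusted(node *corev1.Node) {
	for _, t := range node.Spec.Taints {
		if t.Key == untrustedTaintKey {
			return // already tainted
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    untrustedTaintKey,
		Value:  "true",
		Effect: corev1.TaintEffectNoSchedule, // scheduler skips this node
	})
}
```

With a NoSchedule taint in place, nothing lands on the node until something with the right permissions removes the taint.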
B
The admission controller actually has a blacklist of updates that can be performed by an untrusted system. That's currently hard-coded, and I'm very interested in figuring out a better way of doing this. But yes, that list — the things that it blocks are: modifying node state, modifying the configuration, and modifying the policy that's associated with determining whether a system is trustworthy or not. We're then using RBAC in order to grant certain users permission to modify the TPM parameters.
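In outline, that check could be as simple as the sketch below; the resource names in the blacklist are illustrative, since the talk doesn't enumerate the real hard-coded list:

```go
package trust

// blockedResources sketches the hard-coded blacklist from the talk:
// the kinds of writes an untrusted node may not perform. Entries here
// are illustrative only.
var blockedResources = map[string]bool{
	"nodes/status": true, // modifying its own state
	"nodes":        true, // modifying its configuration
	"tpmpolicies":  true, // modifying the attestation policy
}

// writeAllowed is the admission decision for a write arriving from a
// node. RBAC separately grants trusted principals (such as the TPM
// manager) the right to modify TPM parameters.
func writeAllowed(nodeTrusted bool, resource string) bool {
	if nodeTrusted {
		return true
	}
	return !blockedResources[resource]
}
```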
B
This
nose
verifies
that
it's
TPM
state
matches
the
policy
and
if,
as
policy
modification
goes
through
all
the
modes
and
verifies
that
they
still
match
the
policy
right
now,
this
is
being
built
out
of
the
cube
necessary
directly
because
it
depends
on
a
fair
amount
of
the
API
I'm.
Looking
forward
to
being
held
to
build
that
outside
the
full
Cuban
se
street,
then
we're
using
third-party
resources
to
store
the
policy
information,
so
policy
informations
just
what
are
acceptable.
Tpm
values.
B
What
are
acceptable
things
that
were
logged
is
the
CPM
during
the
boot
process,
and
so
just
look
at
those
can
make
a
reasonably
well-informed
decision.
We're
shipping
known
good
policy
values
for
core
OS
as
possible.
Core
OS
build
right
now
determining
what's
good
values
are,
for
our
existence
is
kind
of
a
difficult
job
and
mention
that
so
CPI
manager
has
permission
to
remove
the
same
flag.
The
imaging
controller
will
allow
it
to
do
that.
So
if
the
excitation
taser
matches
the
policy
we
clear
the
same
flag
and
then
jobs
will
get
scheduled
on
that's
nose.
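Putting those pieces together, the untainting decision reduces to comparing the quoted PCR values against the stored policy. A sketch, assuming the TPM quote's signature has already been verified, with a made-up policy layout standing in for the ThirdPartyResource contents:

```go
package trust

import "bytes"

// Policy maps a PCR index to its one acceptable value; a nil entry
// means "don't care". This layout is hypothetical.
type Policy map[int][]byte

// attested reports whether a (signature-verified) quote matches the
// policy. On success, the TPM manager — which RBAC permits to do so —
// clears the node's taint and the scheduler starts placing work on it.
func attested(quote map[int][]byte, p Policy) bool {
	for idx, want := range p {
		if want == nil {
			continue // this PCR is not constrained
		}
		got, ok := quote[idx]
		if !ok || !bytes.Equal(got, want) {
			return false
		}
	}
	return true
}
```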
C
So, a quick question for you: is there any way we can prevent the node from joining at all? I mean, can we start attacking this at, sort of, the TLS cert level?
B
Something we have actually considered is doing this at the authentication layer rather than the API layer. The problem there is that you then need basically separate tooling to obtain the node state, to identify why the node isn't able to access the system, whereas having the node be able to join the cluster and then give us information about it through the Kubernetes API means that it's a lot easier to build stuff around this. I'm not absolutely wedded to implementing it at this level; I can certainly see why doing it at the TLS level is more appealing in certain ways. The other downside of doing this at the TLS level is what happens if you update the policy. Right now, that's pretty straightforward: the TPM manager sees the policy update, goes through the nodes, and can then remove permission from those nodes — because you've just discovered that a kernel has a known vulnerability, and you don't want anything new to be scheduled on it. Doing it at the TLS level, you'd instead have to invalidate all existing authentication and then ensure that the node never successfully picks up a new credential.
D
And just to fill in on that: that's what we call field-level authorization. It's been discussed in the SIG — SIG Auth — and I think your use case mirrors a lot of other ones, such that it is definitely climbing in importance to have some concept of a more generic and flexible setting along these lines. Yeah.
D
There's another point to it that I think I should point out: nodes today have access to pretty much all cluster resources by policy, which isn't great if one of your nodes gets compromised, and so there's some other discussion going on about a more selective authorization policy that might only apply to nodes. That is the other half of this story: locking down which secrets a node can access, which ConfigMaps it can access, which pods it can exercise, et cetera.
D
So you should probably join SIG Auth; it seems like a natural fit.
A
Your mastery of understatement is awesome: "nodes have access to all resources, and that's not really good." That was great. Any further questions for Matthew? All right — thanks, Matthew, and thanks for continuing to give us updates on this work. Can you send a link, or drop your slides into the community mailing list?
A
We
have
Mike
with
us
know
and
then
is
remembering
there's
someone
else
think
they're
giving
an
update
on
say,
cluster
ops,
no,
not
at
the
moment.
Okay,
we
have
an
update
from
sig
auto-scaling,
so
solly
I
see
your
video,
which
means
you
are
here.
E
There we go — all right. So we've kind of restarted the meetings for SIG Autoscaling. They are on Thursdays at eleven-thirty Eastern time, and we are aiming for a bi-weekly schedule, except for next week, when we're going to be meeting and discussing a roadmap and future plan for metrics and monitoring collection going forward. Our other goals for 1.5 are to improve the Horizontal Pod Autoscaler API somewhat, to allow for scaling on arbitrary metrics, and also to push the cluster autoscaler forward as well.
E
So we, you know, welcome more participation from anybody who's interested in autoscaling, and we look forward to seeing you there. We're also in the process of figuring out our relationship with SIG Instrumentation, which is another metrics-related SIG; that's another thing we're moving towards in the future.
A
I suspect there's going to be a lot of cross-collaboration there as you guys figure out where your boundaries and edges are, because I know you mentioned on the mailing list, when SIG Instrumentation was created, that there was some potential overlap, and that that would just need to be sorted out as instrumentation comes up and grows into a full-blown SIG.
A
Anybody have questions about SIG Autoscaling? All right, so I can do the update about SIG Cluster Ops, which is mostly in the docs: they've been working on a bunch of drawings to try to supplement the docs, trying to give Kubernetes better visualization, as well as working to bring people together to push against the docs from an operator perspective. So, much like Kelsey's ops empathy sessions that he's talked about, we're trying to build this as if you've not built it before, and make our docs and our experience better.
F
One note on the features repo as we go: not everything was in there yet, and everything was cranky here and there, but I think the majority was in, and we got feedback that it really helped — knowing what a feature was planned for, being able to see what its status was (alpha, beta, GA), and being able to file those issues.
F
So we know that we're going to be working on Federation and HA, and some level of automated upgrades — automated up... wait. I think the plan was to work on upgrading the master in place and coming up with a strategy for that. There will also be ongoing ease-of-adoption work, continuing some of the work that the Cluster Lifecycle SIG has put in for alpha in 1.4; they'll be moving towards kind of a no-ops-ish Kubernetes. This is one thing that you were sort of playing around with.
F
Also, there are a bunch of engineering productivity items that the team added, so you can see them here: engineering health and etcd v3 work. We're going to try and put all of this in the features repo and revisit what it should be as the repo gets popular. If you go to the next slide, we also take a look, by SIG, at what the SIGs have planned for the next release, and you can sort of see a little bit of that.
F
I
already
talked
about
Federation
and
the
node
team
is
working
on
container
runtime,
alternate
container
runtime
or
container
runtime
interface,
and
some
security
features
as
well
as
notes
back
so
I
think
we
already
have
the
ones
that
are
in
green.
We
actually
already
have
feature
before
entry
score
and
those
are
already
targeting
one
dot.
J
Yes, yeah, okay — just this: I just wanted to expand on this a little bit; that was my comment. I think this is a topic that comes up every time the "no ops" word gets used, and I know that "no ops" is kind of Google slang for some things that are very good. But Kubernetes, as a system, is very operationally important — it really needs to appeal to operations audiences — and I think the "no ops" word really has, you know, a lot of potential to offend.
J
In
fact,
it
offends
one
of
the
really
important
users
and
target
audiences
for
systems
like
Cooper
Nettie's.
So
I
really
just
make
a
plea
that
we
would
use
some
other
Bernie
operational
readiness,
operational
efficiency,
power
offs,
that's
good!
I
think
this
keeps
coming
up
in
it.
I
think
it's
worth
some
discussion
about
not
not
offending
a
critical
part
of
our
constituencies.
A
If anyone wants more information about that particular point of contention, generically, in the community: John Allspaw wrote an amazing answer to Adrian Cockcroft calling something "NoOps," and you only have to search for "John Allspaw" and "NoOps" and you'll find it. I think it's just on GitHub somewhere, but it was a really wonderful response.
J
Just to add to that: within the community context, I think it's pretty clear that this is — can be — a contentious thing, and I'm sure we'll figure out how to avoid it. I would really encourage the folks on the call — perhaps the outbound-track folks, or anyone sort of involved in external marketing or blog posts — I would really strongly encourage you to avoid the use of the "no ops" word in sort of the outbound messaging from Google, because it has a similar effect.
K
To put it under an umbrella: I think there have been some great comments, and Bob, I think you put your finger on it. We're definitely not looking to put people out of work; we're just looking to have them be able to focus on more interesting work, and I think we can do better about the language we use. Yeah.
F
I think our last slide is that we, as a team, need to decide what will be in core versus what is not. There is a proposal linked there, and one of the things that we are adding to the issue template in the features repo is an entry that says: where is this work targeted to land? Is it targeted to land in core or not?
F
Okay, great — that's all I have. So, as people are filling out the features repo — which, please do for 1.5, by SIG — keep in mind which repo you're targeting. Awesome.
A
Nope, all right. Phil, I think I saw you — hey, welcome, and thank you for your continued efforts. Yes?
L
Yes, of course. You hear me okay? Yeah? All right, great. So I'm going to start out today talking about upgrade testing. I left a couple of updates about it in my update emails, but I thought it might be worth chiming in and talking about what I really mean when I say that. For upgrade testing, we have two types of upgrade testing we're doing. We have automated upgrade testing, where we run different versions of our e2e test suite against a skewed cluster, and the goal here is to test for API compatibility across version skews.
L
We also do manual upgrade testing. In the automated testing, we're not creating objects in an old cluster, then upgrading the cluster, then continuing testing; what we're doing is just creating all the old objects, or all the new objects, against a running cluster, with skewed versions of the pieces we're trying to test.
L
The
manual
obtained
sitting
upgrade
testing
to
do
right
now,
which
should
be
automated,
but
it's
just
not
tests
that,
if
you
create
objects
in
cluster,
then
upgrade
it
to
a
new
version
that
those
objects
continue
to
still
function
well,
so
this
doesn't
have
an
automated
sweet.
We
can
just
run
against
that's
already
developed
and
instead
is
done
by
a
doc
with
a
manual
list
of
steps
and
objects
that
we
test
and
how
we
test
them.
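The procedure being described is essentially three phases: create on the old version, upgrade in place, verify on the new version. A minimal sketch of the automation it could become — all three hooks are placeholders, not real Kubernetes test APIs:

```go
package upgrade

// upgradeTest captures the manual checklist's shape as code: objects
// are created against the old version, the cluster is upgraded in
// place (e.g. master first, then nodes), and the same objects are
// then verified to still function.
func upgradeTest(createObjects, upgradeCluster, verifyObjects func() error) error {
	if err := createObjects(); err != nil {
		return err
	}
	if err := upgradeCluster(); err != nil {
		return err
	}
	return verifyObjects()
}
```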
L
So this doesn't have the coverage that the automated tests do, because we can't test every field on every object with manual testing to make sure it still functions properly in all the different situations and interactions between objects. It's really just an object-by-object analysis to make sure they individually seem to work — it's very kick-the-tires — and we can't run it against every beta, and we can't run it against the final cut.
L
So what we're doing is trying to look at the cherry picks we take in after we have declared we've done our final round of upgrade testing, and make sure, by eyeballing them, that they don't impact the upgradability of objects. And so we've uncovered a number of issues. A couple of issues we've uncovered with the automated testing come from introducing new features that have made the way we used the client in the past incompatible; specifically, there's one with kubectl rolling-update that we discovered with the introduction of garbage collection — it has an interaction we need to account for. This testing also catches cases where we change an API in a manner that isn't backwards compatible, such as by changing the name of a field we're using for something. We shouldn't be doing that — it would be a mistake if we ended up doing it — but it does give us coverage against it.
L
So, moving on, we'll talk a bit about the state of the upgrade testing we've been doing.
L
The state of the automated upgrade testing is that it takes seven and a half hours to get a run of the full upgrade test suite. That means, if we don't restart and kick all the tests when we do a cherry pick, the worst case is 15 hours: it may have just started an old run, so it finishes that — taking seven and a half hours — and only then picks up the cherry pick.
L
So
that
means
we
don't
have
a
lot
of
shots,
I
getting
fixes
in,
and
we
end
up
doing
a
lot
of
manual
effort,
because
we
can't
just
make
a
change
untested
quite
as
easily.
There
are
ways
we've
gotten:
we've
reduced
the
time
using
our
dev
boxes,
but
it's
still
it's
hard
to
see
these
things
turn
green.
Oh
you.
L
I mean, we get about three runs per day, but the end-to-end latency — if your cherry pick goes in, it could have just started an old run, so we don't pick your cherry pick up until the new run starts, and it's 15 hours later before you actually get the signal that the tests turn green. You'd have to manually kick them to prevent that.
L
Does that answer your question, Michael?
L
Yes — so, what I think is good news is that we've actually uncovered real issues with this. Yesterday — I know everyone felt like the release was actually going very quietly, more quietly than we expected, and we were all kind of wondering what we were missing — so I feel like we are starting to close the gap on the things that maybe we were missing, and I'm feeling optimistic from that perspective.
L
One thing we're seeing is that because we built logic into the kubectl client — such as for reaping and garbage collection — it's hard to keep a consistent testing view there. You really do need to test the skewed old client against the new server, because there are two pieces: the logic lives in two places now.
L
So that's the automated testing using GKE, which is mainly where we had the most infrastructure set up. For GCE we had much less infrastructure available, and we realized late in the game the critical importance of doing this testing there as well — GKE didn't cover everything we needed — so we're setting up infrastructure to do the same automated testing we're doing on GKE on GCE. There are infrastructure issues we're encountering there; we're working through them. We did some cherry picks this morning to try and address those; we'll see if they fixed everything related to the infrastructure.
L
And then, finally, manual upgrade testing, which has been going better than either of the automated upgrade testing efforts. Our team in Warsaw worked last night to do another round of testing on the most recent beta that I cut, and uncovered a number of issues that we need to look at more closely today.
L
I haven't talked about open issues in the milestone, or open PRs that haven't been merged, or flakes, and I'm not going to concentrate on those right now. Eric Paris has been doing a lot of the cherry-picking work there, including backports, so you should work through him and only bring me in if you really need to escalate.
L
I talked a little bit about how the upgrade tests are in a pretty bad state. The folks working on the upgrade tests have been really overworked — they've been working really, really hard on this, and I want everyone to know that — and we can't expect to keep the same pace and have the same people just plugging in long hours to get this done.
A
I have one comment, just based on some of the information in the chat: Joe called me out for saying we do both kinds of testing — GKE and GCE — and that is simply a fact of SIG bandwidth and skill limitations. We would love for other people to be able to do more of this sort of upgrade testing on different platforms. This is not us saying that those are the only two platforms, by any stretch, but it's what we can try.
C
I understand the constraints there. I think one thing that might help — again, the "who" is one question, but in terms of resources — is that there is the CNCF cluster, and I think it's important to think about how maybe that could be utilized as part of, you know, release testing.
L
Sure — I just want to make a quick comment about that. Thanks, it's a good comment. I don't think the major blocker here is a constraint on compute resources; the real blocker here is getting engineers working on this stuff. If we had had the engineers three months ago the way we had the compute resources, all this stuff would be done. Nothing's changed there, but none of this stuff was done, and we are now fighting a fire to get engineers on it.
L
So
like
I
guess,
if
we,
if
we
want
to
do
AWS
like
getting
engineering
resources,
there
is
really
beat
the
blocking
factor
or
the
CNC
F
cluster
or
wherever
else
we
want
to
do
this
so.
K
Stop — it's a perfectly valid thing, and you should keep our feet to the fire. Let me say this, though: I have no budget whatsoever, but if you have a cloud and your single problem is not having a credit card, or VMs, or something to run and manage the tests, call me immediately — I will figure out how to get you money in order to accomplish that. I think Phil is exactly right.
K
The
problem
is,
is
that
Google
engineers,
no
Google
infrastructure
and
and
I
think
we're
always
going
to
like
be
a
little
bit
behind
when
it
comes
to
knowing
AWS
or
as
your
digital
ocean
or
all
the
places
that
we'd
love
cumin
entities
to
run.
So
if
you
have
the
expertise,
please
don't
let
money
be
the
blocker,
please
don't
talk
and
Aaron.
J
AWS recently got a little of this attention, so those tests run at the same frequency as GCE and GKE. This is huge, tremendous — this is fantastic. But just to echo Phil's point: it seems to me that Zack was a hero and just opened and fixed whatever needed to be fixed in the kube-up scripts for AWS, and it'd be great if we had more than just, you know, heroics around this.
I
A question — yes, it's Michael. As development is winding down, there are more people, at least that I'm working with, who are able to participate. If I missed this, please let me know: where is the sort of work queue that people can look at and say, "oh look, I can jump on this"? Has it been posted?
L
So
there's
there's
different
areas.
One
area
that
I'd
probably
have
people
is
I,
am
I've,
been
creating
issues
for
the
failed
upgrade
tests
and
I've
laid
up
in
applying
the
label
kind,
upgrade
test.
Failure
and
I've
posted
a
link
to
that
in
the
dock,
under
state
of
automated
upgrade
testing
and
inside
each
one
of
those
issues,
I
post
the
link
to
be
publicly
visible
test
grid
failure
that
shows
it
happening
as
well
as
the
the
summary
of
the
more
detailed
results
of
that
test
run
known.
L
One challenge we have right now is that we have eight deployment issues, and they might all be the same cause — we don't know. We have a couple of people looking at them. If we really wanted to move this forward as quickly as possible, we would have different people looking at the different deployment issues and communicating with each other. But for more people who could help who are not helping yet, I want to make sure that they're not duplicating the effort of other people and getting in the way, because I worry now that, as people are rolling off dev, we could easily have a coordination failure that lets us all down. So, yeah.
L
Excellent question. Probably myself or David Oppenheimer would be the two best people, because we're the two looking at the upgrade test issues.
I
A suggestion — well, it depends what you're good at: if they're able to jump in on something without a lot of ramp-up, then that's great; and if it's someone — you know, new hires and such on my team — that would take so much ramp-up, it's not worth them banging on it. Thank you, that sounds great. I think broadcasting that — "if you have spare time, please go to David" — is probably a good thing to do. And then my last question: I think it's great that you're talking about the fact that the more fatigued people are, there's a point of diminishing returns. Do we have any estimate or idea — are we in a situation where we're looking at days, weeks, months of possibly slipping the date that we're shooting for? And what is our current target date, even if we know we're not going to make it? Those are my last two questions. Yeah.
L
Okay,
so
I
think
this
is
something
I'd
like
to
discuss
in
the
burndown
meeting
and
I'd
like
to
have
other
people's
input
on
our
current
target.
Our
previous
current
Derek
target
date
was
Tuesday
and
our
with
the
original
timeline.
I
gave
we'd
be
cutting
the
release
candidate
tomorrow,
and
we
would
have
high
confidence
that
there's
nothing
wrong
with
it.
L
So, clearly, we're not in the state that I laid out on the roadmap for hitting the Tuesday date, and right now, unfortunately, we've uncovered a bunch of issues, and we don't have a good read on the severity of those issues. We haven't even finished setting up the infrastructure to uncover additional issues with the GCE stuff.
L
So — I think I'm going to end on an uplifting note. But to your point: one thing I need to do tomorrow morning is figure out the next steps that we need to take and how we can parallelize that effort. I wish I had a more solid vision of exactly what needs to be done. Okay.
L
Yeah — to your point, the issues we're uncovering now are largely things we haven't been checking in previous releases, and things we could have very, very easily just launched with and not uncovered. So that's good. And what else is good is that this is actually not something we have to wait until the last two weeks to start for our next release, so we don't have to experience this every time.
A
I'll invite him right now. And I will also just mention that the K8s burndown meeting is a public meeting; you need to be part of the K8s burndown mailing list to have the invite, but that's only to make sure that we have a handy way to track it. It is a meeting, though, that is specifically for going through open issues as quickly and as efficiently as possible, trying to get to decisions, action items, and assignees, as well as short discussions about things like the risks and rewards of slipping timelines.
A
So
saying
that
we're
pushing
that
conversation
there
is
not
in
any
way
to
try
to
it's
just
going
to
make
sure
that
we
have
the
right
audience
of
people
participating
and
it
in
a
smaller
group,
where
it's
not
a
lot,
where
it's
not
necessary,
a
larger
all
right
Phillip
did
you
have
anything
more?
No.
A
Okay, then I'll go through some quick notices. Docs: as we push really hard toward the 1.4 release, we need to make sure that our docs match, so please get your doc PRs in for the release before Friday — that's tomorrow — at noon Pacific time; the current docs-PR tracking link is showing in the notices section. For those of you who are curious and interested in how the elders idea is coming forward, and how we are getting to having a solution and an elders proposal:
A
I
have
updated
the
issue
which
I
will
put
a
link
to
in
here.
I
hope
they
did
the
issue
in
the
community
Cooper
Denny's
repo,
with
revised
language.
That
would
be
basically
the
formation
language
for
the
elders,
the
charter
for
them,
so
that
be
too
is
up.
Please
feel
free
to
go
comment
on
it.
I
would
really
like
the
lock
fits
down
as
we
get
one
point
for
out
the
door
and
get
started
on
1.5.