From YouTube: Kubernetes SIG Cluster Lifecycle 20171212
Description
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.4d67o86ks7yr
Highlights:
- Reviewed the proposal for a new benchmarking tool
- Update on gcr.io support of manifest lists
- Running kubeadm tests on AWS
- 1.9 release status
- KubeCon summary
D
Okay, hello everyone. I'm calling in from [inaudible]. So let's take a look at this project. This project is a Kubernetes real-workload benchmark; under this program, you run real workloads as the test cases. So first, let's take a look at an overview of this project. It's a Kubernetes real-workload benchmark engine, which uses real workloads as the test cases. So the key point is the real workload.
E
Can you hear me? I'm [name inaudible] from the team at [inaudible] cloud. We have done some work on cluster provisioning tools, and in my opinion the new Cluster API looks good. As you said, it is still in the prototype stage, and we are thinking about whether to choose the Cluster API or another approach for [inaudible] cloud. So we want to discuss it with you.
G
I think last time we talked about this, the suggestion was about whether to use the cluster lifecycle manager to run today's Kubernetes conformance tests; we could use the same thing here. Another thing: this is not the cluster lifecycle manager itself, and it's not a hard requirement for this one. It goes beyond what we are doing for the conformance test: we want to do something for the performance benchmark, on top of and beyond the conformance test.
A
A question for you: can you point the conformance, like the benchmarking, at any existing cluster? I think that would be one great way to do this, to say that the cluster lifecycle manager is in some ways out of scope. If all you need is to create and delete clusters, you can create the cluster however you want and then run the benchmark on top of an existing cluster. Yep.
B
Ideally, part of your benchmark tests would be gathering data on how the environment is configured, because you ideally want to plunk down your test in any environment and be able to collect as much information about said environment as possible, so that any other person could take that and potentially reproduce it.
D
So we're doing this, and I think we can explain it in terms of Kubernetes users. There are more and more companies attempting to adopt Kubernetes in their production environments. They want to know whether their Kubernetes clusters are stable when they run different kinds of workloads in their clusters.
D
So this is the logical execution flow of the benchmark. Firstly, in the configuration stage, the user gives the benchmark tool a configuration file and command-line flags to define the test cases we want to use and other information. Secondly, in the execution stage, the benchmark tool runs the test cases and collects monitoring data from the cluster.
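The two stages described here, a configuration stage driven by flags and an execution stage that runs test cases and collects timing data, can be sketched roughly as below. The flag name and the test-case catalog are illustrative assumptions, not the actual tool's interface:

```python
import argparse
import time

# Illustrative catalog of "real workload" test cases; the names and the
# sleep stand-ins are hypothetical placeholders for deploying workloads.
TEST_CASES = {
    "web-serving": lambda: time.sleep(0.01),
    "data-caching": lambda: time.sleep(0.01),
}

def parse_config(argv=None):
    """Configuration stage: command-line flags define which test cases to run."""
    parser = argparse.ArgumentParser(description="real-workload benchmark (sketch)")
    parser.add_argument("--test-cases", default="web-serving",
                        help="comma-separated test case names")
    return parser.parse_args(argv)

def execute(config):
    """Execution stage: run each selected test case and collect monitoring data."""
    results = {}
    for name in config.test_cases.split(","):
        start = time.monotonic()
        TEST_CASES[name]()                      # run the workload
        results[name] = {"duration_s": time.monotonic() - start}
    return results

if __name__ == "__main__":
    print(execute(parse_config()))
```

In the real tool a configuration file would presumably supplement the flags; this sketch only shows the flag-driven selection and the measurement hand-off between the two stages.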
D
So how does it work? I think step one is that the cluster lifecycle manager will create a cluster, maybe on EC2 or on bare-metal hosts. In step two, the test engine will render the workload and execute it on the Kubernetes cluster. At the same time, monitoring daemons will be installed on the nodes where possible and will collect the data we are interested in; the monitoring controller will store that data in storage.
D
Step three is the evaluation system. It will read the data from storage, then use a Go module associated with each test case to evaluate the performance of the cluster, and it will generate a report. In this report we will say whether the result is OK, with some details, some suggestions, and some analysis pointing out how to improve the cluster. So that's all for the slides; I will stop there.
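The evaluation step just described (read collected data back, evaluate per test case, emit a pass/fail report with suggestions) has a simple shape; a minimal sketch follows, in which the metric name, threshold, and suggestion text are invented for illustration (the proposal itself uses Go modules per test case):

```python
# Hypothetical per-test-case evaluator: inspects collected metrics and
# returns (ok, suggestion). The 250 ms threshold is illustrative only.
def evaluate_web_serving(metrics):
    ok = metrics["p99_latency_ms"] <= 250
    return ok, None if ok else "p99 latency high: consider scaling the deployment"

EVALUATORS = {"web-serving": evaluate_web_serving}

def generate_report(collected):
    """Read collected monitoring data and emit a per-test-case report."""
    report = {}
    for case, metrics in collected.items():
        ok, suggestion = EVALUATORS[case](metrics)
        report[case] = {"ok": ok, "details": metrics, "suggestion": suggestion}
    return report
```

The key design point from the talk is that each test case owns its evaluation logic, while the report format (OK flag, details, suggestions) stays uniform.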
B
We wanted to be able to plunk down inside of the cluster, and any cluster, because it could be provisioned so many different ways, and then, from that point, expand the capabilities and execute the test. So part of Sonobuoy was designed specifically with this in mind, because we had performance tests that we originally had run, and they were external to the cluster. So we at Red Hat, when I was at Red Hat...
B
We
had
designed
a
entire
test
suite
that
did
exactly
what
you're
doing,
and
it
was
called
pea
bench
and
cluster
loader
and
pea
bench
basically
clunks
down
across
your
entire
cluster
and
gets
high
frequency
data.
So
that
way
you
can
take
that
back
and
analyze
it
afterwards.
But
cluster
loader
was
the
work
load
profile
engine
that
basically
ran
a
controlled
experiment
to
load
your
cluster
to
a
certain
States
and
then
P
bench
would
collect
all
this
data.
B
And
then
you
could
take
that
data
and
manipulate
it
afterwards
and
whatever
engine
you
wanted
to
part
of
creating
Sona
boy
was
to
unify
a
way
of
deploying
this
on
any
Questor
right,
so
so
nobody's
really
agnostic
to
what
it
executes
on.
So
you
could
create
a
plugin
that
basically
ran
P
bench
right,
so
that
was
actually
one
of
the
intentions
and
there's
an
open
issue
against
it.
So
it
would
run
down
in
your
cluster,
deploy
all
the
pieces
for
P
bench,
execute
the
cluster
loader
profile,
coalesce.
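For context, Sonobuoy's plugin model at the time worked roughly like this: a plugin pod writes its output under a results directory, then writes the path of that output into a "done" file, which the aggregator side watches for. The sketch below only mimics that hand-off shape locally; the file names and aggregation logic are simplified stand-ins, not Sonobuoy's actual implementation:

```python
import os
import tempfile

def run_plugin(results_dir):
    """A 'plugin' (e.g. one wrapping pbench) drops its output, then signals
    completion by writing the output path into a done file."""
    out = os.path.join(results_dir, "pbench-output.txt")
    with open(out, "w") as f:
        f.write("high-frequency metrics would go here\n")
    with open(os.path.join(results_dir, "done"), "w") as f:
        f.write(out)  # the done file names the results to collect

def aggregate(results_dir):
    """The 'aggregator' waits for the done file, then reads the results."""
    done = os.path.join(results_dir, "done")
    if not os.path.exists(done):
        return None  # plugin still running
    with open(done) as f:
        result_path = f.read().strip()
    with open(result_path) as f:
        return f.read()

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    run_plugin(d)
    print(aggregate(d))
```

The point of the contract is exactly what B describes: the framework stays agnostic to what the plugin executes, and only the deploy/collect hand-off is standardized.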
A
I was going to say something similar. I know there's a comment in this week's meeting notes pointing to an effort that's being driven by the CNCF, where the cloud-native folks are doing benchmarking of Kubernetes clusters. There's also a Google Cloud Platform project called PerfKit, which is designed to do performance benchmarking of different cloud platforms, and it looks like it has some very basic support for Kubernetes as well. So I'm wondering if there's a reason to start a new project versus jumping on one of those.
G
That's the reason I suggested he talk to SIG Scalability, because I've heard of various efforts, but I didn't see a match, especially not from the real-workload angle. I know there are efforts on various performance tests; SIG Node also has some node-level performance tests, and I know there's the conformance test. But there's no standard, uniform way to run performance tests against real workloads. And on the other hand...
G
I
also
know
that
performance
test
will
largely
depend
on
your
real
production
right.
So
next,
for
example,
even
when
you
measure
their
network
related
stuff,
how
are
you
going
to
there's
a
huge
independent
foreign
firm
depend
on
your
back
back-end
and
they're.
Also
disk
I/o
there's
also
a
lot
of
dependency
on
your
production.
So
so
that's
why
we
want
to
give
to
learn
the
idea.
G
What's
the
cost
of
differentiation
of
the
product
for
the
performance,
so
that's
kind
of
help
that
every
provider
also
help
them
in
general
of
the
kubernetes
community.
So
not
everybody
in
the
production.
So
that's
why
we
started
G
with
the
path
lifecycle
we
could
have
started.
There's
to
also
have
the
separate
proposal
in
there
or
interpreter.
Also
we
need
to
the
Noda
just
ran
those
work,
note
and
note
aside,
because
one
pad
after
no
the
performance
dashboard
which
use
non-reality
occur.
So
next
are
those
fake
kilometer
to
test.
B
They can always hit me on Slack, and I can point them to the history of all the different pieces that are there, because there's a lot of history here. We've been at this game for three years, and we've had multiple solutions that have existed over time from different vendors but are not unified. The only thing you can really unify on is the tooling to deploy and collect; the analysis is always going to be custom. You cannot unify there.
B
So
the
provisioning
is
very
opinionated,
so
you're
not
going
to
unify
in
provisioning
until
we
have
like
cluster
API
right
and
then
deploying
to
execute
is
sometimes
also
in
pinyin
ated.
But
if
you
have
a
framework
for
doing
that,
it
makes
it
easier.
That's
the
reason
why
we
made
sort
of
way,
but
then
all
of
a
sudden
you
now
you
have
the
data
in
a
reusable
format.
B
The
next
step
is
analysis,
and
that
will
also
be
highly
opinionated.
So
it's
like,
you
have
the
front
side
and
the
back
side,
which
are
going
to
be
custom
or
whatever,
but
the
middle
is
the
suite
ground
to
probably
focus
on.
How
do
you
create
the
tests
that
you
want,
and
how
do
you
collect
the
data
that
can
be
redistributed
for
everyone
to
take
a
look
at.
B
All the tooling is general purpose. pbench was actually originally designed to do kernel performance benchmarks, and it was adapted for OpenShift over time, over many, many years, even back to OpenShift v2. So the original intention was to do generic benchmarks, and all of the benchmarks you get from any Red Hat produced blog post basically originated from pbench in some way, shape or form.
B
Yeah, I totally agree. I think having a document that enumerates the test space and what these tests provide is almost just as useful as the whole jiggery-pokery that you've set up. I think it's even more useful, because then anybody should be able to reproduce the tests in any environment, irrespective of the apparatus.
A
I guess that's what I was saying earlier, right? It seems like the real value here is the tests themselves, and not necessarily all the framework around them. There are quite a few frameworks that already exist that we might be able to just take advantage of, so we can actually focus our time and energy on the tests themselves, which is what we care about. We care about actually doing the benchmarking: trying to generate load, either synthetically or by capturing whatever your workload looks like.
G
So that's why I suggested that, based on whatever the conformance effort has as the standard framework, we just use one framework, so then we can combine the data. Everyone runs the same framework and the same set of tests, and you can accumulate a history of performance benchmark data, which is even more important, so everyone can reproduce the same set of tests. But on the other hand, choosing one framework...
G
...as the starting point is also important; otherwise the tests depend entirely on the environment, and not everyone is running the same set of tests. But what we want is, for a given Kubernetes and a given set of workloads on the same infrastructure, to compare the data between releases. Every time there is a new release, you want to run the same test set against the same configuration and the same setting: what does the new release change? That means we can hope to find regressions, something like that.
B
Yeah, feel free. I think we should probably table it at this point, because I think we're probably in violent agreement, but feel free to reach out on Slack, and I'm sure Wojtek will also be interested. I know [name inaudible] is working on other things, but I know Wojtek, myself, Jeremy Eder and Sebastian have all worked in this space before.
A
Should we go into the next item? Yes. So Rodrigo said he couldn't make it because his doctor's appointment was running late, but he wanted this mentioned: I saw on Lucas's Twitter feed that Google's gcr.io has just added support for manifest lists. Lucas has been bugging us about this for like a year, and it's finally there. It requires some internal migration, and they didn't want to migrate google-containers last week during KubeCon, which is probably a good thing.
A
I,
don't
know
if
they'll
do
it
this
week,
since
we
have
a
release
coming
out,
but
sometime
in
the
next
couple
of
weeks,
we
expect
it
to
migrate
and
then
then
have
support
for
manifest
lists.
So
I
think
Lucas
is
very
excited
about
being
able
to
add
better
cross-platform
support
coming
up
in
1.10,
which
is
I
think
mostly
just
changes
to
the
build
process
to
be
able
to
push
containers
that
we
build
to
use
manifest
lists
with
multiple
architecture,
support
and
then
a
very,
very
small
change
to
our.
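For readers unfamiliar with the feature: a manifest list (later standardized as the OCI image index) is essentially a registry object that maps (os, architecture) pairs to per-platform image digests, so a single tag can serve amd64, arm64, ppc64le, and so on. A toy illustration of the lookup a registry client performs; the digests here are fake:

```python
# Toy model of a manifest list: one tag fans out to per-platform manifests.
MANIFEST_LIST = {
    ("linux", "amd64"):   "sha256:aaaa...",  # fake digests for illustration
    ("linux", "arm64"):   "sha256:bbbb...",
    ("linux", "ppc64le"): "sha256:cccc...",
}

def resolve(os_name, arch):
    """Pick the platform-specific image digest, as a pulling client would."""
    digest = MANIFEST_LIST.get((os_name, arch))
    if digest is None:
        raise KeyError(f"no image for {os_name}/{arch} in manifest list")
    return digest
```

This is why the remaining work is "mostly just changes to the build process": the runtime side already knows how to pick its platform's entry once the registry serves the list.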
H
I talked to Lucas some time ago, probably about a month ago, about running the periodic kubeadm tests that are currently run on GCE on AWS as well, and I finally have time to get around to setting something up. The question I have for everyone here is this: I essentially see two approaches. One of them is choosing a set of tests and running them from an account that is owned by the community, which AWS would essentially credit on a monthly basis. The other one would be running them from an account...
H
That's
sort
of
dedicated
to
ETS
testing
infrastructure,
just
sort
of
less
open,
but
potentially
easier
for
me
to
do
on
my
side
and
then
so
either
way.
If
you,
if
you
wanted
to
discuss
details
and
what
tests
you
think
needs
to
be
run
on
AWS
infrastructure
like
I'd
love
to
have
that
conversation
either
now
or
offline,
but
I
just
wanted
to.
Let
it
but
say
closer
lifecycle
know
that
I've
taken
that
and
potentially
writing
a
proposal.
B
The reporting in Testgrid from the federated testing infrastructure does not give signal to anyone outside of the people who are working in test-infra, and even they don't look at it that often. The blocking jobs give continuous signal, which is both good and bad. The blocking jobs are good in the fact that you find things early, before they become an issue; they're bad in that, if there's something wrong, you will hear about it in many, many ways.
B
It depends upon who reports the signal out. If we have a means by which we have unified reporting that we can take a look at, and if some SIG is tracking this over time, so if we have a URL that we can look at periodically as part of this SIG to evaluate, I think that's fine. It's just a matter of keeping on it, and somebody needs to watch the fences, right?
I
Yeah, I would suggest... The theory, I think, is that you can submit the results into GCS, they will appear in Testgrid, and they should be on par; other than blocking of merges, that should all be fine. The practice is that it's a little hard to get the results into Testgrid with the existing infrastructure. It's a little complicated.
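For reference, the federated-results convention around this time was roughly: each build uploaded under a GCS prefix contains a `started.json`, a `finished.json` with the overall result, and JUnit XML under `artifacts/`, which Testgrid then ingests. The sketch below just writes that layout to a local directory; treat the exact field names as an approximation of the convention rather than a spec:

```python
import json
import os
import time

def upload_run(root, build_id, passed):
    """Write one build's results in the started/finished/artifacts layout
    that the Kubernetes result-ingestion tooling expects (approximated)."""
    build_dir = os.path.join(root, build_id)
    os.makedirs(os.path.join(build_dir, "artifacts"), exist_ok=True)
    with open(os.path.join(build_dir, "started.json"), "w") as f:
        json.dump({"timestamp": int(time.time())}, f)
    with open(os.path.join(build_dir, "finished.json"), "w") as f:
        json.dump({"timestamp": int(time.time()),
                   "result": "SUCCESS" if passed else "FAILURE"}, f)
    # Per-test-case JUnit XML goes under artifacts/.
    with open(os.path.join(build_dir, "artifacts", "junit_01.xml"), "w") as f:
        f.write("<testsuite tests='0'></testsuite>")
    return build_dir
```

In practice the hard part I is describing is not producing this layout but wiring an external runner into the upload and indexing pipeline.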
I
It's not quite a first-class citizen, and so I keep flip-flopping on this. I would suggest, if you have the energy, trying both, not least because for doing it directly, run by test-infra, we have that account. I can talk you through it; it's a little bit of a mind bend, but it isn't terribly difficult. Then you have a reference for what it should look like when you start sending results in yourself, and for what the feature gap is once we get more people doing federated reporting.
H
See
what
you're
saying
I
don't
know
I'd
have
to
ask
about
that,
but
yeah
I,
don't
like
the
crediting
approach,
because
there's
a
potential
for
like
I,
don't
even
know
what
the
process
is
to
getting
the
credits
every
month.
I
don't
know
if
there's
I
hope
that
there's
not
some
bureaucratic
process
that
we
have
to
go
through
every
month
and
that's
why
I'm
scared
of
doing
that
but
yeah.
Ideally
it
would
just
be
built
directly
to
us,
but
I'd
have
to.
I
My understanding is that the process is not terribly bureaucratic: there's a block of credits that has been provided, we're going to see how long it takes to burn through them, and then we're going to decide how often we want to top it up. I think it's like a supervisory thing; you know, it's not like a hundred million dollars just goes through without anyone signing off on it, right.
J
...helping debug it. I fixed one issue where the nodes failed to downgrade because of a switching issue in the shell scripts between GCI and COS, and that was an easy fix. Now the current error is that the master is failing to downgrade, because we upgraded etcd versions and etcd doesn't downgrade.
B
Actually, I just merged a PR recently which says we don't downgrade etcd versions: we will downgrade the control plane, but we will not downgrade etcd versions. If we do, we do a three-part process. One, before we even do the upgrade, we purposely make a snapshot and set it aside in a given location, and then we have a multi-part backup. I can point you to the location of where we put this code in kubeadm, because Lucas and I were bantering on this for a while.
B
This is also part of the jiggery-pokery of the test apparatus that you're currently running, versus, eventually, we want to rally on having the same unified deployment tooling, so that instead of people testing it five different ways with five different deployers, we benefit because we've already fixed it once, right? "I agree, but that's not gonna happen by tomorrow." No, I agree: you can hack it, but I can show you the code where it gets changed. Okay.
B
I think the one takeaway I'm going to take, as far as how I execute within this SIG in particular, is that I would like to get more folks involved. There are only a couple of people talking and a whole bunch of people sitting and listening, and I would love to have your opinions and feedback (patches welcome, etc.) involved inside of the execution. So if you are shy and don't want to chime in, feel free to ping me on Slack and I will help get you looped into some of the things that are going on.
K
One quick one for KubeCon: the CNCF has its own project pavilion, and it would be a win to have a booth there, because people were complaining that they wouldn't be able to participate in different talks or whatever. It would be great to have a booth to hang out at.
H
This
topic,
G,
face
to
face
Mina,
was
I,
think
a
really
good
opportunity
so,
like
I'm,
a
new
lunch
business
to
the
group
and
I
think
just
some
feedback
on
that
is.
It
was
a
good
effort
and
if
it's
nice
to
follow
up
Lucas's
talk
with
being
able
to
kind
of
spawn
together
a
new
ideas.
We
should
definitely
do
some
diligence
to
make
sure
that
the
location
is
a
little
bit
quieter
and
maybe
have
a
place.
B
Right, if not, we'll go into the next one. There is 1.9 planning, but with Robert kind of, you know, not here but here, and Lucas not here but here, I recommend folks take a look at the linked documents. We are starting to plan for the next iteration and cycle. If you are willing to contribute, I'd be happy to help guide you along your journey into the wilderness with a hatchet and no flint or whatever. There's also what we're probably going to have to do this next iteration as well, which is...
B
Going once, twice, three times... If not, thanks, everybody. I think the planning will probably begin in the New Year, I'm guessing at this point. I know folks are probably going to be heading out, if not this week then definitely the week after, and they'll be totally offline. So, as is tradition in the community, if we don't get a chance to meet next week: happy holidays, whatever holiday you wish to be involved in, and I think that's it. Thanks, everybody.