From YouTube: 2017-05-23 09.00.54 SIG-cluster-lifecycle 166836624
A
Okay, great. This is the SIG Cluster Lifecycle meeting on Tuesday, the 23rd of May 2017. Let's have a look at what's on the agenda. Tim, you have some things; do you want to go through them?
B
Okay, I was hoping Mike would be around, so I'll go through the ones that don't need Mike yet, or Lucas. The first one is sane defaults for the audit work. This will be important now that there's better audit logging in 1.7. Operators will want access to that facility, and I don't know that we even have a default for where we want to mount or put that file.
C
I can comment on that, since we've been working on the audit logging side of things in sig-auth. The basic structure is that there will be two options in 1.7: one will be a file with structured JSON, and the other will be an actual webhook that will allow people to receive audit log events. So if one of those is more helpful than the other, then yeah.
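The two backends described above map, in the 1.7 advanced-audit alpha work, to kube-apiserver flags roughly like the following. This is a sketch only; the exact flag names, feature gate, and file paths should be checked against the 1.7 documentation.

```sh
# Log backend: structured JSON audit events appended to a local file
kube-apiserver \
  --feature-gates=AdvancedAuditing=true \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log

# Webhook backend: audit events delivered to an external receiver,
# configured via a kubeconfig-format file
kube-apiserver \
  --feature-gates=AdvancedAuditing=true \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig
```

The open question raised in the meeting, a sane default location for the audit file, corresponds to the `--audit-log-path` value shown here.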
B
The ones left are the upstream/downstream packaging one, and the one about feedback from Denise, who isn't here. For the self-hosted-being-broken one, I don't know. I was waiting for Lucas to give more information, because I started looking into it, but I'm not entirely certain at what state it broke or how to reproduce the broken behavior. I've tried a couple of times, and I can only get into a broken state by misconfiguring CNI, so I don't exactly know what conditions get us into that state.
B
That's the thing: I don't have history there. I only have anecdotal evidence from the single ticket. I guess that kind of points to whether or not we're going to turn on end-to-end testing, too. If we had end-to-end testing, it would give us much better visibility into when something breaks and how it breaks. I know Jacob is here, if you want to speak to that.
D
In CI we test the default options for just kubeadm init and kubeadm join. We just don't have many other variants: self-hosted is not the default option, and neither is file-based discovery or HTTPS-based discovery. We just use the normal token-based discovery, and yeah, we could definitely have more coverage there.
D
That's what we do. I think we just don't have many variants in the setup itself; self-hosted is an option at initialization time. We run the full conformance test suite, all of the conformance end-to-end tests, but we don't have many variants on how we initialize the cluster in the first place. We just use the default.
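For reference, the discovery variants mentioned correspond roughly to these kubeadm join forms. This is a sketch; the exact syntax varies between kubeadm releases, and the endpoint and paths here are placeholders.

```sh
# Default: token-based discovery (what CI exercises today)
kubeadm join --token <token> <master-ip>:<port>

# File-based discovery: cluster information supplied in a local file
kubeadm join --discovery-file=/path/to/cluster-info.yaml

# HTTPS-based discovery: the same information fetched from a URL
kubeadm join --discovery-file=https://example.com/cluster-info.yaml
```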
D
I mean, it's something anyone could add; it's not Google-specific. All of that code is open source. I can try to help people if anyone wants to attempt to take that on, and it should mostly be cloning some files and changing a few bits of how we initialize. But I'm kind of bogged down this quarter; I've instead got plenty of post-mortem action items, and they don't include this.
B
I'll consider it.
A
Well, I guess that takes us on to my item, which is really just to apologize. I haven't had capacity to work on upgrades, and given that we're now two weeks out from code freeze, I'm just going to call it and say that unless someone else picks it up and tries to take it over the line, I don't think I'm going to make it for 1.7. So, sorry.
E
I guess one thing there, Lucas: we don't have to have them be coded, automatic, scriptable upgrades. We need to have a procedure for upgrade. If we said kubeadm was beta in 1.6, and someone has a 1.6 cluster, can we write documentation to get them to 1.7? I think that's the minimum bar we want to hit, right? Even if nothing is automated and everything is manual here.
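A manual, documented procedure of the sort being proposed might be outlined like this. This is a hypothetical sketch only, not tested guidance; the package names, version pins, and ordering are assumptions.

```sh
# Hypothetical outline of a manual kubeadm 1.6 -> 1.7 upgrade
# On the master: upgrade the packages
apt-get update
apt-get install -y kubeadm=1.7.* kubelet=1.7.* kubectl=1.7.*
# Update the control-plane static-pod manifests to the 1.7 images,
# then restart the kubelet so it picks them up
systemctl restart kubelet
# On each worker: drain, upgrade the same packages, restart, uncordon
kubectl drain <node> --ignore-daemonsets
kubectl uncordon <node>
```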
G
Look, I have a question. Regardless, there's a timeline, right? What kind of help, or what kind of opportunities, do we want to be available there in the short term and the long term? Actually, how does that relate to the team's work on HA? Because these are very much connected to the HA design. We should be collaborating with the HA design team; I think that team is working on this.
B
All of the HA work is kind of asynchronous at this point, happening on branches. Some of that stuff has already been merged, the stuff that's non-disruptive, and the other things still need documents and another full vetting to occur. That won't happen in the 1.7 timeframe. I think what it sounds like you're asking (I'm reinterpreting) is that we want a canonical list of action items for 1.7 that folks can help out on, and potentially to start laying down the list of priorities for 1.8 items, yeah.
A
I really appreciate you asking, thank you. Your question was kind of in two parts, so I'll address the second part first, which was: how is the upgrade work going to mesh with the HA work? Follow the link where it says "upgrades" on the agenda to the feature in the features repo, and then follow the link again to the design proposal, which is a sketch at the moment.
A
Establishing whether self-hosting works at all, which doesn't even seem to be understood at the moment, would be a really good starting point. Beyond that, I know that we need checkpointing in self-hosting so that we can survive reboots. Those two feel like sensible groundwork items to me before we even start sketching out a prototype of the CLI for kubeadm upgrade and so on.
D
It seems like the consensus is that people want to create an API object rather than abusing or reusing ConfigMaps, and so I am looking into basically tracking the state of which system add-ons are installed and their current versions, that sort of thing. They would probably have to be in core, which means a much higher bar, of course. So I'm looking at that, but no real progress, yeah.
A
Yeah, that sounds good. Cool, so Sasha, I think that's you, right? Updates on CNI?
G
Do you want a quick update on where we are with the CNI plugins for testing? Generally, the code review with Jacob on adding a new flag to the testing framework passed, so we can now pass the flags for CNI plugins, and the default will be Weave, according to the code review. Over the last couple of days I've been working on kubernetes-anywhere to plumb through that flag, basically, and I think I've done a rebase on Jamie's work and we've synced up.
G
I'm still testing everything from my laptop on Google Cloud; things don't work as expected yet, but I hope I'll wrap it up today. Then we'll have two pull requests in sync, in the testing framework and in kubernetes-anywhere, and the last piece would be to add configuration and jobs for Jenkins that use those new features. I know that someone from the Tigera team is working on the same, but for Calico and Canal.
A
That's super. So Mike, noticing that you're here: I think Tim had some comments earlier, and he was waiting to ask you about this.
B
There are questions about the release repository and who kind of owns and manages that, namely some of the packaging stuff for bootstrapping kubeadm. You know this is a precursor, right?
H
The problem is, we kind of went with a prepackaged solution that was originally very quick to get off the ground. I think the correct place for this is integrated into our release, so that when we run anago, which is the script that actually cuts a release, we update a Debian apt repository synchronously with the debs built. Alternatively, we could explore packaging downstream, like you are doing, but that has problems.
H
So what I was considering is creating a GCS bucket that I could grant community people on the release team access to push to, and we could host deb and rpm repositories there. Maybe not rpm; maybe we want to go with just debs. It can be Google-funded, but we need to be able to give the release team permission to push, and then somebody needs to write the scripts to maintain that as part of anago. Okay.
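For context, this is roughly the consumer-facing interface such a repository preserves; the upstream apt repository looks like the following (the repository URL and channel shown are the conventional ones at the time and may differ).

```sh
# Consumers add the apt repository and install the packages as usual
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```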
I
I wanted to say that would be good; I'm interested in this topic as well, namely getting the packages built officially. We have build scripts that produce snap packages for the releases today, but we're also interested in getting that upstream as well, so that for every Kubernetes release the snaps get built alongside the debs and rpms.
H
Yeah, sorry, that release repo is somewhat neglected. Every so often when we cut a release a bunch of people work on it, and then it goes neglected for another two months. It would be good to get some ownership over there. Maybe we can do periodic meetups as part of a working group, and there is the new SIG Release that was formed, I think, yeah.
H
The problem with SIG Release is, I don't know, maybe if Mike or Caleb were on SIG Release, or somebody else is on the call; I think they didn't want to take ownership of the actual build-related stuff, and they wanted to work on issue tracking and failing-test tracking in the run-up to the release. I think that was it, so I guess this would be more of a working group for build-related issues.
J
It's actually both. We have the release team, which is responsible for making the release, and the release team is contained within SIG Release; that's the idea. The SIG owns the process and the tooling that supports the release process. So it seems as if building packages would fall under SIG Release's responsibility.
H
The other thing is, we probably don't want to do this the way we do now. I think "release" is overloaded: we need to be able to do a release per push, right? That's what we do now; we create a release and we push it to GCS for testing. In that sense, I don't know if we should be calling these builds rather than releases. This is not the release that happens every three months; this is the release that happens hundreds of times per hour.
D
This ties into the question of the release lifecycle, because these are separate repos. It's not practical to cut a release of, say, kubeadm in lockstep; there just isn't enough time between the last change going in and the release, and releasing everything at the same time is dangerous.
H
We felt that it wasn't only dangerous; we actually felt it. The last time we re-released kubeadm it just went terribly. So I think it's definitely an open question, especially as we start to track issues and code across repos; we need to figure out how to do these releases safely.
A
Cool. Tim, were there any other issues, or are there any other topics anyone wants to raise today?
B
I don't know if there's an answer, and it sounds like there's a non-answer. There were a couple of folks with a PR to put these certs in as Secrets as part of the startup routine, and the problem with that is that Secrets aren't necessarily secret, and RBAC isn't locked down in a way that stops controllers from having root-level access; they have access to everything, right? So it's a weird chicken-and-egg problem with security. Listening to the sig-auth call, it sounded like everyone just went in circles, yeah.
C
I mean, I think you stumbled upon a broader conversation that we're having around Secrets, which is kind of unfortunate. I think the main takeaway is that, for a lot of clusters, read on Secrets in the kube-system namespace is equivalent to root. There's a big question of whether that needs to be true, and whether we are trying to develop clusters where that is not necessarily true.
D
The only way to do RBAC permissions is on a namespace basis, as far as I understand, and there have been tidiness issues before with throwing everything into kube-system. But now we have a concrete reason: perhaps we should have a namespace for everything, one per deployed entity, one for the controller manager, one for the API server, and so on. Unless there's a plan to change the idea that namespaces are the control scope, we should bite the bullet and split them all out, yeah.
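In RBAC terms, the namespace-per-component idea sketched above might look like the following. This is a hypothetical illustration, not an agreed design; the namespace and role names are invented.

```yaml
# Hypothetical: scope secret reads for the controller manager to its
# own namespace rather than a shared kube-system
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: kube-controller-manager   # one namespace per component
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
```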
C
However, there are a lot of controllers and things that simply say "we have to read all the Secrets in the cluster", or do that by default. The ingress controller is a good example; by default it reads all of the Secrets. The Federation API server control plane is another one that requires the ability to watch all Secrets in all namespaces.
E
I mean, it is in the sense that it's tangential, but it has a huge impact on cluster lifecycle, because if the decision is "you shall not store cluster root-level certificates in the kube-system namespace", then that kind of breaks a lot of the plans for self-hosting, right? Because that's where you put the data so that you can do upgrades, so that you can move your API server around and not have things tied to the local disk. Yeah.
C
No, I think, I mean, this is how we do it internally for Tectonic, simply with the idea that read on kube-system is root already. I think the main conversation is: does that have to be the case, and will that be the case in the future? So is this a to-do item for later releases, where for now we put it in kube-system?
B
Today it is reasonably isolated, and there was a conversation in the sig-auth call that this is no worse than where we are today, so we're not exposing new holes. We're already flagrantly saying we have a hole; maybe we're pounding it a little wider. But I think there would have to be some migration that allows the upgrade process to work if we ever did find a new home for some of these certs.
C
Just to give background: within that document, or within that issue, there is a link to what Secrets look like in the roadmap, and the TL;DR is that we're trying to get things like encryption at rest, but largely it's to get pod identity and defer to external secret stores, like Vault or whatever.
A
Okay, no problem. It sounds like a gnarly issue. I guess my perspective on it is that we just need to make sure we keep communicating between sig-auth and SIG Cluster Lifecycle about whether any changes around this impact our plans to provide the UX we want, yeah.
B
The only known problem that I know about is that we're still releasing 1.7 on etcd 3.0.17, and that still has the known quorum-read deficiency that was fixed in 3.1. So if you're working on AWS and you're at a reasonable scale, you've got to make sure you monitor your IOPS to whatever your backing storage location is.
A
That sounds good, and yeah, thanks for the heads-up on that. Any other topics, anything else, anyone? Going once, going twice: okay, let's knock it on the head. Thanks, everyone. I am away next week on vacation, so, well, does anyone want to volunteer to chair the SIG next week? If not, I will ask some people who aren't here.