From YouTube: 20180605 sig cluster lifecycle
A: Hello folks, this is June 5th, 2018. This is the regularly scheduled SIG Cluster Lifecycle meeting, and we have a pretty light agenda list here. Folks can fill in who is attending, and if you have an agenda item, please feel free to add it to the backlog. I think the big one for today is that the freeze is today. So we have to get folks — if there are any remaining PRs, they need to be LGTM'd by, I think it's end of business today, Pacific time — is that right?
A
They
need
the
entire
degree
to
get
into
the
release,
so
that
includes
approved
for
milestone.
It's
gotta
have
a
priority.
It's
got
to
have
a
kind
of
whatever
it
is
and
it
has
to
have
an
owner.
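For reference, a sketch of what applying that full label set looked like with the Prow bot commands of the era (the exact command set here is reconstructed from memory, not quoted from the meeting; `/status approved-for-milestone` could only be applied by milestone maintainers):

```
/lgtm
/approve
/priority critical-urgent
/kind bug
/sig cluster-lifecycle
/status approved-for-milestone
```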
A: I did see — I mean, there's one well-known PR that we have, which is the last change with regards to the kubeadm configuration modifications for this cycle. I know Lucas is working on them right now.
C: So can we just — can we just say that, as we know, Ryan is responding? He's been active in Slack as well. So can we just LGTM that one — this one — and add this one, and then close the similar one that is a week old or something? It's like weeks old, but yes. Yeah — anyway, just so we get the gcr.io change merged today. I can.
C: Yeah, that should be fine. We're planning that CoreDNS is going GA, and we have test coverage on this; as it's the same thing people do in the official manifest, I think it should be very OK to let through, and then we have a fairly secure-by-default DNS plugin we're shipping in 1.11.
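As a hedged aside on what "letting it through" meant in practice: in the kubeadm of this period, CoreDNS sat behind a feature gate before becoming the default in 1.11, so opting in looked roughly like this (the gate name is from memory):

```sh
# Sketch: opt in to the CoreDNS add-on before it became kubeadm's default.
kubeadm init --feature-gates=CoreDNS=true
```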
C: If we pull up — I can paste the PR, it's right here, six-two. So yeah, I'm thinking, right after this meeting I'm going to do only the bits changes and send those as a PR, you know, with an action-required release note for people that depend on, like, the packaged things themselves or whatever, and keep the kubeadm upgrade node and the phases commands, and then the CRI socket upload, as a different PR. That is the only thing I haven't coded still, which—
A: Yeah — I generally want to start to apply policy, but we'll deal with that later, about what is backportable and what is not, because we've been pretty loose with policy and I think we should start to tighten that up, just so that we're not kind of randomly backporting things — which overlaps with, you know, the question of what things are—
A
Okay,
because
otherwise
we
could
potentially
back
pour
in
half
of
111
right,
so
that
we
were
into
this
weird
state
where,
like
we
should
have
policy
and
other
SIG's
like
it,
was
very
strict
right.
Look,
we
only
back
port
p0
critical
blocking
bugs.
That
is
the
only
thing
you
back
port
and
like
as
an
ex
red
header,
that
is,
that
is
core
to
Montreux
right
like
that
is
what
you
do
you
never
back
port,
anything
that
isn't
p0
critical
thing.
Oh.
A: Like — that's the problem: we release, but this is a fundamental problem. We keep on doing these things, and they're not good things to do, not sustainable things to do in the long haul. We've done them historically, but they won't help us in the future. So, sorry, I meant—
A: —a central place to discuss what to backport, so it's recorded in GitHub. Let's say we have five PRs we could potentially cut backports for, and at least three of them are P0 or critical-urgent; then we discuss the other two — is it in or is it out? — and we have the centralized place where we do that discussion, so it's not lost. That's what I'm interested in.
A: So, if we're going to reference config from the docs, I don't want to have, like, an entire config structure blown out for a given version of v1alpha1 in our docs. I'd much rather have it reference a godoc with the example listing that's there, about how to do, like, arg overrides for this release cycle for certain features.
C: I don't know what to call it. So, anyway, we need to get better automation around approval for the docs repo. I noticed that there are a couple of docs that belong to us that don't have sig-cluster-lifecycle OWNERS, so you need top-level approval if you're going to do something. I opened a PR to fix that, but it got stalled on other top-level—
C: —it was about to be rebased. So that is something we need to — or I need to — rebase now, as soon as possible, and then we can get it in from the docs owners, so we actually can merge our own stuff. Otherwise we will have to wait for top-level approval every time we change a sentence in the kubeadm-specific docs.
A: Part of that we'll do as part of the docs triage, right — like, we're going to do that tomorrow. What I want to do is get a breakdown. I walked through all of our docs and I can't honestly say I understand the documentation structure. There are pieces that make sense, like the setup portion that's there, but there's also a separate cluster-administration section, down in a totally different location, which has pieces that I don't understand why they're separate.
A: Okay, and they're all blocking against that main tracking issue. I think the one thing we need to make sure we do is: anything that's blocking for 1.11, after talking with the docs folks, needs to reference that tracking issue. That way it doesn't get auto-merged as part of the release, and they have a single choke point to understand what all the kubeadm-specific things are for the 1.11 release cycle. That's the one that Lubomir has been curating so far.
A: We can't do anything yet, because we need somebody from SIG Network to own this thing; it's totally out in the wilderness at this point. Unless we try it — but the problem is, if you touch that, you're going to start to become the maintainer and owner of the kube-proxy config, and I think the general sentiment that I've gotten from talking with other people is that everybody is waiting for mtaufen to finish up the kubelet configuration before anyone's going to step up and own the other things.
C: Yeah — but I was thinking, like, in general that it counts. Yeah, so I'm also looking forward to the next cycle, when hopefully people are going to step up on this, because that one is kind of done. Dynamic kubelet config actually graduated to beta in 1.11, which is nice — it's nice to see — and it's going to be really easy to enable in kubeadm: you literally just pass the DynamicKubeletConfig feature gate through, and it's enabled; we have it.
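A minimal sketch of that, assuming the 1.11-era kubeadm feature-gate spelling:

```sh
# Sketch: turn on dynamic kubelet configuration at init time.
# DynamicKubeletConfig was a kubeadm feature gate around 1.11; verify the
# gate name against the kubeadm version in use.
kubeadm init --feature-gates=DynamicKubeletConfig=true
```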
C: That goes to the issue, and then we have the PR links down somewhere, yeah. So this is where, feature-wise or behavior-wise, this is a change in behavior, because it's super straightforward here. What's happening now, when we set hostname-override as a flag to the kubelet, is that — with the --node-name flag in kubeadm init — we actually dictate what the hostname of the kubelet should be, or what the Node API object name for the kubelet should be.
C: That is what we dictate with this. Earlier, we have said: oh, the kubelet detects whatever it can detect from the environment, to use whatever Node API object name it wants, and then we have to create matching certs. That's why we have: okay, let's generate a cert with the same name that the kubelet is going to use. So, like, having the --node-name flag is: make the certs I have match the auto-detection of the kubelet. Now we just change it.
C: And so, ideally, when we get this in place and kube-proxy has correct component configuration, we're going to host the proxy so we can dictate, with kubeadm init and join, what the node name should be. Then the kubelet picks it up from the flag; then the kubelet registers a Node API object with that name; then the proxy, running as a DaemonSet, gets the desired Node API object name from the downward API automatically.
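That downward-API wiring, as a minimal sketch — this mirrors how kubeadm's kube-proxy DaemonSet exposes the node name to the container, but the fragment below is illustrative, not the exact manifest kubeadm ships:

```yaml
# Fragment of a kube-proxy DaemonSet pod spec: the scheduled node's name is
# injected via the downward API and handed to kube-proxy as hostname-override.
# Image and version are illustrative.
containers:
- name: kube-proxy
  image: k8s.gcr.io/kube-proxy-amd64:v1.11.0
  command:
  - /usr/local/bin/kube-proxy
  - --hostname-override=$(NODE_NAME)
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```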
C
So
that's
like
the
golden
story.
I
guess,
then
we
have
the
educate
with
the
cloud
provider,
so
the
in
the
PR
I
linked
here.
If
you
go
to
the
comments,
legit
disappear,
Jordan
that
is
documenting
the
the
relation
between
hostname
override
and
the
cubelet.
There
are
cloud
provider
flags
which
is-
and
this
is
specifically
about
entry
cloud
providers,
it's
different
from
other
tree
cloud
providers.
C
But
what
it
says
here
is
if
we
specify
a
cloud
provider
which
is
basically
means
in
three
cloud
providers
like
AWS
AWS
is
the
big
like
not
issue
but
like
the
big
provider
that
makes
this
confusing
for
a
lot
of
people
cuz.
It's
really
none
of
this,
so
if
you,
even
though
we
set
hostname
override,
it's
not
gonna,
have
any
effect.
If
we
also
set
cloud
provider
AWS,
that's.
F: Right now, with the AWS cloud provider, at least with version 1.10 and below, you need to actually specify hostname-override explicitly as the fully qualified internal DNS name for that instance, because the hostname that you get from hostname only gives you the short hostname, which doesn't match up with the private DNS name, which is what's used to look it up via the API object.
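A minimal sketch of that workaround, assuming the standard EC2 instance metadata endpoint (`local-hostname` returns the instance's private DNS name; the surrounding kubelet flags are illustrative):

```sh
# Fetch the instance's private DNS name (e.g. ip-10-0-0-12.ec2.internal)
# from EC2 instance metadata and feed it to the kubelet explicitly.
NODE_NAME="$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"
kubelet --cloud-provider=aws --hostname-override="${NODE_NAME}" ...
```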
F: It will attempt to, but then it'll fail to actually start up. So it's complicated, because right now, today, to run on AWS with the cloud provider set, you have to set hostname-override on the kubelet — unless you update the hostname in advance on that instance — and then you also have to do the same thing on kube-proxy as well.
F: I was updating our Heptio AWS Quickstart, and I did try removing our use of hostname-override, and I was not able to even get through a kubeadm init without it, because it attempts the API lookup based on the hostname of the instance — from hostname, or from the /proc filesystem — and then queries that against the private DNS — the—
G: —the DNS name of the instance, sorry. And that was — that was the kubelet, with cloud-provider=aws. Kube-proxy has the same behavior, but I'm just trying to — now we're doing one at a time, right — and then cloud-provider=aws, or cloud-provider=external with the external AWS cloud provider. Okay, that's interesting.
C: What is tricky — so this is the new kubelet code that decides the hostname. The function looks like this: if you passed hostname-override, it's going to use that; if you didn't, it checks the OS hostname. And, as was pointed out, it's going to lowercase the resulting string unconditionally. That is the first thing that happens.
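A minimal Go sketch of the logic being described — paraphrased from memory of the kubelet's hostname helper, so names and error text are approximate, not the exact upstream source:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// getHostname mirrors the behavior described above: an explicit override
// wins; otherwise fall back to the OS-reported hostname. Either way the
// result is trimmed and lowercased unconditionally.
func getHostname(hostnameOverride string) (string, error) {
	hostname := hostnameOverride
	if hostname == "" {
		h, err := os.Hostname()
		if err != nil {
			return "", fmt.Errorf("couldn't determine hostname: %v", err)
		}
		hostname = h
	}
	// Note: the lowercasing happens even for an explicit override.
	return strings.ToLower(strings.TrimSpace(hostname)), nil
}

func main() {
	name, err := getHostname(os.Getenv("HOSTNAME_OVERRIDE"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(name)
}
```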
C: The override here — so, let's say we pass both hostname-override and cloud-provider=aws: it's first going to run this, but it's going to be overridden anyway with what AWS thinks is true, or whatever other in-tree cloud provider exists. That is what's documented in that PR. But yeah, this makes it really, really tricky, because we have—
C
We
have
two
cases:
one
I,
just
I'm
in
a
bare-metal
environment
I
want
to
override
the
node
API
object,
name,
I
have
I
pause,
cube,
am
in
it
node
name
and
I.
Want
that
to
flow
down
to
qubits,
then
less
patches
is
great
because
that
is
gonna
happen,
but
in
the
other
case
we
have
okay,
I'm
in
an
AWS
environment.
I
want
to
make
the
I
just
wonder
stuff
to
work,
then.
Currently,
what
we're
doing?
What
you
have
to
do
is
to
pass
cube
item
in
it.
C: —when on AWS, you still have to do the same thing: you do kubeadm init --node-name with the fully qualified domain name, which is going to generate a cert with the fully qualified domain name, and you also pass hostname-override to the kubelet. The hostname-override to the kubelet, when using the AWS cloud provider, is going to be ignored, but it's going to match anyway, as the in-tree AWS cloud provider is also going to pick the fully qualified domain name.
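Putting those two AWS steps together, a hypothetical sketch (the metadata lookup and flag spelling follow the pattern above and are assumptions, not commands quoted from the meeting):

```sh
# On an AWS instance: use the private DNS name both for the node name that
# kubeadm bakes into certs and for the kubelet's hostname-override.
FQDN="$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"
kubeadm init --node-name="${FQDN}"
# ...plus --hostname-override="${FQDN}" in the kubelet's extra args.
```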
C: Let's start with doing this first, because that's what's going to make it into the debs, so we can have coverage in e2e that it's working, and then we'll just make exactly the same change in the release repo. So anyway — here we say which debs should be created; we'll add one more. It's going to be pretty similar to kubernetes-cni. So, like, cri-tools or whatever — I think the repo is called cri-tools — so that could make sense. Yeah, we're not building a binary, so we can skip that, and then we say—
C: So it's somewhere here — it's just fetching the release tar, yeah, from kubernetes-cni. So we're going to do the same thing: download it from GitHub using this URL and then just include it in the deb. Sounds good. And yeah, that's the first step. Then we can — and it's similar for the RPMs, which is, like, in this file.
C: It looks pretty similar, just saying that — yeah, so there's this reference I can't seem to find, but if you dig into where this reference points on GitHub, you're going to find the similar way to pull in the cri-tools, and the net outcome is, like, one new deb file or RPM file that we can use in our e2e testing. Okay.
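As a rough sketch of that packaging step — the repository path, version, and asset name follow the cri-tools release conventions of the time and are assumptions, not the actual kubernetes/release build rule:

```sh
# Fetch a crictl release tarball from GitHub and stage the binary so the
# deb/rpm packaging can pick it up, mirroring what is done for kubernetes-cni.
CRICTL_VERSION="v1.0.0-beta.1"
curl -fsSL -o crictl.tar.gz \
  "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz"
tar -xzf crictl.tar.gz -C /usr/local/bin crictl
```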
A: Please use Slack and poke folks to try and get things done if you have any PRs that are in flight. And a reminder, too, that tomorrow we'll go through and try to break down the details of the docs that need to be done — there's a lot, a lot, a lot — and we might change some of the structure there too, because right now it kind of doesn't make a lot of sense. So I know that Jennifer's going to come with some history, as well as some other things that actually didn't even get transferred over to the new Hugo site.