From YouTube: sig cluster lifecycle kubeadm office hours
A: Hello, today is September 5th, 2018. This is the kubeadm office hours of SIG Cluster Lifecycle. I am Tim St. Clair; I'll be your moderator for this conversation, and I've got a couple of agenda items that I put in there before the meeting. If other folks have other agenda items, please feel free to add them to the doc.
A: So it's kind of this weird conundrum that we currently have. Inside of the main doc for kubeadm installation, I was of the mind that we should recommend containerd and just say, like, we quote-unquote "recommend" containerd for the installation, and then have a totally separate doc — which is part of this pull — that outlines here's how you install these other CRIs, right?
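For context on what "recommending" one runtime concretely means: kubeadm selects a runtime through its CRI socket. A minimal sketch — the socket paths below are the conventional defaults for each runtime, not something stated in this meeting, and the helper function is made up for illustration:

```shell
# Hypothetical helper: pick the CRI socket to pass to
# `kubeadm init --cri-socket ...` based on which runtime's socket exists.
# The directory is a parameter so the logic can be exercised without
# a real runtime installed.
pick_cri_socket() {
    dir="${1:-/var/run}"
    if [ -e "$dir/containerd/containerd.sock" ]; then
        echo "unix://$dir/containerd/containerd.sock"   # containerd
    elif [ -e "$dir/crio/crio.sock" ]; then
        echo "unix://$dir/crio/crio.sock"               # CRI-O
    else
        echo "unix://$dir/docker.sock"                  # Docker fallback
    fi
}
```

Invocation would then be `kubeadm init --cri-socket "$(pick_cri_socket)"`; the `--cri-socket` flag is real, everything else here is illustrative.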
A: I knew that the location of the document would be incorrect and someone would want a different home — I even mentioned that to Vince when he was originally doing the PR — so I figured we'd wait for comments and feedback on where the proper home should be. I'll also poke Jennifer and assign her on that issue, to figure out where the canonical location should be.
A: I don't think many people care; I think Docker would probably still be the preferred default for the time being, because it's what people know. The problem with that is that we don't have an updated version of Docker in the test CI automation. So it's still running— so the only images that we use — I'd have to double-check this; we might be able to update it.
A: I know the default test stuff for most of the test apparatus uses Google's container-optimized image for 90% of the test machinery, but with our test images I believe there's still an Ubuntu one. So it's possible that we could make a really quick job change to get updated with the latest version of Docker and still default to Docker.
A: They're not going to— they don't— they validate what they— what Google cares about, and the CRI folks do their own validation for their bits. SIG Node will not give preferential treatment to given versions of Docker or other things; they basically use their optimized images. At least that's Google's story, right, but—
A: I think we have no other choice; we are the only integration point. Eventually, I— we should probably create a test matrix that's up the stack a little more. In an ideal world, we have multiple Cluster API implementations that build out kubeadm clusters for different cloud providers, and in that world we can get good signal for different CRI implementations across cloud providers, across installations.
A: We can make a PR to test-infra and update the installation instructions — which, I think, actually might even be pulling in 18.06 right now by default while we're not even really paying attention, in which case we don't have to do anything, because I don't think you can default the install to 17.03 at all anymore.
A: Well, good signal there. I think we should follow up on Slack with regards to the details. So we should probably loop through the docs — the PR that Vince is currently making — as well as deduping the installation instructions and finding the canonical home for where we want to do this. I would probably use a similar location to where the Docker doc is, and then you could probably move it to that similar location, Vince.
A: That's always been a conundrum. So yeah, on SUSE and openSUSE we have Docker, and we would like to make CRI-O the default one, if people want to set a direction for CRI-O. The problem with CRI-O currently in the upstream tests is that the CI signal is not good. So if you actually look through the test suite on Testgrid, the CRI-O integration through test automation is not complete, but it is for containerd. So I think that's just technical debt, and I've already poked the stakeholders repeatedly regarding this issue.
B: This is gone— this is active; somebody took it. The problem here, I think, is that he found that there are more potential log-output spots — no one's— to my understanding, the check-in comment has more on this. We only have, like, two stray log outputs in the validator code, but I've got a comment on this; more ideally, checks should come in here.
A: Why don't I send this to you, then, with a milestone too, just so it has somebody who's shepherding it through, and we can always revisit. I don't think it's super-duper important — I do know that the weird output mishmash we had before was not really good — but it's not like a blocker for release. Yeah.
A: I'm gonna skip over Chuck's stuff — the CRI installation instructions, that's already currently up, and we just talked about that one. This is one that— I don't mind if anyone else does it; I assigned it to Liz, but, you know, it's currently not active. Godoc-ing the config, I think, would be helpful for everyone. It is a major pain point for the consumers — people who are trying to use kubeadm's configuration file — and as we constantly shift these things, having a canonical location for these examples is super useful.
A: Yes, but I'm saying, like, the example that's inside of the kubeadm setup instructions should not be in the documentation — it should not be in the guide. We should take the examples and push them into the godoc examples. That way, every time a person looks at the code for a given version or release of Kubernetes, they can find the kubeadm config godoc with the examples, with the config right there, yeah.
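To make the point concrete, the kind of example that would live in the godoc might look roughly like this. This is a sketch only: the `apiVersion` and field names changed between kubeadm releases in this era, so the authoritative shape is whatever the godoc for your release says.

```shell
# Illustrative only: write out a minimal kubeadm config file of the
# sort the godoc examples would document. The kubeadm.k8s.io/v1alpha*
# config API changed between releases; treat these fields as examples.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  podSubnet: 10.244.0.0/16
EOF
# It would then be consumed with: kubeadm init --config kubeadm-config.yaml
```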
A: CoreDNS — this should be fixed, I believe, now; I think it was merged. I saw it merged through, so we can close this one. The TL;DR of this one: for whatever reason, the CoreDNS folks put a memory limit on it by default in 1.11, and it was OOMing, so they changed and modified it so it no longer has a limit. So it's no longer a bug.
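For readers following along, the limit being discussed sits in the CoreDNS Deployment's container resource spec. A sketch of the relevant block — the numbers are illustrative defaults, not taken from this discussion:

```shell
# Illustrative fragment of a CoreDNS Deployment resource spec.
# The bug was the hard memory limit: under load CoreDNS hit the cap
# and was OOM-killed; the fix relaxed/removed that limit.
cat > coredns-resources.yaml <<'EOF'
resources:
  requests:
    cpu: 100m
    memory: 70Mi
  limits:
    memory: 170Mi   # the hard cap at issue; removing it stops the OOM kills
EOF
# Inspect the live value (assumes a running cluster):
#   kubectl -n kube-system get deployment coredns -o yaml
```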
B: I mean, he's talking about customizing the version, which is not exactly that easy to do. Yeah, yeah, we have a separate issue about this; I can talk about the memory limit. I suggested to the CoreDNS folks too that, if they are not sure about this, they should pretty much go back to the 512, which is the default max.
A: Ideally, Liz, I'd still like to have a KEP at the end of the cycle so that we can start to talk about it. I know the SUSE folks are here — or "soo-zay", I never know how to pronounce it, right — about unified policy for command-line handling, unified integration, and yada yada yada. There's— I'd like one plan to rule them all, and to start to talk about this stuff—
A: —as a group. I don't know if the SUSE folks have chimed in on what their requirements are, but it would be ideal if they had a listing of requirements. I think they might have — what he did — so I think working with the SIG and developing a KEP that we could start execution on in 1.13 would be great. But it's not strictly a requirement.
A: I want to start with kubeadm. The problem, if you do something global, is death by a thousand needles, because if you're gonna involve every person in— or every other SIG, it might be stuck, right? If we vet a KEP and we agree, we can PoC it, and then eventually, if it's successful, we can sort of try to push adoption more broadly across the rest of the components in the stack. But I don't want to start there first, because that's just a recipe for intransigence, right?
A: It's irrelevant from the Kubernetes perspective, so long as the licenses for things vendored into Kubernetes are consistent and there is a maintainer. Typically there's a bunch of code that's actually vendored into Kubernetes that I think is just legacy and not even necessarily maintained. Doing an audit — that is a wow level of work. I don't know if you've looked at the entire dep graph of Kubernetes, but it's pretty amazing.
A: We can leave that there. The HA documentation — that's an unknown. I know that once we're further into this week, Chuck and Ruben and Jason will probably do a quick pulse on the updates and installation instructions for HA. So before— when you're doing that, please reach out to Fabrizio to make sure to get him on the review chain for some of that stuff — myself too — because the instructions should probably be modified from how we did things before.
A: Here, because— so, like, when you get overlapping diffs— there were a couple different locations of where the pulled binary went, from a different bin, a GCS bucket. It was like ci-cross, and that's where I commented — so, like, it switched from ci, or release, to ci-cross. And in those issues, the reason I asked Alexander to take a look at it was because we've had a ton of issues with the pull location in the test automation not being the correct location. So I don't see them anymore.
A: It was— it was something that was ci-cross, but I don't remember what it was. That's when I commented on it — I don't know if I can see my comments here... yeah, it was right here. This is the one that stood out to me — I'm like, whoa — because I know that it's actually correct. I don't think I even saw that in your updated diff. Okay.
A: I'm going to assume the rest of the bits for control-plane join— you have a checklist here, Fabrizio, and I marked a couple of them checked — the website stuff. There's a lot of overlapping issues; to be honest, we have like three or four different HA docs issues that are all intermingled and interrelated. So perhaps next week we can just, like, focus on deduping that stuff and see where we're at.
A: That's the actual provisioning of resources, but I think there's two pieces that are ideal, right? Like, we have a layered approach to this thing. If we can somehow get kind in place for the PR-blocking jobs, and eliminate kops — eliminate all these other provisioning tools that kind of blast out clusters and make every single PR for Kubernetes take forever — and make the PR-blocking jobs be kind, which is basically— kind is a Kubernetes Docker-in-Docker implementation.
A: It's what Ben is working on; it's written in Go and tries to take the best efforts from other Docker-in-Docker implementations. That is ideal, because then we have automatic PR-blocking test coverage for everything kubeadm-related, which we've never had. We have the periodic jobs; we don't have the blocking jobs. That's both good and bad: the good thing is we'll find things immediately; the bad thing is flakes — if there are flakes with this new setup, only—
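The kind-based flow being described would look roughly like this. `kind create cluster` and `kind delete cluster` are the project's real entry points; the script wrapper and names here are made up for illustration:

```shell
# Sketch of using kind ("Kubernetes in Docker") as the lightweight
# provisioner for a PR-blocking job. Written out as a generated script
# so the flow is visible without needing Docker in this environment.
cat > kind-e2e.sh <<'EOF'
#!/bin/sh
set -e
kind create cluster --name pr-test   # boot a cluster inside Docker containers
kubectl cluster-info                 # sanity-check the control plane
# ... run the e2e suite against the cluster here ...
kind delete cluster --name pr-test   # tear everything down
EOF
chmod +x kind-e2e.sh
```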
A: We'll deal with that, I think, once we get into 1.13. You know, we can— this is— we're kind of getting a little off the rails, but I think the premise is: let's punt on that stuff until 1.13, and we can discuss some of the details there once we're in that cycle. I think what we probably need to focus on right now is whether there's anything else that we think needs to be addressed for 1.12, because we've got t-minus a week and a half — was it the 25th when it's cut?
B: That's something we can do. I mean, that's, like, more project-oriented — it's not kubeadm-specific — but I can, like, show an example of how I have set up my email filters, if someone is interested. I mean, I receive a lot of email, probably like you, Tim, but I filter, like, everything. As an example, I monitor the website repository, the release repository — you name it, I monitor everything — and if others can set that up too, they're welcome to if they want to.
A: Agreed — and I know what you're talking about — so I would just do a write-up and send it to k8s-dev. I don't think it's germane to this particular SIG; you know, it would just include a larger audience. Okay. You can even, like, crib from Tim Hockin — I asked Hockin a long time ago and copied some of his ideas of how he deals with it, and he wrote up this long list of hierarchical filters that he created. And, you know, I have a very similar mechanism, which is even more insane.
A: Just building the agenda — and if you guys— if folks have an agenda, then it makes perfect sense to hold a meeting. If there's— yeah, but we don't have credentials to post recordings? That's— I'd be happy to share some of those details, because we've done that before. So if you wanted to be able to record the meeting, you just put it in a Google Drive, and Robbie or I — or Jason now — can also post it, I mean.