From YouTube: 20180711 sig cluster lifecycle kubeadm office hours
B: I've seen this issue reported a couple of times. The problem is that people run kubeadm init, they don't reset after that, and then they try to run kubeadm init or join again, for instance. Right now our output shows the users error messages relating to existing etcd certificates and stuff like that, and I was thinking about showing a better message for this, like: OK, you already ran kubeadm init here, so maybe you should call kubeadm reset first. Is there a better way to detect it?
A: We could drop a sentinel marker, something that says "we were here" — you know, "Brooks was here" — something that says you ran kubeadm init on this date with, you know, these parameters and arguments; a history file in /var/lib or wherever is common. That way you have a history of when things were run. I don't think that's a bad idea in general, to have a feature like that over time.
A: Having a history of what operations were done on the machine is highly useful, because you could even record resets over time too. Sometimes people get into this weird, indeterminate state and we have no history of what they've actually done. This would provide a little history — you'd be able to look inside and see what operations were done and when: they did an upgrade on this date, they did an init on this date, they did a join on this date, they did a reset on this date.
A: Yeah, you're just basically having some transactional history of what operations succeeded — or you could record both failed and succeeded. I would almost even think that you'd want to write a more detailed issue on this and then reference this one, as in: this is an example of how, if we fix this issue, it fixes this other one, right?
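[Illustrative sketch, not actual kubeadm code: one way such an operation history could be recorded is a JSON-lines file under a hypothetical /var/lib/kubeadm/history.jsonl path, with one record appended per init/join/reset/upgrade.]

```go
// Hypothetical sketch of the operation-history idea discussed above.
// The path, type names, and fields are illustrative, not real kubeadm code.
package history

import (
	"encoding/json"
	"os"
	"time"
)

const historyPath = "/var/lib/kubeadm/history.jsonl" // assumed location

// Record captures one kubeadm operation and its outcome.
type Record struct {
	Operation string    `json:"operation"` // "init", "join", "reset", "upgrade"
	Args      []string  `json:"args"`      // flags and arguments the command ran with
	Succeeded bool      `json:"succeeded"`
	Timestamp time.Time `json:"timestamp"`
}

// Append adds one JSON line to the end of the history file.
func Append(r Record) error {
	f, err := os.OpenFile(historyPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	return json.NewEncoder(f).Encode(r)
}
```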
A: That would be useful from a diagnostic perspective for sure, because a lot of times we get issues filed and there's no diagnostic history of what a person tried to do, and having some level of history that we can reference is always useful. Sometimes people get into these reset loops and, as we know, not all of the reset loops are fully idempotent, so we're not actually resetting iptables rules, or not resetting some other things in the system that aren't really germane to kubeadm proper. So having that history would help.
C: That feels like a pretty complex solution for this problem. I wonder if there's another, simpler solution we could use, where it's a matter of creating a sentinel file — just a file, kind of like a pid file. It just says: yes, we ran kubeadm init, and then kubeadm init will not run if that file already exists; you should run reset instead.
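[Illustrative sketch of the simpler sentinel-file idea — the marker path, function names, and error text are made up for illustration, not taken from kubeadm.]

```go
// Hypothetical sentinel-file check: init drops a marker file, and a preflight
// check refuses to run again while it exists.
package preflight

import (
	"fmt"
	"os"
)

const sentinelPath = "/var/lib/kubeadm/initialized" // assumed marker location

// CheckNotInitialized fails with a friendlier message if init already ran here.
func CheckNotInitialized() error {
	if _, err := os.Stat(sentinelPath); err == nil {
		return fmt.Errorf("kubeadm init appears to have already been run on this machine; run 'kubeadm reset' first")
	}
	return nil
}

// MarkInitialized is called at the end of a successful init.
func MarkInitialized() error {
	return os.WriteFile(sentinelPath, []byte("kubeadm init completed\n"), 0o600)
}
```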
A: It would almost be like — think of Sonobuoy, but outside of the actual cluster itself. Yeah, it would almost be like a kubeadm collect-history, collect-report-history, which is like: here's a sosreport for your machine as well as the history of everything kubeadm has done — let's file a bug with that.
A: But it could be a number of other things — the error message when kubeadm init and join should not both be called on the same machine; there are a number of conditions that can occur. I think we could easily put in a simple fix that really solves the request, or we could have a more comprehensive way of managing it if somebody wants to sign up for that. I'm all for comprehensive, but I do see the point. We could mandate it, or we could... yeah.
E: I'm working on APISnoop, and because I'm changing lots of components within Kubernetes, I'm finding I need to be able to deploy Kubernetes quickly and run through the e2e tests and hack/e2e and similar flows that go through kubetest right now. The only path I'm finding easy is to use GCE. I'm trying to find a vendor-neutral way and add support for some other vendors to be able to go through that iteration loop.
E: It looks like Cluster API is the longer-term goal, but we've been pushing the use of that off for six months to a year, as far as kubernetes-anywhere goes. kubernetes-anywhere was a suggested approach a little more than nine months ago, but it was stopped because we're doing stuff with Cluster API. I wanted to get some advice on what to do, given we're not going forward with kubernetes-anywhere — and if we're really not going there, what can we do now to get kubeadm to support other vendors?
A: The long-term sustainable path for doing the provisioning plus the whole cluster layout has always been Cluster API; kubernetes-anywhere has been the holdover until we get all the pieces of Cluster API in place and actually get automation in place. So I don't have a good answer until Cluster API is all built out and tested and verified for specific versions of kubeadm, because there's nothing that's generic enough that works across providers without just going to some installer, right — like you could always...
A: I know what you're getting at — you want something that says, declaratively: make me a cluster of kubeadm version X with this configuration, and it will go do it. That's the intent of Cluster API, and Cluster API is still alpha. There's a subset of implementations, of which GCP is the most prevalent, but there aren't a lot of the other providers created yet, like DigitalOcean or Packet or whatever. I know that OpenStack just created their repo, but there is a lot of momentum there and people are rallying around it.
A: So I think long term that is the objective or goal, but in the short or near term I don't know. I think you could target it now and you'd probably be okay if your time horizon is something like nine months to a year for some of the providers to be in place. But if your time horizon is short, then I don't have a good answer for things like Packet. Okay.
E: What does it look like right now — what's the difficulty rating — to get another provider up and running? I see a value-add in including Packet there, because I have access to them donating some support credits on their cloud, and I'm trying to make use of all of our contributors in the community; we value their input and support. Yeah.
G: So right now, as far as the implementation goes, you would basically have to implement both a cluster actuator and a machine actuator, and basically use those to build a custom machine controller and a custom cluster controller. You kind of spin up any shared cluster resources with the cluster actuator, and, however you would do it, kind of spin up the per-machine resources with the machine actuator.
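[For context, a rough sketch of the kind of machine actuator interface a provider implements; the exact types and method signatures in the cluster-api project may differ, so treat the shape below as illustrative rather than the real API.]

```go
// Illustrative shape of a provider-specific machine actuator for Cluster API.
// The import path and signatures approximate the 2018-era v1alpha1 layout and
// may not match the real project exactly.
package packetprovider // hypothetical provider package

import clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"

// Actuator is what a generic machine controller would call to reconcile machines.
type Actuator interface {
	// Create provisions an instance for the Machine (e.g. a Packet device)
	// and bootstraps it, typically by running kubeadm init/join via user data.
	Create(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	// Delete tears the backing instance down.
	Delete(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	// Update reconciles the instance toward the Machine spec.
	Update(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	// Exists reports whether the backing instance already exists.
	Exists(cluster *clusterv1.Cluster, machine *clusterv1.Machine) (bool, error)
}
```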
G: There really isn't any automation in place for how to drive the installs right now. So basically the existing approaches kind of leverage scripts embedded in user data to kickstart those, so basically all of that scaffolding would have to be put together for Packet or whatever provider.
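[A minimal sketch of the "script embedded in user data" pattern mentioned above; the function and its endpoint, token, and hash parameters are placeholders for illustration, not existing code.]

```go
// Hypothetical helper that renders bootstrap user data so a new instance
// joins the cluster with kubeadm on first boot.
package userdata

import "fmt"

// JoinScript returns a shell script a provider (Packet, DigitalOcean, ...)
// would pass as instance user data.
func JoinScript(apiEndpoint, token, caCertHash string) string {
	return fmt.Sprintf(`#!/bin/bash
set -euo pipefail
# Install prerequisites first (container runtime, kubelet, kubeadm)...
kubeadm join %s --token %s --discovery-token-ca-cert-hash %s
`, apiEndpoint, token, caCertHash)
}
```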
E: We already have heavy Terraform lifting for quite a few providers inside of our Cross-Cloud CI dashboard deployments — having those in place, and sort of the user data, and there's already a pattern of scripting around that. Is there something we could look at for leveraging that existing work, using their Terraform or another approach, to bring those in?
G: Yes — so there was a discussion about having a generic Terraform-based provider for Cluster API, and I think that could definitely make use of the work that you guys already have. Basically, for existing examples, the best sources are the in-tree providers for Cluster API, and I can put a link in the notes for those.
A: Are there other agenda items for today? We have, for lack of a better vernacular, a hell of a backlog to burn through for 1.12. I know that I need to start spinning up on those elements, as well as getting back into the kubeadm side of things. I've been kind of switching gears between prioritizing the backlog and other functions, helping to get some things updated for 1.11 and other stuff. Are there other items currently for 1.12?
A: I think it's just a matter of execution now. I think we need to start, you know, start burning down the backlog and start getting into it. I think it's pretty well defined. The important one that I do need to run through — and probably run through with Liz and Fabrizio — is the config one.
A: So this is the history: P0 is critical, P1 is important-soon, P2 is important-longterm, and P3 and above is backlog. Originally everyone followed Google's prioritization model, and then Brian Grant switched it to just, you know, have the different names. So in general the overall rule of thumb is: for P1, which is important-soon, try to always target those critical and important-soon items before the milestone.
A: That is the one that I need to go and take a look at and deep-dive through. I want to finish off some of the rework first so I don't try to context-switch, because it will require a lot of thought and cat wrangling: it's not just the kubeadm config to beta, it's the pieces of component config that I have to touch with other SIGs to get those things to beta in order for us to actually rev everything else.
E: The thought, or the long-term goal here, is to try to get Sonobuoy in, and this kind of whole chain of bringing up a cluster using kubeadm and then moving on to the Sonobuoy e2e testing.
H: So there is an update with pointers to different things, but the basic thing that needs to be done is making sure that all the images that we use, whether from the test-infra or from somewhere else, have manifest lists in the Docker repository. Manjunath has been working on doing the test-infra stuff, as well as the release repository — the release repository we need for kube-proxy and things like that, for example.
H: So the support is there, but as of last week there was a bug in the Docker CLI; the fix has been merged, and there is an rc2 candidate that needs to be used specifically. So we need some time to make sure that there is a released Docker CLI version that can then be used for uploading all the images. That's where we are right now, but the comment that I linked to should have the information that you need.
A: If the folks who are contracting through the conformance working group have ARM as a target, that would be beneficial for getting that tech debt paid down, because I don't know of anyone who's going to fall on that grenade. It's a lot to fix, because we don't have ownership rights for pushing to the Google repositories — the community doesn't. So you need to find a Googler and hassle those people to get the manifests and images built and updated for the components that are part of the test infra, and I don't even know where some of them exist.
H: On that front, I am supposed to help Tim get a doc out on, you know, which gcr.io repositories people need permissions to push to, and maybe even move those to CNCF — under CNCF, that kind of stuff. So I have to help Tim; I'm supposed to work with him on that. Okay, I'm hoping to start next week on that effort.