From YouTube: Kubernetes Kops Office Hours 20180202
A: Good morning, everyone. It is February 2nd. This is the kops office hours: open discussion on all topics kops, in particular the development of new features, and we have a bunch of items on the agenda for today. So I guess the first one on the agenda... Campbell is not here, so actually, Chris, why don't we go straight...
B: Again, so I think the fix is that you have to use a custom nodeup right now, off of the master branch, or else it doesn't work; you have to use master nodeup and master kops together. But we have a new... Justin, with the keyset changes you made, we have a new manifest that goes into the state store, is that correct?

A: Correct.
A: It is a single file that contains all the keys that are otherwise spread across multiple files. It also has the same format as the kops-server format, so it's an API object, which makes it a little bit better on that front. The other thing it does (and the real motivation) is that it avoids having to do a list operation, which is actually problematic on GCE and, in theory, other backend stores.
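As a rough illustration, the single-file manifest might look something like the following. The field names and values here are assumptions for illustration, not the actual kops API schema:

```shell
# Hypothetical sketch of the single keyset.yaml manifest described above.
# Field names and values are illustrative assumptions, not the real kops
# schema; the point is that all keys for one keyset live in one API object.
cat > /tmp/keyset.yaml <<'EOF'
apiVersion: kops.k8s.io/v1alpha2
kind: Keyset
metadata:
  name: ca
spec:
  keys:
  - id: "6542990450000000000000001"
    publicMaterial: "<base64 certificate>"
    privateMaterial: "<base64 private key>"
EOF
# Being one object, it can be fetched with a single GET instead of a
# list operation over many per-key files.
grep -c 'kind: Keyset' /tmp/keyset.yaml
```

Because the whole keyset is one object, a backend that makes list operations expensive or unreliable (as described above for GCE) only ever has to serve single-key reads.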
A: Then the problem that remains is: if you use kops 1.9 and then go back to kops 1.8 on the same cluster, the keyset.yaml file will be there, and nodeup doesn't like it and will barf. The two options I see are: we publish instructions on how to downgrade, which is "delete the keyset.yaml files", or we do a fix before we do 1.9: a 1.8.1 that ignores keyset.yaml, and then we tell people they can downgrade.
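The "delete keyset.yaml" downgrade step could be sketched like this. The directory layout below is a local stand-in; the real files would live in the S3/GCS state store under the cluster's PKI prefix, which is an assumption about the layout here:

```shell
# Local simulation of removing the keyset.yaml files that kops 1.9 writes,
# so a 1.8 nodeup no longer chokes on them. The paths are an illustrative
# stand-in for the real S3/GCS state store.
mkdir -p /tmp/state-store/cluster.example.com/pki/private/ca
echo "kind: Keyset" > /tmp/state-store/cluster.example.com/pki/private/ca/keyset.yaml

find /tmp/state-store -name keyset.yaml -delete
find /tmp/state-store -name keyset.yaml | wc -l   # no keyset.yaml files remain
```

Against a real S3 state store, the equivalent would be an `aws s3 rm` over the matching keys rather than `find -delete`.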
A: What we should do is: I'll look at GKE and/or GCE and see what keys those use. If they use a longer key, then we'll match their key size. If they don't, I think we should make it an environment variable or a feature flag or something, just because otherwise it's just risky.
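Checking what key size another cluster's CA uses is a one-liner with openssl; the demo below generates a throwaway local key just to show the inspection command:

```shell
# Generate a throwaway 2048-bit RSA key and confirm its size with openssl.
# The same `-noout -text` inspection works on a real cluster's CA key or
# certificate (e.g. to see whether GKE/GCE use a longer key).
openssl genrsa -out /tmp/test-ca.key 2048 2>/dev/null
openssl rsa -in /tmp/test-ca.key -noout -text | head -n 1
```

For a certificate rather than a private key, `openssl x509 -in ca.crt -noout -text` shows the public key size in the same way.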
A
This
isn't
too
bad,
because
once
the
keys
are
created,
they
don't
tend
to
get,
they
won't
be
recreated.
So
this
isn't
you
know
the
what
I'm
always
concerned
about
with
a
feature
flag
or
an
environment,
or
something
like
that
is
you
have
to
remember
to
pass
it
every
time
that
won't
be
a
huge
issue
here,
because
you
just
as
long
as
you
pass
it
the
first
time
you'll
be
fine.
A: A lot more green. We were having a really odd pattern with 1.9 where it was half and half: one failed, one passed, one failed, one passed, continuously. That seems to have stopped; I don't know what changed, but we just had a red. Other than that, I think we're okay; we're certainly good enough to do an alpha or a beta at this point. Yeah, okay. And then... it's weird, the weave one is... oh no, the weave one is fine. The weave one, is that the TCP one? Okay, and then, yeah, other things.
A: The big changes are... I'd love to see e2e testing... e2e testing of the images has been a super pain point for the past month, with the whole Spectre thing and switching AWS accounts; nothing was working, basically, for a whole month, but I think we've finally turned the corner on that one. And we do have an RBAC change going in. Chris, you changed the default to enable RBAC, which is a fairly big change. Yep, but GKE has done it as well, I think.
A: Okay, yeah, so we're not going to turn on authorization generally, we're just going to turn on RBAC. And does that turn it on for existing clusters or just new clusters? I think it's just new clusters; it just changes the default. Still, I mean, it's easy to turn on RBAC in an existing cluster if you want to, and I think it's easy to change what we would need.
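For reference, turning RBAC on in an existing cluster is a small cluster-spec change made via `kops edit cluster`; the snippet below just shows the relevant fragment (the real kops spec shape), and `kops create cluster --authorization RBAC` is the create-time equivalent:

```shell
# The cluster-spec fragment that enables RBAC authorization. In practice
# you add this under `spec:` with `kops edit cluster` and then roll the
# masters; here we just write the fragment to a file to show its shape.
cat > /tmp/authz-fragment.yaml <<'EOF'
spec:
  authorization:
    rbac: {}
EOF
grep -q 'rbac: {}' /tmp/authz-fragment.yaml && echo "rbac enabled in spec"
```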
A: Yeah, that's in create, so yeah, that's great. And then something like GCE: I think it's at the point where it's ready to go, to remove the alpha feature gate. So that's another change that I'll put into 1.9; it doesn't block an alpha or whatever. And then the final one is etcd3. Etcd2 was difficult; etcd3 isn't deprecated yet, but etcd2 was deprecated in 1.9, and so I'm talking to SIG Cluster Lifecycle on Tuesday about something I've been working on, which is an etcd-manager approach.
A: There is a format which this etcd-manager thing uses, but it's a very simple, S3-like format. So I'm going to try to get something in to basically start backing up etcd; not necessarily restoring it yet, just backing it up. My proposed plan there is to get etcd backups working while continuing with the current etcd system, and then, in a future release of kops, maybe 1.10 or 1.11...
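The "just back it up, don't restore yet" plan amounts to periodically copying a snapshot into object storage next to the kops state. The sketch below simulates that cycle with local paths; the real snapshot would come from etcd itself (on etcd3, `ETCDCTL_API=3 etcdctl snapshot save`), and the backup prefix layout is an assumption, not the etcd-manager's actual format:

```shell
# Simulated backup cycle: take a snapshot, name it by timestamp, copy it
# to a backup prefix. A real run would snapshot etcd and copy the file to
# the S3/GCS state store instead of /tmp.
mkdir -p /tmp/etcd-data /tmp/state-store-backups/etcd/main
echo "snapshot-bytes" > /tmp/etcd-data/snapshot.db

ts=$(date +%Y%m%d-%H%M%S)
cp /tmp/etcd-data/snapshot.db "/tmp/state-store-backups/etcd/main/${ts}.db"
ls /tmp/state-store-backups/etcd/main
```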
A: Well, I guess probably in 1.10, enable you to switch over to the etcd-manager, or whatever we decide, but not forcing you to do it. The idea would be: you're able to back up using the current system, and you're able to restore from a complete disaster; you can do that, but you might have to switch to the etcd-manager to make it happen. And then eventually we can make the etcd-manager the default.
A: All right. And my gamble paid off: we were pausing your item because we weren't thinking you were going to make it, but yeah, it's great to have you here. So that's the last 1.9 thing that I'd love to get in, but again, it's not a blocker. And so we can rotate around to the first item on the agenda, which is rolling update strategies.
A: Yeah, so that part of Kubernetes is a tricky thing, but it will be an API type, a set of API types: Machine, MachineSet maybe, and probably something else on top. As for a controller, I think it'll be more like cloud-controller-manager, so it will not be in the core repo. It won't be mandatory on all clusters, but one might expect that within two years most clusters are running it.
A: Yeah, because the idea would effectively be: we have all these rolling-update strategies and they're inconsistent across the different Kubernetes installation methods, and some of them behave better for different use cases. So yeah, if we have a use case, I think we should get it into ours, and then we can make ours the official one. Or, the other thing is that it may well end up that the machine controller...
A: When you actually go to create a machine, there's not a ton of configuration in that, and so we will have a kops machine controller which will go and create a machine (or rather an instance for the machine) using the kops configuration. That is actually on a branch, and I will be able to demo it at some stage; I was planning on demoing it a week from Wednesday at the Machines API meeting. So in n days... 13 days, 10 days? That's right.
A: 12 days, that sounds more accurate. 12 days. But it's a proof of concept; that's sort of one of the things I'm working on. The interesting thing is that there will sort of be two controllers: there's a machines controller, which just goes and creates the machines.
A: But then we can have a separate controller (and we want a separate controller) which actually does things like the rolling update here, the idea being that you'll have a GCE machine controller, you'll have a VMware machine controller, and you want to use the same rolling-update strategy regardless of which machine controller you use. So that's a whole other axis, where I don't know whether we would actually want to have a different... like a kops machine controller versus a GKE machine...
A: Sorry, a kops rolling-update controller versus a GKE rolling-update controller; that sort of defeats half the point, which is that we want to find one good one. But I mean, I think kops currently has the best one, I will say, and so we should make it better, and then we have a good chance of ours being that one.
A: We have that in Deployments, so okay. Well, I'll also take another pass at the PR and see if there's anything that I think is going to be a problem. But that's just my concern at this point: whether it paints us into a corner that makes it hard to evolve. We know the Machines API is coming; whether it builds on that right... I don't think... I think it's going to be okay, from what I've learned since then. So yeah.
E: Yes, so this is the fact that there are two different variables, etcd's peer client cert auth settings, which basically just ensure that the client certificates are signed by the CA. At the moment what we have is: we do have client certificates, but etcd doesn't actually verify that they were signed by the CA. Why that's opt-in, I have no idea, but you basically have to specify these two environment variables to enforce it.
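The two settings being discussed correspond to real etcd environment variables; the CA file path below is an illustrative assumption:

```shell
# etcd's peer-TLS enforcement settings: peer-client-cert-auth makes etcd
# reject peers whose client certificates were not signed by the trusted CA.
# These env-var names are real etcd configuration; the CA path is assumed.
cat > /tmp/etcd-peer-tls.env <<'EOF'
ETCD_PEER_CLIENT_CERT_AUTH=true
ETCD_PEER_TRUSTED_CA_FILE=/srv/kubernetes/ca.crt
EOF
grep -c '^ETCD_PEER' /tmp/etcd-peer-tls.env
```

Without these, etcd will present and accept certificates but not actually validate the signing chain, which is the gap described above.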
E: It's because of the way you have to roll it out if you already have a cluster and etcd. Because, if I remember correctly, because of the way we deploy the new etcd, the processes have to pull down the certificate before they can use it. So when you roll out the first one, it turns on enforcement, and the other two can't talk to it anymore, because...
E: ...they don't have a new client certificate. But then it does work, because as long as you have three nodes in a cluster: when the first one rolls out, the other two can't talk to it, but it's fine; then the second one rolls out, and then they form a cluster, and then you're back in play, I think.
A: I think we're doing okay on test coverage, except we obviously don't have that for the keyset.yaml thing we just talked about. What we talked about before you were on was the idea that I will do a kops 1.8.1, which will essentially just add ignoring keyset.yaml, and what that means is, first of all, it will immediately give people...
A: ...so, a little note that if you downgrade the same cluster, then do that. But there's also another problem, where when we do an update it doesn't work correctly. Yes.
F: That's right, yeah. It's due to the newer nodeup. So if you have a cluster created before the PR that you raised got merged, so keyset.yaml isn't in the state store, then when you update to a new version using the new nodeup binary, it has the bundle behaviour enabled and expects keyset.yaml to exist in the state store. But of course that wouldn't have been created, because it's just a cluster update and not a fresh creation of a new cluster.
A: What is interesting is, with the Machines API, if we enroll a node from the master... I don't know; that's the other fly in the ointment here. The Machines API has the notion that the controller effectively creates the machines, and so, at least for the non-masters (the nodes themselves), you could create that node secret in, like, the machines controller, push it out, and actually really lock down the nodes at that point; they wouldn't need access to the bucket almost at all. All right.
A: I don't know, is the answer. The appeal of SSH is that, you know, you can... sorry, like...
A: Anyway, I think we can't wait for the Machines API to make it happen, and the Machines API may invalidate a lot of what we've done, or what we're doing here. So maybe we do something that we're not 100% happy with, and say that the real security comes when we do the Machines API. In other words, it's okay today to do a bit of a song and dance that doesn't really achieve very much, because we believe the real answer (or a more secure answer) comes later. So actually, what I...
A: The machine-controller proof of concept I have right now runs on VMware, for crazy reasons: basically, I didn't want to use any cloud stuff, so I figured, VMware, all right. It creates a bundle of the files that would otherwise be fetched from S3, scps them across to the target, and runs nodeup from the local filesystem. I don't think it's currently generating a per-node certificate, but it could, you know. And that works pretty well.
A: ...put in recently. So we had: it always logs to /var/log; we also now send it to standard out, so it appears in docker logs and kubectl logs. We used to do that with a bash pipe to tee; upstream pointed out that a shell without exec means the processes don't respond very well to signals anymore, though there is something using a FIFO (mkfifo) and...
F: The only other issue is that where they now store the newer etcd versions differs from what was originally hard-coded (in protokube, I think it was) for the etcd image. Someone has actually introduced a PR to allow you to override that by passing an image property in the etcd cluster spec, and so you can use that now: just pass in the new path to the GCR etcd image, or whatever.
A: Anything else? Then I will wish you all a happy Friday. Thank you for dropping in; it looks good. So yes, we'll try to get 1.8.1 and 1.9.0 out, maybe even this weekend, I don't know. I think I've said that before and it tends to not go well; we'll see. I don't think I have anything on this weekend, so...
A: Yeah, so I will have a look at where we are on the kernel and, I guess, do another image. I'm worried that we're not building with the right flags, so even if we have a new enough kernel, we now need to pass new flags; I need to find out whether we're missing some flags, or a different compiler, or whatever magic it is that is today's fix for the unfixable issues.