From YouTube: Kubernetes Kops Office Hours 20180622
A: Good morning, everyone. It is June 22nd; this is the kOps office hours. We have a couple of things on the agenda, but if you would like to add anything, please do so. There is one informational piece first up, which is that I cut kops 1.10.0-alpha.1 last night (yay), which starts the 1.10 release train. So hopefully we can get that to, I don't know, beta, maybe even a release, within a week or two; we'll see. But I also want to start the 1.11 alpha release train, which will be a longer alpha. I want to start that as soon as possible after Kubernetes 1.11 is released, to get onto a schedule where we have an alpha release as soon as possible after, or concurrently with, the Kubernetes release. So yes, please do try that out. One interesting thing is that there is a Windows build.
A: It took me a long time to actually get my Windows machine updated, but I updated it last night, and I was able to create a cluster from Windows. I didn't do a whole lot more testing than that, and, to be clear, the support is only for running the kops CLI tool on a Windows machine. We cross-build; I'm not sure all the tests pass, but we're certainly getting closer. That was a community contribution, so I can't remember who did it, but thank you to that person. If you do try it out and you have any issues, let me know; I wasn't sure whether, for example, it should have an .exe suffix. Anyway, please do file issues on that if you try it out. But yeah, exciting stuff for our Windows friends: we now have Windows, Linux, and Mac. I'm sure now someone will create a new OS; that's got to be the way.
A: Yes, so we won't typically backport features; if something missed the 1.9 cutoff, it wouldn't typically go back into 1.9.2, for example. The general principle is that kops 1.10 will support Kubernetes 1.10, 1.9, 1.8, and basically all the way back. We do get a little fuzzy about the older ones; certainly I don't think we're testing Kubernetes 1.5 anymore, so I wouldn't recommend that. But you should be able to use kops 1.10 to run Kubernetes 1.9, and as such the kops 1.10 alpha should be reasonably stable for running Kubernetes 1.9, so that would be an easy way to do it. I'm not sure whether I'd recommend doing that in production with an alpha build, but certainly to get it up and going in a staging environment would, I think, be fine, for example if you're running the 1.9 version of Kubernetes. Okay.
A: I had remedied that prior to the meeting today. We had a bunch of PRs this weekend; we had a nasty bug, or an issue, with GCE, with the image update. Anyway, some things needed fixing, so it pushed us back by a couple of days, but I made it last night at around 10 p.m. So there is a 1.10 alpha, and there is a 1.10 branch. The release branch is still as before: 1.9 is still the stable branch, but the alpha channel points at the alpha.
A: It's not too bad, and the process is getting smoother. You know, we want to get to where we can build these things automatically, and so you may have seen that a fix went in to make the version numbering a little easier, so we only have to update it in one place. I'm going to try to build one of these, I think etcd-manager probably, completely automatically. The kops build is not terribly compatible with that right now, but we can start somewhere else and get fully automated builds. I'm imagining a world where we sign a commit, I guess, and that triggers a build; or we sign a tag, maybe, and that triggers a build. I don't know, something like that; that's sort of the idea I'm playing with. Great. And then there are two things on the agenda that I guess are longer-term: one came up on Slack and one came up in a PR on GitHub, which I wanted to bring up with the group. But they are longer-term items, so I don't know if anyone else has anything more pressing that they want to bring up. The two items are a cool idea around mirroring Docker images, and a cool way to bootstrap the kubelet, kubelet bootstrap tokens, that may be more secure than the default implementation.
A: Alright, let's go for it. So first up: a gentleman named Sam, on Slack, had an interesting idea. He wants a Docker registry mirror in pull-through mode, primarily for availability, in case the Docker registry becomes unavailable through networking-type failures; speed might be an advantage too, but it's not really a motivating factor, not a strict motivator, sort of a bonus, I guess you'd say. It's different from what we have today with our image mirroring, which is really meant for creating an air-gapped mirror, such that you never refer to the source repos at all, but it can likely use a lot of the same approach. Okay, so Docker actually has a registry-mirror flag; I think the gotcha is that it only works for images that are on Docker Hub and does not work for images that are on gcr.io.
A: Oh yes, I see the shaking of heads. The problem being that, of course, the API server image, for example, is on gcr.io; our images do not only come from Docker Hub. And so the suggestion was to effectively rewrite a handful of, or all of, the gcr.io images to point at the fully qualified mirror. I think we can use the same asset-remapping logic, except it would rewrite it in a way that was more compatible with a pull-through mirror.
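The remapping being discussed could be sketched as pure string logic along these lines; the mirror hostname below is a made-up placeholder, and this is only an illustration of the idea, not the actual kops asset code:

```python
# Sketch of the image-rewriting idea discussed above: since Docker's
# --registry-mirror flag only covers Docker Hub, the image reference
# itself is rewritten to point at a pull-through mirror instead.
# MIRROR_HOST is a hypothetical placeholder, not a real registry.

MIRROR_HOST = "mirror.example.com"

def rewrite_image(image: str) -> str:
    """Prefix an image reference with the mirror host, making the
    source registry explicit so gcr.io and Docker Hub both work."""
    if "/" in image:
        first = image.split("/", 1)[0]
        # The first path component is a registry host only if it looks
        # like one (contains "." or ":", or is "localhost").
        if "." in first or ":" in first or first == "localhost":
            return f"{MIRROR_HOST}/{image}"
    # Otherwise it is an implicit Docker Hub reference.
    return f"{MIRROR_HOST}/docker.io/{image}"

print(rewrite_image("gcr.io/google-containers/kube-apiserver:v1.10.0"))
# mirror.example.com/gcr.io/google-containers/kube-apiserver:v1.10.0
```

The point of making the source registry explicit in the rewritten name is that one pull-through mirror can then serve images originating from multiple upstream registries.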
A: I don't know if anyone has anything that they're doing that is similar to that, or incompatible with that, or wants to throw in any opinions; but I thought it sounds like a really great idea and wanted to share it with everyone, to see if anyone had any input. I think Sam actually volunteered to write up a PR. He's not here, I don't see him, but otherwise I think it'll be great. Yeah, I also think it'd be great to have, you know, one configuration where every node runs a local Docker mirror, and those could then pull images in a peer-to-peer way; I think that would be exciting. For example, I think there was a project, it was on Hacker News, that did peer-to-peer distribution of Docker images, among other things. So I think that's an interesting approach, and I'm hoping he'll come through on that.
A: If no one has any thoughts on that: the other PR is more complicated. So, kubelet bootstrap tokens (and I may get this wrong, so please correct me) are a way... we're essentially trying to get it so that every node has a kubelet key pair, and in particular a certificate that is scoped to, or identifies, that node. The idea is then that the permissions that that kubelet is granted, which are normally very broad, are narrowed by the Node authorizer, so that the kubelet is only allowed to do things that the kubelet should be doing. So, for example, the kubelet wouldn't be able to read a secret unless a pod that had access to that secret was scheduled onto that node. The idea being that, should you escape out of a container onto the node, you don't immediately have access to everything in Kubernetes; you effectively limit the blast radius, so that you only have access to the secrets that were scheduled onto that node, for example.
A: So it's a security measure at its core. The way this works is there's this bootstrap flow for creating the kubelet certificate, and, if I recall correctly, the way it works in, say, kubeadm is: there is a static token, a master bootstrap token. The kubelet logs in with the master bootstrap token and creates a certificate signing request; that certificate signing request is signed by the kube-controller-manager, and the certificate that comes back is the node-scoped kubelet certificate.
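For context, the node-scoped identity this flow produces follows the standard Kubernetes naming convention: the certificate's common name identifies exactly one node, and its organization is the nodes group that the Node authorizer restricts. A minimal sketch of that convention (just the subject strings, not real X.509 handling):

```python
# The node-scoped kubelet certificate is conventionally identified by
# its X.509 subject: CN "system:node:<name>" names one specific node,
# and O "system:nodes" is the group the Node authorizer restricts.
# This only builds the strings; real code would place them in a CSR.

def kubelet_cert_subject(node_name: str) -> dict:
    return {
        "CN": f"system:node:{node_name}",
        "O": "system:nodes",
    }

print(kubelet_cert_subject("ip-10-0-1-23"))
# {'CN': 'system:node:ip-10-0-1-23', 'O': 'system:nodes'}
```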
A: The challenge is that the bootstrap token is, again, a static token, and I think it can mint any kubelet certificate in the system, so that is a bit of a flaw in the system. One workaround is to rotate that bootstrap token fairly aggressively, but that isn't very compatible with clouds, where you might, you know, have an instance that launches three weeks or three months after you first created your cluster.
A: The client comes up, talks to the master, and asks it for a bootstrap token, and then there are pluggable authorizers that make sure that the node is allowed to access the bootstrap token. So on AWS we can check, we can use, the instance identity document, which is a signed piece of metadata that AWS gives you. There's something similar on GCE, I believe; certainly bare metal doesn't have this, and on other clouds we don't have it.
A: The other thing you can do on AWS, and I think on all clouds, is you can look at the IP address that is making the request, and cross-check that IP address against the cloud API. So on AWS you would do DescribeInstances: you could check the tags on the instance, the KubernetesCluster tag in particular, to make sure it's an instance that is part of your cluster, and you can check that the IP address corresponds to a machine that's actually running and isn't just a phantom IP address. That's vulnerable to IP spoofing, but most clouds seem to lock that down a fair amount, so it's pretty difficult to IP-spoof on most clouds. And there's also, I think, a TLS exchange between those checks.
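The cross-checks described here amount to something like the following sketch. The cloud inventory is a hard-coded stand-in for what a DescribeInstances call would return, and the tag and field names are illustrative, not the real API shapes:

```python
# Sketch of the node-verification chain discussed above. In a real
# implementation the inventory would come from the cloud API (e.g.
# EC2 DescribeInstances); here it is stubbed so the logic is
# self-contained. Field names are illustrative.

CLUSTER_TAG = "KubernetesCluster"

# Stubbed cloud inventory: private IP -> instance metadata.
INSTANCES = {
    "10.0.1.23": {"tags": {CLUSTER_TAG: "demo.example.com"}, "state": "running"},
    "10.0.1.99": {"tags": {}, "state": "running"},
}

def verify_node(source_ip: str, cluster_name: str) -> bool:
    """Cross-check a requesting node's IP against the cloud inventory."""
    instance = INSTANCES.get(source_ip)
    if instance is None:
        return False  # phantom IP: no machine behind it
    if instance["state"] != "running":
        return False
    # The instance must be tagged as part of *this* cluster.
    return instance["tags"].get(CLUSTER_TAG) == cluster_name

print(verify_node("10.0.1.23", "demo.example.com"))  # True
print(verify_node("10.0.1.99", "demo.example.com"))  # False: untagged
```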
A: Between those checks, I think you build up a pretty robust set of verifications, such that you can be reasonably sure that the node is who you think it is. The PR here is number 5317; I will paste in a link for that, but it's a big one, and I put it in the agenda. Thank you, yeah. It's an interesting one, so if people have thoughts on that, please... One more thing I want to add, sorry to monologue:
A: If and when we adopt the machines API, the Cluster API, it's possible that the bootstrap certificate would actually be generated on the machine controller instead and pushed to the nodes. But in that scenario we have a similar problem, where we want to be sure that the node we're talking to is who we think it is. So that's where I think we likely still want some form of exchange, some sort of pluggable exchange, to verify that the node is who we think it is, i.e. that it matches the IP address and all these sorts of things. So that's what I'm proposing: that we put this in as a feature-flagged feature, so that we have the ability to iterate on it, and to turn it off in the future if it proves to be an error. But if people have input, or are doing this in other ways, or anything of that sort?
A: But I don't think that... and [unclear] doesn't have an instance identity document, right? A sort of signed document? Yeah. And on AWS, the shortcoming of the AWS one is that it's a static document, so once you give it to someone, that person could replay the document to someone else and effectively act as you, if they were evil. So you certainly have to be careful passing that around. Like, HashiCorp has a Vault implementation; HashiCorp's...
A: Yeah, I think it'd be great to get it in and start thinking about it. The node authorization thing is an important thing to implement; I think we've held off on it because, with the previous implementation, it's been unclear what the value is on AWS, or in a cloud environment, from a security viewpoint, and I think Campbell's PR is addressing that. But it is a big PR that adds a lot of complexity.
A: So the big advantage of Campbell's approach is that it's very pluggable, so it will work not just on GCE; it will work on AWS and GCE and DigitalOcean and vSphere and anything like that. It will work anywhere, because you can always turn off the cloud-specific authentication and just rely on the TLS handshake, which is effectively a more secure shared token. So even in the worst case, assuming the implementation is correct, it is better than the current approach: the token is longer than the current shared token. Awesome. Yeah.
E: I don't know if anyone else has anything. So, yes, I figured since, you know, we're on the call, we can talk real quickly about the PR that you opened, Justin, about the launch config cleanup, for anyone that is on AWS. I pestered Justin; I was like, hey, can you point me to the right spot to do this? And then we did a live coding exercise where he walked through and solved the problem of, you know, launch config cleanup. Because up until now, I think the only time we were cleaning up was if you ripped down your whole cluster; I think we cleaned up at that point. But now we clean up all but the three most recent launch configs per instance group. Rodrigo reviewed the PR, but I thought it was worth bringing up in case anyone had an opinion; we could, you know, open a new PR if anyone thinks we need more than three. That's kind of what Justin and I decided when we were working through it: that really, you know, kops doesn't use an old config again, so we don't really use it to roll back. If you want to roll back, you actually create a new launch config and roll forwards again, you know.
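The retention rule being described, keep only the three most recent launch configurations per instance group, can be sketched as pure logic. The records below are stand-ins for what a DescribeLaunchConfigurations call would return; a real implementation would query the Auto Scaling API and delete the returned names:

```python
# Sketch of the "keep the 3 most recent launch configs" cleanup rule
# discussed above. The records are hard-coded stand-ins for an AWS
# DescribeLaunchConfigurations response; field names are illustrative.

KEEP = 3  # retention count chosen in the PR

def configs_to_delete(configs):
    """Return launch-config names to delete, oldest first, keeping
    the KEEP newest by their creation time."""
    ordered = sorted(configs, key=lambda c: c["CreatedTime"])
    if len(ordered) <= KEEP:
        return []
    return [c["Name"] for c in ordered[:-KEEP]]

configs = [{"Name": f"nodes-{i}", "CreatedTime": i} for i in range(1, 6)]
print(configs_to_delete(configs))  # ['nodes-1', 'nodes-2']
```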
A: We did video it; it's sitting in the queue. I have to figure out YouTube permissions, and then I think about a year's worth of videos will be uploaded, including that one. So we have an hour and a half of fun coding, I think it was, total. But yes, the PR itself addresses the issue where eventually you hit a limit on launch configurations and you're not able to launch, or kops update will fail, because it can't create a new launch configuration.
A: The new behavior is to keep only a certain number and delete the older ones. That is a behavioral change, and yes, the reason we chose to do that is that otherwise you're going to hit the limit. But it is a behavioral change, and I'll review your suggestion, Rodrigo, if you have thoughts on that.
F: The only suggestion I had was around the environment variables. I feel those are a little opaque when you don't set one, and I know we have a lot of them in various other places, so I've built things around kops itself to set those for me every time; but I know I've forgotten to set them, and various things that aren't supposed to happen, happen. So that's why I'm a bit wary of those.
A: Right, because it is too easy to forget. And I think we chose to change the behavior because we thought that the new behavior was more user-friendly, and that people really wouldn't be setting the environment variable to keep their old ones. I don't know if anyone has a use for keeping their old ones, but that would help inform us, like, whether 3 is the right number; right now we basically just chose 3 somewhat at random.
A: So, point taken. I think the idea of the environment variable is this: suppose we find that someone has a use case for going back, like, 30 launch configurations. We don't have to cut a new kops release; we can say, sorry, here's an environment variable you can set, and then we can deal with it in a timely manner.
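That escape-hatch pattern, a shipped default that an environment variable can override without cutting a release, looks roughly like this. The variable name is a made-up illustration, not the one kops actually uses:

```python
import os

# Escape-hatch pattern discussed above: ship a sensible default, but
# let an environment variable override it without a new release.
# KOPS_EXAMPLE_KEEP_LAUNCH_CONFIGS is a made-up name for illustration.

DEFAULT_KEEP = 3

def retention_count() -> int:
    raw = os.environ.get("KOPS_EXAMPLE_KEEP_LAUNCH_CONFIGS")
    if raw is None:
        return DEFAULT_KEEP
    try:
        return max(int(raw), 1)  # always keep at least the active config
    except ValueError:
        return DEFAULT_KEEP      # ignore garbage rather than crash

print(retention_count())  # 3 when the variable is unset
```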
F: About the mechanisms that go into our CA creation and rotation currently: I'm going to dig back into that a little more. Just since you're a little more familiar with it, do you know if we can just add a secondary CA, do a rotation of the cluster, and then remove the old one? Or is that something that's not currently possible?
A: The other scenario is that we currently mint a 10-year CA, so it's valid for ten years; but eventually, in 2025 I'm guessing, or 2026, the first of those will start to expire, and so we'll have to rotate those CAs just naturally. A good policy would be to rotate them annually, if we had a nice procedure. It is supposed to be supported; there was a bug in 1.5, but it should still be supported, so kops has all the infrastructure to do it.
A: There was a bug that should be fixed now, so you should be able to change it and reissue all the certificates. There is another gotcha, which is that the kube-controller-manager has a certificate-signing key pair, and there can only be one of those; it should be the latest, though I don't know if we make that distinction in the code.
A: If you're going to remove a CA cert, I guess you have to regenerate the service accounts, because the service accounts, I think, are signed. I'm not sure exactly how the matching works. Obviously, if you have a new key, I guess you'd be creating a new key, right, not just signing a new cert. Yeah.
E: A new cert in there, well... At least, I had to do this once, because the original docs, I think around Kubernetes 1.1 or 1.2, said to set your CA to one year, and I just copied and pasted; a year later it was like, well, alright, let's do this from scratch again. I think, depending on how your service accounts are set up, you can just delete them and they get regenerated automatically, but obviously you can create your own as well.
A: If you want to start the ball rolling, then maybe we can get closer. But yeah, we can always just... I was just thinking: if you're rolling-updating a cluster, you know you've bounced your pods, so we could have a pretend rolling update of a cluster. It should not be disruptive, so you don't necessarily have to do it; or we could have a sort of toolbox command that combines the steps. It would take a while, I suspect, but it would work. Yeah. So that was about it.