From YouTube: Kubernetes Kops Office Hours 20180525
A: Just an announcement of sorts: I've been a bit M.I.A. for the past two weeks in particular, working on etcd-manager, which is the thing that will get everyone from etcd2 to etcd3, and will also give us all sorts of things like minor upgrades and cluster resizes — all those sorts of things which are really important for the health of kops. And that is finally working — working as in "works for me."
A: The goal for the next kops release was to get it in as a sort of early-access, opt-in option, and that is now a reasonable thing to do, and we can get it under e2e testing. I will also be reviewing everyone's PR backlog — thank you for all the PRs. I don't know if there are any questions on that, but otherwise we'll go on to the first real item on the agenda, which is: does kops use Terraform? That's a question we had in chat. Thank you.
So the answer to that is: kops itself doesn't have to use Terraform. Kops has its own model that is similar to Terraform's model, and there are three ways that kops can execute the kops model. One of them is that it can execute the model directly: it can directly create auto-scaling groups and all the other objects — configure them and create them.
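As a rough sketch of what those execution modes look like on the command line (the cluster name and state store here are placeholders, and the flags should be checked against your kops version):

```shell
# 1. Direct: apply the kops model straight against the cloud APIs (the default)
kops update cluster my.cluster.example.com --state s3://my-kops-state --yes

# 2. Terraform: render the same model as Terraform code instead of applying it
kops update cluster my.cluster.example.com --state s3://my-kops-state \
  --target=terraform --out=./tf

# 3. CloudFormation: kops has also offered a CloudFormation target
kops update cluster my.cluster.example.com --state s3://my-kops-state \
  --target=cloudformation --out=./cf
```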
A: Most people, I think, use either the Terraform model or the built-in model. The built-in model is sort of easier to work with — if you just want to create a cluster as quickly as you can, that's a great way to get going.
A lot of people use the Terraform model, I think, for two main reasons — and I'm sure people can add others — but they use it to test, basically to audit. So, for example, you can have kops export a Terraform set of resources, use that as a Terraform module, and that module will export, for example, the subnet IDs and, I think, the security groups and the VPC ID — the intention being that you can then reference those and create an RDS database or VPC interconnects or whatever you want to do.
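As an illustration of that module pattern (a hypothetical sketch — the exported output names vary by kops version, so treat `subnet_ids` here as an assumed name):

```hcl
# Consume the kops-exported Terraform config as a module
module "kops_cluster" {
  source = "./tf" # directory produced by kops' Terraform export
}

# Reference the exported IDs when building adjacent infrastructure,
# e.g. a subnet group for an RDS database
resource "aws_db_subnet_group" "app" {
  name       = "app-db"
  subnet_ids = module.kops_cluster.subnet_ids
}
```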
D: Well, while Justin's looking that up — the other thing that we do is: I didn't want to have to export the Terraform code every single time, so we use kops and the kops model to deploy everything, and then within Terraform we use data objects to reference everything; we just do a whole data-object build. That's one other way that's worked really well for us, so that we can still move fast with, you know, the kops way, but also have Terraform for everything else we need on top of it.
D: Remote state — we didn't use the remote state stuff, but the data objects that they have now. Yes, so I key off the name of the VPC, and then I just create a workspace for every VPC, and, you know, then you can get all the subnets and everything based on that.

A: Interesting. We originally did that using Go templates, and it proved to be less good than just using code — much more complicated, and much easier to introduce bugs — and it didn't really gain us a whole lot of benefit in terms of being able to tweak it easily. So that's where we are.
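A hypothetical sketch of that data-object pattern — keying everything off the VPC name, with one Terraform workspace per VPC (the resource names and exact data sources here are assumptions, not taken from the discussion):

```hcl
# Look up the VPC by its Name tag, using the workspace name as the key
data "aws_vpc" "cluster" {
  tags = {
    Name = terraform.workspace
  }
}

# Everything else (subnets, etc.) can then be derived from the VPC
data "aws_subnet_ids" "cluster" {
  vpc_id = data.aws_vpc.cluster.id
}
```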
E: I have another topic. The cluster autoscaler has an option for auto-discovery of instance groups, and you need to add two annotations — two tags — to the auto-scaling group. What would be the best way to do that? Should we add another option to the instance group spec for that, or is that, like...
A: There are three options, I guess: you just add them in cloudLabels manually; we have a field or something that turns them on and off; or the third option — we just add them for everyone, because we think it's generally useful. I don't know how people would feel — whether people would object to having those annotations added automatically.
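For reference, the first option — setting the tags manually — looks roughly like this in an instance group spec (a sketch: the tag keys follow the cluster autoscaler's AWS auto-discovery convention at the time, and the cluster name is a placeholder):

```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: ""
    kubernetes.io/cluster/my.cluster.example.com: "owned"
```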
A: All the installation tools have a concept similar to kops instance groups — they'll call them something different; I think in GKE it's node pools, if I recall correctly — and the idea is that the autoscaler is a good example of a tool that wants to integrate with those node/instance groups. And so we're going to have — the Cluster API is building something called a MachineDeployment, which will manage machines in the Kubernetes API, and so kops instance groups will show up as MachineDeployments.
A: The timeline on that is probably six months to a year, I would guess — I'm not entirely sure; probably less to get machines integrated and more to get the full cluster support, but that sort of timeline. So that would suggest that we don't necessarily want to build in, you know, a ton of machinery. I don't know.
A: The second one — the autoscaler-enabled tag — if that's the only one that's needed, I feel like we could just say: just add that one, like we're just gonna have a field; I don't really see a huge benefit there. The cluster one is trickier, because that is, I think, our cluster format, right? And we use "shared" and "owned"; we don't use "true". So hopefully that's not — actually, it might not need to be true. It's...
A: How about docs first, and then we can figure out whether it's worth doing a field? My wariness being that the time we're gonna keep this for is sort of bounded, right — well, I guess a MachineDeployment in the future would have — mm-hmm. Here's another option: we have the min and max size, right? We could set those annotations automatically if the min and max are different.
A
Code
label
I
think
how
they
was
written
to
go.
It
seems,
like
everyone,
does
that
and
then,
if
people
are
unhappy
about
it,
particularly
in
future,
when
we
actually
get
closer
to
the
machine
deployments
and
can
see
a
little
bit
more
about
how
that's
gonna
work,
then
we
can
revisit
but
yeah.
If
you
wanted,
if
you
want
to
do
a
I
think
the
first
step
would
be
Docs.
I,
don't
have
this
actually
in
the
docs
yeah.
A: Cool, yeah. I mean, I think I'm definitely thinking more about add-ons, and so this could be another one for me to think more about. Yeah, I don't have a great answer, and it seems like everyone today is happily setting these in cloudLabels — and that is my terrible answer, I think. And there was another question about: is it safe to use kops in production?
A: What do you want to pay attention to? I think the general answer is: it is safe-ish. I think the areas of concern are — in general, in Kubernetes, don't allow untrusted users to access your cluster, and then it's about how much trust you want to give them. So, for example, by default we don't lock down access to the EC2 metadata service, which gives you access to the IAM token for the nodes, and that token now gives you much fewer permissions than it used to, but still enough to cause trouble, at least. And the master token obviously has very high permissions, but it's harder to get to. So, you know, if you're talking about running it — we believe it to be safe against external users.
A: So, people outside your cluster. Once you give users access to your cluster, you are basically trusting them — for example, we don't set up an authentication system by default, so effectively, in the default configuration, everyone becomes an admin and there's no easy way to revoke that permission. So that's why things like the Heptio Authenticator add-on are so important, because it's an easier way to give people permissions that are more revocable, as I understand it.
F: Yeah, I guess I did set it up — it's mostly set up, it's already there; it's great. If you have IAM roles set up already, it pretty much works. It only does part of the work for you, though: it only sets up a key and actually gets the authenticator running on the kops cluster. You still have to manage all your IAM roles manually, yeah.
A: Yeah, so those are the big areas. It's definitely supposed to be safe running against the internet — like, we're not gonna open the API server to the world — but once you let a user onto your cluster, then you likely have to trust them, and I wouldn't, for example, run a public kops cluster where anyone can come and run jobs on your cluster — like, don't run Travis, don't rewrite Travis — like Travis CI, or a CI system that is public and running untrusted code.
F: Without the Authenticator, you can actually give certain users certain levels of permissions. So, like, you can let users create namespaces, or only create pods and create services — you can give users those levels of permissions as well. So that might be helpful depending on what you're trying to do, but yeah.
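That kind of scoped access is plain Kubernetes RBAC; a minimal sketch (the namespace, role, and user names are placeholders):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: team-a
  name: workload-editor
rules:
  # allow managing pods and services, but nothing cluster-wide
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: team-a
  name: workload-editor-binding
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-editor
  apiGroup: rbac.authorization.k8s.io
```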
A: Certainly — I believe, yeah — I think the answer is: all the pieces in Kubernetes are probably now there to enable you to do something like an untrusted CI environment, but certainly it's not out of the box in kops, and I don't know whether it's out of the box in any configuration that is provided by anyone right now. So think very carefully about, you know, trusting your team, yeah.
F: It's mostly about the IAM roles — we talked about this last time. We discovered that the IAM roles work was actually going pretty well, but there's an issue with the lifecycles as well. We said that we could actually merge the IAM roles diff and that we'd have to fix some lifecycles, but that never got done. I'm just not sure if we're still okay with that, or if there's any other issue you discovered around it. Okay.
A: That's my fault, I'm sorry. Yes, there was an issue around one of the lifecycles — exists, or verifying something — one of those was not working as we expected it to. But yes, I will get to that; I am working through my backlog on the excellent PR dashboard in Gubernator — I put a link.
A: Otherwise, the thing I'm mostly thinking about personally is how we do add-ons and things like that. Apart from enabling etcd-manager — that feels like it's now at the sort of, you know, fixing-bugs stage, I hope — the next thing on my personal roadmap is enabling add-ons, and then later we'll be thinking about MachineDeployments and the Machines API.
A: That is an excellent and very difficult question. It certainly means we have a set of things in kops that are managed by kops — like all of the CNI network providers — and it would be nice to reference them outside of the kops release. Right now, when we have a new version of Calico, we have to release a new version of kops together with the new manifest, and that is not a great situation.
A: The hope is that whatever system we use for managing things like Calico, we can also use generally for things like the encryption provider, for things like the cluster autoscaler — so that there's no... like, the cluster autoscaler today, you install that and it's sort of different from kops. It would be nice if they were all sort of the same experience.
A: So I'm thinking about that and trying to understand how to do that — if anyone has any great ideas, feel free to, you know, ping me on Slack. I'm thinking about what CoreOS calls the operator pattern, where you'd have an operator for certainly the bigger ones — and whether it makes sense to have an operator for everything, so that effectively kops would coordinate, would be an operator; kops would almost be an operator.