From YouTube: Kubernetes SIG-GCP meeting, 2018.02.01
Notes at bit.ly/k8s-sig-gcp
A
The main thing I want to do up front is broadly apologize for how few meetings there have been between this one and the last one. I've had a whole bunch of issues to do with calendaring, a set of holidays that came in, and other stuff. It actually strongly highlights the need for a co-lead for this SIG-GCP, which is something I'm going to have to try and chase now, one way or another. So anybody who's watching this that's interested in becoming a co-chair at all, [please get in touch].
B
Thank you, Adam. Yes, so I want to show how kops can support GCP. For people that don't know, kops is a cluster management tool: it lets you create and modify the configuration, upgrade, and delete your clusters. It started off working on AWS, and we have reasonable support at this point for GCE. That is still an alpha, feature-gated capability right now, but we are likely to take it out of alpha in the next release of kops, which is coming soon.
B
We will remove that feature gate, so it will be, you know, ready to use; we'll declare it no longer likely to blow up everything or land you in hot water. And it is under e2e testing: we have the real, Google-run e2e, and we have a couple of jobs which are mostly green. We also added an HA job, which is, interestingly, less green, but still mostly a sea of green, and there are a lot more tests to add to this.
B
But what I will show you is the actual demo. So let me kick off the creation of a cluster first; that's probably easiest, since it takes a little while. We are going to do kops create cluster. We're gonna give it a name, test.k8s.local (which I'll also get to in a minute), specify the kubernetes version, say master count 3 to do HA with three masters, and we are gonna do it in the zone us-east1-b.
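A minimal sketch of what that invocation plausibly looked like; the exact flag values and the 1.8.6 starting version are assumptions based on the narration, while KOPS_FEATURE_FLAGS=AlphaAllowGCE was the documented gate for kops GCE support at the time:

```
# GCE support was still feature-gated in kops at this point
export KOPS_FEATURE_FLAGS=AlphaAllowGCE

# HA cluster: three masters in a single GCE zone; the .k8s.local
# suffix selects gossip-based discovery (no real DNS zone needed)
kops create cluster \
  --name=test.k8s.local \
  --cloud=gce \
  --zones=us-east1-b \
  --master-count=3 \
  --kubernetes-version=1.8.6

# kops create only records the spec; this actually builds the cluster
kops update cluster test.k8s.local --yes
```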
B
We could specify multiple zones here if we wanted to do multi-zone, multi-master, but we're gonna keep it simple for now. So off it goes, and it's going to do most of the work. To give us a little preview of what it's actually going to do, though, you see some notes, and that is what it's gonna go and create. What I'll actually do is kick off the creation, and then we'll scroll back through that while it's actually creating the objects... oh, my bad.
B
Okay, so that's gonna go and actually start doing stuff, running all those tasks. While it's doing that, let's look at what it's actually doing. Kops has a declarative model inside it of the tasks which it's gonna run, and, wow, there are a lot of these. So, for example, it is going to create six persistent volumes: one for etcd-events and one for etcd-main for each of the three masters. It's gonna create all of those.
B
It creates an address, which is gonna be our IP address for the ingress into the cluster, and some firewall rules. It creates managed instance groups for the masters and for the nodes, and all sorts of different things, but most of the time you don't really need to worry about this. SSH key pairs... and it has some add-on management as well. I'm scrolling down... okay.
B
That's not great to read; if we make that... okay. We have three master instances and we have two nodes, and we have, sort of by mistake, a cluster registry; we're gonna try to work with the cluster registry folk to make that behave like the actual cluster registry. You can create multiple clusters. They all live in what we call a state store, which can be an S3 bucket, a GCS bucket, and, coming soon, a server using the API machinery.
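For reference, the state store is normally selected with an environment variable; the bucket name below is a placeholder:

```
# The state store holds the Cluster and InstanceGroup specs.
# On GCP it can be a GCS bucket; on AWS, an S3 bucket.
export KOPS_STATE_STORE=gs://example-kops-state-store
# export KOPS_STATE_STORE=s3://example-kops-state-store
```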
B
But we use the API machinery, so you can do things like, you know, kops get cluster -o yaml. You can see the actual API object that we have, which is, you know, a real API object with a spec. Our two main objects right now, our two main types, are Cluster, which is the top-level object, and InstanceGroup, or ig, an alias we can type instead. And so you can see what we have in our current cluster.
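The commands being run here would look roughly like this, using the demo's cluster name:

```
# Dump the Cluster API object as YAML
kops get cluster test.k8s.local -o yaml

# List the instance groups; "ig" is the short alias for "instancegroup"
kops get ig --name test.k8s.local
```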
B
We have three instance groups for the masters and one instance group for the nodes. We do an instance group per master just to make sure that we are always running one, but that's how it works. And we can also edit things, of course, even while it's booting. Kops is supposed to feel a little bit like kubectl, and when we actually have the API server running it will behave exactly like kubectl. But even while it's booting we can, for example, resize the cluster: the nodes instance group, from two nodes to three nodes. We will save that, and if we apply that change you'll be able to see some silly log messages and then a preview of what it's actually going to do. So it's gonna change the target size of that managed instance group from two to three, and you have to specify --yes to actually apply the changes. So let's do that.
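A sketch of the resize flow as demonstrated; minSize/maxSize are the usual fields for an instance group's size in the kops spec:

```
# Open the nodes InstanceGroup in an editor and change
# minSize/maxSize from 2 to 3:
kops edit ig nodes --name test.k8s.local

kops update cluster test.k8s.local        # preview the cloud changes
kops update cluster test.k8s.local --yes  # actually apply them
```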
B
There we go. And if we now do a compute instances list, we should hopefully see a new instance coming up. Yes, we now have a third node that's booting up; one of them probably says somewhere... no. Well, that was... GCE is really fast.
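That listing is the standard gcloud call:

```
# List the cluster's VMs (masters and nodes) in the current project
gcloud compute instances list
```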
Okay, normally, you know, I catch it before it actually goes into running, but it's already running. So we have three nodes, two of which started before, and three masters. And the moment of truth now: masters are a little slower to start.
B
We're lucky, yay; you can see we timed that nicely. So our three masters have come up. The nodes haven't joined yet; they have been running for about 40 seconds. So, you know, it doesn't take long to boot a cluster and get it all running, and if we watch it, hopefully soon we will see the nodes start joining. There it is: the first node just joined. Well, lots of nodes are joining, it looks like. So, wow, I need to make this a little smaller; I actually forgot the shortcut.
C
[inaudible]
B
So, if we do an update of the cluster without --yes, it'll show us the preview, and it does a bunch of changes to the managed instance groups and to the instance templates, and you can see that it's gonna add this add-on, which is kopeio networking. And if we do --yes, what this will do is actually apply the changes. What it will not do is restart the nodes for you.
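The cluster spec had evidently just been edited; a hedged sketch of what that edit and the preview/apply would look like (1.8.7 and kopeio networking are the values mentioned in the demo):

```
# kops edit cluster test.k8s.local
# ...then in the editor, change e.g.:
#   spec:
#     kubernetesVersion: 1.8.7
#     networking:
#       kopeio: {}

kops update cluster test.k8s.local        # preview: templates, MIGs, add-on
kops update cluster test.k8s.local --yes  # apply (does NOT restart nodes)
```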
B
So what we'll do is fire that off, and then we will roll all the nodes; kops calls that a rolling update. If you do it, it tells you that you need to do a rolling update, and if you do a rolling update of the cluster again without --yes, you'll see a preview of what it needs to do. So it's saying that all four instance groups need updates, and if we do --yes, what it would do is a drain: cordon, drain, shut it down gracefully, do it in sequence, roll the masters first, all that sort of stuff. That would take probably twenty minutes. So what we can do instead is just blast through it. (I can't even see; I'm typing blind. Okay.) What this does is say: just shut them down as fast as you can, don't drain; and cloudonly means don't even use the API, just kill them and have it restart. So we're basically shutting down all the masters and all the nodes at the same time.
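A sketch of both forms; --cloudonly is the flag described, while the interval flags are my assumption for how the shutdown was accelerated:

```
# Graceful: cordon + drain each instance in sequence, masters first (~20 min)
kops rolling-update cluster test.k8s.local --yes

# Blast through it: skip draining and skip the Kubernetes API entirely;
# the instance groups replace the terminated VMs
kops rolling-update cluster test.k8s.local --yes --cloudonly \
  --master-interval=1s --node-interval=1s
```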
B
The managed instance groups will replace those with the new version that we just configured. Normally there would be no downtime in an HA cluster; with this one it'll be a couple of minutes while those recover, and then it will come back with 1.8.7, which is what we configured, and also with the new networking configuration, the new add-on. The other thing, to complete the picture of the lightning demo:
B
When you're done with your cluster (we're not gonna specify --yes, but if you do kops delete cluster) you can see... oh, terrible... you can see all the things that it would delete. So, for example, it will go and delete a bunch of the things we created: the address, the disks, the firewall rules, the instances, the managed instance groups and the templates, the routes. It will basically clean up after itself. And one thing you may note is that we used a name which was test.k8s.local.
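The teardown is again preview-then-confirm:

```
# Preview everything that would be removed (addresses, disks,
# firewall rules, instances, instance groups, templates, routes)
kops delete cluster test.k8s.local

# Actually tear it all down
kops delete cluster test.k8s.local --yes
```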
B
Because the name ends in .k8s.local, kops uses gossip instead of real DNS: the nodes gossip amongst each other and figure out DNS that way, so it's great for development. It's serverless DNS, I guess you'd call it. Let's see, though, if we're lucky... let's see if we got lucky twice. This seems unlikely, but we can hope. Oh, silly Justin. Okay, so we weren't quite that lucky.
B
It is possible that we... oh, looks like one of them might be coming back. But yes, that's what we're waiting for, and that is the lightning demo of kops, which basically is now working on GCP. We are going to take that out of alpha in the next version of kops, and add a lot more e2e tests to give us confidence in the various configurations. We used to be a little bit constrained on the e2e tests in terms of resources.
B
Now that we're on GCE we are less constrained, and the AWS side is now being addressed too; [someone] has stepped up and started doing that. But it would be great... you know, we have an HA test on GCE which is actually showing what looks like some flakes. Not entirely sure, but it definitely looks like there are some flakier tests in HA mode. It seems to have gotten a little bit better recently, but there are definitely some tests that are a little flaky. And we're still running etcd2, so some of that could be fixed by etcd3.
B
So, you know, we can start expanding this test grid: etcd3 and etcd2, HA and non-HA, different networking modes, all of this sort of stuff. In terms of kops, I think the roadmap, beyond, you know, more support for more cloud providers and for bare metal, is to work on breaking it up into its constituent parts, such as the machines API.
B
Adam, I think you have a pending question about that, which I will let you ask in a minute. But the other pieces that are baked into kops right now that I'd like to split out are, you know, etcd management and add-on management. The add-on management in particular is a little problematic, because you would like to work with add-ons without having to use kops all the time; you'd like to just use kubectl.
B
So hopefully we can reach consensus on that, and surface it in a way that every installation program, and GKE and EKS and all of them, can use, and get more test coverage on everything. And I guess, specific to GCP, the only remaining features that I know of that we're looking at are IP aliases (alias IP ranges) and NVMe support. I'm just... I feel like we should be configuring that... oh, cool. Anyway, while I was blabbering on, it restarted.
B
You update the specifications, editing them either with the equivalent of kubectl edit or with the equivalent of kubectl replace; then you update, and you rolling-update your cluster; and then, when you're done, you delete your cluster. And that is kops in a 10-minute nutshell. Cool.
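Pulling the whole lifecycle together as demonstrated:

```
kops create cluster ...            # define the cluster spec
kops edit cluster ...              # tweak the spec (kubectl-edit style)
kops update cluster --yes          # reconcile cloud resources to the spec
kops rolling-update cluster --yes  # roll instances onto the new config
kops delete cluster --yes          # clean everything up
```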
A
I have a few questions; maybe other people will too. The first question that came to mind was actually when I saw what you did with the rolling update and the drain and all that kind of stuff: would I be right in thinking that the logic for that is actually kind of general-purpose within kops?
B
The logic is general-purpose, exactly. We have a little per-cloud-provider functionality which essentially finds the instances that are actually running, knows whether each is a master or a node, and can shut them down. But then, yes, the logic of the ordering (cordon, drain, shut down the node, wait for the new node, wait for pods and stuff to reschedule and for everything to become healthy) is generic.
B
And hopefully we can work with the machines API; that's another place where we can get that logic, or at least the learnings (if not the code, then the learnings) into that generic controller, because it's fairly tricky logic and there's absolutely no reason why it should live only in kops, right? Every one of these should behave the same way.
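The generic part of that sequence maps onto standard kubectl operations; a rough per-node sketch, with instance termination as the cloud-specific step:

```
# For each node, roughly:
kubectl cordon "$NODE"                             # stop new pods landing here
kubectl drain "$NODE" --ignore-daemonsets --force  # evict pods gracefully
# ...terminate the backing instance (cloud-specific); the ASG/MIG
# launches a replacement...
kubectl get nodes   # then wait for the replacement to register and go Ready
```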
D
I'd love to actually, you know, take a look at your code and kind of work with you. We started, I think a couple of weeks back: we wanted to think about cluster lifecycle, meaning what's involved, as a state machine, and it seems like you've implemented a lot of that, right? Like, how would we do that?
B
That would be great; I mean, reach out to me. Also, Chris Love actually wrote a lot of the code; Chris Love, chrislovecnm, is the other guy who wrote most of the rolling update and drain logic. So definitely sync on that, and yeah, I would love to see it as a standalone thing. The other thing that I think would be great in kops:
B
We only really need to bring up the masters, and then have the machines API bring up a machine controller on the masters, which brings up the nodes. And I think, in the same way that the add-on manager will let us put more into the kubernetes API, having the machines controller run the machines means that you don't have to use kops to run the machines; to tweak your machine configuration you can do that through kubectl, or the kubernetes API. So yeah, it'd be great to get that happening.
B
The masters are gonna be the tricky one, but yes, for the nodes and stuff, definitely, we should get that going. And I was gonna say: I have a proof-of-concept machine controller which leverages kops. It currently uses VMware, ironically enough, but it is not too hard to retrofit, or to get the machines controller using kops, and so I'm optimistic we'll get the machines API and the machine controller, our machine controller for kops, working soonish.
A
A related question. When you implemented this... I'm thinking of it as kind of like a back-end, like the GCP back-end: it provides, right, a number of base features that the higher-level logic then just uses. When you implemented that back-end, what bit made it tricky and difficult to use, that you had to work around, or that invaded your higher-level logic?
B
I mean... so there are different systems in kops, right? The rolling update is one system, and I think we only support it currently on AWS and GCP; and both of those, rolling update on AWS and on GCP, support automatic replacement of nodes. So all we have to do is basically terminate the node: we discover the instances, map them to nodes, and terminate them.
B
Then there is... remember the sort of list of things it was gonna create? If I can still scroll back... oh, I'm not sure about my screen, anyway. Remember it created all those objects, like a managed instance group, firewall rules, the persistent disks. Those are entirely cloud-specific, and I think that is always gonna be the case: the way you create the actual objects on the underlying cloud provider is naturally gonna be very cloud-specific.
B
And people have pretty strong preferences, particularly when it comes to networking. At least in kops, we have the ability to output Terraform, and CloudFormation on AWS. We decided not to work with Terraform directly, but effectively we implemented a sort of smaller version of Terraform, so we have a similar model; they obviously had this right.
B
Maybe, if the machine controller is the one that's responsible for the general creation of the cloud resources, creating the machine and handing off (I guess SSH, or whatever it is) to whatever the common API is, then I can imagine that everything from then on will be fairly standard, or fairly common, across clouds. We have that today: we have nodeup, which is our kubeadm-like thing (it embeds kubeadm), and it works the same; it's the same one that runs on all the clouds.
B
It obviously has cloud-specific logic for... or, sorry, it has OS-specific logic, because, for example, Container-Optimized OS (we were running Container-Optimized OS there) has its own requirements around certain volumes or certain directories being read-only and noexec, and so we have to put things in different places and do different things based on the OS.
C
[inaudible]
B
Yeah, we can. And let's see... so I'm not showing my screen on that one, but yes, you can. At least on AWS we support, yeah, CoreOS (whatever that's called now), RHEL, CentOS, Ubuntu, Debian; on GCE we support Container-Optimized OS, and we're getting Ubuntu going, yeah. So yes, we certainly can; you just basically select a different image. The interesting thing is that kops, the command, doesn't really care, and it actually is quite hard to know what the image is, at least on AWS.
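Image selection lives on the instance group; a hedged sketch (the image name is a placeholder in the GCE naming format, not the one used in the demo):

```
# kops edit ig nodes --name test.k8s.local
# ...then set the image in the InstanceGroup spec, e.g. on GCE:
#
#   spec:
#     image: cos-cloud/cos-stable-63-10032-71-0   # placeholder
```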
D
I do have a question. You talked about the add-ons manager; we just started discussions around that as well, and Brian... Brian Grant mentioned that you had an interest in this space. Can you explain just kind of briefly, you know, what the add-ons manager does in kops?
B
Yes. So the kops add-on manager is pretty similar to the add-on manager that is in the old kube-up, when the kube-up scripts brought the cluster up, with the exception that we keep track of what version of an add-on you have installed. So if the user goes and edits an add-on, it won't get reverted five minutes later; that was sort of annoying, or confusing, for end-users.
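As an illustration of that version tracking (the mechanism here is my assumption, not something stated in the meeting: that kops' channels tool records applied add-on versions as addons.k8s.io annotations on the kube-system namespace):

```
# Inspect which add-on versions kops believes are applied
# (assumption: recorded as addons.k8s.io/* annotations)
kubectl get namespace kube-system \
  -o jsonpath='{.metadata.annotations}'
```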
B
But beyond that it really isn't great. It's still hard to tweak the configurations, and so, reading Brian's proposal around declarative application management (I think it's called), where he essentially uses the kubernetes API to layer on tweaks: that seems like a great approach to me. So what are our current tweaks? For example, you might change flannel from UDP to VXLAN.
B
Currently, those tweaks have to be expressed in the kops cluster spec, and that's a bit of a pain and not really what users want. Well, maybe in that particular example they do want that, but it's certainly not what we want to do for all add-ons. The other real problem is that right now all those add-ons are baked into kops itself, and we both don't want to bake them into kops and don't want to be the ones to maintain them. It's hard to maintain all these different ones.
B
The networking ones are a special source of work. It would be great to have a kubernetes-agreed way to do the system-level add-ons, these sort of very low-level things that are too early in the boot process to punt to something like [Helm]. But we also don't want, you know, kubeadm and kops and GKE and EKS and everyone's thing to all implement their own versions of the add-ons, slightly different ones that are all broken in slightly different ways.
B
The hope is that kops will essentially put add-on management into that sort of style. Kops will install add-ons as it comes up, so that you have enough to get to the kubernetes API prompt, but you won't be required to use kops to manage them from then on: you'll be able to use the generic tool, and you'll be able to modify things using these overlay-type files. So if you want to change the memory allocation, you can do that.
E
[inaudible]
B
So the scaler work is pretty separate from what's around kops, I guess, but I've also been looking at that. I will bring it up and then I will share my screen. I guess this one's on my personal repo; it is on github.com/justinsb. Let me share... share my screen.
B
So, thinking a little bit about how add-ons, the system-level add-ons, work in terms of scaling. This is something we hit a lot: we're trying to pick configurations for add-ons that work at both small scale and large scale. We want to support, you know, running on the smallest instances available on AWS and GCE, but we also want to make sure that it has the capacity to scale up, so that the add-ons are not a performance problem.
B
So you can have one controller that is able to configure various add-ons and scale them based on certain inputs, like the number of nodes in the cluster, the total number of cores, the total amount of memory, any sort of thing like that. I'm just gonna try to find an example. So, for example, here's something very similar to the kube-dns autoscaler, but it is its own API type, and it targets the kube-dns deployment, and so the one controller will scale kube-dns, the DNS pod, based on the number of CPU...
B
Sorry, it will scale based on the number of cores. It will change the CPU limits and requests for the pod, and we have some notions of quantization and smoothing to try to stop things flapping as everything happens quickly. You know, during a rolling update of kops, for example, your nodes are gonna go up by one, then down by one, up by one, down by one; you don't necessarily want to bounce your kube-dns pods all the time.
B
And I will say that Tim and I are still iterating on the exact nature of the API, trying to figure out something that makes sense. And then, even after we've decided that, we still have to go and talk to the autoscaler folks, who have similar ambitions. So there's a prototype that will hopefully mesh with what the autoscaler team is also working on.
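For comparison, the existing kube-dns autoscaler that this prototype is said to resemble is driven by a ConfigMap holding a cores/nodes-proportional policy (this is the standard cluster-proportional-autoscaler configuration, not the prototype's own API):

```
# Upstream kube-dns autoscaling: replicas proportional to cluster size.
# The prototype described here additionally adjusts the pod's CPU
# requests/limits based on total cores.
kubectl -n kube-system create configmap kube-dns-autoscaler \
  --from-literal=linear='{"coresPerReplica":256,"nodesPerReplica":16,"min":1}'
```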
B
Where it really differentiates from the autoscaler is that, in the same way the add-on manager is sort of too early for something like Helm, the autoscaler currently relies on a bunch of infrastructure which we don't necessarily want to rely on for the system components. It might be that we change that in the autoscaler and have a fast path or a direct path or something.
C
[inaudible]
B
You can have multiple instance groups if you want to mix instance types, for example, and a nice use case of that might be: bring up the new one, cordon the old ones, and do it manually that way. I think there is a PR to do a full doubling in size, as it were. To be honest, we're sort of trying to avoid doing that until we get more harmony with the machines API.
B
It is a real use case and it's important, but I think we're trying not to paint ourselves into a corner too much before we know exactly what we're gonna do there. Yeah, I mean, it is a relatively easy thing to tweak; I think the only debates are about whether it should be a flag to rolling-update, or whether it should be more like a deployment, where you specify it on the API object itself.
C
[inaudible]
B
Not as of yet; there's no direct machine management there yet. Yes, that's really interesting, I think. So the managed instance groups, the native auto-scaling groups, are great, you know; but if you're in the kubernetes API and have visibility into the scheduler, you could maybe make decisions that are more aware of what instance type you need, or things of that nature, or which particular instance to shut down.
B
The generic heuristics that we can express in an auto-scaling group or a managed instance group are less good than what we could do if we launched every instance one at a time. And so I think that the managed instance group does two things, right: it both acts as a watchdog, so it is something that will launch the first instance, and it can do the scaling and replacement. And I think the scaling and replacement and launching of instances is something that we could do better from kubernetes itself.
B
But, you know, all the data lives on persistent disks, etcd and everything, so it will all come right back, and that relies on the managed instance groups to restore it. So for the bottom turtle, I think, you want a managed instance group; it could be that the managed instance group actually goes and launches those machines directly. But certainly for nodes, I think it makes a ton of sense to move to a model where we launch instances in a more advanced, more aware way, or at least have that option.
B
It would also be cool to, like, you know... maybe on GCE, where you can tweak the amount of memory, right, pick a node that has what we believe to be the optimal combination of memory and CPU, based on what we're actually seeing in the cluster. That sort of thing is something you can't do with managed instance groups.
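That refers to GCE custom machine types; outside of a managed instance group you can size a VM arbitrarily, along these lines (the instance name and sizes are placeholders):

```
# GCE allows an arbitrary CPU/memory combination per instance
gcloud compute instances create example-node \
  --zone=us-east1-b \
  --custom-cpu=4 \
  --custom-memory=10GB
```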
D
And I have one other question, if no one else has questions. You've had a lot of experience with AWS; how was the experience of actually working with GCP, and putting all those pieces together? Like, what didn't work that well, and what worked well?
B
I think it actually worked great. I mean, there were some interesting things around ACLs and IAM permissions that were, I guess, different; but you know, AWS IAM is not exactly easy either, so it's more just the learning curve on that front. There is a big difference in terms of the ACL permissions on Google Cloud Storage, which actually was fairly involved, so we had to work around something; it was a good change to make, but it was hard to do.
B
We wanted to do cross-bucket [permissions] to get it working in the e2e [infrastructure], because they have more than a hundred projects. Everything worked great until we hit the hundredth project, and then suddenly, like, one in a hundred started breaking, then one in a hundred and one, and on and on and on, whatever it was. So that was a gotcha. Other than that, I think everything actually works pretty well.
E
On testing: actually, I think there are a lot of things we could improve, and some of this gave us a lot of trouble. As you just mentioned, there's a need to improve the e2e infrastructure, but most of it is also misleading documentation: in a lot of places it says we don't have this feature, and then we have to go and ask around, and we find the feature actually is supported; there's a hidden configuration, an internal configuration. So that gave us a lot of trouble, actually.
E
[It's hard to know] whether we have a feature supported, and the timeline for support; and I think there were some new things we saw that were not [documented] anywhere but actually are working. So several things gave us a lot of trouble, and we filed a couple of issues with the GCE [team]; maybe you [filed some] directly, or I helped you file them. But anyway, there's a couple of things there for enhancement, and also for documentation.
B
I mean, the response time was great on those, and maybe that's because Dawn was kind enough to... like, they came from Dawn rather than coming from me. But you know, there are issues on the AWS side which have lasted for years, which are just out there and which we'd love to see fixed, and they're gradually getting fixed; but the response time, the velocity, definitely seems to be great on GCE. Cool.
E
Thanks for [showing] this. So, that's something we talked about before: we switched to the autoscaler, I think, so we're talking about using this one to provide the real services, you know, kubernetes services on GCE. So there's a couple of things like that we have to promote to GA, I assume.
E
And we talked about retiring the kube-up scripts, and then we can use kops in the presubmits, using kops to block the presubmits today on GCE. All those kinds of things we may need to come back to after that, and we can come back to talk about more. And there's some other thing we talked about before: also kops on vSphere or something; I don't know, so I'm not sure.
B
There is work going on there. So, the machines API work I've been working on: actually, I've been tying some of that into bare metal, and so I have a little demo of targeting vSphere, which I think I will put on the agenda for the machines API [meeting] for Wednesday week, the 14th of February. That's my target date for demoing that. But yeah, the vSphere stuff is actually coming; it's sort of my way of testing bare metal, basically. It's the cheapest way I found to test bare metal.
A
The very last thing I had, for anyone who's watching this: if you have any questions about GCP in general, or there are things that you'd like to see people talk about, maybe present or discuss, or if there's something you'd like to discuss yourself, then please get in touch. The mailing list is all set up, and you can ping me on Slack or send me an email, whatever; I'm happy to set something up. There's a pretty wide-open schedule right now, but we'd be happy [to fill it].