From YouTube: Kubernetes Kops Office Hours 20180817
A: Hello everyone, it is Friday, August 17th. This is kops office hours. I am your host and moderator, I guess, for the day, Justin Santa Barbara from Google. We have a bunch of things on the agenda, including some bigger topics, so if you do want to talk about something today, please do add it to the agenda, because otherwise I'm worried we will not get to it.
A
If
it
is
not
on
the
agenda,
you
might
want
to
slot
it
right
above
the
DNS
issues,
one,
but
otherwise
I
will
thank
you,
Mike,
otherwise,
I
will
get
started.
I
think
I
have
the
first
two
items
on
the
agenda,
which
I
hope
are
quite
right.
The
first
three
but
the
first
two
are
I
have
quite
fast.
So
we
did
the
cops
110
zero
this
week,
I
think
we
did
on
Wednesday
so
that
that
is
now
out.
A
So
a
111
alpha
one
coming
soon,
the
the
only
feature
I
think
we're
committed
to
getting
into
that
other
than
support
for
communities.
111
is
@cd
migration
support
so
that
I
believe
that
works
in
110,
zero,
but
I
think
we
have
to
make
it
for
two
schedule
for
getting
everyone
off
at
City.
We
want
to
get
it
to
be
opt
so
opt-out
events.
A: So that should come soon, probably early next week, I would guess, but we'll see. And then, public service announcements: there is an Intel chip vulnerability, I think the fancy name is Foreshadow, and so there's a new image that I put in a PR for the alpha channel this morning. Hopefully you'll see that in the stable channel probably next week.
A
But
if
you're
not
running
the,
if
you're
not
running
the
default
image,
if
you
specified
an
image,
you
will
have
to
manually
get
the
latest
or
get
an
updated
version
of
your
image
from
your
vendor
or
image
provider.
So
just
be
aware
of
that,
there's
also
a
denial
of
service
attack
at
the
TCP
networking
level
or
IP.
A
Networking
live
I'm,
not
sure,
but
so
it's
it's
a
relatively
important
to
relative
important
kernel
updates
coming
through
and
then
the
next
item
was
just
generally
I
wanted
to
sync
up
with
everyone
about
like
what
our
goal
should
be
over
I
guess
the
next
three
to
six
months
in
terms
of
like
what
do
we
want
to
get
in
and
the
ones
that
I
am
most
aware
of
are
like.
We
have
to
do
entity
3
and
C
3
I
should
write.
It
would
be
nice,
I
think
to
get
the
cluster
API
going.
A
We've
sort
of
had
to
pull
a
couple
of
features,
because
we
haven't
been
entirely
happy
with
what
our
roadmap
would
look
like
versus
the
cluster
API
roadmap
and
add-ons
Adam
and
machines.
A
guy
is
part
of
the
lesser
API,
I
think
and
then
the
add-ons
I
get
currently
when
we
Rev
an
add-on
so
like.
If
there's
a
new
version
of
calico
or
whatever
it
is,
then
we
have
to
do
a
new
version
of
cops
and
it'll
be
great
to
break
that
link.
A
So
that's
going
on
and
then
generally
some
sort
of
projects,
health,
things
I
think
Eric
helped
me
with
that
builds
for
1:10,
zero
and
we'd
love
to
get
that
be
more
automated,
so
we're
going
to
talk
to
the
CNCs
folk
about
getting
it
onto
their
infrastructure,
in
particular
for
hosting
that
the
artifacts.
So
that's
that
will
that's
sort
of
the
well
the
builds
have
to
automated
and
we
have
to
be
able
to
put
them
somewhere.
So
that's
that
would
be
good
and
then
camera
view
pointed
out
to
me.
B: I mean, it's not too bad, but there's only one person in the world who knows how to do it, and I have seen it done, and during that time there were 20 times when it was like, oh, this might not be working. So it definitely would be great to get it all more automated, and I'm sort of trying to help lead that charge on that fight with Justin. So I'm happy to help as well if you guys need it.
A
I
think
like
getting
the
final
stages
of
the
build
over
to
Basel
will
make
our
lives
much
better,
like
we
had
an
issue
where,
like
dr.
R
was
like
misbehaving,
and
it
would
be
nice
to
have
some
in
theory,
more
reliable
builds
that
Basel
gives
us
so
that
that's
something
I'm
gonna
try
to
tackle
I,
think
I.
Think
that
means
for
the
last
remaining
problem
is
our
weaves.
We've
entered
we've
mesh
library,
which
needs
a
go
built
tag
and
go
a
Basel.
A
Go,
doesn't
support
built
tag
so
I'm
just
gonna
send
a
weird
PR
to
like
differently
vendor
the
weave
thing
or
forked
in
or
copy
and
paste
the
code.
I.
Don't
know,
I'll
figure
it
out
in
terms
of
licenses
and
things,
but
that
sort
of
thing
will
help
for
the
automate
builds
all
right.
That's
should
we
any
other.
Anyone
else
wants
to
add
anything
to
the
list.
Yeah.
D: I'd like to add something. The AWS instance types are currently hard-coded into kops, and if you want to add a new instance type, let's say the r5s right now, you have to wait for a new release. Is there any way we can move that out into, like, a secret or some sort of external source, just move it outside of kops, so that we could just update those instance types and have it work?
C: So I wrote a generator. There are a lot of problems with the way we have it right now: for one, I spot-checked a couple of them, and some of the data that we had in there was actually wrong, probably just user error, you know, converting ECU units to vCPUs and all that stuff. So I made a generator, and I just rewrote it yesterday to actually regenerate that code. But your point is valid; it is a bummer. I think one thing we could do is pull that out into, like, channels, the same thing we're doing with the add-ons, so that we can just update it whenever we need to. I have a PR open that will basically automate some of that, which we could use to move it to something more automated, but you have to pull it from the pricing API; that's how I did it, at least. So I don't know, I think there's definitely a lot of room for improvement there, so I agree, yeah.
D: We're doing something similar to that with the cluster autoscaler, where we're pulling that from the pricing API with a generator, yeah. But there's also another thing that kind of needs to be added: with the AWS CNI plugin you need to set max pods, and that's not something you're going to get from the pricing API. So if we want to add that to the instance types, it's not going to be generated like that; we need a better way to do that.
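A rough sketch of the kind of generator being discussed, assuming Python with boto3 against the AWS Pricing API; the filters and output format are illustrative, not the actual kops generator, and as noted above this cannot produce values like the AWS CNI max-pods setting:

```python
# Sketch: pull EC2 instance type attributes from the AWS Pricing API instead
# of hard-coding them. Assumes boto3 credentials are configured; the output
# format is hypothetical, not the real kops machine-types file.
import json
import boto3

pricing = boto3.client("pricing", region_name="us-east-1")  # Pricing API endpoint region

pages = pricing.get_paginator("get_products").paginate(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
    ],
)

types = {}
for page in pages:
    for item in page["PriceList"]:
        attrs = json.loads(item)["product"]["attributes"]
        name = attrs.get("instanceType")
        if name:
            types[name] = {"vcpu": attrs.get("vcpu"), "memory": attrs.get("memory")}

print(json.dumps(types, indent=2, sort_keys=True))
```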
A: It doesn't feel too onerous to have to regenerate a file, you know, when we see the announcement that there's a new instance type. The fact that the API might be wrong is a little bit of a fly in the ointment and such. I can also see us putting in some shim provider or something that we want to get anyway; that's a given.
A: The manifest itself might change, and so it would be great to have the manifest pulled out. Then I think the next step would be: if a user or a cluster administrator wants to change things, how do they go about doing that? That's where I think kustomize comes in, but I think that's sort of the third step. Okay.
A: Which are you referring to? I'm imagining that the reference manifest will be something more like the channels file, where it's outside of the kops code. Right now, for example, the manifests for most things, most of the add-ons, are baked into kops, and it would be nice to have that be updatable, like we can update an image. Okay.
A: I'm sure there's a lively discussion about whether the GitHub repo is the right place to pull from, but some form of HTTP or HTTPS addressable source, that is not the binary, I should say. Okay, yeah, lots of people have very justified issues with that; GitHub can be a little... they can rate limit requests, yeah.
A: But it's less bad because it only happens on cluster create, probably, so it would not necessarily happen, you know, every time an instance boots. And I imagine, if we need it when the instances boot, which I'm not sure we do, we would end up copying it to the S3 state store, to a bucket, or possibly the instance metadata, but yeah.
G: I figured we'd throw it under that lifecycle, Ignore or Warn or whatever, so that's good. Anything else?
A: But if you want to do this, you could create an API type, a CRD, for SSH keys and the users that are able to log into your clusters, and you would have a DaemonSet that watched that API type and went and wrote the SSH authorized_keys file. You would possibly create users as well, but it feels like we're sort of getting into a whole user management type of thing.
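A minimal sketch of the agent just described, assuming Python with the kubernetes client; the SSHKey CRD group, version, plural and spec fields are hypothetical, purely to illustrate the watch-and-rewrite loop, not an existing kops API:

```python
# Sketch of a DaemonSet agent that watches a hypothetical SSHKey CRD and
# rewrites the node's authorized_keys file. The group/version/plural and the
# spec field (publicKey) are made up for illustration.
from kubernetes import client, config, watch

GROUP, VERSION, PLURAL = "access.example.com", "v1alpha1", "sshkeys"
AUTHORIZED_KEYS = "/home/admin/.ssh/authorized_keys"

def sync(keys):
    # Rewrite the whole file from the current set of known keys.
    with open(AUTHORIZED_KEYS, "w") as f:
        for name, key in sorted(keys.items()):
            f.write(f"{key}  # managed: {name}\n")

def main():
    config.load_incluster_config()  # running inside a DaemonSet pod
    api = client.CustomObjectsApi()
    keys = {}
    for event in watch.Watch().stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
        obj, kind = event["object"], event["type"]
        name = obj["metadata"]["name"]
        if kind == "DELETED":
            keys.pop(name, None)
        else:
            keys[name] = obj["spec"]["publicKey"]
        sync(keys)

if __name__ == "__main__":
    main()
```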
A: I hope so... I think with etcd-manager that will be... it is certainly possible; I haven't actually tested that in detail yet, and it depends. That sort of thing is the sort of thing which should be possible with etcd-manager. We are likely to run etcd-manager in the near future with a configuration that is very conservative and tries to stick to zones, so I think the most likely answer will be to try a different instance type in the same zone.
A: Assuming you could get a snapshot of the volume, which is a pretty big assumption if the zone is down, you could maybe try to do some magic there. I don't think... I think the answer is going to be that etcd-manager will probably let you, at some stage in the future, restore from... well, you'll have to do it, you'll have to run a command.
A: You'd have to run a command to restore from an etcd backup. etcd-manager creates etcd backups in your S3 state store, and we should create a command that lets you recover from one of those. It's not going to be automated, because backups only happen every five minutes, and so you will always lose some data in that scenario, so it has to be...
A: Someone has to push a button, but we can certainly make it push-button easy. Okay, but I think it only applies to non-HA clusters, because as long as your... I guess it also applies to an HA etcd cluster where you have four zones and you want to switch from one zone to another, I guess, but yeah, we can start with the simple case. All right, but yeah, it's a good one to think about once we get etcd-manager to do backup and restore, and DR around zonal failures. Okay.
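A minimal sketch, assuming Python with boto3, of how an operator might list the etcd backups sitting in the S3 state store before triggering a restore; the bucket name and key prefix are hypothetical and may not match the exact layout etcd-manager writes:

```python
# Sketch: list etcd backup objects in the kops S3 state store so an operator
# can pick one to restore from. Bucket name and key prefix are assumptions.
import boto3

BUCKET = "my-kops-state-store"                    # hypothetical state-store bucket
PREFIX = "my.cluster.example.com/backups/etcd/"   # hypothetical backup prefix

s3 = boto3.client("s3")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        print(obj["LastModified"].isoformat(), obj["Key"])
```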
F: I guess... I just started working on OpenStack and will be implementing things this week, so I just had a couple of questions around the autoscaling requirements. First off, I noticed both AWS and GCE have the autoscaler as kind of part of the core operational flow, and I was curious: is that a hard requirement for all providers, at least sort of longer term?
A: Okay, so we use a lot of auto scaling groups. The auto scaling groups do two things, right: they both, in theory, do auto scaling, so resource-based dynamic reconfiguration of the number of instances, but they also do disaster recovery. So if someone switches off all of AWS and turns it back on again, with an auto scaling group your masters will come back.
A: So that's sort of the real root reason why we use them for the masters, if you like. A fun trick is always to take your kops cluster and shut down all your instances, and it will come back; it might take a couple of minutes to come back, but it will come back. The auto scaling group launches the master, and it finds its volume and reattaches it to itself, and everything should recover.
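A minimal sketch, assuming Python with boto3, of the find-the-volume-and-reattach idea described above; the tag key and device name are illustrative assumptions, not the exact tags kops uses:

```python
# Sketch of the "master finds its etcd volume and reattaches it" behaviour.
# Tag key/value and the device name are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def attach_etcd_volume(instance_id, cluster_name, zone):
    # Find an available etcd volume for this cluster in the instance's zone.
    vols = ec2.describe_volumes(Filters=[
        {"Name": "tag:KubernetesCluster", "Values": [cluster_name]},  # assumed tag
        {"Name": "availability-zone", "Values": [zone]},
        {"Name": "status", "Values": ["available"]},
    ])["Volumes"]
    if not vols:
        raise RuntimeError("no available etcd volume found")
    vol_id = vols[0]["VolumeId"]
    ec2.attach_volume(VolumeId=vol_id, InstanceId=instance_id, Device="/dev/xvdu")
    return vol_id
```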
A
So
that's
that's
nice
and
that's
sort
of
the
root
reason
when
we
do
when
we
do
the
cluster
in
the
machines
API,
we
we
can
rely
on
on
the
cluster
API
for
launching
nodes,
so
you
won't
necessarily
need
a
no
reason
have
an
auto
scaling
group
for
them
for
the
nodes
there,
because
the
if
the
master
is
up,
it
will
launch
new
nodes
to
replace
any
notes
that
have
been
lost.
I
guess,
that's
sort
of
forward-looking
because
I
don't
know
if
it
actually
happens
yet,
but
that
that's
sort
of
the
requirement
there.
A
The
challenge
is
going
to
be
for
your
master
instance
like
what
do
you
do
if,
if
someone
shuts
down
the
master
instance
and
that's
where
I
know,
a
skidding
group
could
be
useful,
I
don't
know
if
there's
some
alternative,
you
know,
like
an
alternative,
might
be
a
lambda
function
that
like
relaunches
a
master
if
there's
no
master
or
I,
don't
know
something
like
that
or
like
a
protected
instance.
I
think
GCE
has
protected
instance,
yeah.
F: I guess the reason I asked is that node auto scaling in OpenStack is typically handled by Heat as the component, and that's not always available on all installations of it. So I was curious whether it would be possible to do that maybe on the kops side of things, or if that needs to be on the cloud provider side.
A
I
think
once
we
get
the
cluster
API,
it's
not
gonna
be
a
problem
for
four
nodes.
The
challenge
is
just
gonna
be
figuring
out
what
you
what
should
happen
when,
when
you
lose
all
your
masters
right
like
even
if
you
just
have
one
master,
you
could
presumably
use
the
cluster
API
to
launch
more
masters.
But
when
you
lose
all
your
masters,
I
guess
the
question
is
what
happens
and
I
guess
a
good
answer
could
be
tough.
You
relaunch,
like
the
two
cops
right
run,
run
cups
update
cluster.
A
It's
not
it's
not
the
end
of
the
world
like
it
shouldn't
happen.
That
often-
and
so,
if
it
does
happen,
then
that's
fine
and
so
I
think
and
I
think
that's
what
we
had
some
vSphere
implementation
code,
I
think
pursued
that
and
I
think
it's
I
think
it's
acceptable,
but
it
it
depends
if
you
yeah
I,
think
if
you
were
writing,
if
you're
writing,
one
now
I
would
probably
pursue
that
pattern.
To
be
honest,
I
would
I
would
assume
that
the
cluster
API
is
coming
and
and
do
that
so
I
think
I.
A: So I think your masters you would launch as instances, and you would assume that OpenStack keeps them up, or at least... yeah, okay, we launch them best-effort, and then I would use the cluster API, the machines API, for launching the nodes. Okay, cool, thanks. And are you familiar with the cluster API? It meets every Wednesday, sometime noonish East Coast. It's still relatively early, but I think people are starting to make progress in terms of actual implementations on alternate platforms.
H: I hadn't seen your comment... yep, thanks. But yeah, actually I should back up. What I really tried to do was change the instance type to something that was twice as big and reduce the number of nodes by half, and so, naively, I just changed that in the YAML file and ran a kops update, and it immediately killed, you know, half my instances, because I had just reduced the minimum or maximum to half of what it was, right.
A: Yeah, that makes more sense with the context you present. So, in Kubernetes, kubectl delete node just unregisters it from the API, and it doesn't do anything in AWS. There's been chat about, like, some form of controller that goes and, you know, syncs...
A: It's a little contentious, because maybe we don't want to go look at what's going on there, but certainly you could have a controller that goes and implements that; my guess is that will really happen as part of the cluster API, the machines API, again. Your actual use case, though, seems fair, and the reason... so, you're right that most of these updates go through a rolling update, which would do the right thing. So if you had just changed the instance size, it would have done the right thing.
B: It would force... yeah, it would force you to do a rolling update at that point, and then your instances aren't going to get killed until they've been drained. But...
A: It's a little unpredictable, so it's kind of weird. You do have a good point, though, because I think it's pretty reasonable to expect kubectl delete node to do something useful, or at least there should be some way to do it. I don't have a proposal, but it is an issue; it is confusing at the very least, when we...
B: Because it goes through... because you're expected to use the cluster autoscaler, I guess, which would force that. So the cluster autoscaler manages that: you can set the min and max through the kops instance group and then have the cluster autoscaler set your desired capacity, which is definitely confusing.
A: Makes sense, and I think what we were doing in kops with the resizing is obviously suboptimal at best, and yeah, this issue is a good one; it makes a lot of sense. I don't think you're doing anything wrong; I just don't think we have a good answer right now for what the right way is, I guess.
A: So if you had to do it, I would probably say you keep the min and max the same, so you're able to explicitly set them, and then you look at it like: I have 20 nodes right now, I want to get down to 10, so I go... yeah, it temporarily involves going to more nodes, I guess. I don't know, it's a mess, it's tricky. I was going to say, one option would be: you change to the new instance type, set min and max of 20, and then reduce min and max.
C: Stepwise, yeah. No, I mean, that's the way I do it; I rotate instance sizes all the time, so you can do it, but yeah, it's multiple steps the way it is today. And, you know, if you're running the cluster autoscaler, that makes your life way easier, because then you roll over your instances and, yeah, you pay for an hour or two of the extra instances, but then your autoscaler scales you down when they're not being used.
B: You can go ahead, Eric. Oh, so, I wrote a blog post actually, or rather a couple of my colleagues wrote a blog post, that's about a slightly different thing, but I think it would probably help you to understand how things are working in terms of draining, coordinating and the rescheduling stuff. It's called something like the best-est cluster upgrade, and I'll post it into the channel so that you can take a look at it.
A: Thanks. No, yeah, this is something to think about in terms of our goals, and it might be that this is part of the cluster API and machines work, although I don't know, we'll see. But certainly, today, what happens when you're just adjusting the counts is that all we do is send that to the AWS auto scaling group, and we don't do anything smarter, and that is, I think, because it predates some of the drain code. So yeah, at the time...
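A minimal sketch, assuming Python with boto3, of what "just send the counts to the AWS auto scaling group" amounts to; the group name is illustrative and this is not the actual kops code:

```python
# Sketch: pushing instance-group counts straight to the AWS Auto Scaling group,
# with no draining logic. The group name is illustrative.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

def set_counts(group_name, min_size, max_size, desired=None):
    params = {
        "AutoScalingGroupName": group_name,
        "MinSize": min_size,
        "MaxSize": max_size,
    }
    if desired is not None:
        params["DesiredCapacity"] = desired
    # The ASG terminates surplus instances immediately; nothing here drains pods first.
    asg.update_auto_scaling_group(**params)

set_counts("nodes.my.cluster.example.com", min_size=10, max_size=10)
```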
E: Yeah, I don't want to take too much time on that, but I think the discussion was huge. If you follow the kubernetes/kubernetes repo, people are experiencing these weird five-second delays trying to resolve DNS, essentially, and not only in kops, but I feel like it's important to discuss this here as well. So it's Kubernetes in the end; actually, it's not even Kubernetes.
E: People have this timeout and they're going to want an explanation. And the other thing is, there's this issue, which has been open since about ten days ago or something in the kops repo, regarding how to implement the workaround, which some people are implementing, and yeah, this is where I wanted to go. So essentially, the problem is that to implement the workaround we have to somehow run kube-dns on every node, so this smells like a DaemonSet, and we have to do it in a way that we don't use the service IPs.
E: So, the pod IP story, which means we need something running with, like, host networking and a static IP or something; yeah, we need something like that. There are a couple of workarounds proposed, but most of them require patching AMIs or something, which, my personal opinion, is not what we should do. We can, of course, bake it into the kops standard AMI, but then we would lose some flexibility. I don't know how likely it is; one patch was merged into, I don't know, the kernel or something, I can't recall.
E: But it is not perfect yet, and I believe it would take a while to be kind of okay, so that we'd be safe and not have this problem anymore. The nice thing is that we have a bunch of blog posts about it and an easy way to reproduce it, so it's not really a mystery anymore, and this is why I think it's worth discussing and sharing ideas about what we want to do next to address this, yeah.
A: So the idea is to enable people; this stuff was getting in the way of people who want to try these things, and we can hopefully figure out something that works and get a better, more automated fix into 1.11. But yeah, the feature flag is there; I can't remember what we called it, but it's experimental cluster DNS or something like that. If you set the feature flag, you can set the kubelet cluster DNS IP to whatever you want, and then, if you have a daemon set, you can try to interact with that. There is a nice repro. I personally have not been able to see the conntrack issue, though I have seen DNS failures, particularly on one cloud and not another cloud; if I can name names, the more commonly used cloud seems to have more issues, we have a higher rate of this test failing there than on the other one, and I don't yet know whether that's because on GCE with kops we're running, like, the COS image, or whether it's a different configuration, or whether it's the networking layer. But it does seem that more people have observed it on AWS; I did see someone on GCP, but not as many as I saw on AWS. So there's something going on; it's not entirely clear to me what it is.
C: There's that issue where Thomas wrote a tool to test this, and, you know, I was able to reproduce both the conntrack one and the other. But I tried every CNI provider we have, and even the Amazon one still had issues. I've tried switching the DNS to CoreDNS, and in the end I got around it by switching all of our repos from Alpine back to Ubuntu, because of the musl libc issue that they have with conntrack. So, I mean, yeah.
A: I've got an unloaded cluster, just, you know, nothing but the standard add-ons, and I put one pod on with the test workload that Thomas wrote, and it's doing 100 requests per second, and you see it basically going off every minute or so on AWS; certainly within five minutes you'll see it. And for me it wasn't in conntrack. This is with kubenet, like, there's no CNI, so I just don't know.
A: Yeah, I mean, the obvious downside of running kube-dns on every node is that kube-dns is not very lightweight. It watches every service, and possibly some more, and it keeps an in-memory copy, and so I think people are thinking about running an agent that effectively would give you some of those benefits while forwarding to a real, heavier kube-dns.
A: And I think what's missing... I think there is also a proposal, I think in the kops issue, to run dnsmasq on the local machines, where I think the benefit comes from caching, but honestly I don't really know. If someone wants to tell me how to get the conntrack failure to happen, that would be super useful; enough people have seen it, with CNI providers or just generally, that I might just be looking in the wrong place. I don't know.
E: But, I mean, in general it's pretty good that at least we're talking about it, because this was one of those hidden secrets of Kubernetes that only masters of production knew, and at least it's now getting out over all the channels and gaining some visibility on the internet. So that's good; this is the way to fix things, right. I mean, even though it's not properly a Kubernetes problem, we have seen it all around the world, so yeah.
A
It
does
feel
like
it's
gotten
worse,
I,
don't
know
why
that
would
be,
but
maybe
just
like
people
are
starting
to
notice
it
more
I,
don't
know
right
to
get
a
fix
into
110,
but
I
feel
like.
We
got
two
cops
110,
but
we
got
in
the
thing
that
lets
people
with
run
experiments
at
least
and
get
cops
out
of
the
way.
I
hope
that's
it's
done
better,
but
that's
what
we
that's
we're
able
to
get
him
and
say
yeah.
A
If
we,
if
we
were
able
to
keep
experimenting
and
figure
it
out,
I
guess:
I
guess
this
could
be
a
signal.
Network
topic
in
terms
of
the
underlying
issue
and
I
think
there's
actually
someone
that
is
one
at
Google
working
on
like
looking
at
the
space
of
like
what
should
you
know
which
what
should
run
and
every
way
we're
doing
local
notation?
What
would
it
look
like,
but
I
don't
know
if
they're
in
top
physical
work
or
weather
climates.
A
One
more
item
on
the
agenda:
Rodrigo
is
cops.
Gcp
GA.
It
is
sadly
not
GA,
because
I'm
trying
to
forget
whether
to
do
IP,
aliases
and
I
think
I
pretty
should
do
IPA.
This
is
and
I'm
gonna
write
that
down,
because
it
will
be
a
good
thing
for
me
make
sure
it
works
cheap.
He
needs
UCB,
IP
addresses
and
GA,
but
yes,
it
is
currently
not
and
it
should
be
I
just
had
people
ask
yourself
yeah
it
to
be
clear:
it
works
the
issue.
A: We're in alpha, so we're allowed to do that. I don't know whether we'll find out who is actually running kops on GCP and how they feel about it, like how often they're turning over their clusters and things; migration is the hard bit, or would it even be a hard bit? So it would be nice to just turn on IP aliases and just make it GA.
B: I think we have it in our instance types, in our machine types, don't we?
B: I don't think we do; I don't think we do either, that's a good point. I don't know that anything special needs to happen, but yeah, we need to fix that up. I have, on that, the validation thing: my PR for validation on NVMe instance types; there's no NVMe information, and it would be nice if we had some metadata about these things anyway. I have to run to some other meetings.