From YouTube: Kubernetes kops office hours 20201204
Description
Recording of the kops office hours meeting held on 20201204
A: Hello everybody, today is Friday, December 4th. This is the kOps office hours bi-weekly meeting. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder that this meeting is being recorded and will be put on the internet, so please be mindful of our code of conduct, which essentially boils down to being a good person and using the raised-hand feature if things get crowded or contentious; otherwise we can probably just play it by ear. There is an agenda, and I've put a link to it in the chat. Please do feel free to add your names and add any items you would like to talk about, and that way we can do our best to get through them all. There are a fair number of things on there already, actually; my browser is just catching up. I apologize. There we go.
B: That's done and mostly works, I guess, so we're in a pretty good spot. I guess we will discuss more about 1.19.
A: At the end, right. There are some known issues, or known caveats, which we deliberately went out with, knowing that they were not going to be fast to resolve. But that's wonderful; thank you for doing that.
A: Okay. You also have the first item on our agenda, which is around travis-ci.org shutting down and travis-ci.com spinning up. Do you want to talk a little about that?
B: So, pretty much as last time: the problem is that Travis CI, whether .org or .com, has started rate limiting builds. A month ago (well, two months ago) there was no limit, and there were spikes up to about 1,000 concurrent builds.
A: From my point of view, I agree with that. I did do an experiment where it is possible to bring up a GitHub runner on an AWS ARM instance, so that is possible. It's very heavily discouraged, though; there are these big warnings saying "don't do this against public repos," so we might not actually want to do that. But there is a fallback position there which is vaguely tenable. And I think the other thing is, we already have some e2e tests that run on ARM, or run the server side on ARM, and I added another one and found an issue I introduced, which I apologize for; sorry about that.
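For reference, the experiment described above follows GitHub's documented self-hosted runner setup. This is a sketch only: the release version, organization, repository, and token below are placeholders, and (per the warnings mentioned) this should not be done for public repositories.

```shell
# Sketch of registering a self-hosted GitHub Actions runner on an
# ARM64 instance. ORG/REPO and REGISTRATION_TOKEN are placeholders;
# the runner version shown is illustrative.
mkdir actions-runner && cd actions-runner
curl -o runner.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.274.2/actions-runner-linux-arm64-2.274.2.tar.gz
tar xzf runner.tar.gz
./config.sh --url https://github.com/ORG/REPO --token REGISTRATION_TOKEN
./run.sh
```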
A: So the missing bit of coverage is that we don't have tests of the client running on ARM, effectively, and we don't have tests that building on ARM itself (versus cross-building) works, but that would be a Go bug. So yeah, I agree that we probably don't need to prioritize this; the value we're getting out of it compared to the pain is relatively small now that we have the ARM e2e tests, I'd say.
B: This should be interesting now that John's fix is merged, so we can test against master again, Kubernetes master.
A: That's wonderful, yeah; I agree with that. We got rid of it in etcd-manager as well, as you saw. So I think it makes sense to just turn it off, and if we hit a problem with it, if we find something we haven't covered, then we can figure out how to cover that case, I'd suggest.
A: All right, unless anyone else has anything else on that, I suggest we move on. The next item is also yours, I think; you wrote them, and I guess we go in order (we should randomize these next time). But do you want to talk about the Azure PR?
B: I think Peter found some things, and I found some things, so there are various minor things. Because Kenji is not that familiar with the kOps code, maybe it's good to go through it a bit and clean it up before merging, but that's about it. Indeed, it's clean and pretty good. Okay, right.
A: We merged John's fix. I think we are all a little... we want to test things like upgrades, right? Is that fair, John?
A: And yes, we need to test it itself and we need to test upgrades as well, and I don't think we currently have any upgrade testing at all. I know, Peter, you were looking at that, right?
D: Yeah, that's in an open PR somewhere. Cool! Well, my upgrade testing is not, but the framework is, eventually, yes.
A: So, yes, we'll agree that it's good. Thank you for the fix, John, and we will... well, we need more miles on that fix now.
E: There's still the 1.18 release that we need to re-release. I think Cyprian was going to land some fixes on that one, or maybe backport some of the fixes he had. I'm not sure if that happened already or not.
B: It didn't, because the problem was that I found other issues in master, so until that one was fixed it wasn't something to backport. They were related: because of the mix of target groups created by kOps and those external ones that can be attached, it was really a pain to distinguish between them.
B: The initial fix, not allowing attaching the same target group to multiple instance groups, was crashing, so I had to fix that in a way that doesn't break the NLB PR. Anyway, that is fixed, and probably next week I will have some time to look at it and do the backport, but I would like some review from Rodrigo and Peter, if that's okay.
A: Okay. I wasn't clear when we were going to schedule 1.18.3 then, but we have a blocker, so we can talk about that in the last section. I don't know if Ole is here; Ole has a bunch of, or three, topics in a row, and I don't know if anyone can speak to any of them. The first one is concerns about the cert-manager bindata size.
B: I can talk about that. Ole is trying to add cert-manager, as we discussed in previous meetings, and one of the things I noticed (sorry, that I noticed) is that cert-manager is huge. Don't ask me why; let's leave it like this. The bindata size is increasing by 1.6 megabytes.
B: I don't say that's a blocker, because it will probably get compressed or something at some point in the build, but maybe someone else has some other issues. My biggest issue was that bindata is already a pain to get merges into, and adding this would make it four times bigger.
A: Yeah, I guess there are at least two concerns, right? There's the size of the kOps executable, and maybe even nodeup (I don't know if it makes it into those, which would be even more of a concern), and then there's the source code and developer experience, the contributor experience type thing.
C: Well, and we also made it vendored instead of created at build time, because we wanted kOps to be consumable by other projects as an import.
A: Right, which I think is important; a lot of people have said they do that. So that's one of the reasons not to put it into Bazel, right? Yeah. I mean, this will eventually get addressed by the long-promised add-on work, but I don't want to hold up this PR for that. I think it's a good thing to raise. I don't know why it's so big, but it's lots of CRDs.
A: I suspect we just have to put up with it. There is a trick, though: I think we can ask go-bindata to compress. I think it might make the merging even worse, however.
A: I think realistically we might have to live with it until we can get to an add-on type model. Obviously, I'd like to work more on that.
C: Yeah, we're going to get conflicts in any case; you might as well compress.
A: I think that's actually reasonable, and I feel like we sort of expect conflicts. We sort of expect only one manifest change to be able to land at once, which is not great, but that's the broader problem we're trying to tackle through add-ons or another mechanism. I don't want to fix it on this PR, but somehow we want to get these manifests to where they don't have to go through a binary build of kOps.
C: Well, it sounds like we want to go ahead, then, and we'd welcome a PR to compress the bindata.
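For context on the compression trick mentioned above: go-bindata gzip-compresses embedded assets unless told not to. This is a sketch only; it assumes the kOps bindata step currently disables compression, and the package name and paths shown are illustrative rather than taken from the kOps Makefile.

```shell
# Sketch: force go-bindata's (default) gzip compression on when
# generating the embedded models. Flags and paths are assumptions.
go-bindata -nocompress=false -pkg models \
  -o upup/models/bindata.go upup/models/...
```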
F: Justin, I have another meeting I have to jump off to in a minute. Do you mind if I grab my two items? They should be quick, so I just want to raise these two. I think Hakman has a PR addressing part of #9982, which is that Terraform resource names can't start with a digit, and we didn't catch that for 1.18.
F: I think before we cut a 1.18.3 we should at least document how to address that for upgrades, because it was missed as part of the documentation. And along with that, there is a bug in the currently bundled etcd-manager: if you're using the metrics URLs, restore doesn't work. That's been fixed in master, but we should build a new etcd-manager version and bundle that in the next 1.18 release, because that really is bad.
A: Yes. The issue, for people that are not familiar with it, is: if you set the etcd listen-metrics URL as an environment variable, etcd will listen on a port, which is great, except that the environment variable was also passed into the second etcd process, which is spun up to do a restore from a previous version.
A: So if you are doing that, the second one will fail to start because it has a port conflict, and so the fix is simply not to pass the listen-metrics URL into the second process, which is sort of an internal process anyway and shouldn't really be listening or reporting; those aren't the metrics you care about. So yes, Ryan fixed that, and we need to get that into the next 1.18 release in the least risky way possible, I should say.
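The fix just described can be illustrated with plain shell. This is not the actual etcd-manager code, just a minimal sketch: the parent keeps the metrics environment variable, while the spawned restore process runs with it stripped, so the two etcd processes no longer collide on the metrics port.

```shell
# Illustration only: ETCD_LISTEN_METRICS_URLS stays set in the parent,
# but is removed from the environment of the "restore" child process.
export ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:8081

# Parent still sees it:
echo "parent: ${ETCD_LISTEN_METRICS_URLS}"

# The child runs with the variable unset, via `env -u`:
env -u ETCD_LISTEN_METRICS_URLS sh -c \
  'echo "restore child: ${ETCD_LISTEN_METRICS_URLS:-unset}"'
```

Running this prints the URL for the parent and `unset` for the child, which is exactly the property the fix needs.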
B: He helped with it and we debated a bit. There are two aspects: one is how we build the names, which is what the PR is fixing; the other is how we deal with the people that have already run into this issue.
B: I don't understand exactly what the consequences are of just changing the names to the correct ones. I mean, I wouldn't go changing the names of the etcd members.
B: In general this is not a trivial thing, like "go rename, run, and move on," at least not without looking more into it. I can say that this was an easy fix, and this is why I did it, to unblock these people, because it took like 30 minutes to do it all. But finding ways to go around it is a bit out of the scope of what I would like to do here.
A: As I understand it, the only real blocker is just Terraform changing their rules, and the only rule is on the name. I don't know to what extent it is possible to just change the Terraform resource name, because that name has to change anyway for the people that are affected.
D: Yeah, I think we can change that. Is it an EBS volume that is having the issue?
D: Yes; that would require a lot more Terraform state moving of resources.
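The "state moving" just mentioned is Terraform's mechanism for renaming a resource address without destroying and recreating the underlying object. A sketch, with entirely made-up resource addresses (the real ones depend on the cluster's zone and volume names):

```shell
# Sketch: move the state entry for a resource whose name starts with a
# digit to a valid name, then confirm the plan shows no changes.
terraform state mv \
  'aws_ebs_volume.us-east-1a-etcd-events' \
  'aws_ebs_volume.ebs-us-east-1a-etcd-events'
terraform plan
```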
A: Yeah, all right. I'm happy to look at that, unless someone else wants to take it on.
A: All right, we're about halfway through; we'd done the first of Ole's concerns, or issues. We just finished talking about the bindata. The second one on the list is "use kops-controller to sign webhooks, or a CA issuer." I don't know if anyone can speak to that, or maybe these are just requests to bring up these PRs; I don't know.
C: So yeah, I think I have to look over it and figure out all this stuff, because some of these things... the usual thing is to create a CA per webhook on install, and then other things vary. Because if you can specify the CA, you just generate a CA cert and have it issue a single thing, and then that's your PKI for that component; but other things need more complicated stuff.
A: Thank you. And then the third one on our list is Terraform concerns with reconciling SG rules; I assume that's security group rules. I'm just clicking through.
A: You know what I mean: the security group was not a top-level resource, and so if you rename a non-top-level resource, what does Terraform actually do? Does it actually figure it out, or does it treat the rename as a delete and recreate?
B: He had a good reason for renaming the security groups. It would probably have to go a bit further than this PR and the previous one, but unless it's really broken I think we should do it, even if it means some seconds of downtime. I guess: how long does it take to activate new security group rules?
E: The problem, too, is: if Terraform does delete the old security group, it does not attach the new security group to any existing instances. It will set it on the autoscaling group, but it won't go to every single instance that the autoscaling group has created and reset the security group on those instances.
A: Right, yes; it looks like it's just the rule, so it's weird. I think we need to do some Terraform experiments to find out what on earth Terraform decides to do in this case.
A: All right, I can take a look at that as well. The next one is mine. There's an issue here... no, I have a pull request, actually. Ubuntu 20.10 is the latest, and on some clouds, like GCP, it looks more prominent than the LTS version, 20.04. So I tried running it, and of course it didn't work, because we don't support Ubuntu non-LTS versions.
A: Nodeup doesn't recognize it and says, "I don't know what this is," and the question is whether we should try to support those. I think there was also a discussion about whether, when 21.04 comes out, or when the next LTS comes out, we would remove support for 20.10, the previous non-LTS.
A: I'm not wild about that, because I don't want to make people use a particular version of kOps just because they love Ubuntu 20.10, for example. But then this does mean that effectively we end up supporting a new version of Ubuntu every six months instead of every two years. And the behavior today is not great: we can't detect it in advance, because we don't know what's actually in an image; there's no information, or there's insufficient information.
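On detecting it up front once the node is booting: Ubuntu LTS releases are the even-year `.04` versions (18.04, 20.04, 22.04, ...), so a nodeup-style check could at least fail fast on anything else. A minimal sketch; on a real node the version would come from `VERSION_ID` in `/etc/os-release`, and the helper name here is made up.

```shell
# Hypothetical check: classify an Ubuntu VERSION_ID as LTS or not.
# LTS releases are YY.04 with an even YY.
is_lts() {
  case "$1" in
    *.04) [ $(( ${1%%.*} % 2 )) -eq 0 ] ;;
    *) false ;;
  esac
}

for v in 20.04 20.10 21.04; do
  if is_lts "$v"; then echo "$v: LTS"; else echo "$v: non-LTS"; fi
done
```

This prints `20.04: LTS`, `20.10: non-LTS`, and `21.04: non-LTS` (21.04 is a `.04` release but an odd year, so it is not LTS).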
B: Well, we don't want previous versions of kOps to work with newer versions of Ubuntu anyway, right? Because we don't know what else may break, never having been able to test it. Right now, backporting it to 1.18 would be, let's say, doable, but doing it for 1.17 is a lot of work.
A: Yeah, it's a good point. We wouldn't necessarily have to support it in older versions; we wouldn't necessarily backport. And thanks to your work, Cyprian, the effort of installing Docker, for example, which was the big problem with supporting a new OS or distro, is much smaller now.
B: Formulated differently: we won't remove it, but we will not support it. If you get into trouble with it, you're on your own, because when the next release comes out, I think Ubuntu removes support; they don't release security patches or anything. So we don't want to be in the situation where someone comes to us and says "this doesn't work on my Ubuntu" when that Ubuntu is already unsupported by Ubuntu itself.
A
I,
I
think,
that's
a
great
position.
I
think,
like
I
think,
that's
a
good
one
like
saying
we
only
yes
ubuntu
only
support
lts,
but
we
we
should.
I
mean,
to
the
extent
we
support,
but
only
recognize
we
should
still
maybe
recognize
the
other
versions,
particularly
as
it
isn't
a
big
deal
anymore.
We
can
also
support
the.
B
Current
one
that's
released
now,
but
nothing
that
was
already
removed
well
end
of
life,
yeah.
A: Are you saying we would actively tell them this is unsupported? I mean, I think the recommendation should be to run on an LTS Ubuntu, or a Debian, or whatever it is, right, but...
A: I made a mistake when launching it on GCP; it works a different way there, and it was more that my experience when I made that mistake was pretty poor. That was the underlying issue. I also see in chat, by the way, Kevin says: one perspective data point, "we never run non-LTS because of security and compliance and stability concerns for us." I think that's the right recommendation.
A: Yeah, and I guess then the question is: if that's the stance, should we just not even bother adding support? Failing fast might have been the best thing for us to do in that case. Like, if the CLI tool told me, "Justin, you made a terrible mistake"... but it takes two minutes to tell me that I've made a terrible mistake, or two minutes for me to realize something's going wrong.
A
I
think
we
have
some
good
positions
I'll
mod
more
on
whether
we
should
try
this,
what
we
should
do
about
lts
and
whether
there's
some
way
not
lts
and
whether
there's
some
way
we
could
detect
up
front.
A
A
A: Anyway, all right, next item on the agenda. Eddie (I believe you're new; welcome): "add additional policies at instance group level." Do you want to talk to us about that?
G: Yeah, sure. That's a request I opened a few days ago, and someone suggested that I talk about it during the office hours, because it changes not a lot of code but a lot of files around integration tests.
G: This is probably coming from the fact that we create instance profiles per node role and not per instance group. So we have one instance profile for masters, one instance profile for nodes, and one instance profile for bastion nodes. This pull request changes that: it keeps the old instance profiles per node role, adds one instance profile per instance group, and switches from the node-role instance profile to the instance-group instance profile when additional policies are specified for a given instance group.
G: In the PR there were concerns raised around Terraform-based clusters, because in the first implementation I completely dropped the per-node-role instance profiles, and that would have broken cluster migration in the case of a Terraform-based cluster. So I changed the implementation and decided to keep both the per-node-role instance profiles and the per-instance-group instance profiles, because I suppose if a user starts specifying additional policies for a given instance group, they probably expect that this particular instance group will use a specific instance profile.
G: Even if we add the new feature, the behavior should still be the same unless you start using the feature. The number of files changed in the pull request is huge because, for the integration tests, it generates new files for the new policies and the new roles, so it makes a lot of changes. I think there are more than 200 files changed in the pull request, but maybe only at most 10 Go code files, and not a lot of lines of code, in fact. So here it is.
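For readers following along: kOps already has a cluster-level `additionalPolicies` field keyed by node role; the PR under discussion extends the idea to individual instance groups. The first document below shows the existing cluster-level field; the second is only a sketch of what the per-instance-group shape might look like (the exact field name and placement in the PR may differ, and the policy contents are illustrative).

```yaml
# Existing kOps field: additional IAM policies per *role*, cluster-wide.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": ["*"]
        }
      ]
---
# Hypothetical per-instance-group variant, as discussed in the PR.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: kiam
spec:
  role: Node
  additionalPolicies: |
    [
      {
        "Effect": "Allow",
        "Action": ["sts:AssumeRole"],
        "Resource": ["*"]
      }
    ]
```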
A: From my point of view, I think the use case is well understood. And there is the OIDC support coming, which may address some of it. The use cases I understand are things like different security groups...
A: ...in fact, different AWS security roles per node. Some of the OIDC support will enable pods to get their own first-class identities, which may address some of this use case, but I think there are still additional use cases for this in terms of security. Is it possible to not generate the per-instance-group objects if they are not going to be used? Because I suspect people may have objections to sort-of-dangling objects that are never used, and it would make the PR a lot smaller.
G: Probably yes, but I thought about the use case where someone starts using additional policies for a given instance group and at some point wants to remove them, and I think it would be the exact problem of migrating a cluster using per-node-role instance profiles, the issue that was raised with Terraform-based clusters. Like, okay, you dropped the old role, so you have new ones, but it cannot be applied until the cluster is rolled out, and then it's probably going to get stuck. So, in the opposite direction:
G: One starts using additional policies on an instance group; then, in the Terraform resources, there is this new role. Then you stop using additional policies, so Terraform doesn't have the role generated anymore. You try to apply, and then you are probably going to get stuck again, in fact.
C: So, Eddie, what are the use cases given IRSA, or is it that IRSA is too hard to do and this is a transitional thing?
C: So, given IRSA, you know, the OIDC work, being able to give IAM roles to individual service accounts: what are the use cases for this? Or is it just that getting IRSA working is too difficult?
G: The use case at first was that we wanted to run kiam on a dedicated instance group. Currently we are running kiam on the master nodes, and we wanted to isolate kiam on a dedicated instance group. So we had to allow this particular instance group to assume other roles for kiam to work, and potentially we wanted to be able to make use of the instance profile.
G: I can see other uses too, in the sense of being able to create an instance group and give special permissions to that instance group, because the workloads that are going to run on it need those permissions.
A: Yes. I mean, I think you raise a great question: if IRSA was an easy flag, would there be a use case? And maybe I'm wrong; the one that I was thinking of is just pulling from different registries, but maybe no one actually does that.
A: Well, we wanted a different profile for masters and for nodes; that was the big one, so that you didn't have to give nodes those rights. That's why the feature exists.
A: The argument, I think, is: if we complete IRSA, maybe the nodes need no permissions at all and everything can be done via pod service accounts. In other words, each process or pod effectively runs with the exact, correct permissions. So today the master, sorry, the control plane, has the permissions of the API server and the controller manager and some other stuff, kops-controller I assume...
A
I
assume,
and
so
it
would
and
dns
controller
each
one
of
those
would
actually
just
have
their
the
exact
permissions
they
needed.
So
it
would
be
a
more.
It
is
a
more
secure
stance
that
we
are
aiming
for,
but,
as
john
says,
we're
not
there
we're
not
quite
there
yet.
So
the
question
is
like:
do
we
want
to?
A
Should
we
should
we
do
what
you
thank
you
for
the
for
the
pr
it's
it's
great
and
I
think
if
it
was,
if
it
wasn't
for
the
the
complex,
the
the
extra
objects,
I
think
that
would
be,
we
would
probably
have
merged
it.
I
think
the
question
is:
should
we
pursue
that
or
should
we
pursue
making
individual
roles
for
service
accounts
irsa,
making
that
I
am
also
service
accountant.
C
Because
we
we
actually
run
with
well,
our
control
plane
has
per
node
permissions
because
we
have
an
irs
eight
that
but
everything
else
we
actually
have
a
lambda
and
we
take
the
nodes
down
to
one
role.
C: After bootstrap, we found there was one permission they have to have in order for the kubelet to be able to restart: DescribeInstances or something, yeah. They have to be able to describe instances.
A: Okay. Why don't we have another look at the PR and see if there's some way we can make it lighter weight in terms of those changes, because those changes are going to affect everyone. Then we can make a better, more informed decision about whether we should double down on IRSA or find some way to make this less heavyweight, because it is a good feature.
A
Is
your
name
eddie
eddie,
but
it
is
a
good
feature,
but
we
want
to
make
sure
that
the
weight
is
is
is
worthwhile.
There
are.
Oh,
I've
got
a
ping
on
the
time,
so
I
will,
if
it's
alright
I'd
like
to
carry
on
so
we
can
try
to
cover
the
last
topics.
A
Okay,
thank
you
and
thank
you
for
the
contribution.
Welcome
cyprian,
you
have
two
or
you
have
docker
container
d
version
override
by
arch.
How
to
declare
it.
B
Okay,
so
I
guess
everyone
read
the
news
or
heard
it
firsthand
so
do
we
want
to
try
also
to
have
120
move
to
container
d
and
if
so,.
B
A
A: It should be the same, right, I think. Can you remind me: when is the actual target for the deprecation? Is it 1.21 or 1.22?
A: Docker removal is 1.23, according to the announcement. So, I really like what we did with basic auth. We didn't quite have enough time there, but we could, in some version, make containerd the default and easy to switch back from, and sort of discover issues that way.
A: I don't know whether that should be 1.20 or 1.21, like whether we have time to put it into 1.20. But here's what I suggest we do: I sent some tests.
A: I added containerd to the grid, the massive grid, so we can start to get some signal on that in our e2e tests, and we can see whether it generally looks good and whether we are in good shape across various weird configurations. Then we should get some idea whether it is reasonable to switch the default.
B
So,
given
the
work
done
on
it,
there
is
just
one
flag
that
you
can
do
the
switch
between
one
and
the
other,
it's
the
container
and
the
runtime
flag,
and
I
see
it
only
applied
to
new
clusters
anyway.
So
when
you
create
a
new
cluster,
it
will
run
by
default
with
container
d
instead
of
docker.
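The flag being referred to is the `containerRuntime` field in the kOps cluster spec. A minimal fragment showing the switch between the two runtimes:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  containerRuntime: containerd   # or "docker"
```

If memory serves, there is also a matching create-time flag (`kops create cluster --container-runtime=containerd`), but check the CLI help for the kOps version in use.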
H
Okay,
can
I
just
speak
out,
so
I
I
am
I'm
new
to
communities
and
chaos
like
I've.
You
know
started
a
new
role
a
couple
of
months
ago,
so
I've
only
I've
been
contributing
to
cops
for
a
while,
not
for
all
right,
but
this
is
just
working
on
a
bunch
of
stuff
like
adding
instance,
metadata
v2
support
the
cops.
I
think
it
was
working
with
cyprian
today
on
getting
that
pr
thing
so
yeah,
so
I
was
actually
also
in
the
meantime
I
was
playing
around
cryo.
H
I
was
trying
to
work
on
adding
cryo
support
to
cops,
so
that
was
something
I
was
trying
to
do.
Maybe
that
can
be
a
solution
to
I
mean.
Definitely
if
you
start
adding
it
now
will
take
time
for
it
to
be
stable,
but
that's
a
different
thing,
but
that
can
be
another
solution
like
rather
than
making
container
default.
H
If
we
add
cryo
support
that,
I
think
since
cryo
is
right
now
like
people
are
people,
I
think
there
have
been
a
lot
of
requests
for
cryo
support
to
run
kubernetes
deployments
and
boards
with
cryo.
So
I
think
that's
something
I
would
like.
I
would
like
to
start
working
on
and
I'm
starting
to
lay
the
groundwork
for
that
from
my
side.
So
from
my
side
I
mean
it
seems
pretty
straightforward.
H
You
know
like
in
the
node
up,
we
just
add
the
code
to
bootstrap
it
and
we
just
create
a
systemd,
manifest
to
start
the
driver
process
and
hook
up
cubelet
to
to
point
to
the
priority
socket.
But
just
one
thing
you
know
I
was
thinking
about
is
that
in
the
code
base
I
see
the
cubelet
is
pretty
hard
coded
with
not
right.
Now
it's
pretty
delicious.
It's
tight
coupling
with
the
container,
so
it's
not
to
add
a
container
on
time.
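The bootstrap steps just described might look roughly like this. A sketch under stated assumptions: this is not something kOps generates, and the binary path is a guess.

```ini
# /etc/systemd/system/crio.service (hypothetical sketch)
[Unit]
Description=CRI-O container runtime
After=network-online.target

[Service]
ExecStart=/usr/local/bin/crio
Restart=always

[Install]
WantedBy=multi-user.target
```

The kubelet of that era would then be pointed at the socket with `--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock`.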
H
The
coupling
is
pretty
tight
like
if
I
look
at
the
cubelet
code,
or
there
is
a
switch
case
to
select
between
docker
and
container
d
right.
So,
let's
say
add,
cryo
would
have
to
add
another
case
to
that
switch
statement.
Let's
say,
I
add
rocket.
That's
another
switch
case
that
statement.
H: So maybe, I think, in the future, if we want to support different container runtimes apart from Docker, do we want to standardize this, like how we have a cloud provider interface? For a cloud there's an interface, so I was thinking: should we focus on adding a framework for container runtimes?
A: I think CRI-O support would be great; I don't think anyone would object to that. I think that'd be wonderful. I would probably start by adding the third branch to the switch statements, and then, if you wanted to, we have Distribution, which is a sort of enum-like thing, but the enums have fields... sorry, have methods, so you can say things like "is Debian family" and "is Red Hat family."
A: I think that sort of thing might be helpful, so you could have something like "is CRI," but I would start off by just adding it and then seeing how it goes. We are very close to time, and I want to rush through the last couple of items if we can; is that all right? Thanks. So, Peter, you want to move DigitalOcean to beta.
D: Yeah. I think we discussed this back in like April or March and there were no objections, but now the formal PR has been made, so this is just a last-minute call for any concerns. Otherwise I think we're going to be merging that later today or this weekend. So if you have any opinions, comment on the PR.
A: Cool. On the recurring topics: I don't know if we have an informal meeting scheduled. We have a list of blockers; I suggest, if people want to add blockers, we just add to the document or suggest changes to the document.
B: That would be cool. This way we would get through the blockers and see if we can do a release. I would pretty much like 1.19 released before Christmas, if possible. That's your... that's your Christmas wish list!
B
Well,
the
people
wife
is
arriving.
So
that's
the
second
best
thing.
A: How about... I think I can do Tuesday morning at nine Eastern, but why don't we talk about it in the kops-dev channel? Why don't we coordinate it there? That way, anyone that had to drop or is not on the call can hopefully join. Does that work? The kops-dev channel on Slack, obviously. Wonderful. So please add blockers. I'm trying to be respectful of people's time, and I don't know if there are any last topics you want to bring up; speak now, as everyone is leaving. All right!