From YouTube: 20181128 kubeadm office hours
A
Hello, today is November 28th, 2018. This is the standard kubeadm office hours. We're pretty light on agenda because we're heading into the end of the release cycle. I don't know if we want to get into planning yet, other than maybe having high-level items to discuss. We'll probably get into planning maybe next week, or maybe even post-KubeCon, because KubeCon is a good place to discuss planning. If we were to have any plans going into KubeCon, we would probably take those plans and burn them right after KubeCon.
A
This release kind of highlighted the common pattern that we currently have. I know we kind of wait until the end of the release, because we've been constantly churning both the command line as well as the configuration file, but I think in the next release, as we start to talk about prioritization later on, we can probably do that earlier in the cycle, since there should be less churn. I'm hoping and praying, but it looks like there's not a lot there.
A
The next agenda item that I put on the list, and Lucas, you're here, was the blog post, if there are things that folks wanted to highlight, and also to raise awareness. So if you work for a company and you're contributing to kubeadm, we should get on the blog train. That way, when the upstream blog goes out, other companies can toot their own horn for their contributions to kubeadm going GA.
C
Yes, that's a very common pattern. So we get this out, and if you wonder about the list, we have a kind of table, and I mention directly the criteria that I've used. Right now it's just those who are in the reviewers or OWNERS files. We have a lot of other great contributors as well, but we can't list, you know, fifty people, but at least some of the companies that contributed are there.
A
I went through and did my first pass, so I can now loop in some other folks. I already did that on the first doc and I took some of their suggestions. Sometimes marketing people can go a little overboard, so you have to make sure you proofread that first, because this is a community blog.
B
Yeah, in terms of VMware, to my understanding we are going to have a mention. There's some sort of collective write-up across all the SIGs in 1.13, and there's going to be a mention about this, like a paragraph about kubeadm GA. To my understanding, they didn't want us to make an official blog post, but maybe after the release I can convince my folks to make a post about it.
A
Well, we'll have to figure that out later on. I technically cannot coordinate with you yet, but I will. I already have some contact information for who to contact as soon as the release is done, and I'll start with a blog post. If anything, I can do a private blog and then just mention accolades and stuff and do my own Twitter-sphere stuff.
D
This is just coming up because I've been doing a bit of research and work in the release department, and I've seen some PRs going through. Jason has one out to try to keep our kubernetes/kubernetes Debian and RPM packaging builds up to date with the Kubernetes release, and I've got to say, I think the code in release is becoming mostly versioned.
A
I didn't want to get into too much planning for the next cycle, but there's one thing that I definitely want to do for the next cycle: I want to kill kubernetes-anywhere with fire, and we have the PRs to get in. If we have the kubespray stuff in place, that would be fine and dandy, and they can use the artifacts that are published on the mainline as part of the CI system, and in the new world order we'll have to figure out what the priorities are.
C
It builds very fast when you send a PR, and I have a script locally that can fetch kubeadm and all the artifacts from a PR; they're uploaded to GCS. So it's just a pull-down, and that's what we're still doing for now for the kubeadm commits in kubernetes-anywhere. We're going to use them later, but we want to have Bazel, we want to migrate and make that the official thing. We can confirm.
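A minimal sketch of what such a fetch script might look like, since the actual script isn't shown in the meeting; the GCS path is a placeholder passed in by the user, not the real CI bucket layout:

```go
// fetch_pr_artifacts.go: a minimal sketch (not the actual script mentioned in
// the meeting) of pulling kubeadm build artifacts for a PR down from GCS with
// gsutil. The bucket path is whatever location your CI run actually uploaded to.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <gcs-build-path> <dest-dir>", os.Args[0])
	}
	src := os.Args[1]  // e.g. a hypothetical gs://<ci-bucket>/<pr-build>/bin/linux/amd64
	dest := os.Args[2] // local directory to copy kubeadm and friends into

	if err := os.MkdirAll(dest, 0o755); err != nil {
		log.Fatal(err)
	}
	// gsutil -m cp -r recursively copies everything under the build path.
	cmd := exec.Command("gsutil", "-m", "cp", "-r", src, dest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("gsutil copy failed: %v", err)
	}
	fmt.Println("artifacts copied to", dest)
}
```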
A
It gets weird in that the build artifacts are all published from there. We can decouple this problem. The fact that Bazel is generating the specs, I think, is an anti-pattern. You can easily have the build artifacts published separately from the specs and the deb files, similar to how the release repository works, but less weird, right?
A
That way you can have the building of the packages all come from a canonical source without coupling it into the build system itself, right? So that can totally be done. I don't know why we've kind of circled the drain wanting to have Bazel rule the universe, and it's not going to, so let's stop doing this. If we decouple the generation of the spec and the RPM files, we need to have a canonical source.
A
Technically yeah, though the last bit is the actual buckets, but you should be able to produce all the artifacts for every arch from the main repository, and you can do that today. The problem we currently have is the coupling in the main repository, and if we decouple that for good.
A
You can build all the artifacts, including the RPM and deb packages, as a separate step, right? You could do it as a separate step. You could have a script that is able to do it from whatever; there are a million ways to do this. I think we just need to assign somebody to go and decouple it, because currently it's generated from Bazel, and just that will solve most of the problems. And I think we need to fix all the upstream tooling that cares about this stuff, but that's another matter.
C
Okay, so you mean that today in CI, when I send a PR, it's going to do both: first Bazel is going to build my binaries, and then it's also going to build the debs and RPMs. But you mean that that's wrong, because it should only build the binaries and then, optionally, as a separate decoupled step, build the packages.
A
For any binaries, exactly. I can run rpmbuild on any spec file anywhere, and I did that all the time. As long as we have a relative-path location for where to consume the binaries and the other artifacts, it doesn't matter where they came from; if they were built from A or B, it shouldn't matter. As long as I have a relative-path location for how to slip that data in, it doesn't matter, and then that packaging step is totally decoupled.
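A minimal sketch of a packaging step decoupled from the build system, under the assumption described above; the directory layout, the kubeadm.spec name, and the binary list are illustrative, not the real upstream packaging:

```go
// package_rpms.go: stage prebuilt binaries next to a spec file and invoke
// rpmbuild. The spec only needs a relative-path location for the artifacts;
// it does not care whether Bazel, make, or anything else produced them.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	binDir := "./_output/bin"      // wherever the binaries were built
	stageDir := "./_packaging"     // staging area the spec references via relative paths
	spec := "./specs/kubeadm.spec" // hypothetical spec describing the package contents

	if err := os.MkdirAll(stageDir, 0o755); err != nil {
		log.Fatal(err)
	}
	// Copy each prebuilt binary into the staging layout.
	for _, name := range []string{"kubeadm", "kubelet", "kubectl"} {
		data, err := os.ReadFile(filepath.Join(binDir, name))
		if err != nil {
			log.Fatalf("missing prebuilt binary %s: %v", name, err)
		}
		if err := os.WriteFile(filepath.Join(stageDir, name), data, 0o755); err != nil {
			log.Fatal(err)
		}
	}
	// rpmbuild -bb builds a binary package from the spec; _sourcedir points it
	// at the staged artifacts.
	cmd := exec.Command("rpmbuild", "-bb", spec, "--define", "_sourcedir "+stageDir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("rpmbuild failed: %v", err)
	}
}
```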
A
At one point I wanted to have one single container to rule them all, and just put all of the packages and metadata required to distribute those packages inside of a single container, and then it would just be a container artifact build. I think I lost that battle, because a while ago everyone still wanted the canonical packages; that's the way most people consume it, right?
A
Here's my grand vision: get rid of 90% of what is on the PR-blocking jobs and have kind be the one PR-blocking job, and kops should be periodic. Get kops out of there, because it's a waste of everybody's time; it flakes all the time. And then, if kind is on PR blocking, every single PR would be a vet for kubeadm.
B
Yes, and also, the proposal there was to move some of these cloud-specific jobs to post-submit instead of periodic. But yeah, for kind, someone is currently working with them to be able to create multiple nodes, and also maybe multiple clusters at the same time. With the multi-node support, and HA as well, we can also test all of those things, and that's the big plus here.
A
I think we're still going to have periodics for cloud providers; I don't think there's any way to escape that entirely. But for PR blocking, I think we should absolutely get cloud providers off of PR blocking, because they are the single largest source of flakes and waste in the contributor world. It takes forever; you have to /test, /retest, /retest over and over and over again. It's not fun.
A
Yeah, so the standard pattern for approvers is that, obviously, anything that's nascent or simple to understand and that everyone would agree to, just go ahead. Anything that would touch CI config or workflow, we should usually discuss amongst the other approvers, like Lucas and I and Fabrizio; never do anything unilaterally. It's almost always a conversation where we agree on the flow that we need to go forward with.
C
Yes, I just wanted to mention it. Oh well, now I've obviously pasted the wrong link as well, sorry. What I wanted to say was that it's been migrated; it's a real KEP proposal, and it's now in kubernetes/community, up for review. On a high level, just to touch on it really quickly: we have a lot of code in kubeadm that we maintain manually, so to say, in a somewhat standard way and a somewhat kubeadm-only way.
C
Component config exists in kube-scheduler, kube-proxy, the kubelet, the kube-controller-manager, and in kubeadm, and it's written kind of in the same way, but there are a lot of deviations. For example, the kube-controller-manager doesn't even have the --config flag; it can't load a config file, it can only do the internal conversions. Kubeadm can load multiple versions, even multiple YAML documents, now also with strict checking, so if you specify a field that is not present in the schema, it's going to send you a warning.
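A minimal sketch of the strict-checking behavior being described, not kubeadm's actual config loader; the ExampleConfiguration type and its fields are made up for illustration:

```go
// strict_config.go: decode a YAML config strictly so that a field not present
// in the schema is surfaced to the user instead of being silently dropped.
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type ExampleConfiguration struct {
	APIVersion        string `json:"apiVersion"`
	Kind              string `json:"kind"`
	ClusterName       string `json:"clusterName"`
	ControlPlaneCount int    `json:"controlPlaneCount"`
}

func main() {
	doc := []byte(`
apiVersion: example.config.k8s.io/v1alpha1
kind: ExampleConfiguration
clusterName: demo
controlPlainCount: 3   # typo: not in the schema
`)
	var cfg ExampleConfiguration
	// UnmarshalStrict rejects fields that do not exist on the Go struct,
	// which is how a typo like the one above gets reported.
	if err := yaml.UnmarshalStrict(doc, &cfg); err != nil {
		fmt.Println("config warning:", err)
		return
	}
	fmt.Printf("loaded config: %+v\n", cfg)
}
```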
C
And also, there's next to no testing on component config; kubeadm is the only one who has this. I wrote my own framework for kubeadm for testing round-tripping and defaulting and stuff. So this is an attempt to formalize all this and make it generally usable. The requirements, roughly, are that it must be usable for Kubernetes, must not have anything that is not needed by the Kubernetes components, and it must be used by all components, or not every piece by every component, but something like that.
A
Here's my biggest problem with this: it's not that we don't agree on it, or that it's not a good thing to do, or anything like that. Mike Danese wrote the original component config proposal literally two years ago, right, the very beginnings of this stuff, and the problem is the API machinery approval process; getting that through API machinery just died, because it's API machinery.
A
So the question is: how do we make sure that this is successful as a cross-functional group? It almost needs a dedicated working group to execute on this, one that includes people who can approve things in API machinery as well as people from this SIG who are interested in partaking in that endeavor, because if we do it just from within this SIG, unless we start to percolate approvals all throughout, that stuff will be in this weird state, right?
C
But the only API types that will be in this repo, like the types.go files, will be the structs that are shared between component configs. So, for example, how to connect to an API server, like the Kubernetes client configuration, which is super generic and used by nearly every Kubernetes component. That is the thing that goes in this shared components repository, and in order to make changes there you need approval from the API approvers on this team.
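A minimal sketch of the kind of shared struct being described; the field names are loosely modeled on the client connection and debugging settings most components share, but the exact upstream definitions may differ:

```go
// shared_config_types.go: examples of structs that could live in a shared
// component-config repository because nearly every component needs them.
package config

// ClientConnectionConfiguration describes how a component talks to the API
// server, a natural candidate for a shared, centrally reviewed type.
type ClientConnectionConfiguration struct {
	// Kubeconfig is the path to a kubeconfig file.
	Kubeconfig string `json:"kubeconfig"`
	// AcceptContentTypes defines the Accept header sent to the server.
	AcceptContentTypes string `json:"acceptContentTypes"`
	// ContentType is the wire format used when sending requests.
	ContentType string `json:"contentType"`
	// QPS limits client-side queries per second to the API server.
	QPS float32 `json:"qps"`
	// Burst allows short spikes above QPS.
	Burst int32 `json:"burst"`
}

// DebuggingConfiguration is another block shared by several components
// (profiling and contention-profiling switches).
type DebuggingConfiguration struct {
	EnableProfiling           bool `json:"enableProfiling"`
	EnableContentionProfiling bool `json:"enableContentionProfiling"`
}
```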
C
Does that answer your question? And with regards to the working group, yes, I'll see what we can do there. I hope to get it going, but I'm not sure I can, at least not myself, lead it or bootstrap it right now, being in the military. In half a year I can, but not at the moment. But if we get some other folks that can help co-chair, I'm definitely on the train, committing to try to make this possible.
A
Who's going to write the code? He also writes code, yes, I know, I know he writes code. I'm saying this is not, from a Red Hat perspective, knowing how OpenShift works, they don't really care about this; they're not going to do this, right? OpenShift has its own configuration mechanism, which is a whole grand thing of its own.
C
Yeah, I'll talk to Robert to see if we can find more; I don't know at the moment. But also, last time I did this component config work in 1.12, the initial refactoring with the repo splits and all that, which was a prerequisite here, we actually had a lot of Chinese contributors, and this time I also gathered some, and also Dean, I think, is going to help out, from what he said. So that is also a strong contributor.
C
Perfect, I'll add you to the list then; that sounds really cool, yeah. So, with all this in mind, the only reasonable thing would be a working group. So I'll talk with the others and see if we can make that happen, with one more meeting, one more time slot, yeah.
C
Yes, it really is, and obviously that's part of it; I'm very excited about doing this for Kubernetes' health as a whole. But yes, it directly contributes to the health of kubeadm as well. We don't want to carry a lot of extra stuff that we do now, and we want the other components to work as they should.
B
So actually, the approach we currently have in kubeadm is sufficiently better than what has to be done for the universal codec to be strict; it's kind of, yeah. We might get it done in a month or so, I'm not sure. Maybe I should ask for opinions instead of sending a random PR, but anyway, that's a side topic. Yes.
A
We have meetings scheduled, we have a birds-of-a-feather session, and there are a bunch of other SIG update meetings happening at KubeCon. I think the most important one is the birds of a feather at the contributor summit day, where we have a large section of time. I think I was mentioning before you guys were here, Lucas, going into it.
A
We could start with an initial plan of things we want to talk about, but usually once you go into a BoF, you come out and you burn the plan that you originally had. So we can have ideas, like a strawman proposal of things, but after we actually go through the BoF session, we can discuss how we want to outline some of the execution items for the next year or so. We'll have high-level things; I think that's better.
A
It's best to have high-level objectives for year-scale planning, just generic statements like "make configuration better", you know, "get HA to GA", simple things like that, because the more granular you get, the more you're lying to yourself at a year's time-frame scale.
C
Yeah, for 1.13 I was thinking about that a couple of weeks ago, and I wrote it down on my to-do list. I haven't gotten to it yet, but yes! If there's someone else who could at least collect a doc with what we would like, ideally, not to put any more pressure on everyone here, but ideally this survey would be done by the blog post.
C
I think so, but we should sync with George and Paris; they are the ones that own that. So, for example, we could use a community account or something and then get the data from them afterwards, but not own the survey ourselves, I don't know. Anyway, thanks for bringing that up, I would have forgotten it.
A
There are a couple of different options; Fabrizio and I have talked about different ones, and I see there are a lot more now. I don't have strong opinions, other than that what we currently do today is not ideal. The one idea that I had was just to use a token that was only generated at init time, and it could be time-based as well, and then that token could be used to decrypt.
A
You would encrypt and store the certificates on the cluster, and only with that token could you decrypt them. So it's a shared secret, just like bootstrap tokens, but it would only exist from the init command line after that point in time. After a timeout cycle, it would no longer be valid and a person couldn't decrypt the secrets, so you'd only have a period of time from when you do the initial join.
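A minimal sketch of that idea, not an actual kubeadm implementation: derive an AES key from a short-lived token, store the certificate material only in encrypted form, and let decryption fail once the token is gone:

```go
// cert_seal.go: a random token known only to the operator is hashed into an
// AES key; control-plane certificates are stored on the cluster only in
// AES-GCM-encrypted form, so once the token expires (and is thrown away)
// nothing on the cluster can decrypt them.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// keyFromToken derives a 32-byte AES key from the shared token.
func keyFromToken(token string) []byte {
	sum := sha256.Sum256([]byte(token))
	return sum[:]
}

func seal(token string, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(keyFromToken(token))
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the joining node can split it back off.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func open(token string, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(keyFromToken(token))
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	token := "0123456789abcdef" // in reality: random, time-limited, never stored on cluster
	sealed, err := seal(token, []byte("-----BEGIN RSA PRIVATE KEY----- ..."))
	if err != nil {
		panic(err)
	}
	fmt.Println("stored on cluster:", hex.EncodeToString(sealed[:16]), "...")

	recovered, err := open(token, sealed)
	if err != nil {
		panic(err) // wrong or expired token: decryption fails
	}
	fmt.Println("joining control-plane node recovered:", string(recovered[:20]), "...")
}
```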
B
My proposal quickly here: basically, I made a little project to build a proof of concept out of this. The idea is to turn kubeadm init into a listening server that can transmit certificates for a period of time, and the other side can connect to that server as a client, pretty much over TCP, with everything encrypted with whatever algorithm we choose; I chose one of the best ones, to my understanding.
E
The other thing that I wonder is, since it's basically giving, you know, a certain level of root access to the machine that the server is running on to be able to transfer those certs, would we be better served by having something like a bootstrap token that has higher permissions and can just exec a pod on the host and kind of exfiltrate the secrets that way instead? That's what it's for.
C
Yeah, and what I've been thinking about earlier, writing proposals like years ago, has been about having, so, well, the first thing is: do we want some kind of token to be able to connect as a master to the cluster? That is the first question we should ask ourselves: does the end user need a token or a private key to add a master?
C
If we let any user connect with that token, or expose anything like that with the token, that is how secure our cluster is going to be for the period of time we're doing this swapping, which is fine, I guess, but that expectation needs to be communicated very clearly to the user, I think. I see.
A
Yeah, I think we could also have a revoke. You know, like if we're going to say, when you're doing control-plane additions, you do init, and then you have the extra special token that is output as part of init, or you have a separate command line that gives you that token for other control-plane nodes to join, but then also having a separate, symmetric revoke command.
H
What I was thinking is that I really agree that this extra power should be time-limited, and what I was thinking about is: is it possible to link this extra power, that is, the access to the trusted certificates, to the bootstrap token? To say it another way, is it possible to define a group, a kind of attribute, for a special token? So I say: please give me a bootstrap token that allows me to fetch the cluster certificates, and then this token dies like any other token.
A
I talked with Moyer about this, and he was pretty against the idea of using a bootstrap token, because it's typically globally visible to other things. So if you compromise any portion of the control plane, you could have access to that token. I think my first idea, as I originally had written it, was using some type of bootstrap token, but he more or less poo-pooed that idea when I talked to him about it.
G
There's the idea of having the operator accept the new master in, right? So we just do the regular token thing, but the trust is extended only because you've accepted that new master in, like a "kubeadm master invite", some mechanism like that where you allow the new token that has been generated when that new master has been proposed. Sort of like the CSR model.
C
Hence I'm like, whether it's a bootstrap token, yes, it's stored in the cluster, but if someone has such high privileges that they can view secrets in the kube-system namespace, you're kind of dead anyway, I don't know, I guess. So either a bootstrap token or a kubeconfig, like you said, Jason, that's really cool. And then the problem is, I don't know if the CSR API supports it; I don't think it does today, it still doesn't support multiple CAs, I don't know.
C
Do we want to transfer the CA key from master A to master B? Because we can't do that with the CSR API. With the CSR API we could, though, generate all the stuff that's needed, because we go ahead and post the thing and approve it ourselves, and the signer will kick in and give us something. I like that.
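A minimal sketch of the client half of that CSR flow: generate a key and a PEM-encoded signing request that could then be posted to the cluster's certificates API and approved. The identity names are illustrative, and the API-posting and approval steps are omitted:

```go
// make_csr.go: generate a fresh key and a certificate signing request for a
// new control-plane member. The private key never leaves the new node; only
// the CSR is sent to the cluster, where the signer returns a certificate.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Identity requested for the new member; names here are illustrative.
	template := &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:new-master",
			Organization: []string{"system:nodes"},
		},
		DNSNames: []string{"new-master"},
	}

	der, err := x509.CreateCertificateRequest(rand.Reader, template, key)
	if err != nil {
		panic(err)
	}

	// This PEM block is what would be embedded in a CertificateSigningRequest
	// object and approved, after which the signer issues a certificate.
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
	fmt.Printf("%s", csrPEM)
}
```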
A
I like having a separate step: you do kubeadm init on a single node, then a "kubeadm alpha something", right? The "alpha something" does the initial token creation and stores the certs as secrets, encrypted secrets that can only be decrypted with the token. That way it's on the cluster temporarily and the token is time-based, so as soon as the token expires, we can no longer decrypt the secrets. So after a period of time, then you have that.
A
I don't know; they have a tendency to live in tin-foil-hat land, so there's going to be some level of compromise between user experience and security vulnerabilities, because technically nothing is secure, right, the more you get down to it. So it's just a compromise of where we want to live and how long, what's the time window for that potential exploit.
C
If I can just say my two cents: the user can already reconfigure the cluster on a real upgrade, when going from 1.12 to 1.13. They can reconfigure stuff, and we have no validation today. Can they change the service subnet? Yes. Will that break the cluster? Yes. We should disallow such changes, and we should create the logic to disallow these changes that we know are dangerous.
A
I think the behavior of reconfiguration falls squarely into the Cluster API controllers, right? Because in order for you to do a real, full reconfigure, you need a controller, and I don't really want to get into that space from kubeadm's perspective. As long as somebody can read the ConfigMap and see the delta for the configuration file, that controller can be a part of Cluster API or a part of anything else, but it should be separate.
C
I think the point is: okay, the user has changed the configuration, now I need to apply these changes to the cluster. Cluster API is not going to do that; it's not going to go and reconfigure the API server. So once the Cluster API controller has detected this change, this desire from the user to change something, it actually asks kubeadm to apply these changes, and kubeadm goes and does it...