From YouTube: kubeadm office hours 2020-11-11
A: Hello, thanks for joining, Brian. All right, moving to the PSAs: I added some items here. The first one is that we are merging the PR for the initial phase of renaming the "master" label and taint that kubeadm uses to "control-plane".
A: As some of you may know, this is going to be a multi-stage effort that will span across multiple releases of kubeadm, and if you look at the PR you can see a link to the KEP that describes the change in more detail. We have an "action required" note for 1.20, which is quite detailed.
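For reference, here is a minimal Go sketch of the two keys involved in the rename. The string values are the well-known node-role keys; the constant names and the little program around them are illustrative only, not the exact identifiers used in the kubeadm codebase.

```go
package main

import "fmt"

const (
	// Key that kubeadm has historically applied as both a label and a
	// taint on control plane nodes.
	oldNodeRoleKey = "node-role.kubernetes.io/master"
	// New key added in parallel during the multi-release migration.
	newNodeRoleKey = "node-role.kubernetes.io/control-plane"
)

func main() {
	fmt.Println("deprecated:", oldNodeRoleKey)
	fmt.Println("replacement:", newNodeRoleKey)
}
```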
C: I had a look over the kubernetes.io site yesterday, and I noticed that the word "master" in "master node" has been changed to "control plane node", and we've dropped the use of the term "compute" for "compute node", which is something we did discuss over the course of the last few weeks. It seems that we're now referring to any non-control-plane node as just a "node". Is that our general concurrence and understanding here as well? I'm just wondering.
A: So, nodes that host control plane components: in kubeadm we refer to them as control plane nodes, yes. Later in the meeting I'm going to talk about a potential small update to the kubernetes docs to cover the new label, in terms of the other nodes.
C: And not "worker nodes" any longer? We were talking about changing "worker node" to refer to them as "compute nodes", but it seems that the website, kubernetes.io, has now dropped the term "compute" and just refers to every node as a "node", and only uses the prefix "control plane" when it is running the control plane components.
A: Yes. So "compute node", we never used this in kubeadm. We also don't really have "worker node" in our user-facing output; "worker node" is something that we, the developers, call the nodes that are not control plane nodes, but we do also use it in the documentation. If the maintainers of the kubernetes.io website want to change the kubeadm docs, they will eventually get back to us, but I don't think we have an action to perform here right now.
A: But yeah, please give me a link to the PR or issue that you've seen about this, Fabrizio.
E: I'm pretty sure we talk about worker nodes in the kubeadm docs, but if this is changing, we will adapt to it. "Worker" is pretty much everywhere, as you can see; there are at least 200 references currently.
C: Yeah, I agree with that, but that was why it was so surprising to see it on the website. I'm trying to locate it right now, just bear with me for a moment.

A: Sure. We're going to move to the next topic in the meantime.

C: That's fine, we can come back, if you don't mind.
A: Yeah, like I said: if we have new guidance, we will change it. Currently, I follow the CNCF/Kubernetes working group that was created to remove potentially offensive terminology from the project. I joined their meetings, I watch the mailing list. If we stop referring to worker nodes as "workers", we are going to adapt to that, but I have not heard anything about it.
A: I have some topics around the docs updates. The first one is our upgrade documentation. The update we have to do for the upgrade docs every release is very verbose, and at some point we got a recommendation from the docs team to drop the standard output that we print in the docs and to consolidate everything around a simple version variable instead of printing such detailed output.
A: And I agree with that, and I can execute this change before the docs freeze happens. Fabrizio, what do you think about it?
A: Okay, so the other bit here that I have to do: Rohit did a separate PR for it, but I closed it.
A: The reference docs upgrade is a separate PR. It pretty much updates all the commands that we have either graduated or deprecated.
A: Okay, so the question I have here is related to the page we have for kubeadm cluster creation. We have a "Creating a cluster with kubeadm" page; let me try to open it.
A: Yes, so this is removing the taint, for the "control plane node isolation" section, which is pretty much about running workloads on the control plane node. So the question I have here is: should we possibly mention, in this version of the docs for 1.20, that we now add the new label in parallel? I think it's a good idea.
E: My opinion is it's not necessary, because here we are talking about removing the taint, and we don't have a new taint yet.
A: Should we potentially... I can remove this output as well, because it becomes out of date with the addition of the new label. I can remove this output and only keep the last bit here, like: okay, congratulations, your cluster was created successfully. And I wonder if we should discuss parts of the KEP on this cluster creation page.
A: Okay, this is another action item for me.
A: Aside from that, I don't see any major changes around the CoreDNS migration code that we have in kubeadm. The diff is pretty simple: we just include a new version of the migration library, without any changes to kubeadm itself, but apparently the image is still not pushed to GCR, so this PR is blocked.
A: I asked if we have any notable breaking changes, but he did not respond to that. My assumption is probably no.
A: So there's a chance that we have to defer this to a mailing list discussion where we also include SIG API Machinery, SIG Scheduling, and the other SIGs owning component configs, because we have a bit of a... how do I say it... a bit of a mess: we have different ways of defining exported fields on API types. So, Qiming, who is from IBM; he has been a contributor to SIG Docs for a very long time.
A: He opened a PR for kubeadm with the idea to change the way we print fields... sorry, the comments over the fields in our public API types, and he also applies some changes to how we organize the godoc of our API.
A: I think some of these are basic formatting changes, like: okay, this is adding bold text here; here we have some quoting around the fields. I think he also added some... okay, so this is like a better example, with things separated out.
A: He gave some example commands, and I think he extends the docs, which I haven't reviewed fully. I need to review this in a very detailed way to make sure we are not introducing any discrepancies. So: quoting, refactoring, restructuring, potential rewording here and there. I think he also added some new examples.
A: Yeah, I think he is extending the examples for custom configuration, the component configs. So this is mostly fine, but then I saw that he's also... okay, this is changing the indentation, I guess, okay... but then I saw he also modifies the API itself.
A: Now, I really want the kubeadm API to comply with golint. This is my main argument when we execute the linter: I want this to be "ClusterConfiguration", as a comment on top of the ClusterConfiguration field. And part of the mess we have in kubernetes nowadays is that every single component is doing something differently. For instance the kubelet: here it has the cluster configuration in lowercase camelCase, so you cannot run a linter on it.
A: It will break. And, fun fact, currently we don't lint any of the API packages, because of code-gen. The code generator we have for kubernetes, as you probably know, generates function names with underscores, so that's not linter compliant.
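To illustrate the underscore problem mentioned here: conversion-gen emits function names shaped like the one below, which golint rejects. This is a self-contained sketch; the types are simplified stand-ins, not the real kubeadm API.

```go
package main

import "fmt"

// Dummy stand-ins for the real kubeadm API types; illustrative only.
type v1beta2ClusterConfiguration struct{ KubernetesVersion string }
type internalClusterConfiguration struct{ KubernetesVersion string }

// Convert_v1beta2_ClusterConfiguration_To_kubeadm_ClusterConfiguration mimics
// the naming scheme of conversion-gen output: the underscores in the
// identifier are exactly what trips golint on the generated packages.
func Convert_v1beta2_ClusterConfiguration_To_kubeadm_ClusterConfiguration(
	in *v1beta2ClusterConfiguration, out *internalClusterConfiguration) error {
	out.KubernetesVersion = in.KubernetesVersion // generated field-by-field copy
	return nil
}

func main() {
	in := &v1beta2ClusterConfiguration{KubernetesVersion: "v1.20.0"}
	out := &internalClusterConfiguration{}
	_ = Convert_v1beta2_ClusterConfiguration_To_kubeadm_ClusterConfiguration(in, out)
	fmt.Println(out.KubernetesVersion)
}
```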
A: We disable the linting, and this also results in a bit of a mess in terms of how we comment on the fields. There's a long-standing issue in the code-gen repository, I think created in 2017, to fix the linter... sorry, to fix the generator, but this is a cross-SIG problem: the code generator is owned by SIG API Machinery, so you have to contact them and tell them, okay...
A: ...we have to fix this at some point. But a question I also had for this group is: how do we want to standardize the comments above fields? You can see here that Qiming is adding quotes around some of the fields, and he is also lowercasing them. And how do we target both a developer and an operator of the cluster? How do we...
A: Qiming is saying: okay, I don't care about the developer, I want to only show the JSON name. But there's a problem with that, because sometimes the JSON name differs from the name of the field in Go.
A: So I had a proposal... just a second... which I want to show by example. I said that maybe we should just add both to the comment, but still comply with golint. So here is what I said: we can have "SomeField", which is golint compliant, and then you can include the JSON name in parentheses. And I said that we have to decide, for kubernetes as a whole, how we do this for the public types.
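A minimal sketch of the convention proposed above, with an illustrative field (not the exact wording under discussion): the comment leads with the Go field name to satisfy golint, and carries the JSON name in parentheses so doc generators aimed at operators can pick it up.

```go
package main

// ClusterConfiguration is a trimmed-down, illustrative API type.
type ClusterConfiguration struct {
	// KubernetesVersion ("kubernetesVersion") is the target version of the
	// control plane. The leading Go name keeps golint happy; the quoted
	// JSON name serves the operator-facing documentation.
	KubernetesVersion string `json:"kubernetesVersion,omitempty"`
}

func main() {}
```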
E: So, if I got it right, the reason for this discussion is that SIG Docs wants to include the API documentation on the website. Is that right? Yes? Okay, so my point here is that we are talking about godoc. Godoc is for developers, and it should be golint compliant if they are planning to basically generate docs from the code.
A: Yes, Qiming, I agree with your statement. I think Qiming sent the PR to kubeadm because he knows that we are more active and we can respond immediately in the discussion. I do agree: this is godoc, so we can keep the field names there, and somewhere in this long discussion I suggested the same thing as you did: we can have mutation code that takes the comments above fields and mutates them to make them...
A: ...you know, acceptable for administrators. And I think Qiming is not happy with this idea, but I still have to confirm with him.
A: Do we have any more comments? I think I should open a mailing list discussion with everybody.
E: So, he's in China, he normally does not join the SIG Docs meetings, and he is in charge of all the work that is related to docs generation, so it's a bit difficult to arrange this type of meeting. Yeah, let's try; we are in Europe, so it's not impossible to have a dedicated meeting with him. So let's see if he is available to discuss this. I think that it will be easier if we talk, instead of starting a discussion that can degenerate quickly.
E: Yeah, I would like to better understand his concern.
A: Okay, do you want to make your comment, the same comment you added to this agenda topic?
E: I will add a comment, and also say in the comment that we need to discuss this.
A: Okay, this seems fine. And the idea would be to keep the kubeadm godoc with PascalCase, uppercase fields everywhere.
E: Yeah. In my opinion, the goal is to keep godoc for developers and to basically replace field names with JSON tags for the public documentation, if that is the goal of the documentation.
A: Yeah, this is a very nice idea, I like this. Also, if we type "kubernetes api" into Google, it will give you the API reference, I think, and if you go to the API reference, you will see how something, somewhere, is mutating the fields.
A: Let me find a PodSpec, for instance, or let's go to the DaemonSet. You can see that something is mutating the field to be JSON. Actually, it's not mutating: it's taking the JSON field here, but it's still printing the PascalCase name from the comment, which is basically the description above the field.
G: I'm kind of interested, because it'd be good to have this on Cluster API as well; it's a pain reading through their API types to figure out what to configure. I would love the same thing generated there; if we can reuse the generator, whatever it is, it'd be great.
A: Yeah, I wonder if it's something in kubernetes/kubernetes that is just hidden and we don't know about it. Yeah, Nadir, you can also ask this question in the thread that we have here; basically just ask: how are you generating this?
A: Okay, I think this is a good way forward. Thank you for the discussion. We can continue in private before we make this a public discussion with the other SIGs. Do you have any more comments on this?
A: Deploying it as a DaemonSet... that didn't sound very good, but kube-proxy is currently a DaemonSet in kubeadm, and this creates some problems around upgrades, both mutable and immutable. By mutable upgrades I mean when you execute the commands kubeadm upgrade apply or kubeadm upgrade node.
A: These are the commands that support the so-called in-place upgrade, where basically you have a set of nodes and you replace the kubernetes version that is running on them. And currently there is a problem around the kube-proxy upgrade: when we call kubeadm upgrade apply, it upgrades kube-proxy, and it upgrades it as a DaemonSet, which upgrades kube-proxy on all the nodes, including the worker nodes. But there's a problem: the kubelet on the worker nodes should be the same version as kube-proxy.
A: The same problem exists with immutable upgrades, where you join new nodes to the cluster. So you add a node that is, say, version 1.20 to a 1.19 cluster: you have a DaemonSet that is running kube-proxy version 1.19, but the kubelets on the new nodes are joining with a newer version.
A: I mean, this is not exactly how it happens, but there's also the skew problem there, and I believe there were discussions about running a couple of DaemonSets, one with the old version and one with the new version, to solve the immutable upgrade skew problem.
A: But this is far from ideal, and somebody on this particular kubeadm ticket asked: why are you not running kube-proxy as a static pod on all the nodes?
A: First of all, I would like to clarify, for the viewers of this meeting and of the recording, that this would be a very breaking change, because people already have assumptions about how kube-proxy is deployed by kubeadm. So we're just discussing the topic; we are not planning to execute on it directly. But basically the idea is: today kube-proxy runs as a DaemonSet on all the nodes, and it consumes the kube-proxy ConfigMap from the cluster...
A: ...as a volume. You can set up kube-proxy the same way you're setting it up for the DaemonSet, except you run it as a static pod managed by the kubelet on all the nodes. And potentially, if you skip the deployment of kube-proxy, the workers can check for the lack of a kube-proxy ConfigMap.
A: So when you join these workers, they can decide not to deploy the static pod for kube-proxy. So it works in terms of the deployment, and it works when you do upgrades: you can stop the kubelet, upgrade the kube-proxy manifest, upgrade the kubelet, and restart the kubelet, which will give you a kube-proxy and a kubelet with matching versions.
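A minimal sketch (not kubeadm code) of the join-time check described above: if the kube-proxy ConfigMap is absent, the joining node skips writing the kube-proxy static pod manifest. The client-go calls are real; writeKubeProxyStaticPod is a hypothetical placeholder.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func maybeDeployKubeProxy(ctx context.Context, client kubernetes.Interface) error {
	_, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// The operator chose to skip kube-proxy; do not write the manifest.
		fmt.Println("kube-proxy ConfigMap not found, skipping static pod")
		return nil
	}
	if err != nil {
		return err
	}
	// Hypothetical: would render /etc/kubernetes/manifests/kube-proxy.yaml.
	return writeKubeProxyStaticPod()
}

func writeKubeProxyStaticPod() error { return nil } // stub for the sketch

func main() {}
```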
E: Yeah, so as I commented on the issue, I like the idea of the static pod, because we already went through this for the other parts, and basically it makes a node, let me say, more self-consistent, and this plays really nicely with mutable upgrades and immutable upgrades as well.
E: So I like the idea, but I agree with you: the change is complicated, especially in terms of defining an upgrade path for the people, and it requires deprecation and communication. So this issue requires design, and it's something that I would like to put on the agenda for the next cycle, so we can try to have a KEP for this.
A: Yes, it definitely requires a KEP, and the implementation itself is going to be a bit complicated: like, how do we deprecate... introduce the static pod in parallel to the DaemonSet, which is not going to work?
A: I think I had something else here... it also becomes kind of peculiar around the situation with static pods, peculiar in terms of instance-specific configuration: currently the DaemonSet blocks you from being able to apply instance-specific config, but once you do this with static pods... as you know, we have this gap around instance-specific config. How do you feed in instance-specific configuration?
A: The only way today is to feed flags to the proxy binary in the static pod. That's fine, but we may have to define some sort of policy, because, for example, if you apply a bunch of flags to a certain kube-proxy instance but don't apply them to the rest of the instances, it will fail, and we're not going to tell you how or why it fails.
A: So, like I said, there is some requirement here in terms of at least a policy and/or documentation for how we do the static pods, and it definitely needs a KEP and further discussion.
E: Okay, if there are no other priorities, I would like to show something about the operator, which is more or less related to this discussion.
E: For kubeadm... let me share my screen... okay, just a second.
E: Okay, let's go quickly; these are my notes. What we are kind of discussing for kubeadm is to change the kubeadm architecture a little bit. Today we have only a CLI; in the future we are discussing having three separate parts. The CLI is for day one, so for init and join: basically a CLI responsible for transforming a machine into a node. But then the CLI gets out of the picture, and we are planning to use an operator to manage the cluster, so managing the cluster will be fully declarative. This is what we are discussing behind the scenes. There will be a library which will be the building block for both the CLI and the operator, but also something that can be used by other tools. I'll skip the reasons behind this, we discussed some of it in the past, and move on a little bit to how I'm thinking the operator should work. So today, basically, when kubeadm does init, it does two things.
E: It creates something on the machine, using the static pod manifests for example, and it also creates something in the cluster as soon as possible: the kubeadm-config ConfigMap. The kubeadm-config ConfigMap is basically the ClusterConfiguration from kubeadm, but embedded into a ConfigMap, which is not nice, not really the kubernetes way, nor the right way of defining objects. So, for the operator...
E: What I think we need is to basically extract the ClusterConfiguration from being an embedded type in the ConfigMap and create our own CRD, which I call ClusterConfiguration, because I have a lot of fantasy for names. And then the operator is basically responsible for continuously reconciling the cluster configuration: it will watch the nodes and the ClusterConfiguration.
E: Basically, the idea that I'm following, or prototyping, is this: whenever the operator finds a difference, it will create a job, a job which I'm trying to keep unprivileged, so a job that simply mounts the /etc/kubernetes folder on the node. This job will be responsible for rewriting the static pod manifests, so applying the changes to reconcile the state of the static pod manifests with the state of the ClusterConfiguration. Everything okay till now?
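A minimal sketch, under stated assumptions, of the reconcile step just described: the operator creates a Job pinned to one node that mounts /etc/kubernetes via hostPath and rewrites the static pod manifests there. This is not the prototype's actual code; the image name and agent command are hypothetical.

```go
package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// reconcileJobForNode builds a Job that runs once on the given node and
// rewrites its static pod manifests under /etc/kubernetes.
func reconcileJobForNode(nodeName string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "reconcile-" + nodeName,
			Namespace: "kube-system",
		},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					NodeName:      nodeName, // pin the job to the node being reconciled
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "reconcile",
						Image:   "example.com/kubeadm-operator-agent:latest", // hypothetical
						Command: []string{"agent", "write-manifests"},        // hypothetical
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "etc-kubernetes",
							MountPath: "/etc/kubernetes",
						}},
					}},
					Volumes: []corev1.Volume{{
						Name: "etc-kubernetes",
						VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes"},
						},
					}},
				},
			},
		},
	}
}

func main() { _ = reconcileJobForNode("node-1") }
```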
A: Yes, this is great. I wanted to share something that I think both me and Tim St. Clair agree with, which is that config maps were originally created exactly to host configuration... but yeah, here it is, the new way.
E: A new way, okay. And the last bit of this is that, for the very same reason we discussed just before, while we are going this way I think it's time for us to start reasoning not only about a cluster configuration but also about node-specific configuration, so an instance-specific configuration as well. And this is what I'm working on, so I will show you a simple demo of what I'm playing with.
E: Here is my cluster configuration. So, let's assume I've done init and I've installed the operator, and I have my ClusterConfiguration. The name of the cluster is "kind", because I'm using kind. This ClusterConfiguration is basically defining the spec of my nodes. For now the ClusterConfiguration is really simple: in this prototype I have only the scheduler extra args. It is empty now; that means I'm not applying any additional change on top of the kubeadm defaults.
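A rough sketch, inferred from the demo, of what the prototype's CRD types might look like; the field and type names here are illustrative, not the operator's actual API.

```go
package main

// ClusterConfigurationSpec holds cluster-wide settings reconciled by the operator.
type ClusterConfigurationSpec struct {
	// SchedulerExtraArgs are extra flags applied to kube-scheduler on every
	// control plane node; empty means "keep the kubeadm defaults".
	SchedulerExtraArgs map[string]string `json:"schedulerExtraArgs,omitempty"`
}

// NodeConfigurationSpec overlays instance-specific settings for a single node
// on top of the cluster-wide ClusterConfiguration.
type NodeConfigurationSpec struct {
	SchedulerExtraArgs map[string]string `json:"schedulerExtraArgs,omitempty"`
}

func main() {}
```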
E: Okay, so here I'm looking at the static pod manifests on the node, and none of them has the flag, because they are plain vanilla kubeadm. And now I edit this object, so: edit...
E: So this is the operator reconciling the static pod manifests on each node with the cluster configuration. The last bit that I want to show, and then maybe we have some time for the question that was presented before: I also have the node configuration.
E: As you can see... where is the spec... down here, okay: as you can see, the node configuration now is basically getting the cluster configuration, so it is not applying anything different on top of the cluster configuration. I can go there and, similarly to before, decide that for this specific node I want the verbosity set to two.
E: See: okay, there are jobs starting. I have some problems in my prototype: sometimes it does not have a job running and the controller retries, but this is something that I have to fix. At the end, though, you get the results. So this is the gist of the operator, and that's it. I would like to get some comments on it.
A: I mean, this is great. I saw the first prototype of the operator, obviously. I think the whole idea of allowing the operator to manage instance-specific node configuration is great; this will allow great flexibility for the users. Currently we treat control plane nodes as replicas, and we have seen complaints from the users about that. Overall, the operator pattern will definitely help to change the cluster the way the user wants. I have, I think, a couple of questions.
E: Oh, I didn't get that far in the design. My idea there is that kubeadm, when doing init, will install the operator YAML, and the operator YAML will fetch the operator image from some repository, from gcr.io or from a repository that the user can customize. This is my idea, but it is something that I still have to explore.
A: ...promotion of the image, and we have to build it from somewhere, from the kubeadm repo, maybe.
A: Yeah, this is the first part I had: we have to figure out how we deploy the controller in an air-gapped environment. The cluster configuration as a CRD complies with the pattern of basically having a custom resource for it in general... it could also be a ConfigMap, but, you know, if you have a node configuration there is obviously a CRD, because you have multiple nodes, so it makes sense that the cluster configuration is a CRD as well, so yeah.
A: I think at some point I might surrender to the whole CRD business for kubeadm, but I have to say that I really don't like the amount of boilerplate operators bring into a project. It's just such a mess: so much YAML, cert-manager... and we have to trim down everything that we don't need, because the controller will be so simple that, using client-go, you don't need all the boilerplate that controller-runtime gives you. So I'm really...
E: Next time, if you want, or in a separate meeting, we can have a deep dive on how I'm implementing this, pros and cons, and so on and so forth.
A: Yeah, this was more of a comment; we should definitely have that deep dive. Another question is about the CRDs: what do you envision we're going to have in the status for the cluster configuration and the node configuration?
E: Oh, for the time being I'm using the status with a set of conditions that basically report if we are in sync with the underlying object, and report how the job is performing: if it is failing, if it is running, and so on and so forth. So I'm using the status mainly for conditions.
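A small sketch of the conditions-based status described here, using the standard metav1.Condition type; the condition type name and reason are illustrative, not the prototype's actual values.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NodeConfigurationStatus reports reconcile progress via conditions only;
// the source of truth stays on the node itself.
type NodeConfigurationStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

func main() {
	_ = NodeConfigurationStatus{
		Conditions: []metav1.Condition{{
			Type:               "InSync", // illustrative condition type
			Status:             metav1.ConditionFalse,
			Reason:             "ReconcileJobRunning",
			Message:            "a job is rewriting the static pod manifests",
			LastTransitionTime: metav1.Now(),
		}},
	}
}
```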
E: The idea is that the source of truth... I don't really need to have the information in the status, because the source of truth is what is on the node, so the controller should look at the node to understand what the current state of kubeadm is, or, in the case of a static pod, the controller looks at the static pod in the kubernetes API.
B: If I might add, it might be interesting for the status to include some of that, because there might be some overlays of configuration, so you'd see what the effective configuration is for the node. Then you'd be able to say: oh, I just want this one to be different, like in your demonstration, which is really cool, but you'd be able to very quickly say: okay, here's what the effective configuration is.
A: All right, I guess we can follow up on the operator in another meeting, be it the kubeadm office hours, or we can schedule a separate meeting for that. Does anybody else have more comments on this? We have two minutes.