From YouTube: Kubernetes SIG Cluster Lifecycle 20180529
B
Yeah, I just want to do a PSA that code freeze is the end of this week. So please, any new feature PRs have to be done by the end of this week. I know that there will be minor bug fixes that go in after the fact, but ideally try to get your PRs in before the end of the week and LGTM'd in order for them to get in. I do think the release is tentatively planned for June 19th.
B
I think I went through the backlog from our side today and we're pretty good with our backlog, so the folks that are committed on our side are pretty much lined up. There's a bunch of documentation stuff, and again, I consider documentation to be asynchronous; if it's in the milestone, it's in the milestone. I'm sure somebody's going to nail me on that one someday, but other than that, I do believe that everything is reasonable.
A
Yeah, checking the milestone here: myself and a fellow hacker did a pairing session on the e2e tests some days ago, and that went well. We have a PR up for review that I think the testing folks are going to check.
A
Yes, so there are the three suites we have for all the branches. The first is: run kubeadm 1.11 and create a 1.11 cluster using kubeadm 1.11. The second is: create a 1.10 cluster with kubeadm 1.10 and upgrade it to 1.11 with kubeadm 1.11; that is an upgrade job. And the third is: create a 1.10 cluster using the kubeadm CLI at 1.11.
B
I did want to talk about one of them, which is the next agenda item; I'd be kind of jumping around a little bit in the agenda, but there's a specific one that is thorny, and there are two separate issues linked against it. The one I linked in the document is the one that you logged, but Jason has a separate issue which is directly related, which is the hostname override for the kubelet, and there's a weirdness in the way the proxy handles it.
B
So on some cloud providers like AWS, inside of our quick start we actually cribbed this little hack, which I totally don't like and which we need to fix, which is basically an init container for the proxies to make sure that we get the hostname override in place, because of the way the configuration apparatus works. It is very ugly, and it is a broader problem with the component configuration for the proxy. I'll stop there, and hopefully people follow.
A
So there are two things that need to be fixed here. One is that the proxy should probably be somewhat smarter when doing the auto-detection here, I guess, without having looked into the code line by line. But as a first step we should at least try to talk to SIG Network: could we make it detect this scenario where the kubelet has issues?
A
Yeah, yeah. So, like: can we talk to SIG Networking and see, can kube-proxy detect this scenario where the hostname doesn't equal the node API object's name? That is the separate bug. And then, we've said this multiple times, but we need to get the proxy to do component config correctly and at a beta level, and when that happens it's going to be so that we feed it a configuration file, and even though we do that, we can specify any flag that is instance-specific, like hostname-override, to it anyway, and it will work.
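A minimal sketch of the precedence behavior being asked for, assuming a hypothetical ProxyConfig type rather than the real kube-proxy code: values come from the shared config file, and only the instance-specific flags the user explicitly set override them.

```go
// Sketch: layer instance-specific flags over a shared config file.
// ProxyConfig and the flag names are illustrative assumptions.
package main

import (
	"flag"
	"fmt"
)

type ProxyConfig struct {
	HostnameOverride string
	BindAddress      string
}

// applyFlagOverrides lets only the flags the user actually set win over
// the values loaded from the config file (flag.FlagSet.Visit walks just
// the flags that were set on the command line).
func applyFlagOverrides(cfg *ProxyConfig, fs *flag.FlagSet) {
	fs.Visit(func(f *flag.Flag) {
		switch f.Name {
		case "hostname-override":
			cfg.HostnameOverride = f.Value.String()
		case "bind-address":
			cfg.BindAddress = f.Value.String()
		}
	})
}

func main() {
	// Pretend these values came from the shared config file.
	cfg := &ProxyConfig{HostnameOverride: "", BindAddress: "0.0.0.0"}

	fs := flag.NewFlagSet("proxy", flag.ExitOnError)
	fs.String("hostname-override", "", "instance-specific node name")
	fs.String("bind-address", "", "address to bind")
	fs.Parse([]string{"--hostname-override=ip-10-0-0-12.ec2.internal"})

	applyFlagOverrides(cfg, fs)
	fmt.Printf("%+v\n", cfg)
}
```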
D
So everybody else that I talked to basically said they're waiting for Mike Taufen's config work to land and sort of pave the road, and they're all in a holding pattern until that happens. And so I think we've sort of convinced other people that it's the right way to go, with the caveat of the API server.
D
That hits a lot of the sort of hardest pieces of component config, and so most people believe that if that is proven to work, then, you know, their piece will be much easier to do and all of the sort of hard parts will have been figured out. And so, you know, Mike has been sort of plowing forward.
B
Okay, but in the interim we have a bunch of bugs we need owned; it's kind of like nailing jello to the wall right now. I don't know exactly who, but somebody's going to have to own this stuff on the other side, to make sure it at least gets reviewed and bug-fixed in a reasonable time. I don't know if we have people from our side who want to jump into this stuff. It's a little thorny, but I think there are people who can help navigate the landscape to get it done.
A
So, given that, the status update meanwhile is that dynamic kubelet config is completed. The API types themselves got to beta last cycle, in 1.10; this cycle, 1.11, Mike Taufen has completed the work of graduating dynamic kubelet config to beta, which is great. So technically we could use it. We won't; we have a lot of other things, like, we're doing it step by step. So we basically have it, but we don't just enable the rotation thing yet.
D
As a result, and I think what we talked about with Mike was, you know, he's building dynamic kubelet config, but the first part of that is static kubelet config, which is basically reading from a file instead of command-line flags. If every different binary supported static config via component config, that would be a huge step forward for us, because it would give us API machinery around the configuration, which gives us forward and backward compatibility.
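A sketch of what "static config via component config" means in practice, assuming a cut-down illustrative KubeletConfiguration with only a couple of fields (the real type is much larger): the component reads a versioned, typed object from a file instead of parsing flags, and the apiVersion/kind header is what the API machinery keys its conversions on.

```go
// Sketch, not the real kubelet code: load a versioned component config
// from a file instead of command-line flags.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// TypeMeta mirrors the apiVersion/kind header every versioned
// Kubernetes API object carries.
type TypeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// KubeletConfiguration here is a tiny illustrative subset.
type KubeletConfiguration struct {
	TypeMeta
	Address      string `json:"address,omitempty"`
	ReadOnlyPort int32  `json:"readOnlyPort,omitempty"`
}

func loadConfig(path string) (*KubeletConfiguration, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := &KubeletConfiguration{}
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	// The kind/version check is what buys forward and backward
	// compatibility: old files can be recognized and converted
	// rather than rejected.
	if cfg.Kind != "KubeletConfiguration" {
		return nil, fmt.Errorf("unexpected kind %q", cfg.Kind)
	}
	return cfg, nil
}

func main() {
	// Write an example config to a temp file, then load it the way the
	// component would at startup.
	f, err := os.CreateTemp("", "kubelet-config-*.json")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	f.WriteString(`{"apiVersion":"kubelet.config.k8s.io/v1beta1","kind":"KubeletConfiguration","address":"0.0.0.0"}`)
	f.Close()

	cfg, err := loadConfig(f.Name())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded %s: address=%s\n", cfg.APIVersion, cfg.Address)
}
```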
A
Actually, when I'm talking about component config, generally I'm just talking about the static thing that we have: this structured configuration using the Kubernetes API machinery, so the component can read from a file. I don't require folks, or like the SIGs, to do the dynamic thing. I think it might only be needed for the kubelet to some extent; if we do stuff like hosting it in a pod, we can have the kubelet re-exec the process or whatever.
A
That's really easy. But given that we now have the static component configuration stuff in, and it's used in the GCE kube-up scripts in production: can we get buy-in from other folks to do this, and can Google help out here? Because the kube-proxy part is really annoying. If not, at its current state it's basically going to block kubeadm GA and be one of the last blockers, so if it doesn't get fixed in 1.12, we're basically just waiting for that. Yes, absolutely.
D
Let's maybe take this offline and try to loop in Mike Taufen. I think he said that he knew who on the SIG Networking team was working on, or had been working on, component config for kube-proxy, and whether that worked out. I think he's close to those people and can sort of help prioritize and turn the screws there as well, so I think that would be helpful. Let's not take up too much more time in this meeting.
A
So the problem is, like, what the kubelet does, and what the kube-proxy should do, is that it takes the static config, reads it, and if you have overrides that are instance-specific as flags, it will happily accept those. It doesn't do that right now, which is the problem. But the other root reason why we would even have to do this is that the kube-proxy can't detect by itself that the node API object's name is different from the hostname.
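A hedged sketch of the detection being proposed, not current kube-proxy code: compare the OS hostname against the node name the cluster knows this machine by (passed in as an assumed parameter here; in a real component it would come from the Node API object or cloud metadata) and prefer the API-known name when they differ.

```go
// Sketch of hostname-vs-node-name detection; all inputs are assumptions.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// effectiveNodeName returns the name the proxy should use, preferring an
// explicit --hostname-override, then the API-known node name, then the
// OS hostname.
func effectiveNodeName(hostnameOverride, nodeNameFromAPI string) (string, error) {
	if hostnameOverride != "" {
		return hostnameOverride, nil
	}
	host, err := os.Hostname()
	if err != nil {
		return "", err
	}
	host = strings.ToLower(host)
	if nodeNameFromAPI != "" && nodeNameFromAPI != host {
		// This is the scenario discussed above: on clouds like AWS the
		// Node object's name (e.g. the private DNS name) need not match
		// the OS hostname, and the proxy should notice instead of
		// silently picking the wrong node.
		fmt.Printf("hostname %q != node API object name %q; using node name\n",
			host, nodeNameFromAPI)
		return nodeNameFromAPI, nil
	}
	return host, nil
}

func main() {
	name, err := effectiveNodeName("", "ip-10-0-0-12.ec2.internal")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("using node name:", name)
}
```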
A
Nobody has had time to review it and get it over the line, but the code that's actually being written, the logic's there already anyway. We'll take the rest of that offline and get back to the SIG next time with what the conclusions are, and we'll try to talk to SIG Networking about fixing the auto-detection, or fixing the flag precedence, with Mike, because he knows what to do there. And yeah, then the general problem is component config in general. But yeah, what I started
A
thinking about the other day here was that if we had versioned component config for every component, all the control plane components could just have the configz endpoint, and any e2e test or whatever could just check with the API server, check with the controller manager and the scheduler: do they support this kind of feature? Is it enabled?
A
Is the such-and-such configuration in the cluster? Because right now we don't have any way to query that. If we had it, it would be super easy to have configz endpoints for everything, authenticated and authorized of course, but still, that would be a huge improvement for the general ecosystem as well, and might be a benefit we could try to sell to people.
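For reference, the kubelet already exposes an endpoint of this kind; a small sketch of querying it through the API server's node proxy, assuming `kubectl proxy` is running on 127.0.0.1:8001 and using a made-up node name.

```go
// Sketch: fetch a kubelet's live config via the node proxy subresource.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	node := "ip-10-0-0-12.ec2.internal" // hypothetical node name
	url := fmt.Sprintf("http://127.0.0.1:8001/api/v1/nodes/%s/proxy/configz", node)

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet's current KubeletConfiguration, as JSON.
	fmt.Println(string(body))
}
```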
B
Why don't we take this meta discussion to a different topic? I mean, like, I had another thought, maybe in planning for the next release cycle, because the configz endpoint had a bunch of people needling it to death over time, and it was even ripped out; it was ripped out of the API server. So there did exist a configz endpoint for everything, and they got pruned back and it's no longer useful. But yes.
B
Hopefully a test plan that folks can enumerate. I think it'd be helpful to get broader feedback on things that don't currently have test coverage that we want to have test coverage for this cycle, and eventually we'll try to get those into automation. But I think it'd just be useful for folks to be able to solicit feedback, or to get their ideas into at least a spreadsheet of some kind or a doc of some kind, for us to be able to start getting a little more formalized around a testing regimen.
A
So we recently updated our owners files. I checked the other day, and I don't know if it's still true, but I realized that many of the people we added are not in our GitHub teams, so we should add them. I could be a maintainer for the teams themselves, so I could add them, but I'm not; I think it's Tim and robots for the moment, not the SIG leads.
E
Kops is going great. The big thing that we're working on is the etcd manager, and we finally got an etcd2-to-etcd3 upgrade of HA clusters to work, which basically unblocks kops clusters from moving to etcd3, and, like, cluster resizes. So yes, that's the sort of big unblocker in kops land, and then I hope to bring that to this group when it's a little bit more stable, in the hope that we can, you know, look at adopting it for all clusters.
E
It does bring our sort of nicer recovery, and the ability to resize the cluster; from one to three is sort of the canonical one that most people seem to want to do. And to do minor version upgrades, if there ever are sort of sequencing-type issues, and backups and restores, those sorts of things. But that was the big blocker in kops, and we have hopefully overcome that now and will be proceeding.
E
To do it, but certainly, everyone at that stage will have had a whole release where everyone will be using the etcd manager, when following, like, the kops release channel: using that etcd manager in the previous releases. So there really should be no reason, hopefully.
E
So kops has an etcd manager built into it, and we're splitting it out into a repo called kopeio/etcd-manager, and I'm actually working right now on getting that to be an associated project, I think it's called, which is like a shared-ownership type of thing. That's actually not the most... I don't know.
E
If anyone else has tried that: it is not the most trivial thing; it's not the trivial thing I imagined it to be. So first of all I'll hopefully get that working, and then hopefully improve the process there, and then hopefully we can move this, if people want to; we can maybe move it under the community SIGs. If this SIG is interested in adopting that project, that would be wonderful, but I haven't proposed that yet; that's on me.
B
So the original instructions, because I helped write them, were tuned to: upgrade the one, wipe out the data for the other ones, to make sure that when you bring the new ones online they sync with the main new master that's been upgraded. Then, as you add new members back, it would percolate the data across to the new hosts, yeah.
E
We want to, and probably pretty soon actually, but no firm date; it's more sort of a when-it's-ready. But we were really blocking on the etcd manager, which isn't actually going to be required yet. So it's basically ready, and we'll probably do that this week or next week, sort of thing.
E
A lot about add-ons and how that's going to work in kops; haven't really figured out anything yet, other than that I don't think there's anything majorly annoying. We're going to look at adopting the machines API, which is going to be awesome, I think. At least for the nodes; the masters are going to be trickier, but getting the machines API going for the nodes, I think, will be an interesting adoption.
B
I think the PSA regarding this, because it overlaps with the cluster API stuff, that work that you were mentioning, Lucas: we wanted to eventually get the kubeadm cluster configuration to match the cluster configuration of the cluster API. That's kind of a long-term objective that overlaps with some of this stuff.
A
There we go; we can find it in the meeting notes. And the kubeadm version won't be the only one anyway. So basically, eventually we want the kubeadm, the cluster part of kubeadm, to match the cluster API. I attended a meeting last week, and we came to the conclusion that at least initially kubeadm will have more knobs and be a superset of the cluster API, which right now has very few options to be set.
A
One thing that isn't clear to me yet is how we wire component configuration into the cluster API, because that's going to be a really interesting problem, and that is also the same kind of thing we were facing with kubeadm. We don't have many knobs ourselves; we have a few, and those will probably be migrated to component config once that comes. Other things are, like, the Docker image to pull, or...
A
the Kubernetes version to use for the API server, all stuff like that. So it's pretty thin, but still more than what the cluster API has currently. Feature gates are another thing. And then we haven't figured out yet how to map kubeadm HA masters into the configuration; that is still to be discussed, and I hope we could have some insight from, for example, kops on that side, like on how to deal with the configuration there.
A
The only thing that can't be shared is, like, the service account key, and we have to find a way to manage the different CA keys as well, stuff like that. How do we wire such things into the API without bloating the config? That is to be discussed. And then, when we have the small set of knobs we absolutely need: how do we let the user use the component configuration that is eventually going to come, to specify exactly these are the things I want?
E
Yeah, yeah. I tell you, a really tricky case is, like: how do you allow a user override of one flag? It's sort of: how do you allow a user override, and not lose the user overrides, when a new version of Kubernetes needs a different set of flags, right?
E
How do you effectively merge those changes in? This same problem applies to add-ons as well, add-ons being the things that are cluster-managed, but it's generally an across-the-board problem, and I think kustomize, which was formerly called kinflate, is a great solution, but it's not clear exactly how that fits in. So those are sort of the... yeah, I don't have any answers, but those are what I'm trying to grapple with as well.
B
All right, just a general question, because you mentioned it, now that you've switched gears a little bit: is there any plan at all to focus on the add-ons? Because we've talked about add-ons v2 for, like, over a year plus. Jerry, are you going to have bandwidth cycles to execute on pieces?
E
Yes.
A
Okay, yeah, cool, thanks for letting us know that. Yeah, so the promise of component configuration versus flags is that it gets way easier to manage this, because we can use automatic version upgrades and conversions that are done seamlessly by the API machinery. So that is partially going to solve it, but still, it's going to be a problem. And for us, for example: kubeadm has a minimum bar of security, so on some fields we do enforce stuff.
A
So we enforce the authorization modes to be at least Node and RBAC, because if you don't do that, the cluster is effectively going to break. So that's why we're not letting the user shoot themselves in the foot. They can still get around it; this is, like, at kubeadm init time, and you can still, if you really, really, really want to, go and edit the static pod manifest, and there you go. But for such things, how do you find the line between what we enforce in the component that is owning the thing, and what overrides we let the user make?

So there's going to be a lot of overlap when talking about these cluster configuration changes between kops, the cluster API and kubeadm, because we want all of these, and the add-ons, and the rest of the SIGs that are doing component config, aligned. So we need to find some kind of action plan, because that's a lot of material and a lot of communication and a lot of discussion overall.
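A sketch of the minimum-bar enforcement mentioned above (requiring at least the Node and RBAC authorization modes); this is illustrative validation logic, not kubeadm's actual code.

```go
// Sketch: enforce a minimum security bar on authorization modes.
package main

import (
	"fmt"
	"log"
)

// validateAuthorizationModes rejects configurations that drop either of
// the two modes the cluster needs to function safely.
func validateAuthorizationModes(modes []string) error {
	required := map[string]bool{"Node": false, "RBAC": false}
	for _, m := range modes {
		if _, ok := required[m]; ok {
			required[m] = true
		}
	}
	for mode, present := range required {
		if !present {
			return fmt.Errorf("authorization mode %q is required", mode)
		}
	}
	return nil
}

func main() {
	// Extra modes beyond the required two are allowed.
	if err := validateAuthorizationModes([]string{"Node", "RBAC", "Webhook"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("authorization modes OK")
}
```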
E
Just... yeah, I don't want to... it is a tricky problem. If you allow users to change things, do they change it through, like, kubeadm or the machines API? Yes: do they change it through the installation tooling, or do they change it online in their cluster itself? And if they do the latter, how do we track it back and preserve that change, if we should preserve that change?
E
So, for example, suppose we introduce a new, better auth mode, right, like ACLs. So previously, even before that, the user opted into RBAC with an override flag. Now ACLs are better: do we, like, trump the RBAC thing, or do we keep the RBAC thing because the user explicitly specified it? And in kubeadm you don't want to reject anything that isn't RBAC, or anything else, because there's a new mode that you don't yet know about.
A
We'll see, I guess. I mean, kubeadm is only ever going to execute locally. So even with this, if you want, like, this GitOps-style controller or operator pattern or whatever, we have to have something like the cluster API that reconciles. So we could have, like, ConfigMaps or whatever that are the source of truth for component config or the cluster config, or Machines via CRDs or whatever object-type configuration you have; then we have this reconcile.
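A toy sketch of that reconcile pattern, with illustrative types: some object in the cluster is the source of truth for the desired state, and a loop drives the actual state toward it. A real controller would watch the API server rather than poll.

```go
// Sketch of a reconcile loop; DesiredState/ActualState are assumptions.
package main

import (
	"fmt"
	"time"
)

type DesiredState struct{ KubernetesVersion string }
type ActualState struct{ KubernetesVersion string }

// reconcile compares desired vs actual state and returns the actions
// needed to converge them.
func reconcile(desired DesiredState, actual ActualState) []string {
	var actions []string
	if desired.KubernetesVersion != actual.KubernetesVersion {
		actions = append(actions,
			fmt.Sprintf("upgrade control plane %s -> %s",
				actual.KubernetesVersion, desired.KubernetesVersion))
	}
	return actions
}

func main() {
	desired := DesiredState{KubernetesVersion: "v1.11.0"}
	actual := ActualState{KubernetesVersion: "v1.10.3"}
	for i := 0; i < 3; i++ {
		for _, a := range reconcile(desired, actual) {
			fmt.Println("action:", a)
			actual.KubernetesVersion = desired.KubernetesVersion // pretend it succeeded
		}
		time.Sleep(10 * time.Millisecond) // stand-in for a watch/resync interval
	}
}
```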
A
Cool. And I have a few highlights of what needs to be done in the 1.11 timeframe still; I'm going to add notes from the discussion we just had soon. But: node registration options. Are we happy with the name, or do we want to change it? Fabrizio and I have been talking about whether it's the right naming. We're basically hoisting these common flags that, when kubeadm init or join executes, are going to yield the node API object; it's like this bootstrap struct, with the name, taints and extra args passed to the kubelet.
A
Yeah, but then, it's like, the CRI socket is still going to be kind of an exception. I mean, it's used in the sense that it's propagated to the kubelet in the registration phase, but yeah, it's still used in reset and join, and then I think we're going to store it in the node object as an annotation or whatever. But generally, these are for initialization.
A
So Fabrizio pointed out that he kind of interprets a machine in the machines API as some kind of thing, be it a virtual machine or bare metal or whatever: a computer running a kubelet, or a computer that is about to run a kubelet, which is essentially the same as the node registration thing. So should we call it that? What I mean is, we do have a lot in common there.
A
Yeah, sorry. So when running kubeadm init and join, we need parameters like: what is the node name, if it's something other than the hostname; what is the taint to register the kubelet with, because kubelets can only register themselves with taints once, they can't update their own taints afterwards; what is the CRI socket to use for the kubelet; and what are some instance-specific extra arguments, extra flags, to pass to the kubelet?
A
This is all what I'm wrapping in a struct, and now we're wondering what we should call this struct in the master and node configuration. Should it be node configuration options, node registration options, node whatever, or something like machine? Because it has some overlap with the machines API fields, and there it's called MachineSpec. And Fabrizio said that, basically, we could interpret a machine as a computer that is about to run a kubelet and is going to turn into a node API object. Was that a good recap of the discussion?
J
The cluster API, if you want, if we feel comfortable anticipating some steps now: we can call it machine-something, MachineSpec, because already there is a 50% overlap of fields. If we are not comfortable, let's take a name that does not create confusion, and we will see in the next release of the API.
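For reference, a sketch of the struct under discussion, modeled on the shape NodeRegistrationOptions took in kubeadm's v1alpha2 API; the exact fields here should be treated as illustrative rather than authoritative.

```go
// Sketch of the node registration struct being named in this discussion.
package main

import "fmt"

// Taint mirrors the core/v1 taint applied at registration time.
type Taint struct {
	Key    string
	Value  string
	Effect string // e.g. "NoSchedule"
}

// NodeRegistrationOptions holds the instance-specific knobs kubeadm
// init/join need when creating the Node API object for this machine.
type NodeRegistrationOptions struct {
	// Name is the node name; defaults to the hostname if empty.
	Name string
	// CRISocket is the CRI endpoint the kubelet should talk to.
	CRISocket string
	// Taints are applied once at registration; the kubelet cannot
	// update its own taints afterwards.
	Taints []Taint
	// KubeletExtraArgs are instance-specific extra flags for the kubelet.
	KubeletExtraArgs map[string]string
}

func main() {
	opts := NodeRegistrationOptions{
		Name:      "ip-10-0-0-12.ec2.internal",
		CRISocket: "/var/run/dockershim.sock",
		Taints: []Taint{
			{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"},
		},
		KubeletExtraArgs: map[string]string{
			"hostname-override": "ip-10-0-0-12.ec2.internal",
		},
	}
	fmt.Printf("%+v\n", opts)
}
```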
A
So what I meant is... let's say, okay: this is something that is specific to every node running in the cluster, basically. So if we have n nodes, those n can have different CRI sockets, and hence it needs to be node-specific in some way. And these options are only used at kubeadm init or kubeadm join execution time.
A
We only know about this then. So my question is: at the point when we still know the CRI socket for this given node that we're creating, should we upload that information to the cluster somewhere, like the node API object that just got created in the cluster? Should we say, hey, just for your information and for people in the future, this node is using this CRI socket?
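A sketch of the annotation idea, using an annotation key along the lines of the one kubeadm later adopted (kubeadm.alpha.kubernetes.io/cri-socket); both the key and the plumbing here are illustrative, not a claim about what was implemented at the time.

```go
// Sketch: record the CRI socket on the Node object as an annotation.
package main

import "fmt"

const criSocketAnnotation = "kubeadm.alpha.kubernetes.io/cri-socket"

// annotateNode fills in the annotation map; a real implementation would
// patch node.ObjectMeta.Annotations on the Node API object at join time.
func annotateNode(annotations map[string]string, criSocket string) {
	annotations[criSocketAnnotation] = criSocket
}

func main() {
	node := map[string]string{} // stand-in for node.ObjectMeta.Annotations
	annotateNode(node, "/var/run/dockershim.sock")
	fmt.Println(node)
}
```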
B
It's a creation of debt, though, and the question is, like, we'd have to make sure in the next cycle it turned into a field in the node status object. ConfigMaps would be a little weird, because then you'd have separate ConfigMaps for every single node, where you could potentially have, like, a weird mixed cluster, because you're crazy, like if you wanted to test all the different CRI implementations. That's an edge case, totally an edge case, but having it relate directly to the node...
A
Okay, all that is good. So we'll take the node registration options naming tomorrow, and we're okay with using the annotation meanwhile; it's like, yes, we have to keep working, but yeah. Then the config migrate thing. So, just quickly: it's only needed for people feeding kubeadm a config file that is old and converting it to the new API version. So it's just the API machinery, a filter over it: I have my old config; I want it to be converted into the new config.
A
It's
never
going
to
touch
the
faster.
It's
all
it's!
It's
like
a
unix
pipe
kind
of
thing.
So
with
that
in
mind,
is
it
okay?
Cuz
like
this,
is
basically
it's
just
extracting
a
little
piece
that
cuban
and
does,
every
time
it
reads
a
file,
but
just
exposing
that
to
the
user
without
doing
anything
else,
yeah
by
old
I
mean
like
conflict
from.
So
we
have
we're.
Introducing
a
new
API
version
in
version
1
or
2
in
111
and
by
old
I
mean
everything
before
that's
or
like.
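A sketch of that unix-pipe shape: read an old versioned config on stdin, write the new version on stdout, never touching the cluster. The conversion below is a stub that just bumps the apiVersion line; the real implementation decodes into internal types and re-encodes at the target version (this surfaced as a subcommand along the lines of `kubeadm config migrate`).

```go
// Sketch of a config-migrate pipe; the conversion itself is stubbed.
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

func migrate(in io.Reader, out io.Writer) error {
	data, err := io.ReadAll(in)
	if err != nil {
		return err
	}
	// Stub conversion: rewrite the apiVersion line. A real tool would
	// run the API machinery's scheme conversions instead.
	converted := strings.Replace(string(data),
		"apiVersion: kubeadm.k8s.io/v1alpha1",
		"apiVersion: kubeadm.k8s.io/v1alpha2", 1)
	w := bufio.NewWriter(out)
	defer w.Flush()
	_, err = w.WriteString(converted)
	return err
}

func main() {
	// Usage (pipe style): migrate < old-config.yaml > new-config.yaml
	if err := migrate(os.Stdin, os.Stdout); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```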
A
Just talking about this kind of unix-pipe thing: I feed an old config to kubeadm, and it's going to write the same representation in the new version. So this is for people that want to upgrade the YAML files they have checked into git themselves, without creating a new kubeadm cluster somewhere, like just spinning up a VM to run kubeadm init on, so they can see the output of what would be stored in the cluster now based on their old file. Well...
B
That use case wasn't spelled out in the original PR, but I support that use case. If you have an independent, you know, GitOps type of flow for your actual cluster creation, then having the ability to version that file independently, without you having to go to the cluster to do a diff, is a useful operation.