A: Hi, welcome to the Component Standard breakout meeting for discussing component config problems, issues, and evolutions. This is intended as a collaborative meeting between the Component Standard working group and API Machinery. The agenda we would like to discuss today is in an ad hoc meeting document that's been linked.
C: Can you see the doc? Yep, okay. So the first question here is: where is the place we would like to store component configs for different components? The idea is that many components would like to store their configs in ConfigMaps, but we don't want to force that onto components; we would like some softer solution.
C: The idea is to just have a file from the component's perspective, and whether this file is actually stored in a ConfigMap and mounted as a ConfigMap volume source, or whether it is pulled from somewhere else and put together by an init container or something like that, is basically the choice of the deployment.
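
A minimal sketch (in Go, using the Kubernetes core API types) of the pattern C describes: the component only sees a file path behind its config flag, while the pod spec decides that the file is materialized from a ConfigMap volume. The names, paths, and image tag here are illustrative assumptions, not anything prescribed in the meeting.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "kube-proxy",
				Image: "k8s.gcr.io/kube-proxy:v1.16.0", // illustrative tag
				// The component only sees a plain file path:
				Command: []string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/var/lib/kube-proxy",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				// The deployment's choice: here the file comes from a
				// ConfigMap; an init container could write the same path
				// instead, without the component noticing the difference.
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}
```
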
C: We need to figure out what we should do in this case, and whether we want to force a consistent policy on all the components or basically allow every component to go its own way.
A: So where we're at currently, as far as I'm aware, is that we ask components to provide a config flag that accepts a file name. This is compatible, when a component is hosted in a Kubernetes cluster, with the ConfigMap strategy, and then we've got some additional magic happening with other components that are not hosted by Kubernetes being able to fetch from the API.
C: Well, currently I'm aware of, with kubeadm, kube-proxy and the kubelet, but it's done in two different ways, although both configs are actually stored in ConfigMaps. So we should probably also agree on the fact that component configs should be some sort of opaque objects from the perspective of the API server, so they're simply stored as a blob of information inside of a ConfigMap.
A: I don't think that it's the minimum, or that it's the simplest object.
A: I think that it should be possible to run Kubernetes without using Kubernetes hosting services, but you might argue that the config should be introspectable. We've had issues in the past with people trying to determine, say, what subnet a cluster is running in, in order to configure a networking add-on, and not being able to introspect the API server config makes it really difficult to know what that value is just from the primitives inside the cluster.
A: So I guess that's running from files outside the cluster and not having the cluster be able to know.
A: Yeah, I mean, can we answer the question of whether config should always be introspectable?
A: Because if you look at other tools in the space: Prometheus has a config endpoint where you can see what the running config is, Docker has docker info, and with containerd you can inspect how it's running. So it makes sense that with some level of access you should be able to know, but you may need to query the component itself and not just the API.
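
The Prometheus example is real: it serves its loaded configuration at /api/v1/status/config. A minimal sketch of querying a component for its running config, assuming a locally reachable instance on the default port:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Prometheus exposes its currently loaded config over its HTTP API.
	resp, err := http.Get("http://localhost:9090/api/v1/status/config")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON envelope wrapping the running YAML config
}
```
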
C: I'm not aware of an endpoint that you can actually query, but the unwritten rule is to allow components to get their config via a --config option, and probably you could somehow query that one.
A: And then, as far as when an introspection endpoint would be built for every component, that's a whole other question. I don't believe that we have those right now; the only form of introspection we have is the canonical places where we store configs in ConfigMaps.
A: Kubeadm does something interesting in that it's storing multiple versions of the kubelet config in different ConfigMaps, if I'm not mistaken. So there's even that. I think we have listed here that the component name would be the key, but version, at a minimum, is at least another key, and then if people are running components for different reasons you may even need additional keys.
C: Yeah, I think that this is actually probably going to go away at some point in the future, simply because it gives us some headaches with regards to following the Kubernetes skew policy closely enough. I think that one actually has much more information and details on this.
E: So I have a few comments about this question in general. If you want to store a component config in the cluster in a ConfigMap, my understanding here is that you use the cluster as a medium to transfer the component config from one node to another, or this is a component config that applies to all the nodes, so it has to be shared: some sort of a base configuration for all the components that run on the nodes.
E: If you look at the kubelet, this is one of these components. But then, if you want to apply customizations on top of this base component config, it starts becoming a case where the config cannot be in the cluster, because it's no longer shared; it's customization per node. I'm not going to talk about the kubelet details, but in general it feels to me like, if we want to store a component config in the cluster in a ConfigMap, the recommendation should be that this is something that is shared and should work for all the components that are running on different nodes.
E: And then there has to be another layer, like the current plan for kube-proxy, where you have a separate configuration object that is applied on top of the base configuration. I'm not sure if everybody has seen the proposal for kube-proxy.
E: But I feel this is the right thing to do; I don't think we should limit the users. If it's old, it can try to convert it; if it's too old, it can error out, which I think should be expected, instead of versioning and then the deployer managing this.
A: I have some questions, and I think some fundamental disagreements in understanding of the problem space. If there's only one ConfigMap allowed in the cluster, then it makes it very difficult to do a proper rolling upgrade of components if there are multiple of them. The kubelet is a great example.
A: A rolling upgrade should always allow you to store both versions of the config, intended to be used explicitly. They should never be implicitly loaded just because components started at a certain time, and that doesn't even need to be a different version of the kubelet config API or different versions of kubelets.
A: I'm sorry, I'm having a hard time hearing; I can't even tell who's speaking.
F: Okay, yeah, I don't know if something on my mic is messed up; it's just my Mac. So yeah, I don't know what it is. But yeah, so kubectl has an option called --append-hash that'll append a short hash to the end of the name of the ConfigMap you're creating.
F: I think kustomize even supports that, or has its own implementation of the same thing.
F: So if you are changing the data that goes into a ConfigMap, you'll get a new ConfigMap with a new name, and then you can roll that new one out by changing the reference on a Deployment, or, if it's the kubelet config, the reference in the Node object or whatever. That's pretty streamlined to do it that way. So we're really not forcing anyone to change ConfigMaps in place.
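
The --append-hash flag on kubectl create configmap is real; a hedged sketch of the underlying idea (not kubectl's exact hash encoding): derive the ConfigMap's name from its contents, so changed data yields a new, differently named object that can be rolled out by updating references rather than edited in place.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// hashedName derives a ConfigMap name from its data so that any change to
// the data produces a new name.
func hashedName(base string, data map[string]string) string {
	h := sha256.New()
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys) // the hash must not depend on map iteration order
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s\n", k, data[k])
	}
	return base + "-" + hex.EncodeToString(h.Sum(nil))[:10]
}

func main() {
	fmt.Println(hashedName("kube-proxy", map[string]string{
		"config.conf": "mode: iptables\n",
	})) // e.g. kube-proxy-3f7c9a1b02
}
```
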
C: So I think we actually crashed into the second question, which is about the migration of different component configs and their versions.
C: So here is another question: whether we should allow this to be done solely by the component itself, like upgrading its own ConfigMap when a version is changed in the config, or whether we should ask cluster provisioning tools to do that for us.
F: Yeah, so I think there are actually a couple of sub-questions here. One is: who's responsible for the API? Well, the component is responsible for its API, full stop. So the component has to do something.
F: The component owners are going to have to do some amount of work to allow those conversions to happen. Now, I think the second question is: it sounds like there's a common piece here on some method of up-converting and writing back out, or querying a component for what the data in the old version would look like in the new version, or something like that, and maybe that's something that we can reach common ground on and implement as a common tool.
F: So I want to separate those two: the framework for up-converting is maybe something common, but the responsibility for making sure the conversions for your component's API are correct and implemented is down to the owners of that component.
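
A hedged sketch of what such a common up-conversion framework could look like, assuming a component registers its config types and conversions in a runtime.Scheme the way the in-tree component config packages do; this is a sketch, not an existing kubeadm API.

```go
package configconvert

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/runtime/serializer"
)

// Upconvert re-encodes serialized config bytes at the target group/version.
// The scheme must have the component's versions and conversions registered;
// the component owners still own the correctness of those conversions.
func Upconvert(scheme *runtime.Scheme, in []byte, target schema.GroupVersion) ([]byte, error) {
	codecs := serializer.NewCodecFactory(scheme)
	// Decode whatever version is on disk and convert to the internal type.
	obj, _, err := codecs.UniversalDecoder().Decode(in, nil, nil)
	if err != nil {
		return nil, err
	}
	info, ok := runtime.SerializerInfoForMediaType(codecs.SupportedMediaTypes(), "application/yaml")
	if !ok {
		return nil, fmt.Errorf("yaml serializer not registered")
	}
	// Re-encode at the newer version; the registered conversions do the work.
	return runtime.Encode(codecs.EncoderForVersion(info.Serializer, target), obj)
}
```
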
F: Yeah, that's right. So I think if you read the API rules, and correct me if I'm wrong, Jordan, it basically says you need to support conversions between all of them. That's true for...
F: Maybe. I think when we start thinking about tools like kubeadm that write config out, and whether kubeadm or some other tool is involved in an upgrade, there was a conversation a few weeks ago about whether such a tool should be able to automatically up-convert your config across the version boundary for you, and I think that's...
C: Yeah, generally speaking, in most cases we would actually just be converting already existing options, so nothing that's really dropped out of support. So allowing us to up-convert the component configs for the components that are managed by kubeadm will actually allow us to do a simple upgrade that's just not bothering users to go and manually edit YAML.
C: So in the corner cases where there are options present that are deprecated and being deleted, we can always just fail the upgrade, point the user to that piece of YAML in that component config, and tell them: you need to change that config option or delete it, and make sure not to use it, in order to allow for an automatic upgrade via kubeadm.
G: Take the number of retained ReplicaSets for a Deployment: in earlier versions of Deployments we would just retain infinite numbers of ReplicaSets, and in later versions we said, you know what, that should be an API field, and in newer versions of Deployments let's default it to 10.
G: And if you didn't express opinions in the old config, this automatic conversion would automatically make you carry along those bad defaults that we know we don't want.
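
The Deployment example refers to what is now spec.revisionHistoryLimit, which apps/v1 defaults to 10. A small sketch of the point being made: setting the field explicitly records intent, while leaving it unset means defaulting or conversion decides for you.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	limit := int32(10)
	d := appsv1.Deployment{
		Spec: appsv1.DeploymentSpec{
			// Explicit rather than relying on the version's default, so the
			// intent survives conversion between API versions.
			RevisionHistoryLimit: &limit,
		},
	}
	fmt.Println(*d.Spec.RevisionHistoryLimit)
}
```
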
G: A version boundary typically is intended to be a conscious change: I was writing to this old API, and I could expect the behavior of that old API to be consistent release to release; when I change to the new API, what are the differences between them? It seems like this automatic conversion is trying to hide and mask that, which either means you do something unexpected to users, which is changing behavior silently, or it does what we don't want, which is lose the benefit of the version boundary by hiding it from users.
A: I just want to add some helpful context to this conversation: if you haven't seen it, there's the optional-fields KEP from Justin SB regarding component config. It touches on the topic of conversions a little bit.
A: And you don't know, right: you can remove fields that are equal to the default, but you don't know if the user actually intended even those.
G: The tool is adding a layer between the API and the user, and so in a sense the tool is its own API, right: people are invoking kubeadm in a certain way, with command line flags or overrides or something, and expecting consistent behavior. So kubeadm has become its own API, even though under the covers it's making use of component config APIs.
A: So I do have a proposed solution that I was working toward with that kind of background, which is that you have the ability to do a sophisticated conversion that preserves what the user is explicitly intending in their file, exposed via the component's command line API as an explicit action that's documented as to what it's doing. The other conversion type maybe could also be available through that method, but when an old API is provided to a component, we do the existing automatic conversion, as noted in the document.
F: I think what the question is, like today, and correct me if I'm wrong because I'm not super familiar with kubeadm and I don't use it frequently. However, my impression is that today kubeadm has its config for kubeadm and, as a result of that, it spits out configs for components of Kubernetes. And I think the sort of question is, you know, across upgrades over a long period of time:
F: We have to assume that, in terms of the Kubernetes components, old versions will be eliminated, and what kubeadm was spitting out for those before, it will no longer be able to provide; it'll have to go to the new version. And the question is how to manage that transition from the end user of kubeadm's perspective. So does kubeadm have to be more helpful and say: hey look, we're going to spit out new versions of these configs and defaults changed, do you want the new behavior?
G: And a lot of the reason that we have the requirements around defaults and behaviors staying consistent is that one component in isolation doesn't know how the rest of the cluster is configured, right. So if the API server suddenly said, you know what, every time I talk to a kubelet it has to be secure now, the API server doesn't know if all the kubelets have been configured correctly. But a cluster lifecycle tool that is responsible for setting up the whole cluster can know: as of this version, we have this.
G: We have correctly set up all the kubelets, and so, just pulling numbers out of the air, as of 1.13 all kubelets start up with a cert that's signed by the cluster CA, so as of 1.16 it's safe to flip on the option in the API server to require that level of security. And if the insecure behavior is the kind of thing that is deprecated at a version boundary, then if kubeadm hadn't made that shift yet and it started writing, say, v1 API server config, and the API server was like: nope.
G: You've got to be secure now; that would be a signal to kubeadm.
G: The user doesn't care what the APIs are; they might care, but they don't know that the API server flipped to being secure. They just know: my cluster kept working.
G: So that's more what I expected: the cluster lifecycle tool that has knowledge of the different components, and knows how it is configuring them over time, moves them to stop using deprecated options and start using more secure options, and once it's safe to move all of the pieces to work together and you're not using any more deprecated options, then you can start writing the new v1 config instead of the one that allows those old options.
F: That's a pretty reasonable expectation. How much does kubeadm abstract on top of the sort of low-level knobs of Kubernetes, or does it just expose a lot of low-level bits that'll change across versions?
C: It does expose basically everything. Kube-proxy and the kubelet have their entire component configs exposed, so you can actually supply them, and in addition to that we also support extra args and extra volumes for all the Kubernetes core components. So you can actually supply extra args to the API server and override practically everything that's exposed by the command line. And in addition to that, kubeadm actually does have some knowledge and hardwires a bunch of stuff, and maybe even, in some corner cases, overrides users' supplied options.
G: And over time that alpha config is deprecated and the beta and v1 versions become available, and if the user has taken it upon themselves to customize this install and provide the underlying config files, then they should be responsible for updating those, being aware of the changes between them. The user doesn't have to provide those, right: kubeadm provides reasonable defaults if you just say, give me a cluster.
C: Yeah, that's pretty much what the situation is right now. So we basically want to cover the case where users actually don't care and we generate the component configs for the components. The most interesting case here is, for example, we have a scheduler config that's at the v1alpha1 level and we actually want to bump it to, say, the v1beta1 level; how do we proceed with this?
C: Do we actually, like, is it kubeadm that stores the logic inside of it for bumping each and every field inside of this config, or is it invoking somehow either the scheduler itself or some other component to do that automatically, basically outsourcing the logic from Kubernetes? How...
G: So it is generating a config file today, right? Yeah, okay. I would expect to wait until the oldest scheduler you support also supports the new version of the config, and then just switch your generation to generate the new version of the config.
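
A sketch of the rule G describes, with made-up version numbers and API group strings for illustration: the generator keeps emitting the old config version until the oldest component version it still deploys understands the new one. This is not kubeadm's actual logic.

```go
package main

import "fmt"

// schedulerConfigVersion picks the config version to generate based on the
// oldest scheduler version the tool still supports. Hypothetical cutoff:
// suppose v1beta1 config is understood only by schedulers >= 1.16.
func schedulerConfigVersion(oldestSupported string) string {
	// Simplistic lexical comparison; fine for fixed-width versions like
	// "1.14"/"1.16", a real tool would parse and compare semantic versions.
	if oldestSupported >= "1.16" {
		return "kubescheduler.config.k8s.io/v1beta1"
	}
	return "kubescheduler.config.k8s.io/v1alpha1"
}

func main() {
	fmt.Println(schedulerConfigVersion("1.14")) // still generates v1alpha1
	fmt.Println(schedulerConfigVersion("1.16")) // safe to switch to v1beta1
}
```
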
C: So basically, now we'll simply try to preserve it, or, if it matches kind by kind, it will actually get automatically upgraded.
C: Well, in the case that the user is not interested, it's probably going to be kubeadm. So if you have initialized a cluster via running kubeadm init and then kubeadm join on a bunch of machines, and you haven't supplied any config, so you basically just ran kubeadm init and kubeadm join, you then upgrade to the next version via kubeadm upgrade.
C: It will actually try to apply them. This is currently a somewhat hacky method of allowing users to modify their cluster: it basically uses the kubeadm upgrade logic to propagate new config values.
G: On one end: I know exactly what I want, just push this out. And on the other end of the spectrum is just, kubeadm, install, like, I don't know what I want, give me good defaults, wire things up for me. Those are the two ends of the spectrum, and both of those are inputs, and the output should be the config that the component consumes. So if upgrade is looking at the config in place and trying to do things to it, that seems like it's confusing the inputs and the outputs. It makes it so that if the user invokes kubeadm and gives new options, it's hard to know: should we stomp the config
G: that's there and regenerate it, or should we try to patch these things into an existing config? Separating inputs and outputs really clearly, and saying the output is owned by kubeadm and the inputs are owned by the user. For the power user that fully provides a config file, that might just be a pass-through: kubeadm takes it and stomps it onto the output. For the simplistic user that just says, give me good defaults,
G: kubeadm says: okay, I know how to build something that will work with all my other defaults, and it stomps the output. But inspecting the thing and trying to patch it in place is not going to work well, especially across version boundaries and defaults.
G: Yeah, it makes it hard to handle cases like: the thing in place is corrupted or messed up, and it's like, is it really messed up, or is this just a fancy user-provided thing that we don't understand, where by fixing it we're going to destroy their data? So: inputs and outputs, with kubeadm owning the outputs and being free to stomp them, or regenerate them, or generate a new file and roll out the new file, however the rollout happens.
G: As long as you're not stomping ConfigMaps; if it goes into a new ConfigMap, so that only the version-matched components are paying attention to it, then yeah, you're right.
G: In cases where you might need to generate configs for multiple versions of a component, like if kubeadm 1.16 can deploy 1.16 or 1.15 or 1.14 kubelets, then it might be simpler to only have one copy of the generation code and be like, well...
E: And if it's unique, how are you going to permit joining nodes to be able to obtain access to this unique ConfigMap?
F: I see. That's, I think, the question, Jordan, too: if you're not using dynamic config, how do you get it to the node?
A: I mean, as an operator I would not be concerned with old versions of my ConfigMaps for components sitting around in the cluster, as long as they're clearly named by their timestamp, hash, and version. I can see: oh, those are old API versions that were used at this timestamp with this hash.
A: Yeah, I'm not suggesting that the timestamp be the only thing; it's just that you put both, because they both are valuable pieces of information: one is when the config was used, and one is what's unique about it.
C: Yeah, I think we have, and there is probably some room for, the last question, which is basically exposing some sort of validation for component configs. Basically, kubeadm allows users to do sort of a basic component config validation on what it is generating, and this is probably going to go away if we move out of tree.
C: Since we can't actually use the internal types of the different component configs, we should probably investigate whether we should allow for some sort of command line option, or a piece of code that we can actually use inside of kubeadm, to basically validate component configs, or simply just prepare everything, just start the static pods and see if it's coming up or it's crashing.
G: Yeah, there are like three levels of validation. There's kind of the direct syntax validation, like did you put a negative one in a field that should only be positive numbers or something. Then there's referential validation: if you reference, say, a TLS cert and key, do those files actually exist, and do they load and parse correctly?
G: And then the third level is environmental validation: if you start the kubelet on a machine with swap and you don't have the tolerate-swap option turned on, that's going to fail. The first two can be done on other platforms, not in the actual target environment.
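
A sketch of the three levels as they might look in code; the config type and checks are illustrative assumptions, not the kubelet's actual validation API.

```go
package validation

import (
	"crypto/tls"
	"fmt"
)

// KubeletishConfig is a hypothetical config type for illustration only.
type KubeletishConfig struct {
	Port        int32
	TLSCertFile string
	TLSKeyFile  string
	FailSwapOn  bool
}

// Level 1: syntax checks that need nothing but the decoded object.
func ValidateSyntax(c *KubeletishConfig) error {
	if c.Port <= 0 {
		return fmt.Errorf("port must be positive, got %d", c.Port)
	}
	return nil
}

// Level 2: referential checks; do referenced files exist and parse?
func ValidateReferences(c *KubeletishConfig) error {
	if _, err := tls.LoadX509KeyPair(c.TLSCertFile, c.TLSKeyFile); err != nil {
		return fmt.Errorf("tls cert/key: %v", err)
	}
	return nil
}

// Level 3: environmental checks, only answerable on the target host
// (the swap example from the discussion).
func ValidateEnvironment(c *KubeletishConfig, swapEnabled bool) error {
	if c.FailSwapOn && swapEnabled {
		return fmt.Errorf("swap is enabled but the config requires it off")
	}
	return nil
}
```
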
G: If some of the user input is folded into the config, so a user said, I want this, that, and the other, and then that got generated into a config file, it'd be nice to be able to say: is this valid, or did the user give me data that produced an impossible config that the component's going to barf on?
F: Yeah, I think it's valuable. I think the question is: how do we do it? Because today there's sort of a policy that we don't export internal types from the tree. If kubeadm moves out of tree, validation code today runs on internal types, so how does it continue to consume the source of truth for validation for a given config? There's also a question
G: of drift: if validation changes. We ran into this with the kubelet: the kubelet used to run pods through API validation before it would run them, and if it was talking to a different version of the API server than it expected, the API server could give it pods that the kubelet's validation would reject.
F: That's the case where you have drift, right, because your kubeadm version may not have consumed the latest kubelet's validation code at the point that it's trying to deploy it.
A: We already do it in kubeadm, because it's such an important part of the UX, yeah.
F: ...that we just talked about for the last half hour; I don't want to encourage people to build on top of that yet.
A: Just a quick litmus: we have other agenda items that we did not get to. Do we want to discuss these at a later point?
A: Okay, cool. Well, yeah, we'll publish this. Thanks everyone for joining. Okay.