From YouTube: WG Component Standard 20200114
A: All right, all right, good morning, everyone. Welcome to the Tuesday, January 14, 2020 Working Group Component Standard meeting. I'm going to share my screen quick here. So we have the agenda.
A: All right, so let's get started. Does anyone know who added the first item? Somebody wanted to talk about that.
A: Okay, I'll take it for now, and then if somebody comes in and says "I wanted to talk about that," we can pick it up. So I guess there's this thread on Twitter sort of complaining about not being able to find enough documentation on Kubernetes configurations. I think specifically involving kubeadm and some of those component configs, but also maybe just in general.
A: So I think that's something we should address. I think in general, our sort of API reference documentation for component config has not been particularly easy to find or accessible, since it's mostly just "look at the code for that Kubernetes version." We've talked in the past about generating OpenAPI reference documentation, or generating Kubernetes reference docs for component config, but we've never done it, mostly because most of the APIs are alpha.
A: It's probably time to start looking at that. Also, it looks like Lee added some links to different tools that can generate those kinds of docs, and said, sort of, maybe we should go ahead and include that with component-base or something, or just have a common tool that we're using for all the component configs.
A: So I think that's a good idea. So if folks want to work on that kind of config generation stuff or documentation generation stuff, that's good. I know pj vgf had expressed some interest last year in working on that. I don't know how much progress got made, so I'll follow up with them. And then I had another note, which is: we do have these sort of config docs pages. So if you go here, there's links to, supposedly, the config for each of these components.
A
But
when
you
go
there,
they
only
report
the
command
line
options
and
we
should
definitely
once
we've
gone
to
beta
with
a
component
config
be
reporting
that
config
struct
here.
How
are
these
generated
and
all
that
populated?
I
think
this
is
just
the
like
generated
from
the
help
text
from
cobra.
Basically
right
now:
okay,
there's
a
there's
a
tool
somewhere.
A
It
might
be
in
the
docs
repo
that
links
all
the
kubernetes
components
and
then
extracts
it.
But
I
don't
it's
been
years
since
I
saw
that
so.
C: Oh, can you hear me now? Yeah, cool. Sometimes I cover it. Yeah, so I mean, there's that generated help page, and then Ben and I were chatting about whether or not the docs generally should live centrally somewhere, like in, you know, a Kubernetes repo, or even in the component repo.
C: Since, you know, components need doc generation, it's just a matter of whether or not those things should be imported when that repo is vendored.
A: For the external types, right. So I'm wondering, because there's definitely a docs generation tool for the core APIs that come bundled with Kubernetes. I'm assuming you just, like, construct a scheme and... well, it needs to read the comments out of the files, because it reads the doc comments out of the files. So I'm imagining it must just scan for them, because those APIs are all over the place.
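As an aside, the "scan the files for doc comments" idea can be sketched in a few lines. This is purely illustrative: the real Kubernetes reference-doc tooling parses Go ASTs, and the regex, sample, and function name here are invented for the sketch:

```python
import re

# Pull the doc comment immediately above each exported Go struct
# declaration, the way a naive reference-doc generator might.
TYPE_DECL = re.compile(
    r"((?:^//[^\n]*\n)+)"       # one or more // comment lines...
    r"^type\s+(\w+)\s+struct",  # ...directly above a struct declaration
    re.MULTILINE,
)

def extract_type_docs(go_source: str) -> dict:
    """Map each struct name to its joined doc-comment text."""
    docs = {}
    for comment, name in TYPE_DECL.findall(go_source):
        text = " ".join(
            line.lstrip("/ ").rstrip() for line in comment.strip().splitlines()
        )
        docs[name] = text
    return docs

sample = """\
// KubeletConfiguration contains the configuration for the Kubelet.
type KubeletConfiguration struct {
}
"""
print(extract_type_docs(sample))
```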
C: Yeah, I mean, if you were to construct a vendor directory, you know, then it should work.
C: But yeah, that is a really good problem to tackle, because that previous thread that I linked was basically about, like, where the heck is this ClusterConfiguration struct in kubeadm, and how do I build one? Right, yeah. So this.
D: Well, not yet. We actually encountered some problems with, like, the godoc site, and in general we actually strive to have things documented there, but it actually has some problems. So, for example, older APIs, which are actually supported in older Kubernetes versions, are actually deleted if they are not present in the latest master version of k/k.
D: So this is one of the pitfalls that we actually encountered, and basically the idea there is to just use this site and some of the examples on the kubernetes.io side. But, like, we haven't done any more particular, like, large explanation of what goes where and stuff like that. It's just the comments and some examples, and we simply...
D: Yeah, but...
C: Yeah, what was interesting was that this person trying to use Kubernetes didn't even know, like, what an API version was, and how to figure out what magical API version string they were supposed to use. Yeah.
A: Yeah, I mean, it exists in... I think there's two questions. Like, I think it sounds like they probably knew what an API version was to begin with, because that's on all the other Kubernetes APIs, but yeah, they weren't able to find, like, which version am I supposed to set for kubeadm for this object?
C: Yeah, I mean, I found the right string in, like, two places in the docs, and they were not, you know, like, trivial areas.
D: So it seems that the concept of API group, which can actually change and be, like, this large piece of string, is somewhat misleading, especially for not very experienced Kubernetes users, simply because most of the base Kubernetes types don't even have an API group.
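To make the asymmetry being described concrete: the apiVersion string is derived from the group and version, and core types have an empty group, so their apiVersion is just the bare version. A tiny hypothetical helper (not a real Kubernetes function) shows it:

```python
def api_version(group: str, version: str) -> str:
    # Core ("legacy") types have an empty group, so their apiVersion is
    # just the version string -- exactly the inconsistency that trips up
    # newcomers hunting for the right string to write in their YAML.
    return f"{group}/{version}" if group else version

print(api_version("", "v1"))                     # core types: "v1"
print(api_version("kubeadm.k8s.io", "v1beta2"))  # "kubeadm.k8s.io/v1beta2"
```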
A: Okay, yeah. We'll reach out... I'll reach out in the Slack channel and see if anyone wants to try and tackle that.
B: Alex, yeah. So I spent some time last office hours with Mike, going through the kube-controller-manager code, and basically just starting work on finding out if the current component config for controller manager is even serializable. And the idea is to gather some data: how would this look? How would a config file look? What are the pros and cons? What are the other options we have? How could those be implemented?
B: What are their, like, pros and cons, and just get the conversation going again, probably with another KEP. Like, yeah, just list everything I found out and just try to get the conversation started again on what we should do in this area.
B: So I'm just, yeah, basically writing some tests and stuff and seeing if I can serialize it.
A: This is a problem that sort of blocks, like, a lot of components from getting all the way there. So this KEP's purpose is just to present a very simple solution to that problem, so that we can make a decision and move forward. The solution is basically, you know, add a new kind for instance-specific config, like KubeletInstanceConfiguration, or we can talk about...
A
You
know
better
names,
but
so
that
we
can
just
put
those
fields
like
node,
ip
and
and
hostname
override
in
that
and
have
the
option
to
plummet
through
a
separate
file
if
we
so
choose
so
that
we
can
decouple
components
that
are
like
sharing
a
config
map
from
these
instant
specific
fields
that
would
prevent
sharing
a
configmap
in
production.
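A rough sketch of that split. The field names and the kind name here are only illustrations of the proposal being discussed, not a ratified API:

```python
# Hypothetical: which kubelet fields are instance-specific and would move
# into the proposed new kind, leaving the remainder shareable in one
# ConfigMap across every node.
INSTANCE_FIELDS = {"nodeIP", "hostnameOverride"}

def split_instance_config(config: dict) -> tuple:
    """Split one flat config dict into (shared, instance-specific) parts."""
    shared = {k: v for k, v in config.items() if k not in INSTANCE_FIELDS}
    instance = {k: v for k, v in config.items() if k in INSTANCE_FIELDS}
    instance["kind"] = "KubeletInstanceConfiguration"  # proposed name, not final
    return shared, instance

shared, instance = split_instance_config({
    "kind": "KubeletConfiguration",
    "clusterDNS": ["10.96.0.10"],
    "nodeIP": "192.168.1.10",
    "hostnameOverride": "node-a",
})
print(shared)
print(instance)
```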
A: So we've got some good feedback on it. I think it's mostly there. The only question is kind of, like: do we choose to do a separate flag, enforce it to a separate file, or do we just allow, like, multiple invocations of --config, and then how would that look in code? I think that's the last piece here. I'm fine with either. I think, Ross, you had some thoughts on multiple invocations.
D: Yeah, so I think that --config is probably our best option, and, like, what we can do here is give more freedom to the users to basically either split their config into at least a couple of files, or even join that config in a single file, but with multiple YAML documents inside.
D: So this is a much more flexible solution, as opposed to just having a separate flag for instance config, which will basically force users to always have their instance config in separate files, and this might not be very useful, at least for some use cases. So, for example, in kubeadm...
D: Basically, the kubelet, and everything that is static-Pod-backed, which is basically the API server, controller manager, and scheduler, are much more suited to being put in a single file as a couple of YAML documents, because we cannot actually use the ConfigMap as a volume...
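The "one file, multiple YAML documents" idea amounts to splitting the stream on its `---` separator lines. A minimal string-level sketch, just to illustrate; a real implementation would hand each piece to a YAML parser:

```python
def split_yaml_documents(stream: str) -> list:
    """Split a multi-document YAML stream on its '---' separator lines."""
    docs = []
    current = []
    for line in stream.splitlines():
        if line.strip() == "---":
            if current:
                docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    if current:
        docs.append("\n".join(current))
    return docs

combined = """\
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""
print(len(split_yaml_documents(combined)))  # 2
```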
A: ...source, right. I think that makes sense. So let's... I'll figure out... I think there was some prior art, maybe in kubeadm, for handling, like, the multi-docs and multiple files, like sort of a framework for, you know, reading in a set and having accessors for that. I thought there was a comment on here about that, but I couldn't.
D: So the original idea was to have such a framework, but we haven't actually had the time to develop it yet. So we're not storing anything in multiple files; it's everything in one file. Got it. And yeah, the other thing is that we actually use something called a document map, which is basically a standard map between, like, the group... basically not the group, but the kind, and the object, which basically allows us to have, like, a single group-version-kind per file, and this works for us.
D: But it probably won't work for everyone. It doesn't pay attention to ObjectMeta name, simply because the stuff which we actually use does not have ObjectMeta. Got it.
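The document map described here can be sketched as an index from (apiVersion, kind) to the document, rejecting duplicates and deliberately ignoring ObjectMeta names. This is an illustration of the idea, not kubeadm's actual code:

```python
def build_document_map(docs: list) -> dict:
    """Index parsed YAML documents by (apiVersion, kind), one doc per key."""
    doc_map = {}
    for doc in docs:
        gvk = (doc["apiVersion"], doc["kind"])
        if gvk in doc_map:
            # At most one document per group-version-kind is allowed.
            raise ValueError(f"duplicate document for {gvk}")
        doc_map[gvk] = doc
    return doc_map

docs = [
    {"apiVersion": "kubeadm.k8s.io/v1beta2", "kind": "ClusterConfiguration"},
    {"apiVersion": "kubelet.config.k8s.io/v1beta1", "kind": "KubeletConfiguration"},
]
print(sorted(build_document_map(docs)))
```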
C: I like the multiple invocations of --config as well, for the same reasons. Again, just some extra color on that: the implementation, if you do something like that, should do a deep merge.
A: So typically, what we would see in production is, like, a kubelet would serve... it's a good question. Actually, this is a good question: how many people just serve the kubelet on all interfaces versus not? Because that's a good point, that maybe there are some fields that, like, in most cases are shareable, like in almost every production kubelet deployment I've ever seen.
A
It
just
serves
on
zero,
zero,
zero
zero,
but
maybe
people
are
trying
to
set
that
up
differently
on
different
places,
and
so
maybe
the
real
solution
is
just
to
have
a
way
to
override
with,
like
you
said,
a
deep
merge
and
we
actually
don't
split
it
into
separate
objects
at
all.
C: Yeah, I mean, just as, you know, somebody who's done a lot of piping config around and messing around with it: there are things that are easy to use and there are things that are not, and supplying multiple configs that override each other with a deep merge that's natively supported by the tool is one of the easiest things to work with. Otherwise you have to throw it in somewhere else.
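The deep-merge semantics being discussed might look roughly like this. An illustrative sketch, not actual kubelet behavior, and the field names are only examples:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into a copy of `base`.

    Nested mappings are merged key by key; anything else in `override`
    (scalars, lists) wins outright.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# A shared cluster-wide config overridden by a small per-node fragment.
cluster_wide = {"address": "0.0.0.0", "evictionHard": {"memory.available": "100Mi"}}
per_node = {"evictionHard": {"nodefs.available": "10%"}}
print(deep_merge(cluster_wide, per_node))
```

With this shape, passing `--config` several times would just fold each file into the previous result, left to right.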
A: I think there's no free lunch, right? That gives you a lot of utility as far as, like, plumbing config down and getting it merged, but the trade-off is that, looking at the final config, you have to trace back through, you know, an arbitrary number of merged objects to figure out where something came from, as opposed to being like, oh, it's only from, like, at most two objects.
C: Yeah, I mean, that's not as much of a trade-off if the kubelet can just output its configuration.
D: So I think that, like, most of the Linux distros, and in general software for Linux, is basically suffering from this. So a lot of tools can actually fetch your configuration from, like, three or four different places and then try and merge them, and the result is not always predictable, and usually when something goes wrong...
D: ...users have to basically go and find all the locations that may contain configuration files, and then go and iterate over each configuration file and find, like, the wrong option there.
C: So, basically, there's these new types that are being built, I believe, for the API server, for kubectl server-side apply, sorry, and these new types track field ownership, and we could build...
A: Yeah, all right, future things. I'll keep thinking about it. Let's see if we can make a near-term trade-off. I think there's also... if we don't do a merge to start, we still can in the future, either in a future API version or in...
A: ...whatever. So I also want to think about, like, and this is a more general problem that maybe other people have thought about, but we've talked a lot about wanting to refactor how our APIs are structured in future versions, including splitting large objects into smaller, separate objects. And we also have this requirement that we be able to convert between concurrently available API versions and represent the same values in each.
D: So we kind of did something like this in kubeadm, when we actually split up our early alpha configurations into several kinds, and the way we actually did it was through a lot of custom code and a lot of hacks. So, for example, at some point we actually had to disable the fuzzer test and basically write our own tests, which verified different types of conversions. But it's not cool, and it's, like, potentially dangerous, especially if you actually disable the fuzzer test, but sometimes it's necessary, right?
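A toy version of the hand-written round-trip checks being described: convert the old version to the new and back, and require the result to equal the input. The versions, fields, and conversion rules here are invented purely for illustration, not kubeadm's actual types:

```python
# Hypothetical: v1alpha2 splits v1alpha1's single "networking" string
# into two fields, so conversion must be lossless in both directions.
def convert_v1alpha1_to_v1alpha2(old: dict) -> dict:
    pod_cidr, service_cidr = old["networking"].split(",")
    return {"podSubnet": pod_cidr, "serviceSubnet": service_cidr}

def convert_v1alpha2_to_v1alpha1(new: dict) -> dict:
    return {"networking": f"{new['podSubnet']},{new['serviceSubnet']}"}

original = {"networking": "10.244.0.0/16,10.96.0.0/12"}
round_tripped = convert_v1alpha2_to_v1alpha1(
    convert_v1alpha1_to_v1alpha2(original)
)
assert round_tripped == original  # conversion must be lossless
print("round-trip OK")
```

A fuzzer generalizes exactly this check over randomly generated objects, which is why disabling it in favor of a handful of hand-picked cases is the risk being described.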
C: Like, you would need to do that explicitly for different targets.
C: The component will own the API anyway, and the conversion will probably happen through the component, so yeah. So the component can just call that multiple times for that version.
A
All
right
we'll
keep
thinking
about
that
too
I'll.
Add
a
note
here.
A: Okay, so I guess the last thing that's on here: it looks like the CFP is open for the contributor summit at KubeCon EU 2020. So if folks are interested in speaking at the contributor summit, check this out.
A
I
will
probably
fill
this
up
for
giving
a
talk
on
our
working
group.
I
know
alex
you're
interested
in
doing
something
around
this
also.
B: Yeah, would, like, a working group update fall under this, or...?
A: We did, like, a table at the contrib summit, right? And then we had a maintainer track talk on, sort of, the standard component contract.
B: Okay, but, like, the maintainer track, that hasn't been finalized either, right?
A
I
haven't
seen
anything
yet
about
this.
This
is
the
first
I've
seen
it
so
and
it's
it
was
sent
out
yesterday.
So
yeah
yeah
I'll,
look
through
and
see
because
I
want
to
go
and
I
think
it'd
be
fun
to
see
all
in
europe.
D
So
I
think
that
this
is
for
the
contributor
summit
alone
and
the
maintainer
track
is
a
different
story,
and
I
think
that
we
should
probably
poke
some
of
the
the
folks
that
are
sick
leads.
So,
for
example,
tim
sinclair,
lumiere
on
the
cluster
lifecycle,
side
and
they're
actually
doing
some
preparations
for
the
like
for
the
projects,
not
sure
how
this
stands
with
word
group.
So,
okay,.
D
Yeah
I
can,
I
can
basically
contact
them
awesome.
Thank
you.
A
All
right
great
take
care.
Everyone
have
a
great
tuesday.