From YouTube: WG Component Standard 20191112
A
Okay, welcome everyone to the Tuesday, November 12th Working Group Component Standard meeting. We've got a short agenda this morning, but since Lee and Lubomir were starting to have a conversation about some technical things, maybe we'll just continue that for a minute. So you guys can finish.
B
Thank you. They're not explicitly my plants — they're Tanya's plants — but she does have very, very many. Cool. And we may not be receiving whatever our security deposit was back on this place, because there's also, like, a pull-up bar that we built. Yeah, that's Tanya, and Pepsi's with you too.
B
Oh no, he's just lined up. Anyway — yeah, so Lubomir and I were just chatting about some somewhat unrelated stuff with regard to the end-to-end test suite and machinery for the project, since we're coming up on code freeze and into the release window. Basically, kinder is an offshoot development tool that we wrote around kind.
B
And it's used to create, like, short-lived, lightweight virtualized environments, using containers, for the end-to-end test suites. So we're able to do, like, a Kubernetes deployment very quickly with a tool like kinder. So, kinda.
B
kind will get you, like, to the cluster; kinder will get you to the infra without the cluster set up, so you can run end-to-end tests and stuff. But we need to update it, basically, and it's been a little bit of a fight — or, there have just been some dependency challenges between the two projects.

C
Gotcha. I need to check that out. That seems...
B
It's super useful. The reason why I was talking with Lubomir about supporting mounts is because I use it, like, basically every day to spin up almost-clusters so that I can do development on Kubernetes.
B
Yeah, I have, like, a little shell script that basically uses the cluster config and just sets up a few nodes that are ready to be used, and it takes like five seconds — it's so nice. So it's definitely good. It's kind of the same thing as using kind, if you've ever used kind, and then you log into the nodes using docker exec and run kubeadm. Okay — it's kind of a similar effect there, right, except you don't have to wait for the cluster to bootstrap. Cool.
B
But — so, I don't know if you had any further comments, Lubomir, but that's basically why that config stuff is important to me personally: exposing ports and mounts is useful for development.
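For context, the ports-and-mounts use case being described maps onto kind's cluster config. A sketch, using field names from recent kind releases (the exact schema version shown is illustrative, not what kinder supported at the time):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # Expose a port on the node container to the host, e.g. for a NodePort service.
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
  # Bind-mount a local source tree into the node container for development.
  extraMounts:
  - hostPath: /home/dev/go/src/k8s.io/kubernetes
    containerPath: /src/kubernetes
```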
D
Yeah, so we are facing a problem there that is relevant to this group. I do not want to bring this up as an agenda item specifically, but basically: including a public type from something that is alpha in an external project, and using the fields like kinder is using with the kind config, is very problematic in terms of maintenance. You have to schedule releases; you have to match the upstream project's schedule.
D
Somehow we are also trying to match the Kubernetes release cycle, so these alpha — v1alpha3 — kind fields, it's very hard for us to manage them, and also, like, supporting the --config on the command line for this upstream tool — it's very, very difficult for us. So that's why we're dropping this. And it's really, really super related to this group, and—
D
Yes, so there is a kubeadm bootstrapper, which is, like, responsible for basically applying the topology for the Kubernetes nodes. And currently, what they did is: they have a kubeadm bootstrap config file format, and inside this format they have, pretty much, InitConfiguration and JoinConfiguration fields. These are types from kubeadm.
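For reference, this is roughly what that embedding looked like in the Cluster API kubeadm bootstrapper of that era — a sketch with illustrative field values; the spec fields compile kubeadm's own Go types in at a pinned version, which is the maintenance problem being described:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: controlplane-0-config
spec:
  # These fields ARE kubeadm's API types, vendored at one specific version.
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: external
  clusterConfiguration:
    controllerManager:
      extraArgs:
        enable-hostpath-provisioner: "true"
```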
D
Pinning to a version which is, like... something. And if kubeadm changes this config in the future — if this config is no longer supported...
D
So if the kubeadm binary no longer supports this config type, it becomes a problem. So what I suggested there is that they should— oh.
D
Yeah, it's pretty much the same as with core/v1: if you developed a client based on client-go using core/v1, and suddenly Kubernetes has core/v2, you have the same problem. You have to update your whole logic, yeah.
A
v1 to v2 is more challenging; v1 to extensions on top of v1 that are still v1 shouldn't be as hard, right, because you just import those and get the same extensions.
A
Yes, it should be easier for extensions. But it sounds like, at least in the case of alpha stuff — yeah, it's totally a problem. I don't know if there's a good solution, because that stuff is guaranteed unstable.
A
Like, it works because we have the old types, yeah, right. Like — so, where it shows up in kubectl is: if you add an altogether new API group, or an altogether new kind to an existing API group, old kubectls can't recognize that at all. And if you add a new version of an API group that is not compatible, conversion-wise, with an old version of that API group, you have the same problem, right? So if you go from, like, v1 to v2...
A
Actually, we still say we'll do conversions for Kubernetes there, but...
B
So, to me — like, if Cluster API wants to build things based off of, you know, v1beta2 kubeadm types, then they've got a sliding window, right, to keep updating that, so that the kubeadm binaries that are serving that API, right, still support them for a period of time, and then they've moved the project forward. And if you're using an old version of Cluster API that supports an old API, and Kubernetes doesn't support that anymore — man, like, fix it, right? Yeah. I think that there's a challenge, though, right, like—
A
Upstream projects that people are taking dependencies on aren't moving their APIs forward quickly enough, and so they're in these, like, not-great stability situations. And then people go: "Well, I want to build something on top of this, and, like, they're not going GA anytime soon, so I'm just gonna use the alpha." And then, you know, later it's like: "Oh, well..."
A
"...this is really painful, to use the alpha" — which is true. So I think some of this actually falls on those upstream projects: to, like, graduate their APIs more quickly, so that when they have a lot of interest in the project, they can get out of an alpha state and offer stability. That makes it less painful for people to build on top of them.
D
So you can feed the name for the metadata, and then use a — sorry — the JSON or YAML multi-doc, or something like that, to specify the kubeadm config at the bottom of this Cluster API bootstrapper config. And what this is going to do is that you're not pinning: if the kubeadm config is out of date...
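A sketch of the multi-doc idea being suggested — the wrapper names the kubeadm document instead of compiling kubeadm's types into its own API. All field names here are hypothetical, not an actual Cluster API schema, and kubeadm configs do not carry metadata today; this only illustrates the shape of the proposal:

```yaml
# Hypothetical wrapper: references the kubeadm doc by name rather than
# embedding kubeadm's Go types.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: controlplane-0-config
spec:
  kubeadmConfigRef:          # hypothetical field
    name: controlplane-init
---
# The kubeadm document travels alongside, opaque to the wrapper.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:                    # hypothetical: a name to resolve the ref against
  name: controlplane-init
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```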
B
So I like that: using ObjectRef, like, makes sense to me — that it would be keyed by kind, and then it would go resolve the API group and version at a later time, through the API server.
B
It wasn't intended to be user-facing, but something like that. Whereas if you want to actually ObjectRef to a kubeadm config in, like, a multi-doc YAML — you're going to want to... you open up the use case of having multiple kubeadm configs in there, and they'll probably need names, and you probably want to be able to ObjectRef them by name. Yeah — even if you stuck them in a ConfigMap, that would be sufficient.
A
It gets really tricky, because I don't — well, I don't know, first of all, what needs to parse the kubeadm configuration. If it's just kubeadm, then you just have to plug it through to kubeadm. If there's something in the Cluster API that actually needs to parse it and do stuff to it, that gets a lot more complicated, because then it does end up needing to know some version. But if it's opaque, then they shouldn't embed it.
C
So it actually depends. But one thing is for certain: the ObjectRef method doesn't get embedded in this cycle, so we still don't have any ability to basically distinguish between different objects, outside of the basic kubeadm types. And another way to basically try and treat—
C
—this, is to basically just stop embedding the type inside of the Cluster API types, by simply, like, replacing it with a map. So a user will actually be embedding the same type again, but as a string block — sort of the way that maps actually do this.
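A sketch of that opaque-embedding alternative: the wrapper carries the kubeadm document as an uninterpreted string block, so no kubeadm API version is compiled in — at the cost of any compile-time validation, as noted below. Field names are illustrative only:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3   # illustrative
kind: KubeadmConfig
metadata:
  name: controlplane-0-config
spec:
  # Opaque to the wrapper: only kubeadm itself parses and validates this,
  # so the wrapper no longer pins a kubeadm API version.
  kubeadmConfig: |
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: external
```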
C
So this way we actually won't get any sort of validation or stuff like that, but at least — or — we are going to have to write things down, so we don't have to...
C
We don't get these automatically. But I think that the problem here is a little bit more, like—
C
It's shared between different components. So, for example, in kubeadm we were facing sort of a similar problem with component configs. So the problem here is: what do we do with the component configs that become obsolete and need to be upgraded, when we aren't actually the component — we are something that provisions them. And in our case, in kubeadm, these components are usually kube-proxy, the kubelet, whatever; but in Cluster API's case, these components are, like, kubeadm and other stuff.
D
Yeah, but for kubeadm we support conversion on the command line. That's the difference — like, going back to the whole discussion about components exposing conversion: we support the conversion. So if the user feeds this old configuration in the Cluster API bootstrapper config — you know, with a map or a multi-doc...
D
If,
if
we
they
feed
this
config
to
the
kubernetes
binary,
the
qubit
in
binary
can
potentially
first
try
to
convert
it
to
the
preferred
version.
If
it's
in
the
you
know,
this
is
in
the
support
queue
still.
B
Yeah — if the user wants to upgrade, right, from a version of the type to the... not the components, but — yeah. I mean, basically, what Ross is referencing is that people have that same exact use case if you are working with the kubelet configuration.
B
So the conversation that we've had, right, with regard to this, is that there are two answers. One is that you allow people to work with the internal types and the defaulting and conversion functions somehow, without having to import the entirety of Kubernetes — and then you're baking those types into whatever project binaries are using those types. Or you have a standard CLI interface that you can shell out to, for any component that does component config stuff.
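The second answer — a standard CLI surface — might look something like the following. These subcommand names are entirely hypothetical (no such standardized subcommands exist on today's component binaries); the sketch only shows the "shell out for component config operations" shape being discussed:

```shell
# Hypothetical, standardized per-component subcommands that a meta-tool
# could shell out to, instead of importing each component's config types:
kubelet config migrate --old-config kubelet-old.yaml --new-config kubelet-new.yaml
kube-proxy config validate --config kube-proxy.yaml
```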
A
Yeah, that was the more advanced version of this use case, right. That was something like: kubeadm wants to take somebody's old types and then use the new component binaries to up-convert them, so kubeadm can automatically keep someone within the support skew. But in this case, I think it's just that we want to make — like, if there's some meta-machinery that's plumbing configuration through to end components...
B
Yeah — basically, that advanced use case that you're talking about, I was not talking about. The whole automatic-converting-on-a-user's-behalf thing, in my opinion, is actually dangerous. Not that we can't build things in Kubernetes for users to explicitly do those things, but the standard CLI interface is, rather, the atomic unit of, like: well, this component has a component config, and there exist zero tools to actually upgrade it, right? So from a user standpoint, you need something that you can actually use — otherwise you're just hand-crafting, you know, 200...
A
Yes,
the
question
is
like
what,
whether
the
cube
of
binary
just
does
that
automatically
or
so
whether
or
not.
B
There's nothing that can be done automatically to upgrade a user's config file. Like, at some point the kubelet will stop supporting v1alpha1, right — it'll be v1beta1, but yeah. Well, yeah: v1alpha1, like, will no longer be supported in the kubelet binary, which means you can't just pass it a v1alpha1 config.
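That is, an old kubelet.config.k8s.io/v1alpha1 file has to be rewritten against the newer group version before a kubelet that has dropped v1alpha1 will accept it. A v1beta1 document looks like this (real API group and kind; the specific fields shown are just examples):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
evictionHard:
  memory.available: "200Mi"
```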
B
So
at
some
point
the
user
needs
to
take
their
alpha
config
and
get
to
a
newer
one.
Yes
and
they're
it's
ideal.
If
a
user
doesn't
have
to
do
that
by
hand,
so
there
needs
to
be
some
tooling
that
has
ownership
of
that
api.
Yes,
that
allows
the
user
to
take
their
config
and
actually
get
to
a
more
upgraded
version
so
that
they
can
pass
it
to
a
future
version
of
the
kubelet
yeah.
C
Yeah, so it's much easier for cloud providers. kubeadm basically has to export the whole type of the component config to, like, the user, and this is much more difficult to translate into the new version. So basically, what I'm proposing in the already-merged KEP for the Kubernetes component config strategy is: whenever such a problem is hit — an old version is deleted, and we have only the old version, and this is basically a user-supplied component config which was not generated by kubeadm—
C
We
asked
the
user
to
go
and
manually
replace
that
component
config
so
sort
of
edit
the
config
map,
and
if
the
component
config
was
actually
generated
by
kuberium,
then
we
just
scrub
it
and
generate
the
new
one.
A
Right, yeah — and I think it's... so it sounds like kubeadm needs to be the caller of whatever, you know, tool we invent to do these up-conversions.
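kubeadm already exposes this shape for its own config: to the best of my knowledge, `kubeadm config migrate` reads a document in an older still-supported version and writes it out in the version preferred by the running binary — sketched usage:

```shell
# Convert an old kubeadm config file to this binary's preferred version
# (see `kubeadm config migrate --help` for the exact flags).
kubeadm config migrate --old-config old-kubeadm.yaml --new-config new-kubeadm.yaml
```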
C
Yeah,
so
it's
going
to
be
like
the
the
consumer,
and
but
the
color
is
not
guaranteed
to
be
only
a
qba
team.
It
may
be
even
end
users
and.
C
—the API, whatever. And actually, this sounds pretty similar to the validation KEP, which is not merged right now — it's a pending discussion — and it basically proposes the same interface, but for validation. And currently, like, we want to use the interface, should it actually be accepted and implemented in some components.
D
I mean, it's usually a choice of the component itself whether they should support the alpha transitions. If they want to not break their users, they can still implement the conversion and support the, you know, v1alpha2-to-v1alpha3 in a safe manner, yeah. I—
B
Yeah, that's — that's definitely a decision for whoever's implementing the final form of the API machinery. But it is good to point out. What I don't—
A
I want — I want to leave that option open to whoever is implementing the component, because the reality is that, as much as, like — you know, we need to set user expectations appropriately, but sometimes important things get built on unstable APIs, and we also need to be able to, like, see the reality and implement backwards compatibility in those cases when it happens. Even if it shouldn't happen — sometimes it happens. So—
A
I don't know if it's that explicit, but I'm gonna pull it up.
A
All right, let's get out of here. KubeCon's next week — hope to see folks there, if anyone's going. Lee and I will both be there representing the working group. Take care, everyone.