From YouTube: WG Component Standard 20190611
A
Started the recording. Welcome, everyone, to the Tuesday, June 11th, 2019 Working Group Component Standard meeting. Let me take a look at the agenda here, and I can share my screen as well.
A
Okay, so we've got a couple of things. Ross, is that yours, the kube-controller-manager stuff and kube-proxy?
C
I literally just copied and pasted it from last week. I didn't do any work in this area since then, but...
C
No, I just wanted to make sure that it was on there. I know Lucas and Stefan were pushing that work, and I figured we should go see if there are any comments on it.
C
Yeah, okay. First, for background, Jason: the kube-controller-manager is a parent component config to many other component configs, and so Lucas and Stefan, at KubeCon, created a proposal to kind of scaffold the serialization and deserialization of the kube-controller-manager component config. They're doing that with a type that wraps the bytes, and they're also doing some interesting API group/kind stuff with the names of the keys.
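A minimal Go sketch of the byte-wrapping approach described above. The type names, the map key scheme, and the DecodeChild helper are hypothetical illustrations, not the actual proposal's API: the parent config keeps each nested component config as raw bytes keyed by API group and kind, and decodes a child only when a consumer asks for the concrete type.

```go
package parentconfig

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// RawComponentConfig wraps the still-serialized bytes of one nested config.
type RawComponentConfig struct {
	Raw []byte
}

// ParentConfig stands in for a kube-controller-manager-style parent config;
// keys such as "kubescheduler.config.k8s.io/KubeSchedulerConfiguration"
// name each child by its API group and kind.
type ParentConfig struct {
	Children map[string]RawComponentConfig
}

// DecodeChild deserializes one child into a caller-provided typed object.
func (p *ParentConfig) DecodeChild(key string, into interface{}) error {
	child, ok := p.Children[key]
	if !ok {
		return fmt.Errorf("no component config under key %q", key)
	}
	return yaml.Unmarshal(child.Raw, into)
}
```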
C
So if you're interested in this area, this is definitely a place where we've been lacking.
C
So this was brought up last week, and I haven't taken much more time to look at it, but there's no PoC code yet. Ross, did you want to move on to talking about kube-proxy?
B
Yeah, if you're finished with the controller manager.
B
So, okay, my idea with this is just to start with a simple proposal with a few goals. My main goal, mostly from my kubeadm-based perspective on the kube-proxy configuration, is basically to preserve the way configuration is handled in kube-proxy: not narrow it down to a ConfigMap or something like that, but just provide some files with YAML documents in them and have kube-proxy read the configuration from those.
B
So users should not be restricted to using a ConfigMap, but the ConfigMap option should be made available. For example, on Windows nodes it may actually be quite difficult to access a ConfigMap, simply because kube-proxy is running as a normal Windows service.
B
So we basically need to allow for these configurations to be structured in a way that doesn't make it easy for users to end up with a total mess of a configuration. For example, the address ranges, like the cluster CIDR addresses, are shared between both Windows and Linux hosts, and if they live in a couple of different ConfigMaps or different configuration files, they can actually get out of sync.
B
So, depending on the actual host operating system, kube-proxy is going to look into the specific key.
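A sketch of that per-OS layout in Go; the field and section names are made up for illustration, not the real kube-proxy API. Shared settings such as the cluster CIDR sit at the top level, OS-specific settings sit under their own sections, and the component consults only the section matching the host it runs on.

```go
package proxyconfig

import "runtime"

// LinuxSection and WindowsSection hold settings that apply on only one OS.
type LinuxSection struct {
	ConntrackMaxPerCore int32 `json:"conntrackMaxPerCore,omitempty"`
}

type WindowsSection struct {
	NetworkName string `json:"networkName,omitempty"`
}

type ProxyConfiguration struct {
	// Shared between Windows and Linux hosts, so it lives in one place.
	ClusterCIDR string `json:"clusterCIDR,omitempty"`

	Linux   *LinuxSection   `json:"linux,omitempty"`
	Windows *WindowsSection `json:"windows,omitempty"`
}

// platformSection picks the key relevant to the host operating system.
func platformSection(c *ProxyConfiguration) interface{} {
	if runtime.GOOS == "windows" {
		return c.Windows
	}
	return c.Linux
}
```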
C
Yeah, I mean, it'll have a different image because it'll be a different build, but...
B
Yeah, so that's one of the ideas. The other idea is that there are host-specific settings, such as the hostname override, the path to the config file, bind address overrides and stuff like that, and these clearly do not belong in a cluster-wide shared configuration.
B
So my proposal is to actually split them.
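A sketch of the split being proposed, with hypothetical kind names: one type for settings that are safe to share cluster-wide and one for settings that only make sense on a single host, so distributing the first never drags the second along.

```go
package proxyconfig

// KubeProxySharedConfiguration holds settings that can be shared by every
// instance in the cluster.
type KubeProxySharedConfiguration struct {
	ClusterCIDR string `json:"clusterCIDR,omitempty"`
	Mode        string `json:"mode,omitempty"`
}

// KubeProxyInstanceConfiguration holds host-specific settings, such as the
// overrides mentioned above, which must stay local to one node.
type KubeProxyInstanceConfiguration struct {
	HostnameOverride string `json:"hostnameOverride,omitempty"`
	BindAddress      string `json:"bindAddress,omitempty"`
}
```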
B
Well, presumably, you get a couple of YAML documents to potentially mess up. Hopefully one of them will live in a ConfigMap or some other place that's shared in the cluster, and kube-proxy will access it from there, and the other one is going to be prepared in some other way, be it by an init container or by some third-party tool.
B
So the idea is that, from kube-proxy's perspective, we're not actually interested in how we got those YAML documents; we just need them provided on the command line, and kube-proxy reads its configuration from that. Possibly they should live in a couple of different YAML files, or some users may actually want to combine them in a single YAML file.
B
So probably allowing for a couple of --config options on the command line is going to be a good idea.
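A sketch of how repeated --config flags could be consumed; the splitting on "---" is deliberately naive and the dispatch is left as a comment, since none of this is the actual implementation. Each file may carry one or several YAML documents, and each document is routed by its TypeMeta.

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// loadConfigFiles takes the values of the repeated --config flags.
func loadConfigFiles(paths []string) error {
	for _, path := range paths {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// A single file may combine several "---"-separated documents.
		for _, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var tm typeMeta
			if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
				return err
			}
			fmt.Printf("found %s/%s in %s\n", tm.APIVersion, tm.Kind, path)
			// Dispatch on tm.Kind to decode into the matching typed config.
		}
	}
	return nil
}

func main() {
	if err := loadConfigFiles(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```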
A
I wouldn't... well, that's maybe an API design discussion for the future, whether to put all the paths inside the kube-proxy config. I think we have the same discussion that's going to happen on the kubelet side, right? Like, should the kubeconfig path be in the kubelet config structure or not? There are arguments for either side of that.
A
The argument for putting it in there would be: look, the entirety of the kubelet's API should ideally be versioned and contained in this standard object, and that path is in fact part of the API for configuring the kubelet, right? And then the argument against it might be that we might want some top-level things to stay distinct on the command line. I'm not really sure.
B
Yeah, I think we actually have quite some time to figure this out, simply because I think this step of splitting the kinds should probably be one of the last steps before going to beta. So my proposal is to actually...
A
Yeah, maybe as one of the last steps. It would be good to first get a representation of all the pre-existing options in your sort of monolithic config structure and then figure out how to split them into the kinds. I think there are three big points that I'm noticing in this...
A
In this whole thread. First of all, we want to do that solution of splitting into sort of a local and a remote config object, so that things like hostname override and node IP and whatever else, anything that really is totally local to an instance and can't be shared between multiple instances, can just be set locally, and that way it doesn't impede your ability to share a config object across...
A
...a whole pool of resources. So that's point one: we want to split into these local and remote config objects, and I think I'm okay with that; that solution has been brought up before, it's just that nobody's done it. The other thing is this question of platform-specific configuration and what the guidelines are around how to structure that. I wrote a doc last year that tried to think about this, sort of in terms of: okay...
A
We're going to come up with some structure and guidelines, and what are our APIs going to look like over time as we add stuff to them under different sets of guidelines? I took a really strict approach to it in that doc and sort of determined that we just want a top-level discriminated union for everything. But then Tim Hockin kind of came back and said: hey, look, there are some heuristics you could use to do this a little better, and it should be pretty obvious what things are shared.
A
So, I mean, feel free to take a look at that doc, which is linked in the comments on Ross's doc, but I think the solution here is probably to have some top-level shared things, common across all platforms, and then a substructure for each platform that you can set, as Ross kind of suggested. So those are the two approaches I'm happy with, then.
A
The other thing I really want us to talk about is that we keep saying we're going to have this cluster-wide object, and two things come to mind when I hear that. One, there's also this whole Cluster API effort going on, which is also talking about cluster-wide objects, and we should probably talk to them and see where this fits. And two, kube-proxy, the kubelet, and a lot of these component config things are node-level components, and I'm specifically talking about kube-proxy here.
A
I know kube-proxy does depend on some cluster-wide networking configuration, but here's the question: because nodes are often configured as pools of resources, and you apply this configuration to that pool, does it really make sense to describe this as a cluster-wide thing, or should we be thinking more in terms of a pool-wide thing?
C
In the case of kube-proxy we're dealing with a DaemonSet, and it's a ConfigMap, but I suppose we could be registering CRDs.
A
Yeah, that's also a good point. A lot of people deploy kube-proxy as a DaemonSet now, although I don't think all providers do that; I think GKE still runs it as a static pod, so it's really configured at a pool level instead of with the sort of cluster-wide approach. I think GKE wants to move to using a DaemonSet, but there may be blockers on our side.
B
Yeah, that actually makes sense. So basically, what I did here mirrors what we were already doing in kubeadm, so I was thinking in terms of a DaemonSet, and from the kubeadm perspective the cluster configuration is just this one cluster with this specific control plane.
B
So I was not thinking in terms of pools of machines or stuff like that. But yeah, I think Jason can probably show us some perspective from the pool-of-machines side, like the Cluster API stuff.
A
One convention I thought of while I was working on the kubelet config stuff was to call the shareable one just straight-up KubeletConfiguration, and then call the local one something like LocalKubeletConfiguration or KubeletLocalConfiguration, to make that distinction.
A
I want us to focus on the idea that the good APIs are the ones that can lift to higher levels and be used in a lot of flexible ways. So if I have a kubelet configuration, ideally I can just throw it at a kubelet and it should mostly work, and I should be able to scale that configuration across multiple kubelets, or multiple pools of kubelets, without changing it.
A
As long as it was already correct for my environment. The things that impede that are mostly these sort of local paths, and things like the local IP, where you can only set one of those per instance of the thing, so you obviously can't share an IP across multiple kubelets. So, basically: make our APIs higher level so that they're easier to work with, and keep that sort of lower-level local stuff in the local configuration.
F
Yeah, I know kube-proxy gets a little ugly, especially when you talk about some of the flags that need to get passed in to avoid hairpinning and things like that. So I know in the past we've had to plumb through and override, at the DaemonSet level, the hostname or the IP address for the local instance, and I suspect that, at least for the foreseeable future, there will still be instances where we're going to need to do that as well.
F
Well, so in the past, originally, the way we hacked around this was using an init container and basically taking the DaemonSet, using it as a template, and then modifying it. That was really ugly and not something that we wanted to stick with, and I know that something's been put in place since then to try to work around that, but I'm not exactly sure what it is.
F
It's one of the reasons why we wanted a preference for being able to use flags to override those kinds of things in the config, because that seemed like a natural way to handle it. Then you could use the downward API to fill in the hostname or IP address type of thing.
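A sketch, in Go with the k8s.io/api types, of the downward-API pattern being described. The image tag and config path are illustrative; --hostname-override is a real kube-proxy flag, and $(NODE_NAME) in the args is expanded by the kubelet from the env var declared below, so each pod in the DaemonSet gets its own node's name.

```go
package proxy

import corev1 "k8s.io/api/core/v1"

// proxyContainer builds the kube-proxy container for a DaemonSet pod spec.
func proxyContainer() corev1.Container {
	return corev1.Container{
		Name:  "kube-proxy",
		Image: "k8s.gcr.io/kube-proxy:v1.15.0", // illustrative tag
		Env: []corev1.EnvVar{{
			Name: "NODE_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"},
			},
		}},
		Args: []string{
			"--config=/var/lib/kube-proxy/config.yaml",
			// Filled per node via the downward API env var above.
			"--hostname-override=$(NODE_NAME)",
		},
	}
}
```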
A
Yeah, we talked about that a few meetings ago: either generating flags that match the config, which I was kind of against because it just seems overcomplicated to me, or, as one option, a flag that lets you set some arbitrary JSON on the command line that you could paper over the config with.
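A sketch of that idea; the --config-overrides flag name is invented. Decode the config file first, then unmarshal the flag's JSON over the resulting struct: encoding/json only touches the fields the document actually sets, which gives the papering-over behavior for free at the top level.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ProxyConfig struct {
	BindAddress      string `json:"bindAddress"`
	HostnameOverride string `json:"hostnameOverride"`
}

func main() {
	// Pretend this was decoded from the file given to --config.
	cfg := ProxyConfig{BindAddress: "0.0.0.0"}

	// Pretend this arrived via a hypothetical --config-overrides flag.
	overrides := `{"hostnameOverride": "node-1"}`

	// Only hostnameOverride is set in the JSON, so bindAddress survives.
	if err := json.Unmarshal([]byte(overrides), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {BindAddress:0.0.0.0 HostnameOverride:node-1}
}
```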
A
Yeah, I totally agree that there's a huge UX problem right now with config versus flags, where it's a regression to go to config because you can't really leverage things like the downward API. I mean, maybe you could mount it and then run some bash in your pod spec to set up the file before your container runs, but that's really ugly, like you said. So yeah, I totally agree: it'd be good to have some kind of templating solution or something to fill that gap with the downward API.
C
Here goes, on configuring node pools, and kind of talking about the downward API and domains and IPs: when you're considering a node pool, the domain, or the construction by which the hostnames are created, is the most generic and safe thing you can work with per node, but really, that's going to be a pattern...
C
That's the same across all nodes in the pool, in a pragmatic sense, which makes it quite irritating to use as an operator, because then you have to add this Kubernetes-specific platform stuff to your node provisioning.
C
I also left a comment in our meeting notes highlighting how annoying, how difficult it is to do interface selection with basically anything where you're trying to get an IP for binding, and I think there's a lot of improvement that we can do here. This is a higher-order need when talking about config, because we have a lot of basic stuff we need to handle first, but I think it should inform our decisions.
C
I would like to be able to put, you know, something like ens10p3 into a node-pool-specific config and then have kube-proxy bind to that interface's address, right, and be able to specify whether it's IPv4 or IPv6 without having to know the address in advance, especially at dynamic node provisioning time, rather than placing a file somewhere on disk, where that IP address has to be put into some special place for Kubernetes to be able to read it.
C
So, a library for hostname generation and for interface selection, one that supports interface names and whether it's IPv4 or IPv6, these simple kinds of decisions: having a central place for that, which we can retrofit into existing components and encourage other components to use in the future, sounds like the right move to me, and it does sound like a component config concern.
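A minimal sketch of the interface-selection half of such a library, using only the standard library; the function name and family strings are hypothetical. It resolves an interface name like ens10p3 plus an IP family to a concrete address that a component could bind to.

```go
package netutil

import (
	"fmt"
	"net"
)

// AddrForInterface returns the first address on the named interface that
// matches the requested family, "ipv4" or "ipv6".
func AddrForInterface(name, family string) (net.IP, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	for _, addr := range addrs {
		ipnet, ok := addr.(*net.IPNet)
		if !ok {
			continue
		}
		isV4 := ipnet.IP.To4() != nil
		if (family == "ipv4" && isV4) || (family == "ipv6" && !isV4) {
			return ipnet.IP, nil
		}
	}
	return nil, fmt.Errorf("no %s address on interface %q", family, name)
}
```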
F
Well, it might not just be a node pool thing. One of the use cases that I've heard very frequently in the past is wanting to separate ingress and egress traffic...
F
You know, traffic outside of the cluster versus intra-cluster communication, and even trying to segregate storage interfaces from the regular networking config.
C
You know, I would think that I, as an operator, would want that to be possible, as opposed to jumping into cloud-init or whatever and trying to drop files on the system that get mounted in the right place.
C
The thing that I linked is actually the kubelet configuration overrides, yeah. But yeah, anything that binds to an address and tries to look for the API server IP and all that gets really buggy when there's more than one interface, which makes things like what Jason is describing really hard to work with, because you have to write bash that's not going to fail, more or less.
C
Yeah, and a lot of things don't support using the hostnames; you need to give them the IPs. From my understanding you can't bind to anything else, and if you can, then I'm not sure it's documented well enough. So anyway, yeah, that was just something I wanted to bring up, because it's a config concern that irritates me. We are at the end of our time. Do we have a separate meeting block for this?
C
No, we're about at the end of our time, yeah. The only other thing I was going to bring up was that when it comes to add-ons and component config, there are some higher-order needs with regard to serializing, and to not having an API server yet, maybe. But we can talk about that next week.
A
Sounds good. Yeah, put it on the agenda, please. Yep, I'll move it over there. Ross, if you want to write this up into a KEP and kind of take the comments into account, that would be awesome, if you're planning on working on this.
B
Yeah, I'll actually try to put up a KEP tomorrow or on Thursday and submit it, and I'm planning on working on this one for the time being, at least.