From YouTube: WG Component Standard Meeting 20190115
B: Public, or whatever; it's in the community repos that we meet at this time, but we should do some more marketing for the next time. Okay, there we go, we're recording. Yes, hello, and welcome to the first meeting of Workgroup Component Standard. Today we're going to go through the backlog and see what the scope of this workgroup is, as well as some of the backlog and prioritizations for the next cycle.
B: Okay, so the first thing I'll go through is the workgroup infrastructure. The meeting notes can be found in the Slack channel, but are also pasted here in the chat of this call. So we got the approval from the steering committee and others to form this working group through the kubernetes/community pull request 3008 around a week ago, and we have got a label, we've got a mailing list created for us, and some days ago we got the Slack channel.
B
Now
we
have
the
zoom
meeting
as
of
some
half
an
hour
ago
or
something
when
I
created
it.
It's
using
c-class,
lifecycles
zoom
account
at
the
moment,
and
that's
that's
all
fine,
and
we
can
have
this
kind
of
host
key.
It's
so
that
we
can
rotate
to
who
is
the
host
attending
stuff
is
created.
Creating
a
github
team.
B
So
a
lot
of
the
feedback
we
got
from
the
community
pool
was
basically
that
this
workgroup
will
mostly
have
the
docs
first
approach
and
will
help
split,
we'll
we'll
think
about
how
to
split
the
dependencies
and
packages
and
stuff
for
the
component
based
repo.
But
the
component
base
repo
will
in
itself
be
owned
by
sig
API
machinery,
which
has
which
owns
kind
of
all
the
code
where
we're
dealing
with
today
as
well
and
we're
moving.
B
This
is
obviously
thing
that
will
go
through
the
API
approval
process
and
like
kept
with
cigar
detector
and
all
the
API
other
API
approvers,
but
still
I,
think
we're
gonna
be
involved
in
starting
these
processes
and
getting
them
actually
done
by
coordinating
with
with
the
actual
things
like
node
or
SiC
network,
then
we
already
have
a
capful
component
base
and
obviously
we
want
to
work
on
that
short
term,
especially
so
we
so
we
get
a
sense
of
where
we're
going.
Yes,.
B: In other words, I just went through the workgroup infrastructure we have, the Slack channel, mailing lists, labels and stuff, and I just said that although we're not going to own, or care that much about, the actual API structures of ComponentConfig, we're probably going to be the ones that kick off the process, and I'd like to start there. Like: okay, let's make the kube-proxy ComponentConfig better; we'll create an issue for it, we'll create some kind of placeholder, whatever Google Doc, where SIG Network can come in and say, hey.
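For context, a kube-proxy ComponentConfig of the kind being discussed is a versioned YAML document along these lines (an illustrative fragment, not taken from the meeting; the values are invented):

```yaml
# Illustrative kube-proxy ComponentConfig (example values only)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
mode: iptables
```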
B
We
want
to
make
it
like
this
whatever,
but
we'll
help
facilitate
that
thing
yeah,
and
that
fits
well
together
with
the
docs
first
approach.
We
should
have
like
in
this
work
group
as
a
whole
to
think
about
these
problems,
and
then
we
already
have
the
component
base
cap.
That's
approved
the
second,
maybe
the
most
important
kept
going
forward.
D: I think we should be careful not to do too much at once with that. Probably it's not one component-standard KEP; there were a number of areas outlined in the original KEP, right? So it's probably like one KEP for "this is the framework for flags and ComponentConfig and that stuff", and that's its own thing, and then maybe there's another one for, like, configz, and another one for Scalia. True.
B: Yes, so I think those are, at a high level, the main goals of this work group. And I also just mentioned that although component-base is technically owned by SIG API Machinery, we're going to help with moving most of that code to get things going; but obviously, for example, Jordan and Clayton and other API Machinery folks will have the approval rights and all that kind of stuff there as well, and we're going to sync closely with them. Yeah, that's mostly it for the infrastructure stuff.
B: I'll paste this in the chat: we have PR 72569 that got merged and actually created the component-base repo. This was basically because earlier, when doing the first ComponentConfig refactoring in 1.12, we had debated with many folks where we should put the shared types.
B
The
shared
types,
for
example,
clients,
configuration
kind
of
connection,
configuration
and
leader
election
configuration
and
stuff,
and
we
just
kind
of
threw
this
into
both
API
machine
or
API
server,
and
we
weren't
really
sure
where
it
all
fit,
and
that
split
didn't
make
any
sense
really
in
at
the
end
of
the
day.
So
now
we
actually
have
a
dedicated
good
place
to
put
it
if
you're,
building
a
components
config
for
a
component.
These
appetite
that
are
useful
to
you
under
shared
between
so
so
they,
this
PR,
aggregated
that
into
component
base.
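The shared stanzas in question are the ones many component configs embed. As a rough illustration (field names follow the common Kubernetes ComponentConfig shape; the file and values here are invented):

```yaml
# Stanzas shared across component configs via the common types:
clientConnection:          # ClientConnectionConfiguration
  kubeconfig: /etc/kubernetes/controller-manager.conf
  qps: 20
  burst: 30
leaderElection:            # LeaderElectionConfiguration
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
```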
B: My other counter-proposal was to just create the option; this is PR 72883, and it would only create the strict option in the JSON serializer, which we would then use in a dedicated config-loading serializer package that we would create inside component-base. So then we have our own, more special, more scoped serializer, especially for configs.
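The "strict" behavior being discussed can be sketched with nothing but the Go standard library's `json.Decoder.DisallowUnknownFields`; this is a minimal stand-in, not the actual Kubernetes serializer code, and the `Config` type is invented for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Config is a hypothetical component configuration type.
type Config struct {
	BindAddress string `json:"bindAddress"`
	Port        int    `json:"port"`
}

// decodeStrict rejects any field in the input that Config does not declare,
// mirroring the strict option discussed for the config serializer.
func decodeStrict(data []byte) (*Config, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	var c Config
	if err := dec.Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	ok := []byte(`{"bindAddress":"0.0.0.0","port":10256}`)
	bad := []byte(`{"bindAddress":"0.0.0.0","prot":10256}`) // typo: "prot"

	if c, err := decodeStrict(ok); err == nil {
		fmt.Println("ok:", c.BindAddress, c.Port)
	}
	if _, err := decodeStrict(bad); err != nil {
		fmt.Println("strict decode rejected unknown field")
	}
}
```

The point of scoping this into one package is that every config loader gets the same strictness instead of each caller wiring up its own decoder options.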
C: But it matters a lot if it's in, like, the main code paths these days, or potentially in the API machinery code path, so starting with something more scoped seems much better to me. And then getting all the various places that load config files running through a common config loader seems like good starting work to me. So that pull request is on my list too, to take a look at, but I think that's a much better starting point.
B: And Shannon had said in Slack yesterday that, like, the question we have with these is: as we move code into component-base, before we know exactly what the public schema of the Go code is going to look like, should we create an internal or experimental directory at the top level, which then has the same directory structure, for the code that we still want to change?
C: In general, you just described the conundrum that we have with all of the repos that we want to export: we haven't put in the time to think through how it's consumed, to the point that we're confident that this is, like, the best API. And so until we take that time, we don't have a lot of confidence that this works really well as an API for consumption.
C: At the same time, our options are: put it out there and let people use it, or they go copy-paste and write their own, and we have no visibility and no commonality in how people consume this. If we do put it out there and people start using it, and then later we figure out we want to change this, because there was some flaw, some shortcoming, our options are to fix that in an incremental, compatible way, like add a second constructor or an option or something, or to change it in a way that makes people react.
C
So
my
preference-
if
this,
if
the
point
of
this
is
for
things
that
load
config
to
use
it
I,
don't
think
I
would
start
by
fencing
things
off
and
saying.
No,
actually,
you
can't
use
this
I
would
probably
make
this
surface
area
as
small
as
possible,
so
the
smaller
the
surface
area,
the
more
likely
we
are
to
be
able
to
adjust
or
improve
or
incremental
II,
add
to
it
in
the
future,
and
so
I
would
yeah
make
the
surface
area
very
small.
C: Yeah; anywhere we already have, like, a unified component that we're using, that just happens to live under API server and that we're wanting to get into component-base, yes, moving is fine. For something like the config loading, where we're not actually strict and we need to be strict, and we're not actually unified and we want to be unified as much as possible, just keep the surface area of the thing we expose small. That's all I said.
B
Let's
make
Nick
makes
perfect
sense,
yeah
yeah,
the
way
I
wrote
it.
Whatever
proof
of
concept
is
like
you
pass
in
just
the
codec
and
the
codecs
and
the
scheme
so
and
the
codecs
are
created
from
the
scheme.
So
that's
all
and
then
I
think
we.
We
can
add
this
to
the
scheme
package
or
whatever.
So
it's
it's
really
easy
to
use
for
for
the
consumers.
But
yes,
we
do
the
decoding
of
configs.
B
That's
really
and-
and
that
has
the
options
we
like
the
the
characteristics
we
want
like
strict
decoding
right
and
another
common
mistake
or
whatever
is
today,
is
like
we,
we
only
support,
Jason
or
yellow
in
the
places
we
decode.
We
don't
support
like
just
let
some
weird
trickery
around
so
so
with
having
this
one
package,
we
can
actually
support
both
in
a
unit
tested
way.
We're
also
there's
no
test
so
so
yeah
but
I
hope
to
proceed
with
this
72
88
3
as
soon
as
possible,
and
that
all
sound
reasonable.
B
So
the
next
other
bigger
bigger
PR
is
we
have
five
minutes
left
just
when
I
mention
this
two
words
have
has
opened
PR
that
breaks
out.
Now
we
have.
The
controller
manager
is
well
instead
of
many
controllers,
but
our
the
way
we
have
structured.
The
config
is
like
this
mega
types
go
file
and,
as
we
discussed
with
Mike
and
Stefan
during
during
cube
con,
this
is
not
ideal
ideal
because
eventually,
as
we
move
controllers
as
we
break
them
out
into
different
binaries
or
whatever,
it
makes
it
really
hard
for
consumption.
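A hedged sketch of the structural point (all type names here are invented, not the real Kubernetes ones): instead of one mega types.go struct holding every controller's settings, each controller owns its own config type and the controller-manager config merely composes them, so a broken-out binary can reuse just its piece:

```go
package main

import "fmt"

// GenericConfig holds shared settings every controller needs.
type GenericConfig struct {
	ClientQPS float64
}

// NodeLifecycleConfig belongs only to the node-lifecycle controller.
type NodeLifecycleConfig struct {
	PodEvictionTimeoutSeconds int
}

// ServiceConfig belongs only to the service controller.
type ServiceConfig struct {
	ConcurrentServiceSyncs int
}

// ControllerManagerConfig composes the per-controller types, as run
// inside the combined controller manager binary.
type ControllerManagerConfig struct {
	Generic       GenericConfig
	NodeLifecycle NodeLifecycleConfig
	Service       ServiceConfig
}

// standaloneServiceController shows a broken-out binary reusing only the
// pieces it needs: its own config plus the shared generic part.
func standaloneServiceController(generic GenericConfig, cfg ServiceConfig) string {
	return fmt.Sprintf("qps=%.0f syncs=%d", generic.ClientQPS, cfg.ConcurrentServiceSyncs)
}

func main() {
	cm := ControllerManagerConfig{
		Generic:       GenericConfig{ClientQPS: 20},
		NodeLifecycle: NodeLifecycleConfig{PodEvictionTimeoutSeconds: 300},
		Service:       ServiceConfig{ConcurrentServiceSyncs: 1},
	}
	// The same controller runs composed or standalone with identical config types.
	fmt.Println(standaloneServiceController(cm.Generic, cm.Service))
}
```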
D: Especially as we think about, like, splitting controllers up and moving them into separate binaries, recomposing them however we want, right? That's going to be a very interesting question to answer, in terms of: I want to move this controller out and still be able to specify that client information, or I want to run all these controllers together with the same client information, right?
C: I don't know exactly who's talking, really. And maybe it's the kind of thing where, when a controller gets pulled into the controller manager, the controller manager gives it its client, based on, like, setting up a service account client for it; and when it runs standalone, by default it looks for the in-cluster config, but if you want to run it separately, you can explicitly give it a kubeconfig file. Like, I think there are some reasonable things we could do, but just kind of describing or thinking through the cases.
C
You
ran
it
standalone
in
cluster.
If
you
ran
a
standalone
out
of
cluster,
if
you
ran
it
composed
in
controller
manager
or
you
ran
it
composed
in
controller
manager,
but
you
needed
to
tune
like
rate
limits
for
a
particular
one
like.
How
would
you
express
that
we
we
don't
want
to
make
any
of
those
like
too
insane.
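The cases just listed amount to a precedence rule for where a controller's client configuration comes from. A minimal sketch under invented names (the real implementation would use client-go, which is not shown here):

```go
package main

import "fmt"

// ClientConfig is a hypothetical stand-in for client connection settings.
type ClientConfig struct {
	Source string // where the config came from
}

// resolveClientConfig sketches the precedence discussed above: an explicit
// kubeconfig path wins; a config inherited from the controller manager
// comes next; otherwise fall back to in-cluster defaults.
func resolveClientConfig(kubeconfigPath string, inherited *ClientConfig) ClientConfig {
	switch {
	case kubeconfigPath != "":
		return ClientConfig{Source: "kubeconfig:" + kubeconfigPath}
	case inherited != nil:
		return ClientConfig{Source: "controller-manager"}
	default:
		return ClientConfig{Source: "in-cluster"}
	}
}

func main() {
	// Standalone in cluster: no path, nothing inherited.
	fmt.Println(resolveClientConfig("", nil).Source)
	// Standalone out of cluster: explicit kubeconfig path.
	fmt.Println(resolveClientConfig("/etc/kube/config", nil).Source)
	// Composed in the controller manager: inherit its client settings.
	fmt.Println(resolveClientConfig("", &ClientConfig{Source: "cm"}).Source)
}
```

Per-controller tuning (for example rate limits) would then be an override layered on top of whichever source won.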
B: Cool, yeah. We're going to try to keep these meetings short and focused. Right after this, the SIG Cluster Lifecycle meeting starts, so we have to end now; but we now have all the logistics and infrastructure created for us, except for the GitHub team, which will be set up in the coming days. So starting to think about these things, and starting to create the KEPs, like Google Documents or whatever, where we will explore these ideas, will make a lot of sense and will be exciting. I just...