From YouTube: 2018-05-08 SIG Cluster Lifecycle
A: Okay, we're now recording. Cool. Hi, and welcome everyone to the SIG Cluster Lifecycle weekly meeting. Today is the 8th of May, and we're just after KubeCon, with many folks having just arrived home from it. It was an exciting week: we had lots of discussion in the hallways and also good content in the keynotes and regular sessions.
B: We at Mirantis, the kubeadm-dind-cluster leads, are willing to upstream the project, moving it to a kubernetes-sigs repository. Just in case someone is wondering what this project is: it's about running multi-node Kubernetes test clusters with just Docker, and it's based on kubeadm. It's used, for example, by the Cisco folks from SIG Network for their IPv6 work.
B: There's hope that if the project is upstreamed, maybe more people will be able to participate in it, because there are things that we at Mirantis don't have enough resources to complete, like, for example, rewriting the shell scripts in Go. The question is: is it confirmed that we should proceed with upstreaming this project, and if so, which steps should be taken?
C: From the steering committee perspective, as long as the SIG is okay with sponsoring it (which I have no objections to), I think what you should probably do is write up an email to SIG Cluster Lifecycle and send it around. The only constraint you need to add there is who is going to be in the OWNERS files, and that's pretty much all there is to it. We do need to set up automation on it if you want automation, so that's all there is at this stage.
C: I think the SIG maintainers, the TLs and chairs, should probably be in that list, as well as anyone from Mirantis who is willing to spend cycles doing reviews and whatnot as they come in. I do know that there have been a number of PRs that have gone into it that have kind of stalled, so I think once we can get it a proper home that other people can okay, we can get some of those PRs in.
B: Okay, sounds good. I will also mention this in the email. We need maybe a few more confirmations from our management at Mirantis; that will be done shortly. Another thing: if we agree on this, how do we move it to the kubernetes-sigs repository? If maybe there's a way to do it directly, like granting a very temporary permission to Mirantis, it would be a bit easier from the standpoint of coordinating with our administrators.
C: Sure. Everything that will be in kubernetes-sigs will have to abide by the CNCF CLA, so that's one of the constraints, right? If anyone wants to contribute to it, they have to sign the CNCF CLA, the same as for all of Kubernetes proper. The other constraint is that it needs to have some reasonable open-source license; Apache 2 is fine. I know that there are a couple of other licenses that are currently supported within the CNCF, but that's pretty much it. As for the existing repository: the way it's worked for other things that got donated was that it just transfers ownership, and all the history is maintained. It's worked on a couple of different projects; I helped transition Poseidon, which is a separate scheduler from folks at Cambridge, and we got that into a kubernetes-sigs repo without issue.
E: So what I would like to get is any feedback on this PR I have opened for kubeadm images, which will list the Docker images that kubeadm uses for an install. Over the past couple of weeks I've seen a good number of people talking about installing in an air-gapped environment, and having a command to list all of the images they need before they run kubeadm would be really helpful. So it's just a matter of...
C: You want to make it simpler for power users who want to have the escape hatch of generating a config and being able to twiddle everything that they need to twiddle, and it sounded to me like images was a portion of config, or would apply to that. That's not strictly true: it's like a Venn diagram, there's an overlap, it's not wholly consumed by it. So I don't have strong opinions, other than that I know there will be a config subcommand.
A: Yeah, so the current kubeadm config subcommands we have are config upload and config view. If you want an easy way to change the kubeadm config map in the cluster, you can use kubeadm config upload, and if you want to avoid typing kubectl get with a lot of parameters, you can do kubeadm config view. As Tim said, they are pretty basic right now.
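[A rough sketch of the two subcommands as described here; the exact verbs and flags have shifted between kubeadm releases, so treat this as illustrative rather than authoritative, and kubeadm.yaml is a hypothetical file name.]

    # Print the kubeadm configuration stored in the cluster, instead of typing
    # the equivalent kubectl invocation with all its flags:
    kubeadm config view
    # which is roughly:
    kubectl -n kube-system get configmap kubeadm-config -o yaml

    # Push an edited configuration back into that ConfigMap from a local file:
    kubeadm config upload from-file --config kubeadm.yaml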
A
So
in
order
to
actually
serve
the
uses,
we
won't
we
we
will
make
them
more
sophisticated,
I.
Think
Cuban
and
config
images
would
be
our
list.
Images
would
be
perfectly
fine,
as
as
the
actual
output
there
is
dependent
on
the
config
map
itself,
because
we
can't
say
we
can't
hard
code
this
for
every
every
cubed
M
installation.
Instead,
it's
dependent
on
a
lot
of
factors
so
having
that
under
configures
seems
very
reasonable.
I.
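[For reference, the subcommand under discussion eventually landed as kubeadm config images list; the sample output below is only illustrative, since, as noted above, the real list depends on the release and configuration.]

    # List every image kubeadm would need for an install, without pulling anything:
    kubeadm config images list

    # Illustrative output for a 1.11-era cluster (names and tags vary):
    #   k8s.gcr.io/kube-apiserver-amd64:v1.11.0
    #   k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
    #   k8s.gcr.io/kube-scheduler-amd64:v1.11.0
    #   k8s.gcr.io/kube-proxy-amd64:v1.11.0
    #   k8s.gcr.io/pause:3.1
    #   k8s.gcr.io/etcd-amd64:3.2.18
    #   k8s.gcr.io/coredns:1.1.3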
C: That makes sense, yeah. I think it's more the UX side of the Venn diagram that I'm thinking of, because from the perspective of a user who wants to understand where the images are being pulled from, it's super transparent to have a top-level command, but it's also very onerous to maintain top-level commands for all the things. It does make sense logically to have it under config in my mind, but from a UX perspective there's definitely a cleaner feel to having a top-level command. But yeah.
A: Basically, making it easier for the user to handle the configuration of kubeadm, which will, at least at this point in time, be stored in a config map. If we later transition to CRDs or whatever, that's going to be handled transparently, with the same UX. And as Tim said, we might want to do a generate subcommand or something for those who don't want to go to the kubernetes.io docs and check the reference YAML there; you could easily do something like: these are the defaults for everything, now alter them as you like.
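[The "generate" idea mentioned here later shipped as a print-defaults style subcommand; a minimal sketch, assuming the 1.11-era name kubeadm config print-default, which was renamed in later releases.]

    # Dump a config with every field set to its default, as a starting point
    # for local edits, instead of copying the reference YAML from the docs:
    kubeadm config print-default > kubeadm.yaml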
G: So the one thing that seems odd to me is that I wouldn't necessarily expect a command nested under config to perform some type of operation other than updating the config in the cluster. And if this is going to be used to potentially pre-pull images, in addition to just listing images, then would we have, you know, list images live under config, and then have a pull images live under... that needs a different top-level command?
A: So I don't think we're ever going to do pulls just like that, or at least I wouldn't expect it, because that's really up to what CRI implementation you're using. As we can't really choose or favor a specific one, I think it's easier to just have this list-images command or something. It's...
I: Using the bash for loop is probably just fine, but if you want to make it as easy as possible for those in an air-gapped scenario: you're right that if I weren't air-gapped, pre-pulling could be difficult, but if I'm air-gapped, then I'm going to have to pull them all anyway and push them into a registry. So pre-pulling them will be just fine, because I'm going to have to re-tag them and put them into my own registry anyway. Does that make sense?
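[A minimal sketch of the workflow being described, assuming Docker as the client-side tool and a hypothetical in-air-gap registry at registry.example.com:5000.]

    #!/usr/bin/env bash
    # On a machine with internet access: pull each image kubeadm needs,
    # re-tag it for the private registry, and push it there.
    REGISTRY=registry.example.com:5000   # hypothetical private registry

    for image in $(kubeadm config images list); do
        docker pull "${image}"
        # Swap the upstream registry prefix for the private one, e.g.
        # k8s.gcr.io/kube-proxy-amd64:v1.11.0 -> ${REGISTRY}/kube-proxy-amd64:v1.11.0
        target="${REGISTRY}/${image#*/}"
        docker tag "${image}" "${target}"
        docker push "${target}"
    done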
F: Yeah, but you're still going to have to write a bash for loop to push them to your registry, so why can't you have that loop pull and push, right, if you have to write custom code anyway to bridge the gap? I think what Lukas is saying is that if we wrote that code into kubeadm, then we are sort of, by default, picking which types of images we are going to write code to pull down, right? Like, we know how to pull Docker images, we know how to pull rkt images.
I: It would just be the image that's referenced; it wouldn't matter which one you reference, would it? The reference doesn't change based on the CRI, right? If I'm using rkt and I reference a rkt image, then that's what's going to get pulled, regardless of whether I change later. It's the code, it's the...
C: ...the code inside kubeadm would have to have a different thing. If it's going to pull, it'd be a docker pull for Docker, and for CRI-O, I don't even know what it is, it's something else. So that changing of the CRI pull command, that piece, whatever you're using to pull, would have to change based upon the CRI. Listing the images just lets us say we don't care, we get to throw up our hands, and then you can pipe the output of that into whatever command-line tool you're using for your CRI environment.
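[That is, kubeadm stays runtime-agnostic and the user pipes the list into whichever pull tool matches their runtime; a sketch, with crictl assumed to be configured for the node's CRI socket.]

    # Docker:
    kubeadm config images list | xargs -n1 docker pull

    # Any CRI runtime (containerd, CRI-O, ...) via crictl:
    kubeadm config images list | xargs -n1 crictl pull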
C: So I'll do the update. Next item: Liz has a PR, a KEP, up. We're talking about the master configuration, because we got bitten by how super hard it was in 1.10 to have a well-versioned component config for kubeadm. Folks should turn their eyeballs towards that, and we should have a discussion at a later date, once we've had enough cycles to think about it.
F: You sound pretty wishy-washy on time frames there; like, 1.10 versus 1.11 is a pretty big jump. If this is important enough for 1.11, do we want to set a deadline for people to review the KEP and say, back to lazy consensus, everybody's got a week or two, and after that we're going to address comments and move forward?
C: I'm cool with that. I would ideally like to get this in place for 1.11, at least to get hard versioning in place, and then we can iterate on it or add fields. I don't want to have the explicit list today, and it wouldn't be complete, and that's why we'd call it v1alpha2 for now. But at least have the hard versioning in place for 1.11, because of how much pain and suffering this cost us in 1.10.
A: Yeah, I think the only worry, and why I was hesitant, by the way, is that code freeze is, did we say, the 29th, I think, or something? It's pretty soon, and this is a major thing. I tried to do it before, at KubeCon last fall or something: updating the API types is hard, and spec'ing them out and getting them really right is hard, probably like two weeks of work at least, and then actually making kubeadm internally use them...
K: kubeadm already uses a separate internal structure that is not versioned, so we don't need to change all of the internal stuff. Well, we don't even need to write the converters, because they already exist and we already use them. What we need to do is just add some enforcement around making sure that the versions exist in our scheme; they could be absent right now.
A: I think we have done that from the beginning, at least, but the problem with the 1.10 thing was that we referenced an external kube-proxy API type, its component configuration, but then there were changes in that external type from 1.9 to 1.10 without an API version bump, which is something that should never happen.
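[To make the hard-versioning point concrete: every serialized kubeadm configuration declares an apiVersion, and the rule being argued for is that any incompatible field change must come with a bump of that group/version (as happened with v1alpha1 to v1alpha2 around 1.11), never a silent mutation of the type. A sketch:]

    # The version travels in the config itself; incompatible changes must bump it.
    cat <<EOF > kubeadm.yaml
    apiVersion: kubeadm.k8s.io/v1alpha2   # bumped from v1alpha1 for 1.11
    kind: MasterConfiguration
    kubernetesVersion: v1.11.0
    EOF
    kubeadm init --config kubeadm.yaml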
C: So I think the TL;DR, if you step back, because you've got other things to go through, is: take a look at the KEP, and we'll try to talk about it more, maybe next week, once folks have had time to discuss it. As Robert mentioned, we want to make traction to at least have some level of an MVP for well-defined versioning in the 1.11 cycle, if possible.
K: So my question is: what is the behavior we want to have around this? It's probably going to be one of these. If we see a root CA cert that already exists and we don't see a key, do we look for all of the other required certificates and decide whether they exist? Or, if we see the CA, don't see the key, and don't see all the other things, do we plow forward, trampling on everything, and...
K: ...this, but we couldn't, so we continued anyway. Is that clear to everyone? I have a PR open somewhere where I also talked about this, but that's the basic thing I want to solve: when should we attempt to use an external CA, and what should we do if we don't have all of the things we need to make that external CA usable?
G: I think the issue with that approach, which is the current approach right now, is that you don't get notification of what you're missing. You just get an error message saying that the key file doesn't exist and that it can't create the certs you're trying to create. I think what we really need to do is trigger external CA mode...
G: ...if we see a CA cert without the key, but then have a pre-flight check that verifies that all of the expected certificates we need are present, and if they're not, we should fail at that pre-flight step to tell the user what certificates they need, and preferably give them some guidance: not necessarily how to create those certs, but what the prerequisites for those certs are, for example if there's a specific common name or organization needed in the subject.
K: We should get those into the main Kubernetes website docs, so that's one part of this. The other part is just the error messages. And to be clear, we are talking about over a dozen files that need to be named exactly correctly. Getting this right is non-trivial, and only doing anything when everything is already perfectly correct, and otherwise plowing forward blindly, is, I think, not the right way forward.
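[For context, an illustrative subset of the files kubeadm expects under the default /etc/kubernetes/pki, each of which must be named exactly right; the full set is larger and varies by release.]

    ls /etc/kubernetes/pki
    # ca.crt  ca.key                      <- omitting ca.key is the external CA signal
    # apiserver.crt  apiserver.key
    # apiserver-kubelet-client.crt  apiserver-kubelet-client.key
    # front-proxy-ca.crt  front-proxy-ca.key
    # front-proxy-client.crt  front-proxy-client.key
    # sa.key  sa.pub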
A: So, like, I think if we see the cert but not the key, that should be the external CA case, and then we must require all of the generated client certs and give good output; we don't give good output at this time. If we see both the CA cert and the key, that means your company probably has some kind of custom CA they want to use, so then we should use that cert and key to generate all the rest of the things.
K: So basically the decision tree looks like: is there a CA cert? If no, then we just generate everything. If yes: is there a CA key? If there is, then we use that CA key and generate everything with it. If not, then we look for all of the other certificates, and if that's successful, we continue; if not, we throw an error. Does that sound right?
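[A shell-style sketch of that decision tree over the default PKI directory; file names follow the standard kubeadm layout, and the "required certs" list is an illustrative subset.]

    PKI=/etc/kubernetes/pki

    if [ ! -f "$PKI/ca.crt" ]; then
        echo "no CA at all: generate the CA and everything signed by it"
    elif [ -f "$PKI/ca.key" ]; then
        echo "custom CA provided: use it to sign all remaining certs"
    else
        # Cert present but key absent: external CA mode.
        echo "external CA mode: every required cert must already exist"
        for f in apiserver.crt apiserver-kubelet-client.crt sa.pub; do
            [ -f "$PKI/$f" ] || { echo "missing $PKI/$f" >&2; exit 1; }
        done
    fi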
G: Yes, that's right. There is one other curveball in there as well, in that we will not clobber existing certificates. So if certificates already exist, we will just use what's already there, and I don't see an issue with that behavior, as long as the certificate that already exists is valid for the CA that's there.
G: I think if they provided us the private key for the CA, then they intend for us to actually generate certificates. So I don't see an issue with us generating the certificates, but I do agree that we probably need to provide some output to the user on whether we're reusing an existing certificate or generating a missing one. In that case, we...
F: It could split hairs going forward, right? Like, someone might have generated their API server serving cert but forgotten to generate the various client certs they need, and we'd say, oh, you forgot a couple, we'll fill those in for you. That's very convenient, but I guess I'm just worried that people might not expect that behavior; I'm worried about the non-obvious cases.
A: So when we added, or embedded, the kube-proxy configuration inside of the kubeadm configuration (its component configuration, that is), we had the promise, or the expectation, that it would go beta that cycle as well. It didn't, it's still in alpha, and then, while it was alpha, it kept changing in backwards-incompatible ways without bumping the version, which bit us really hard. So going forward we really need this to be beta, and I hope that's going to happen for 1.11; I'm also talking to Tim about it.
A: Peter from the SIG has had a PR up for months, more than half a year or something, that graduates the old kube-proxy component configuration to beta, but it hasn't got the final LGTM from the SIG, so I hope we can coordinate with SIG Network to actually make this happen for 1.11.
A: Yeah, I mean, I'm also going to talk to Michael Taufen. At last KubeCon Austin I tried to convince the folks there as well, and it's been ongoing all last autumn, but then they said, well, we'll do the kubelet's first. Now the kubelet config actually has gone first, and the structs themselves are beta, so I hope that means we can do the kube-proxy configuration as well.
A: And, yeah, that it's even possible to do this in the Kubernetes codebase is probably something we should try to avoid in the future. Like, have a bot throw an error or a message or something: you can't change struct types between minor versions, or something, anyway.
C: But at the same point, there are known issues that exist, especially with the deployment of CoreDNS, for anybody using kubeadm in the wild, right? And GKE is the only provider that deploys the autoscaler that's required to make this not suck. But I think defaulting to CoreDNS within a kubeadm deployment, while having the flip of the feature gate enabled so that a person can go back to kube-dns if they need to, would be a breaking change; feature gates are OK to break, though.
A: I don't know why we'd even have to do a breaking change, because if we just go with CoreDNS, we have the feature flag, which is CoreDNS, right now. We could set that to GA in the next version, or we could even have it beta, to conform with the rest of Kubernetes, but just set the default to true, and all the people that still want kube-dns will set the CoreDNS feature gate to false.
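[That is, the one existing gate works in both directions; a sketch of the UX being proposed here. The CoreDNS feature gate is real in kubeadm of this era, though the per-release default may differ.]

    # Proposed default: the CoreDNS gate is simply true.
    kubeadm init

    # Anyone who still wants kube-dns flips the same gate to false:
    kubeadm init --feature-gates=CoreDNS=false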
G: My PR sets the gate to true, and there was another PR out there to change it to GA, so I pulled that one into my PR as well. The bigger concern is how we want to transition from having a feature gate for kube-dns-or-CoreDNS to having a feature gate that sets kube-dns instead. And, one: do we even need that feature gate, or do we eventually just kind of...?
A: I think that just going with the CoreDNS feature gate is just fine, and as long as the community supports kube-dns at all, which is probably, like, a year or something from now into the future, we'll just have this CoreDNS flag there still, and you can set it to false if you don't want CoreDNS, which should be the default for nearly all clusters going forward. If you don't want it, disable it.
G: Are there any cases where somebody would want to replace the provided DNS solution with a custom one? That would be the case I would see for having some type of config file option. It's not related to CoreDNS or kube-dns specifically, but recently somebody was asking about being able to deploy without kube-proxy, because they want to run something that replaces kube-proxy.
A: So I think there's going to be a small skew between the rest of Kubernetes, where GKE probably is going to be the bar, and us: we're going to enable it by default while it's still beta for the whole of Kubernetes, but in 1.12 it's probably going to be GA for everyone, and that means it's pretty much going to be the default for every cluster.
C: So, if there are folks in the SIG (there are currently 22 people on this call) who would like to help with that, it's a great way to get your feet wet on kubeadm, as well as to understand the UX and the expectations around it. The second PSA, more for Robert and Lukas now that you're both back (yay): we need to work on a SIG charter, and that needs to get done, hopefully by the 1.12 cycle. That's all I have for PSAs.
A: Yeah, cool. So the last thing, which we talked about a bit during KubeCon, is the OWNERS file. Revisiting that one again: we're going to remove Joe Beda as an approver, as he has other tasks to do, and a lot of other folks are ramping up now, which is really great to see. I don't know what the status of one of our other reviewers is, but from what I saw he's been active in the past on the API stuff especially, so he is currently a reviewer in our OWNERS as well.
A: Liz has been active in the mentoring program, which is piloted by, or led by, Paris Pittman from Google: a pilot mentoring program that started late last year after Austin and has progressed through the winter and into spring. The outcome of that is that Liz is going to be added as a reviewer after actively working and collaborating in the SIG, so that's great. Liz confirmed just some minutes ago being okay with getting added and stepping up there. Generally, we're trying to get more approvers.
A: Cool, I see thumbs up, and as always this will go on GitHub, of course; the pull request, together with this meeting, will be the source of decision-making for the SIG. And lastly, if, for example, Liz or Chuck or Jason want to become reviewers, I think that would work really well; I would be really happy to see that.