From YouTube: Kubernetes SIG Apps 20170925
A
If anybody wants to continue taking notes, or to help with those: again, the agenda is where we take notes, and that should be shared in chat for everybody to see. The very first thing that we have today is some announcements. Today is the 25th, and the first one is: if you're eligible to vote, now is the time to vote for the steering committee. The vote ends on October 3rd, and then I think it's the Thursday after that that we start to learn the results.

A
If you are eligible to vote, you would have gotten an email saying "here is your personal voting URL" that you can go to; everybody's is different, so you can go there and cast your vote. You might want to check, if you're using Gmail, whether it ended up in one of the other folders, or check your spam if you're not sure. There are hundreds of people eligible to vote, and if you are, please get out the vote.
A
The next one is about testing. If you look at the flaky test jobs, we still have one flaky test job associated with SIG Apps, and here's the name of it: "should run a job to completion when tasks sometimes fail and are not locally restarted". It's listed in the flaky test jobs file that's linked there. If somebody who's familiar with Jobs and that test could go take a look at it, it would be appreciated, because it would be great if SIG Apps was not associated with any of the flaky test jobs.
B
Just to set the context before the demo: we identified an area where the teams we help inside Red Hat, and the folks we work with, were struggling to do Kubernetes-native development. The way things would happen was: first, the developers would have to learn Kubernetes, then they needed to learn how to deploy on Kubernetes, and so they were not essentially solving business problems; they were actually spending time setting up the infrastructure and plumbing.
A lot of them resorted to existing tooling: they would resort to the Docker tooling, or they would use Docker Compose to define multi-container applications. However, that would not scale into production, because production would be running Kubernetes or OpenShift, and converting between the different file formats is not a good abstraction and is limiting; the production folks just wouldn't get enough of the knobs to turn that Kubernetes provides.
B
Also, certain teams would have container and Kubernetes experts in them; those experts would do all the heavy lifting for containers and everything, so the rest of the folks didn't have to care about it. This whole topic of making application definition easier and deploying to Kubernetes was, of course, also discussed at KubeCon earlier. That is where we started writing Kedge, and we were really trying to understand: how can we make writing Kubernetes applications much simpler? So what have we essentially done in Kedge?
B
What we've essentially done is, at the root level of our YAML, merge the pod spec and the Kubernetes controller spec. A Kubernetes controller would be, in 90% of the cases I would say, a Kubernetes Deployment, so we merged the pod spec and the Deployment spec. We also support other controllers, like Jobs, for which we have merged the Job spec and the pod spec. And we auto-populate all the other metadata; for example, you don't need to specify the kind, whether it's a Deployment or, you know, a batch Job or something like that.
B
Kedge will just take care of it. Plus, we've added certain shortcuts. While working with teams, we identified certain patterns that were common in application definitions, so we were always asking: how do we save a couple of levels of indentation? How do we make the YAML concise and more intuitive? That was the basic idea of Kedge. What you see on screen right now is aimed at someone who is just getting started with Kubernetes, say an application developer.
B
All he or she would need to write is something as simple as this: just your application name, the container image that you're going to use, and the service. At the very root level, what you see is this containers field; it is essentially part of the pod spec, so whatever you can define in a pod spec, you can define here. Under the field services, it is again the service spec.
Whatever you can do inside a service spec, you can define here. But remember the shortcuts that I was talking about: we added certain shortcuts, and in this case it would be portMappings. So instead of you having to write ports, and then each port with its name, target port, node port, and protocol, you can just define that in one line using portMappings: you just give the port, the target port, and the protocol there.
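To make the shape concrete, here is a rough sketch of the kind of minimal Kedge file being described; the field layout follows the Kedge docs of that era, and the app name, image, and ports are purely illustrative:

```yaml
name: web                  # application name
containers:
  - image: centos/httpd    # any pod-spec field can sit alongside this
services:
  - name: web
    portMappings:
      - 8080:80/tcp        # port:targetPort/protocol in one line, instead of a full ports block
```

A file like this is what `kedge create`, shown later in the demo, expands into full Deployment and Service objects.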
B
However, since this is the vanilla service spec, you can always define your ports the long way as well; we are not taking anything away from the Kubernetes spec in any way. We are only appending things to it and restructuring it in a way that we think is better suited for application definition. Let's jump over to the demo.
B
That
was
a
very
simple
application,
but
over
time
you
like
when,
when
when
we
say
that
you
know
caches,
actually
catch
actually
goes
into
your
CI
CD,
and
all
of
that
which
means
that
cage
is
locally
production
ready.
That
would
mean
ok,
so
here
in
for
mov
have
this
you
know
full-blown
google
full-blown
get
lav
example.
As
you
see
on
the
screen,
we
have
three
files
in
the
root.
One
is
Kate,
laugh,
another
is
another
status.
B
So
what
we
follow
essentially
in
cages,
you
know
your
your
one:
catch
file1
file2
be
able
to
define
your
one
micro-service.
It
can
be
handled
appropriately
by
paints.
So,
for
example,
if
one
team
wants
to
take
care
of
get
lab,
ok
all
you
need
is
that
one
files
which
is
co-developed
that
service
everything
all
the
kubernetes
constructs
that
you
need
for
good
luck
or
you
need
for
post
rest
only
or
you
need
for
it
is
it
should
be
in
your
respective
files.
I
will
quickly
go
into
the.
B
I will quickly show the GitLab file. Like I told you earlier, this is again the pod spec and the Deployment spec merged at the root level, so you see something very familiar. This is all vanilla Kubernetes, but all the levels of indentation and all the metadata are cut out; you just need to take care of defining your application. You see fields like replicas, which comes in from the Deployment spec, and the containers field, which comes from the pod spec.
B
This is all very simple and much more intuitive; environment variables are the way they should be. Again, we've added certain shortcuts. Another shortcut that we have added is health: we noticed a pattern in which the livenessProbe and the readinessProbe would be the same for a lot of applications, so we added health.
B
It relies on the same definition: it accepts exactly what a livenessProbe or readinessProbe does, but when you define something with health, it generates both the livenessProbe and the readinessProbe with the same values, because that was the pattern common to a lot of applications. However, since this is all the vanilla Kubernetes spec, you can still define the two probes differently; you can set a different initialDelaySeconds for each, and all of that. So that's one shortcut that we've added.
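As a sketch of the shortcut just described (the probe body follows the standard core/v1 Probe fields; the image, path, and delay are illustrative):

```yaml
containers:
  - image: gitlab/gitlab-ce      # illustrative image
    health:                      # expands into identical livenessProbe and readinessProbe
      httpGet:
        path: /help
        port: 80
      initialDelaySeconds: 180
```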
B
As we come down the file, we see the Kubernetes services that you define, which I showed you earlier. This is again the service spec, so you see type: LoadBalancer and all of that coming in from the service spec. What you see here is portMappings, which I showed you earlier, so you don't need to write the full-blown ports block; however, you can still define it like this, which is the standard definition.
B
However, if you notice, there is a new line called endpoint here; again, we have appended this shortcut to the service port. What this essentially means is: just by adding this one line, just by writing endpoint, Kedge is going to create an Ingress resource for you. All the required information is going to come from the port here and the service name there, and all of that. So yes, you can define your Ingress resource like this.
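A sketch of what that one line looks like inside a service block (the hostname is made up for illustration):

```yaml
services:
  - name: gitlab
    portMappings:
      - 80:80
    endpoint: gitlab.example.com   # this single line yields a generated Ingress resource
```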
B
However, if you want to define an Ingress resource separately, you can do that at the root level by typing ingresses, followed by an array of Ingress specs. That's one approach that we liked. Then you have volume claims the way they should be, and the names of Secrets and ConfigMaps: basically the full-blown, production GitLab application. Again, it requires two other things, Postgres and Redis. I will quickly skim through the Postgres file: all the same, environment variables, again we see health is there, plus CPU resources, volume mounts, and services.
B
Now I am going to do a kedge create on the directory which has all my YAML files. You can see that all of those things are getting generated live and deployed to the cluster without any leaky abstractions, because it's all Kube in the end, without any limitations. Okay, so this thing has been deployed, but I just want to focus on one thing: I'll quickly do a line count on all of the YAML files present, which means all three Kedge definitions together.
B
So this was 180 lines of, essentially, Kubernetes spec that we had to write. kedge generate is a command which will just generate the artifacts and not deploy them; I will do a line count on that, and the count is 411. That is, from 180 lines of non-leaky-abstraction application definition using Kedge, you generate 411 lines of Kubernetes artifacts. I'll quickly show the output that generate produces. You can see, okay, the PVCs were generated the way we defined them in the file, and so was the Ingress.
B
You
can
see
just
by
specifying
that
one
line
endpoint
of
the
entire
full
blown
inverse
resource
were
generated.
We
quickly,
they
can
look
at
line
this
probes.
So
using
the
hell
shortcut
you
can
see
that
both
line
nestle
it
in
a
spokes
were
generated.
All
of
that
so
this
is
where
we
see
you
know
kick,
is
adding
a
lot
of
value.
B
Let's check if it's working. Yeah, so GitLab is up. Another thing that we can do: by default in Kedge, the controller that we assume you want is the Kubernetes Deployment, which is the most common controller in 90% of the cases. However, you can also define different Kubernetes controllers, and we believe that one Kedge file should be able to define your entire microservice in its entirety. So what you see in this definition is just a Kubernetes Job.
B
This is what anyone would have to write just to define a simple WordPress application, and what you see on the right is the same definition written using Kedge. So on the right is Kedge, and on the left is the plain spec. Like we've spoken about: it builds on top of Kubernetes; there is no DSL, there are no leaky abstractions; it is merely a rearrangement of the Kubernetes spec, tuned for application definition, with added shortcuts. It enriches the end-user experience, and it's built for dev and prod.
B
What else? We support multiple other shortcuts; health and endpoint happen to be some of them, and we have them for volume mounts and other things. We also support external Kubernetes resources. So let's say the StatefulSet controller is not yet supported by Kedge: then, just by specifying a field in your YAML, you can link your Kedge definition to an external Kubernetes resource, which will essentially be a file. So you are not limited in any way.
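A sketch of the linking idea just described; treat the includeResources key and file name as illustrative of the mechanism rather than guaranteed field names:

```yaml
name: database
containers:
  - image: centos/postgresql
includeResources:
  - ./statefulset.yaml   # a hand-written Kubernetes resource pulled in alongside the Kedge definition
```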
B
What's next? LSP integration with VS Code and Eclipse is in the works; a colleague is working on it. Then we are looking into integration with CI/CD pipelines and such, and yeah, that's about it. Charlie, a colleague of mine, made a very beautiful website at kedgeproject.org; go visit it. We talk on GitHub, we talk on Slack, and the application that I demoed, the GitLab thing, you can find in my GitHub repo.
E
Right, so before I do the demo I'm just going to share a few slides; I'll try to keep it very brief and leave some time for questions. Oops. All right, so you should see the slides, hopefully. Let's see. My name is Radek Simko.
I work for HashiCorp, and we make a tool called Terraform, and obviously I don't want to convince you that Terraform is the only way to provision your Kubernetes resources.
E
Terraform is basically a multi-provider tool which allows you to manage your infrastructure as code, and by provider we mean different things. You can manage traditional resources like VMs in cloud environments such as AWS, Google Cloud, Azure, OpenStack, etc., or DNS, or even other things related to your infrastructure, like PagerDuty schedules, Datadog monitoring, Bitbucket or GitHub repositories or users, things like that. It has a pluggable architecture where binaries talk with each other over an RPC protocol.
E
Now, before we talk Kubernetes: people, or the existing users of Terraform, generally perceive Terraform as a tool for managing one particular part of the Kubernetes cluster. So let me decompose what it means, for me, to manage a Kubernetes cluster.
Before you start, you need some kind of basic infrastructure, whether you run on bare metal or in the cloud; that's AWS, Google Cloud, VMware, or anything else. Then, on top of that, you expose an interface, which basically means you have a running operating system.
That's your Linux flavor of choice, and on top of that you typically have some kind of configuration management like Puppet, Chef or Ansible, which typically involves setting up things like etcd, the kubelet, the API server, the CA certificates and so on. Then finally you get to the top layer, which is the Kubernetes API, and what I'm going to talk about mostly is that top layer.
E
On one side, you could use languages for data, like JSON or YAML; on the other side, high-level languages, which typically offer you a lot of ways to shoot yourself in the foot: they have classes, abstractions like functions, and things like that. Languages for data lack these features, which makes them very simple, but at the same time they also lack features which you may find useful in describing infrastructure, like referencing or comments.
E
So we have a DSL in Terraform, as Puppet has also chosen this path, despite the fact that designing a DSL is quite a challenge. HCL is the result; you can find it on GitHub. It is used in various HashiCorp projects, like Consul, Vault, Nomad and Terraform, and it is JSON-compatible, which is useful in case you want to generate the config. Now, before I finally jump into the demo, I just want to explain what kind of workload we expect people to run through the Terraform provider in Kubernetes.
E
Here we go. What we have here is a provider definition which says which cluster to connect to; that's basically the kubeconfig context. Then we have a replication controller and a service, and here we have basically translated the HCL definition, or rather the other way around: translated the API swagger definition into the HCL schema here. So you can specify the spec of the controller here, and the template, which is the container, plus some limits and requests.
E
First I need to initialize Terraform, which would normally download the Kubernetes provider; in this case I already have it, so it didn't do much. Then the nice feature of Terraform is the plan. It may not seem as useful at this point, since we are just creating two new resources, but I'll show you in a second how it's useful when you have existing infrastructure. So we have just created the replication controller and the service, and now what I'm going to do is change something in the config.
E
That's a good question. We make the distinction on the API level: obviously, if you're scheduling a replication controller, then Terraform won't deal with the state of the pods which are scheduled by that replication controller. So I think the distinction is there on the API level.
E
There are some overlaps, obviously. If you have the horizontal pod autoscaler and then you have the replication controller, I think the autoscaler itself is fine, because it remains as-is during the autoscaling procedures, but the replica set will obviously bump up or decrease the number of replicas that are running, and that will be reflected in the API as well. So I think there are ways to avoid that overlap too, but I don't want to comment on that exact question in more detail.
C
E
The general recommendation that we give to Terraform users is to always store the state remotely anyway, especially if they work in a team, but by default, if you don't do that, it's stored locally, and I can show you what's in there in a second. It's basically anything you get from the API, as you can see here.
E
If it can't reach the API, it will probably error out, but you can pass in -refresh=false, in which case you tell Terraform not to do the refresh. You basically acknowledge that you may have an out-of-date state and that the plan may not be accurate due to that. But yes, you can do that. Okay.
A
The next thing that we have is: 1.8 is slated to come out this week, and so we're starting to slide into 1.9 planning so we can hit the ground running. There's some 1.9 planning that we can look at for what's inside of Kubernetes itself, and then there's some ecosystem stuff that we can look at and talk about as well; that'll be happening in the same time frame. To kick that off, there's a link to a document, which is the 1.9 planning document.
A
You can see it? Fantastic. What I've opened up is the 1.9 priority planning document, and we can add columns for companies as needed, or add other details. I'd ask folks, even after we conclude today, to please fill this out in the next few days. I'll also be looking over the 1.8 planning document; in fact, I can share that as well. We had two documents for that, the 1.8 planning ones, and we'll be revisiting what we ended up accomplishing. Where's my...
A
There are some things, DaemonSet, Deployment, ReplicaSet and StatefulSet, that are currently beta. We'd love to get these to stable in 1.9, so if folks want to work on stability issues more than features, we'd love that, because it would be fantastic if they were stable parts of the API rather than v1beta2. apps/v1beta2 is the new group for them to be in; in Kubernetes 1.8, which is coming out this week, they've moved there, and so now stability is a big thing.
A
So those are the notes. For example, one of the things that has been coming up in some of the meetings is how long after a version of Kubernetes comes out does some of the supporting tooling, such as Helm, come out to match it. I'm not sure; I think Helm 2.7 is the next release, and I'm not sure when that's going to be out. If somebody has more detail, that would be fantastic, but I noticed that a lot of the Microsoft folks are not on today.
A
So with that, that's what I've got. I see the CoreOS folks have added create-first rolling update; there's a proposal in for that to be part of DaemonSets. If anybody's not aware, this is, if I remember right, about a create-first rolling update: that's where you create the new pod first, and then you destroy the old one. Is that right?
D
Yeah, that's correct. It's Ryan from CoreOS. There are some issues that we need to talk about with the design doc, but we'd like to see this in 1.9. Okay.
A
There's a pull request to implement some details around things like the pod termination semantics, which is one of the things defer containers do. If you don't know what defer containers are: in the Go programming language there's the idea of deferring a function that'll run when the parent function ends, and that's the idea with a defer container. My thing ends, and here's something I'm deferring, to fire off some cleanup or whatever, compared to other ways that you can do it. That's what defer containers are; it takes that same idea.
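The Go analogy above can be made concrete in a few lines; the function and the log entries here are purely illustrative:

```go
package main

import "fmt"

// runTask stands in for a pod's main container; the deferred function
// plays the role of a "defer container", firing after the main work ends.
func runTask() (log []string) {
	// Registered now, runs when runTask returns (even on an early return).
	defer func() { log = append(log, "cleanup") }()
	log = append(log, "main work")
	return log
}

func main() {
	fmt.Println(runTask()) // prints "[main work cleanup]"
}
```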
A
It sounds like nobody's got a whole lot to say about planning. I'll stop sharing my screen for a moment to see if I can get to the text chat here. Does anybody have any comments on this, or is this something that folks would rather go take a look at for, say, the next week or two, and jump in here and add details as we go?
C
A
Sounds good to me. So now this document is out there; if your company name is not listed at the top, feel free to go ahead and add it. What we want to do is just like with the 1.8 planning that we had; in fact, I'll share that document in chat here and link it into the meeting minutes. You'll see that there were folks signed up to do things, and we even tracked whether anybody was going to take a row.
A
So
there
was
an
idea
of
doing
suggestions
and
then
seeing
what
was
going
on
there
and
then
seeing
if
anybody
picked
it
up
and
so
we'll
actually
go
through
and
I
added
a
column
wasn't
implemented
and
we'll
go
through
to
see
what
was
implemented
and
what
wasn't,
but
I
haven't
had
the
chance
to
do
that.
Yet
that's
all
I
had
links
to
around
here
to
try
and
cross
link
all
of
this
stuff.
So
it's
a
little
bit
easier
to
find.
F

A
For anybody not familiar with ChartMuseum: it's a project that provides an API for pushing and pulling packages from a chart repository, and for generating your index.yaml file, those sorts of things. Where the app registry, which we talked about recently, is more of a Docker Hub style, the current implementation in Helm is maybe more similar to Debian's. ChartMuseum takes that back-end, server-side element that's similar to Debian and puts some more API semantics on it, so it's within the flow of what Helm hosts today.
F
Thanks for providing a good explanation there. Yeah, I put a link in the chat, so if you have any issues using it, please file an issue, and I look forward to talking on October 2nd. Yeah.
A
Well, if there is nothing else, I can give everyone 12 minutes back. Anything else? Going once, going twice, going three times. All right folks, have a wonderful week. You can find us online in Slack in #sig-apps if you've got continued questions; please go vote, and fill in any planning stuff into the docs. Otherwise have a wonderful week, and with that I'm going to stop recording and end the meeting. Thanks, everyone.