From YouTube: Helm Developer call 20180412
A
Hey, hello everyone, welcome to the Helm dev call for Thursday, April 12, 2018. For anyone who's new or watching this for the first time: we start off with some announcements, then we go into our stand-up, from me to the core maintainers and anyone else who is reporting out on something they're working on with Helm, and then we'll go into some discussion. So let's start with the announcements. I didn't think we had any listed; were there any that anyone wanted to bring up?
C
So we are currently doing a survey, and I've got the link here while other folks are talking. It collects information about how you're using Kubernetes to run applications and build applications. So if you have a few minutes, or you know people in your company who should be taking this, please send it on to them, so that way we can collect as much data as possible. All of the results, the raw data, will be made publicly available to anybody who wants it, along with an analysis of it.
A
Thank you. I think that's all for announcements, so we'll go ahead and get started with stand-up; I'll go first. This last week I was working mostly on PRs. We managed to knock out quite a few; we're still suffering a little bit from the "knock out one, two more pop up" issue. That's not a bad thing, but we do have a lot of PRs coming in through the queue, so we're trying to get through those as fast as we can.
F
So this last week I was mostly working on the release, and, like Taylor said, there's just been an influx of a large amount of stuff coming in through the queue, which has actually been really nice, because we're actually getting feedback on the release candidates, which is fantastic. People are finding bugs, they're submitting issues, and then we're able to go through the release checklist to figure out if there are actual bugs before we've cut release 2.9. So I think that is great.
F
The only thing that I had for this week is that I was issue sherpa, so I was trying to aggressively close anything that was a duplicate issue. I noticed a few that were like the "connection refused" on bare-metal clusters; that seems to be a very common issue that I'm seeing happen, but it only pops up maybe once every two weeks, and we can probably address it with the docs there.
G
Timing, that was good timing, wasn't it? So I've spent the last week wearing my operator hat, working on Kubernetes clusters; in case you need it, this is my operator hat. I had a major cluster outage, so I spent a bunch of time dealing with that, but I did get a couple of things done on Helm; one of them is...
H
Yeah, I like the hat; I need to find one of those, I could use it. So I've been sick, so I've been kind of not doing Helm stuff. The stuff that I was doing before, that I've got to hop back onto, is just doing some POC work around the Helm 3 proposal, and also looking at how we can refactor out some of the gRPC stuff to make it an easier transition to a client-only architecture.
B
Hey, I haven't done too much this past week; I'm just following up on some issues from my issue-sherpa week the other week, but other than that, nothing really. I've got a request in for the Helm charts-maintainers mailing list, so we should have that relatively soon; that's the mailing list for charts and Helm maintainers. That's it for me; I'll toss it to Brian.
C
So I've been working on one interesting thing: I was comparing memory use, performance, and allocations of YAML parsing and JSON parsing, and I'll probably be posting that later today, once I think my results are in good enough shape. The brief summary is that the YAML parser we use is the worst on memory. If you went with go-yaml, it uses about half the memory to allocate the same thing, by the way, for reference.
C
I used a 210 MB... wait, a 230 MB YAML file when I was doing this, in order to really push the memory on this, to see what the worst-case scenario is. I know we're not going to get there, but the memory on go-yaml was about half as much. What was interesting is that if you look at the YAML library we use now, compared to the JSON parser using the same structure we use for the way our objects are defined in Helm today, it uses ten times the memory the JSON parser would use to parse the same data as a JSON file. So there are obviously some optimizations and stuff under the hood here, but pretty much our YAML parsing is the worst-case scenario. So when we're talking about the memory and stuff, that's one of the things going on; it might be worthwhile for somebody else to try.
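The comparison described here can be sketched with Go's built-in benchmarking harness. This is a minimal, hypothetical reconstruction, not the actual script from the call: only the standard-library JSON side is shown so it stays self-contained, the tiny inline payload stands in for the ~230 MB file mentioned, and the YAML side of the comparison would swap in a library such as gopkg.in/yaml.v2, calling yaml.Unmarshal in place of json.Unmarshal.

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// Stand-in payload; the real measurement used a ~230 MB YAML/JSON document.
var doc = []byte(`{"name":"nginx","replicas":3,"values":{"image":"nginx:1.13"}}`)

// jsonBytesPerOp parses the document repeatedly under the testing harness
// and reports allocated bytes per parse. A YAML variant of this function
// is what the call's numbers compared against.
func jsonBytesPerOp() int64 {
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs() // record allocation stats for this benchmark
		for i := 0; i < b.N; i++ {
			var v map[string]interface{}
			if err := json.Unmarshal(doc, &v); err != nil {
				b.Fatal(err)
			}
		}
	})
	return res.AllocedBytesPerOp()
}

func main() {
	fmt.Printf("encoding/json: %d B/op\n", jsonBytesPerOp())
}
```

Running the same loop with each YAML library over an identical document is what yields the "ten times the memory" comparison discussed above.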
C
In fact, I'm going to post the raw stuff that I did, with directions, so somebody else can look at it and see if they can find a better way; I'll try and post all of the details up there. This was just me wanting to go take a look at what was going on with all the stuff that came out of JFrog. "More RAM won't save you from context deadline exceeded", yeah, whoever wrote that is trolling me. I was trying to get into the whole thing of where JFrog was talking about the memory use when we're adding an index to Helm repos, and the time and memory it takes, and looking to understand what was going on in there. There are easy ways to troll me; it's not hard! So that's what I did this week.
F
There have been a lot of things since we froze the branch for the release, and it seems like there's a lot of value, or a lot of features, that have been merged in there. I know that prior to the release-candidate process, what Adam and I were doing is that every time we were testing, we would just cut straight from master for testing these kinds of things.
F
But I fully understand the value of doing a release candidate, making the release candidates only, and testing them incrementally as we go out with a v2.9.0. So I was just wondering what the general feeling from the rest of the core maintainers is on whether we should cut a release candidate straight from master, or whether we should continue to cherry-pick the specific fixes that we're finding with our RC3 onto our RC4, and so on to the 2.9 release.
F
The only context that I just want to add here is that some users, when I've been saying that we're merging their PR into master, have been asking when's the next release that we're going to be able to put this in, and I've had to tell them that basically it's not in 2.9, because of the current release candidate, because we've already cut the release candidates, so it'll be in 2.10. I think the general pulse from the user community, or from the community...
F
It's been either that they're somewhat okay with that, or they're a little bit more on the upset side, because they have to wait for their feature to be in a release branch. So that's just what I wanted to bring up, to see if anyone had any opinions. And I see Farina; I'll hand the hot potato to Adam first, I guess, then Michelle.
H
Yeah, the whole point of doing an RC release is that it is essentially what we are going to ship for the release. So if we start adding in extra features, we're going to break our semver contract, and as much as it pains my OCD to say "don't cut this next release from master", because that's what we've done every single time, I think we should actually do that.
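The semver contract being invoked here is about precedence: a pre-release tag like 2.9.0-rc.3 sorts below the final 2.9.0, so each RC must converge toward exactly what ships. As a toy illustration (an assumption for this sketch, not a full SemVer implementation; for example, the lexical pre-release compare would break past rc.9):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// less is a deliberately simplified version ordering, enough for tags of
// the form "X.Y.Z" and "X.Y.Z-rc.N" with single-digit N.
func less(a, b string) bool {
	pa := strings.SplitN(a, "-", 2)
	pb := strings.SplitN(b, "-", 2)
	if pa[0] != pb[0] {
		return pa[0] < pb[0] // fine only for equal-width base versions
	}
	// Same base version: a pre-release precedes the final release.
	if len(pa) == 2 && len(pb) == 1 {
		return true
	}
	if len(pa) == 1 && len(pb) == 2 {
		return false
	}
	if len(pa) == 2 && len(pb) == 2 {
		return pa[1] < pb[1] // "rc.3" < "rc.4" lexically
	}
	return false
}

func main() {
	tags := []string{"2.9.0", "2.9.0-rc.4", "2.9.0-rc.3"}
	sort.Slice(tags, func(i, j int) bool { return less(tags[i], tags[j]) })
	fmt.Println(tags) // → [2.9.0-rc.3 2.9.0-rc.4 2.9.0]
}
```

The point of the ordering: anything merged after an RC is cut belongs to the next minor version, not to the release the RC is validating.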
C
I'll jump in, because I meant to before Taylor did; you said people who had opinions. Some of you saw that I tweeted out last Friday that in the previous month alone there had been downloads of Helm from over 59,000 unique IP addresses. And if we cut from master, we're going to pull in some other things, and we might pull in some other bugs with them that haven't been tested for stability; you know, we're pretty sure, but we're not super sure. Let's follow along with just the semantics of it, so that way we have a good, repeatable process and a known expectation, and we do things in a stable way, because we have a long tail of silent users who aren't engaged.
B
I was just going to say I agree with Adam and Farina on this, and I'm not opposed to doing a quick 2.10, like within a few weeks after. I just think we should continue making sure that the release candidate doesn't have new features that could possibly end up breaking other things, and then we would keep going on the RC cycle. Yeah, so that's it for me. Taylor?
A
Yeah, I was just going to agree, and also just state that this is how a lot of software projects work. I mean, if I were to submit something to Kubernetes right now, I don't think it would drop in there until, what, 1.12? Like, if I submitted it today and got it approved today, it might not get in until 1.12 or so.
A
And so I don't feel really bad making some people wait six weeks for a feature, which is generally about our release cadence. And honestly, we build a canary, so if somebody really needs the bleeding edge, they can go and use the canary, and we try our best to keep that fairly stable. There might be some new things in it, and it might not work entirely, but for the most part it's pretty stable, so I feel like we are pretty covered with that.
F
Right, so it sounds like we're all in general agreement with that. I don't know if we actually have that codified in the contributing document, or anything in the release-process document, from when we actually switched over to release candidates. So maybe a good action item from this discussion is to put a section in the documentation; I'll be happy to take this on as an action item.
F
That's essentially what I'd be looking at: how do I respond to users that are asking for this? This is not the first time that people have been asking that kind of question; it's "when is the timeline for this, what is the timeline for that?" So I'd like to have kind of a cookie-cutter answer.
F
I think the queue is still relatively large, so I think it would still be good; we've actually seen some good activity and some features being merged in this last week. So I think that should be good. Once we're starting to cut down on the pull requests, from like 75, I'd say, down to like 60 or 50, that would be fantastic. Yeah.