From YouTube: Kubernetes Community Meeting 20180405
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
Notes: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
B: Good morning, this is April 5th, the community meeting for Kubernetes. This meeting will be publicly posted on YouTube, so please be mindful that what you're saying is being recorded. My name is Jose Palafox and I work at Intel. I've been participating in the community for a couple of weeks now, I'm just kidding, I've been ramping up in contributor experience, but I've been absent for a couple of weeks, so my apologies to those folks. Today we've got a demo from JFrog, and I'll hand it off to our speaker.
C: Thank you guys, thank you for providing me the opportunity to speak. This is Jainish, I am a software developer at JFrog, and today I will be demoing how to use Artifactory as a Helm repository and a Docker repository, and even spinning up Artifactory in Kubernetes using a Helm chart. Right now I have my Kubernetes cluster up and running, and this will be the cluster that I will be using. Let me connect to my cluster. It's a three-node cluster with a newer version of Kubernetes, 1.9.6.
C: And I am connected. I will do a helm init, which will deploy Tiller into my Kubernetes cluster. I have already started kubectl proxy on the side to see the Kubernetes UI, and we can see that the cluster is clean.
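For readers following along, the setup just described corresponds roughly to the commands below; the context name is a placeholder and the dashboard URL depends on how kubectl proxy is set up, so treat this as a sketch rather than the exact commands run in the demo.

```bash
kubectl config use-context my-demo-cluster   # connect to the three-node cluster (placeholder name)
kubectl version --short                      # confirm the server version (1.9.6 in the demo)
helm init                                    # deploy Tiller into the cluster
kubectl proxy &                              # expose the Kubernetes UI locally,
                                             # e.g. at http://127.0.0.1:8001/ui
```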
The next step I will do: we have our official Helm chart for Artifactory, which lives in the official charts repository.
C: We have two variants of it: one is artifactory, which is the Pro or OSS version, and the second is artifactory-ha, which is for Artifactory Highly Available. In this demo I will be using Artifactory Pro. So what I will do is just copy the command and install Artifactory using Helm. Before that, let's do a helm repo list just to make sure I have my required repos; I will delete the old repo.
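A minimal sketch of that housekeeping and install step, assuming the chart is pulled from the official stable charts repository as it existed at the time; the release and repo names are placeholders.

```bash
helm repo list                     # check which repos the client already knows about
helm repo remove old-repo          # drop a stale entry (hypothetical name)
helm install --name artifactory stable/artifactory   # install the Artifactory Pro chart
```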
C: And so, if we go back, what will this installation do? Basically, Artifactory uses a database; in this case we are deploying Postgres with the official Postgres chart, then it deploys Artifactory itself, and then we are using nginx as a load balancer to do reverse proxying for the Docker registry. So it seems that it has installed and deployed Artifactory properly.
C: It will take some time to have the pods up and running and to create the load balancers; right now we see it's pending, so it will take a couple of seconds. In the meantime I can explain what this chart does. As I mentioned, it deploys Artifactory Pro, it uses the official Postgres chart as well, and then we use nginx for the reverse proxy. We also support Ingress, which you can use; that support is already available, and we have added a README on how you can use Ingress with Artifactory to eliminate the use of nginx.
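As a rough illustration of that Ingress option, the chart can be installed with the bundled nginx disabled; the exact value keys vary by chart version, so these flags are assumptions to check against the chart's README.

```bash
# Skip the bundled nginx and rely on an Ingress controller instead.
helm install --name artifactory stable/artifactory \
  --set nginx.enabled=false \
  --set ingress.enabled=true
```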
C: Okay, until then we should... yes, we already got the load balancer external IP, and it seems that Artifactory is up and running. This is the onboarding experience, the UI onboarding of Artifactory. What we'll need to do is paste a license; in this case I already have my license, which I'll just copy and paste. Basically, if you want a trial license, you can just go to jfrog.com/artifactory and request a trial license.
C: You just need to fill in the proper information and you will get a license by email. So let's go back. Artifactory has been running for this demo. I don't want to change the admin password, I'm not setting up any proxy, and this is the setup wizard. Basically, with one click you can set up all of these repositories. Now, we say Artifactory is a universal binary repository because, as you can see for yourself, we support pretty much all the different package types: Debian, Helm, Docker, Bower, npm, and so on.
C
I
will
be
using
only
helmet,
docker
repository.
So
let
me
click
on
create
the
what
it
did.
It
created
four
different
repositories
for
darker
out
of
this
docker
local
is
a
local
repository.
Ninety
factory
docker
remote
is
caching,
its
proxying
docker
hub
Myntra
docker
remote
is
our
official
main
tree
repository
printer,
a
talker
repository
and
this
docker,
it's
a
virtual
repo,
which
is
aggregating
to
all
this
three
to
remote
and
one
local
repo,
and
same
way
we
created
a
3d
poker
hand
and
local,
which
is
local
repository
where
you
can
publish
your
chart.
helm-remote is proxying our stable repository, and helm is a virtual repo which aggregates both the local and the remote repo. So that is done. Now, using Artifactory as a Helm repository is way simpler. In this case I will be using the virtual repository to pull and push artifacts. From the setup wizard you can basically see how to point your Helm client to Artifactory; I'm copying this command, and here we also have authentication.
C
The
previous
version
of
helm
was
not
supporting
authentication,
but
I
am
from
Melbourne
Musa
from
the
Frog.
He
at
the
contributed
support
for
authenticating
remote
repository,
which
will
be
available
in
next
release
of
help
till
then
you
know
you
can
use
an
authenticated
were
then
I
will
be
using
that
what
I
am
doing
now
is
I'm,
adding
a
remote
repo
named
help
and,
to
my
hand,
blind,
and
it's
got
a
dead
now.
Let
me
do
help
repo
update
to
update
the
index
and
it
will
fetch
the
index
from
multi
Factory.
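A sketch of that client-side step, with the host standing in for the load balancer address from the demo and the repository path following Artifactory's Helm API convention; the authentication flags mentioned above only apply once the newer Helm release is available.

```bash
# Point the Helm client at the Artifactory virtual Helm repository (unauthenticated,
# matching the demo), then refresh the local index.
helm repo add helm http://<artifactory-host>/artifactory/api/helm/helm
helm repo update    # fetches index.yaml from Artifactory
```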
C: We are almost done. Now, if we go back to the Artifactory UI, in the repository list we have the helm-remote repository, where we can see that the artifactory chart we fetched got cached. You can even see that there is not much information about the chart yet, but it shows that my artifactory chart has a dependency on Postgres, and it calculated the index like a Helm repository does. Now let's push one of the charts. Right now I am in my example git repository, where I have created an example chart called node-version.
C: Okay, my client is connected to Artifactory. Now I will push the archive to the virtual repo. So here, this is the upload: node-version is the name of the archive, the name of the chart, and I will upload it to the virtual repo in Artifactory. It seems it is uploaded. Let's go back, and as the virtual repo is proxying the local repo, we see that the node-version chart got published here. It calculated the index, and if we want to fetch it, we can fetch it the same way from Artifactory.
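A hedged sketch of that push-and-pull round trip; the host, credentials, and chart version are placeholders, and deploying through the virtual repo assumes it is configured with helm-local as its default deployment repository.

```bash
helm package ./node-version                      # produce node-version-<version>.tgz
curl -u admin:<password> -T node-version-0.1.0.tgz \
  "http://<artifactory-host>/artifactory/helm/node-version-0.1.0.tgz"
helm repo update                                 # pick up the recalculated index
helm fetch helm/node-version                     # pull the chart back through Artifactory
```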
C: So that concludes the demo of how you can use Artifactory as a Helm repository; in the same way, you can use it as a Docker registry. If you want to have a hands-on, we recently published a blog on the JFrog site about how to use Artifactory as the Docker and Helm registry. This is just an example of a simple CI/CD flow that you can build; it has a link to an example GitHub repo where we have explained all the steps.
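For completeness, using the same instance as a Docker registry looks roughly like the sketch below; the host is a placeholder, and the docker repository is the virtual one created by the setup wizard earlier.

```bash
docker login <artifactory-host>                    # authenticate against the registry endpoint
docker pull <artifactory-host>/docker/alpine:3.7   # pulled through the virtual repo, cached from Docker Hub
docker tag  <artifactory-host>/docker/alpine:3.7 <artifactory-host>/docker/my-app:1.0
docker push <artifactory-host>/docker/my-app:1.0   # lands in docker-local behind the virtual repo
```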
D: Thank you so much. If we have a few minutes, we've got a question: what is the difference between the commercial version and the free version, and why did you demo the commercial version, I'm just wondering? And then, how much does the commercial version cost?
G: Okay, so we are now officially in week one of Kubernetes 1.11, and I am the release lead for this cycle. One of the things that has me really super happy is that we have a populated release team; there is actually even a release lead shadow whose PR is pending. So we have leads for all roles except release manager, which has some special requirements, and we have shadows for a bunch of the other roles.
G: We've got the important dates on here. Feature freeze is April 24th, and our features lead has already started collecting feature information, so that means you really need to get your feature issues filed against the features repo for the features that you or your SIG plan to implement for 1.11. Code slush will be May 22nd, code freeze May 28th, the final documentation deadline is June 11th, and then the release is targeted for June 26th.
G: There is one sort of known problem with the schedule that we discussed, which is that June 26th is the week before the American Independence Day holiday, which happens to fall in the middle of the following week, causing all kinds of disruption to schedules. There wasn't really a good way around it; we didn't want to shorten people's development time by moving the release.
H: So I'm not sure how much I have to say. I'm trying to cut 1.10.1 next Thursday, the 12th. So far the biggest, most loudly requested fixes are to kubectl. I will send an email with the draft release notes and the pending PRs later today; if you have anything you want to cherry-pick that is not in that email, please contact me. I think that's about it, really.
G: Okay. As people are probably aware, I've been on the release team for a few cycles now. This is a relatively new graph that we added to devstats because of some problems I observed in the 1.9 cycle. In the 1.9 cycle particularly, one of the things I observed was that SIG Node seemed to be a lot more heavily loaded with changes than the other SIGs, and this was causing problems when it came to code freeze, test phase, etc., because it meant that there weren't enough people to look at the tests.
G: So, for example, SIG Docs here is always going to show up with a lot less workload than they in fact have, since most of their work happens in a non-k/k repo, but this holds for the rest of the SIGs. What you're seeing here is what is called absolute PR workload, and that is the number of PRs multiplied by a coefficient for the size label on the PR. So, for example, size M is multiplied by one, size L is multiplied by two, and so on, to give an idea.
G: Obviously there's a lot of fudge factor in that, but again we're just trying to get a general idea of whether the SIG is heavily loaded or not right now. Scroll down a little bit. The next thing we actually have is relative PR workload. What relative PR workload does is compare that PR workload against how many people we had making review comments in the previous week, that is, how many active reviewers we have this week, to give an idea of what the workload per reviewer is in that SIG.
G: It seemed like SIG Scalability was being tagged on most of the PRs that were assigned to SIG Autoscaling, but not necessarily vice versa, and actually looking at the PR workload does show that to be the case, because you can see that the autoscaling PRs are basically almost a pure subset of the scalability PRs. Let me actually click the next one.
G: And then looking at the workload for SIG Node shows us a completely different pattern in SIG workloads. One of the important things about the way this records PR workload is that it's not new PRs within the time period, it's PRs that were open during the time period, because it's been my observation that it's actually often the old PRs that cause the most work. A PR that's three months old is often three months old because it requires a lot of changes, possibly has adverse performance impact, possibly has test problems.
G: In other words, it's a big work generator, so we don't want to not count the old PRs. And if you look at a heavily loaded SIG like SIG Node, you actually discover that the reason they are heavily loaded is not so much that they're getting a whole bunch of new PRs, but that they have a lot of these really long-running PRs that can't be closed quickly, so at any given time they have a whole bunch of open PRs under discussion. So then click on the next one.
G: These graphs are nice for showing how that's changed over time, but if I want a snapshot, and I want it to be more mathematical, to say, hey, what's the workload for this SIG right now, you can look at the PR workload table, which is the related chart. This also gives you an idea of the relationship between the different numbers. So we have the number of actual PRs that are open; for example, for SIG Auth we have 41 PRs open.
G: We have 21 reviewers who were active during the period. The absolute PR workload is 99, which is the 41 PRs multiplied by their size factors, and then finally the relative PR workload is that divided by the number of reviewers, which gives you an idea.
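In formula form, the two workload numbers described above work out roughly as follows; the size coefficients are the examples quoted earlier (size/M counting as 1, size/L as 2, and so on), so treat this as an illustration rather than the exact devstats definition.

```latex
\[
  W_{\text{absolute}} = \sum_{p \,\in\, \text{open PRs}} c\bigl(\mathrm{size}(p)\bigr),
  \qquad
  W_{\text{relative}} = \frac{W_{\text{absolute}}}{\text{active reviewers in the previous week}}
\]
% SIG Auth example from above: 41 open PRs weight to W_absolute = 99,
% and with 21 active reviewers, W_relative = 99 / 21 ≈ 4.7.
```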
Now again, all of these numbers have substantial fudge factors involved, so we don't really care about the difference between, say, 4.7 and 3.1.
G: What we care about is the difference between 4.7 and 0.6, because one shows a heavily loaded SIG and the other one shows a SIG that is not currently all that busy. So with the release team, if we're looking at which SIGs we need to look at and intervene in, and maybe ask some of our maintainers who work across multiple SIGs to pitch in and help with, this gives us a better idea of whom to target.
G: A couple of caveats: first of all, stuff that has no SIG label at all is not going to show up in this chart. The second thing is that the size labels are automatically determined, so there's a lot of potential error there, and sometimes it takes a while for the size labels to get applied. So there's that fudge factor. Again, like I said, we look at this more in terms of: is this SIG heavily loaded or not?
G: One of the questions there is: is this a huge spike for that SIG, or are we just at the point in the dev cycle when a lot of people are filing PRs? That's the main thing I look at the overall graph for: to say, hey, does SIG Auth suddenly have a lot of PRs, or does everyone have a lot of PRs because we're in week seven of the release cycle?
F: As far as what these charts mean and what we should do based off of them, this is still kind of exploratory. What are all these charts, what should we do with them, what do they mean: that is still kind of an exploratory thing. That's why we try to present the charts to the community each week, to see if they are helpful or not. This is sort of the first chart that was driven with some sort of use case in mind, at least from a release-tracking perspective.
F: If we can better understand which SIGs are more heavily loaded than others, what we should do based on that is definitely an open question. We would welcome continued collaboration on this sort of stuff in the devstats Slack channel. We also have recurring meetings where we discuss what we're trying to do with devstats; we are looking at producing a prioritized queue of work to make this more useful.
F: Some of us have the opinion that a lot of these charts are really shiny but maybe not so useful, and we kind of have the paradox of paralyzing choice. There are so many variables to look at; what do they all mean, and why should I care, is something that can probably be better answered if we just trimmed the number down to something more reasonable.
F: So look for a presentation about this at KubeCon EU; that's roughly the timeline we're looking at in terms of trying to make this actionable, but we are genuinely interested in any and all help to make this more usable. Devstats is basically available for all CNCF projects, and I have to give a huge shout-out to Lukasz, who was talking earlier about the GitHub tags; he's incredibly responsive.
F: If you have questions about the dashboards or what's going on behind the scenes, he's great for that. But as far as we can tell, Kubernetes is really the only active user of these dashboards, and we are trying to use them to put together ways of measuring project health overall from a reporting perspective, as well as dashboards that can tell us we should do a thing right now because we've crossed some threshold, or something like that. Good stuff.
J: Thank you, yes, this is Rob. First of all, as was said, with Cluster Ops, and the story with SIG Cluster Ops if you've been following it, it is sort of the same trend we've had: we're having trouble getting quorum. Chris and I, and other leaders in the past, wanted Cluster Ops to be a place where we can collect operators, where operators come together, but it's something where the community is going to have to decide they want to contribute and be part of that, so we're sort of still keeping it going.
J: We're looking for additional leaders to come in who have a passion for collecting operators and getting people going. We don't want to lose it; we think it's important. We sort of see a trend where the on-premises or independent operators of Kubernetes do need a place to collaborate, and Chris and I are both vendor neutral, so we end up being reasonable hosts.
A: Just got a quick question, or rather a statement, for you, Rob, about on-prem.
J: That's exactly what our goal would be. We don't see a need to have SIG proliferation here, and we don't have a big stake in holding a flag for the chair positions. We're really just trying to make sure there's a home, and we want to participate. Okay, absolutely, yeah, happy to consolidate.
A: I just want to make a quick statement for everyone: it's totally fine when we do this, and I think it really helps the project not to have unnecessary overhead. SIGs are relatively cheap to run, but there's also a lot of overhead there as far as that kind of stuff goes. So don't ever be afraid to just hold a meeting without feeling that you need infrastructure in place; you can just schedule a meeting and get people there. So thanks, Rob, I appreciate that.
K: So SIG Docs is growing. We have two new SIG Docs maintainers, and we have five new regular contributors, people who have begun contributing and participating actively in weekly community meetings over the last two months. The experience of our new members has shown us that our contributor guidelines need work. It is not clear from our current guidelines how to become a member of the Kubernetes organization, and there are ways that we can link better to existing documentation, but frankly, our overall contribution guidelines need revision.
K: Another piece of news is that we are moving forward with a plan to migrate the Kubernetes website from Jekyll to Hugo, and we have met with a contractor: we're contracting with Bjørn Erik Pedersen, who I think at this point has contributed something like 80% of the Hugo codebase, and who is going to do our website migration for us. We synced up with him yesterday for an initial estimate and scope of problems to solve, and we're on track to proceed with the migration, with a completion date targeting April 30th. Another piece of migration work:
K: If you visit the blog, you'll notice that blog.kubernetes.io should hopefully now be redirecting to kubernetes.io/blog. I haven't put up a blog post about it yet, but that is scheduled to go out in the next day or two to announce it more widely to the community.
K: The main reason for doing that migration was to resolve the technical debt of continuing to work in Blogger and to make life easier on the actual blog team; working in Blogger is not especially a joy. So we're getting that integrated more into the SIG Docs workflow and getting them set up, and I want to shout out again, with much gratitude and thanks, to test-infra for the automation bots that make it possible to have blog-level ownership of PRs and approvals.
A: So please remember to register for that, and if you're having problems with the form or anything, just ping myself or Paris. For the current contributor track, that is, if you've gone to a contributor summit before, we have sessions that we want people to vote on and things like that, and that will be sent out on Monday. Next is Reddit: this Tuesday we're going to have what they call an AMA.
A: That is an "ask me anything": people on Reddit in the Kubernetes subreddit will ask a bunch of questions about Kubernetes, and people will answer. I've gotten about five or six volunteers so far, but the more the merrier, so if you're interested in that and you use Reddit, feel free. It's going to be about a six-hour window where people just ask questions on Reddit and then you just post answers to them. And that's all I've got. Thank you.
F: This page is generated based on this YAML file. The YAML file is consumed by a tool that is responsible for pushing labels to every single repo in the Kubernetes organization, and then we generate documentation based on that YAML file. So if you have any questions about what a given label means, it tells you a little bit about it, it tells you who can add it, and it tells you which Prow plugin is responsible for it.
F: So, for example, right now Prow has a plugins page, and you can get to it from here; I could have pulled it up quickly to show that. Anyway, this shows you sort of what the commands do, and it also shows what labels and such they can add. So we're going to try and tie all of that together.
F: So again... oh goodness, there's that. The second thing is that the App Def working group has recently put together a Kubernetes application survey, and we would welcome your participation, and your sharing of this survey link with anybody you know who is building, deploying, or operating applications on Kubernetes. We plan on making this feedback public at the conclusion of the survey, and it is kind of intended to just give us a picture of how people use Kubernetes these days.
F: What can we, as the Kubernetes community and project, do to better serve people in the direction of applications? The link to this is in the meeting notes right here; it's from the WG App Def. It's also linked in their Slack channel, and it will be shared out more broadly as well. That's it for me; I will hand off to you, Dims.
B: Looks like not, so we're going to keep on moving to shout-outs; we have just a couple this week. I think the first one goes out to Aishwarya, and the next to the Meet Our Contributors team that participated yesterday: Guinevere, Erin, Chris, and, I'm going to mess up the names, I'm really sorry, and Carolyn. So thank you for participating in Meet Our Contributors. Paris, it looks like you have some notes that you're putting in currently; do you want to talk a little bit about what you wrote there? Okay, cool, sure.
L: Meet Our Contributors is a once-a-month (I'd like to do twice a month) livestream series that is very similar to user office hours; however, consider it contributor office hours. If you're a new or current contributor and you are curious about why your test is flaking, or how a certain contributor got into Kubernetes, or anything in between, this is the venue for you. So I appreciate all the contributors that have volunteered to date and would love to see more new faces take questions.