From YouTube: Kubernetes Community Meeting 20190927
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 5pm UTC.
See this page for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
B
Good morning, afternoon, evening, or 2:00 a.m. depending on your time zone — welcome to the September 27th Kubernetes community meeting. The usual caveats apply: please mute your microphone if you are not currently the one speaking, and this meeting is being recorded, so don't say or share anything in this meeting that you don't want shared with the entire public.
C
I am here — not too much to say other than the release is going out today. The final bits are building now; I'd expect that they should be published by around 5:00 p.m. Pacific today. It takes some time for stuff to ripple through. Other than that, ping/find me — feisky on Slack or feiskyer on GitHub — the 1.12 patch manager, and any issues that happen to arise in 1.12 need to be targeted for cherry pick there, and obviously for a 1.12.1 ASAP.
B
Can you add the retro link to the notes there? For anybody who was involved, it has comments. Actually, before we go further here, can I have a volunteer to take notes? Somebody want to take notes for this? This will probably be your easiest community meeting ever to take notes, because we had a whole slew of cancellations today.
D
Thanks Josh. Aaron Crickenberger here — aka Aaron of SIG Beard, aka Aaron in front of the pod, aka Aaron at SIG Testing. So let's talk about test failures. I'm going to share my screen; these links are available in the meeting notes if you want to follow along. Let's see — that's the wrong screen to share. Let's try this again, sorry about that.
D
So this dashboard goes through and takes a look at every build and every test failure over the past week. It then clusters together the failure texts — so, things like this — across all tests, and tries to identify clusters of similar failures that happen across different tests and across different jobs. The red lines here represent failed tests and the blue lines here represent failed builds; one build could fail, but it could contain multiple failed tests. What we see here is a great example of a bunch of recent failures happening.
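As a rough illustration of what that clustering step is doing — this is a simplified sketch in Python, not the actual triage implementation — failure texts can be normalized to strip run-specific noise and then grouped by the normalized key:

```python
import re
from collections import defaultdict

def normalize(failure_text):
    """Collapse run-specific details so similar failure messages share a key.
    A simplified stand-in for what the triage tool's clustering does."""
    text = re.sub(r"0x[0-9a-fA-F]+", "UNIQ", failure_text)  # hex addresses
    text = re.sub(r"\d+", "UNIQ", text)                      # numbers, ports, timestamps
    text = re.sub(r'"[^"]*"', '"UNIQ"', text)                # quoted object names
    return text[:200]                                         # only the prefix tends to matter

def cluster(failures):
    """failures: iterable of (job, test_name, failure_text) scraped from builds."""
    clusters = defaultdict(list)
    for job, test_name, text in failures:
        clusters[normalize(text)].append((job, test_name))
    # largest clusters first, like the dashboard's default ordering
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
```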
D
This also kind of gives me a chance to demonstrate doing it live, because I think other pieces of our test infrastructure are a little busted. But loosely speaking, the way this operates is that it runs against a bunch of BigQuery data. It expects that BigQuery data to get there by way of a utility that scrapes everything out of GCS buckets and loads it into BigQuery.
D
We can then see that something started failing here around September 27th, so I can click on the latest failure. But before I do that, I can also click these drop-down arrows to see what jobs this failure has been occurring in, with these little sparklines that show, yeah, this definitely started happening recently across all these jobs, and then there seem to be sporadic failures in other jobs over the past week. I can also see which tests this failure seems to happen in.
D
In this case, it looks like they're all admission webhook related. Other links that I have available to me: I can search GitHub for this. If I click this link button, this gives me a direct link just to this triage cluster. If I search GitHub, I can maybe see if somebody has helpfully filed an issue with a reference to this — and in fact, I filed an issue with a reference to this; I'll show that shortly.
D
We have tried in the past to have a bot automatically file, say, the top ten of these clusters, in the hopes that somebody would actually fix them. That generally hasn't worked well in the past, but this can be an incredibly useful tool for people who are on the release team or in the CI signal role to identify what's going on, where it is happening, and so on and so forth. Other things I can do with this tool: I take a look at this admission webhook thing.
D
That's the test that's failing. I paste that in the test box, hit enter, and then that will show me all of the different failures that are happening in the test named admission webhook — it's actually a regular expression, right — so I can see there are actually numerous failure clusters that seem to be occurring within this particular test, but most of them kind of look like this. I can also do the same thing if I want to see everything that's happened in jobs related to, say, containerd. Again, it's a regular expression.
D
These are failure clusters that are known to contain at least one of the containerd jobs. You'll also notice the check boxes up here show that I'm only looking at failures that have happened in the CI jobs — these are the jobs that run continuously, periodically, after pull requests have been merged. So I can also use this check box here to include failures that have happened from pull requests, or look at just the failures that have happened in PRs, which — okay, that just doesn't look right. Oh right, we're not running containerd jobs on PRs. All right!
D
That looks a little better. So, following me through the trail of woe — oh look, this is the top failure. Let's try and chase down what's happening and see if I can be helpful. So I'm going to click on the latest failure that has happened in this failure cluster. Clicking that link will take me to the Gubernator page. Gubernator, just to remind folks, is a place where I can see all of the artifacts and logs for a given job run or test run.
D
This takes me directly to the failure text for the test that's failing, including the command that I can use to reproduce the test locally, if I happen to have a Kubernetes test cluster; a link I can click to see the standard out and standard error from the test log at the time it's happening; and so on and so forth. If I then scroll up here, there are two links that seem to be kind of useful from my perspective.
D
One is clicking the name of the job, which will usually send you back to the Testgrid dashboard for that job; and then, if I click the recent runs link, that will take me back to Gubernator's view of having scanned back through the previous buckets. The problem I'm facing right now — oh look, actually it's back up to date. So now that I've clicked on Testgrid, I can see that, well, look, all of these admission webhook things have failed and been failing since 9/27, or actually since some time here on 9/26.
D
So the next thing I could do is try and see if there's a difference between these two. You'll notice I'm flipping back and forth between the columns here — I know we need to work on labeling what these columns are — but the larger one here is most likely the interesting one, right: the version of code that corresponds to the version of Kubernetes under test. And unfortunately, that hasn't changed across these, so it must be something else, environment related. So this is just an example of how I can use triage to kind of quickly troubleshoot what's going on.
D
I could now do this process against a variety of other jobs. The bug I was going to show you is that this view seems to be out of date in comparison to Gubernator's view. Gubernator also lets me walk back through previous test results; Gubernator just does it a slightly different way, where it bounces back through BigQuery — or sorry, it bounces back through GCS — whereas, I believe, Testgrid takes some of its results out of BigQuery.
D
So some of the fun issues I've dealt with in the past — let's find it here — are that sometimes triage will fall wildly out of date. You can see this with this line up here, where it'll tell you builds from such-and-such a date to such-and-such a date, and sometimes that date is wildly in the past.
D
Where do I go to figure out what's happening when I see that it is stale? The first thing I can do is take a look at the job that is responsible for running all of the BigQuery stuff. We have since updated that job to dump its logs into GCS and be visible on Testgrid the same way most of our jobs are. I can click on one of these, and I can then take a look at the build log.
D
I'll just take a look at the raw build log, and you can see how this job is authenticating itself, setting up BigQuery, and then it's running these BigQuery queries; it's running a Python script based on the outputs of those queries. The data set that it's hitting for all of this is publicly accessible, so this is much like the publicly accessible GitHub data set in BigQuery — anybody can query this. It's just that it gets charged to your Google Cloud account, and you get a bunch of free usage right out of the gate. So you can see it's going through all of the different tests and clustering them together based on the number of failures in the test, and so on and so forth. The code responsible for doing all of this stuff is here — you can see that we're kind of lacking on our docs for this.
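Since the dataset that backs this is public, anyone can run queries against it from their own Google Cloud project. A minimal sketch with the google-cloud-bigquery Python client — the table name `k8s-gubernator.build.all` and the column names are my assumptions about where Kettle currently lands the data, so verify them in the BigQuery console first:

```python
from google.cloud import bigquery

# Queries are billed to your own GCP project; the BigQuery free tier covers casual exploration.
client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

query = """
    -- table and column names are assumptions; inspect the dataset before relying on them
    SELECT job, result, COUNT(*) AS builds
    FROM `k8s-gubernator.build.all`
    GROUP BY job, result
    ORDER BY builds DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(row.job, row.result, row.builds)
```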
D
That thing is called Kettle. We've since set up a metric that shows us the ingest rate of data into BigQuery from GCS, and we can see that there are gaps of time — times where the Kettle job ends up freezing or pausing — and so we have an alert set up in Velodrome. We also have a similar periodic job that's responsible for going through and verifying that the BigQuery tables are no more than six hours out of date, and otherwise it starts failing and going red. We're then going to hook this up to Testgrid's alerting.
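A minimal sketch of what such a staleness check could look like — this is an illustration of the idea rather than the actual periodic job; the table name is assumed, and the six-hour threshold comes from the description above:

```python
import datetime
import sys

from google.cloud import bigquery

MAX_AGE = datetime.timedelta(hours=6)
TABLES = ["k8s-gubernator.build.all"]  # assumed table populated by Kettle

def main():
    client = bigquery.Client()
    now = datetime.datetime.now(datetime.timezone.utc)
    stale = []
    for name in TABLES:
        table = client.get_table(name)      # table.modified is the last-modified timestamp
        age = now - table.modified
        print(f"{name}: last modified {age} ago")
        if age > MAX_AGE:
            stale.append(name)
    if stale:
        # exiting non-zero makes the job fail and go red, which alerting can then pick up
        sys.exit(f"stale tables: {', '.join(stale)}")

if __name__ == "__main__":
    main()
```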
B
So please do take a look at this grid for when your SIG is delivering their next update, so that you don't get in the position of having somebody from Contributor Experience contact you three days before the community meeting and say, "oh, by the way, we have to do that." So next week is the retrospective, and then, after that, we're going to get updates from Service Catalog, Instrumentation, and Network.
B
So this is kind of a notice to those three SIGs: you're hearing it now, so there's no last-minute throwing stuff together. So thanks, and do keep track of that; and if you can't do a particular week, then please let us know in SIG Contributor Experience and we will shuffle the schedule. Okay.
E
So, okay, yeah, I think it's fine. So, for that: the release branch for 2.4 has already been cut and the release process has been kicked off, so I think the estimated time when the release will go out is sometime around October — maybe mid or late October. That's when the 2.4 release will be officially out. So these are the new features for Spark on Kubernetes in 2.4: we now have support for Python and R.
E
These are long-requested features from the users. Client mode is also a very popular feature that users have been requesting; client mode support will allow users to run things like spark-shell or, you know, notebooks like Zeppelin or Jupyter. It also comes with support for mounting a few types of volumes — things like hostPath and PVCs can be supported now — and it also has better, finer-grained control over the resource requests for executor pods.
E
So previously, the only thing supported was integer values for the executor cores request — not memory, the CPU request. Now you can actually use fractional values or millicpus, like what you would usually do with Kubernetes. There have also been some changes to the Kubernetes scheduler backend code that make it more robust to failures talking to the API server, for example. So, in terms of roadmap or future work:
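For reference, both the volume mounting and the finer-grained CPU requests are driven by Spark config properties. A hedged sketch from a PySpark session — the property names are as I recall them from the Spark 2.4 running-on-Kubernetes docs, and the cluster endpoint, image, and claim name are placeholders:

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (
    SparkConf()
    .setMaster("k8s://https://my-cluster:6443")                        # placeholder API server
    .set("spark.kubernetes.container.image", "myrepo/spark-py:2.4.0")  # placeholder image
    # finer-grained CPU request for executor pods: fractional cores / millicpus
    .set("spark.kubernetes.executor.request.cores", "500m")
    .set("spark.executor.memory", "2g")
    # mounting a PVC into executor pods, one of the newly supported volume types
    .set("spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path", "/data")
    .set("spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName", "my-claim")
)

# Creating the session from a shell or notebook like this is effectively client mode.
spark = SparkSession.builder.config(conf=conf).appName("spark-on-k8s-demo").getOrCreate()
```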
E
These are the things we're working on or plan to work on. The first one is that there is actually some work in progress to make it possible to use a template to customize the driver and executor pods. The reason we wanted this is that the Spark configuration model is such that you have a lot of Spark config properties to configure each aspect of, for example, the driver or the executors.
E
With so many different options available to configure the pods, we were facing a problem of an explosion in the number of Spark config properties, so we decided to stop adding more new config properties. The idea is to just allow you to pass in a template, so you can actually use that to customize the driver or the executor pods the way you want. The community is also working on a new shuffle service to support dynamic resource allocation.
E
Dynamic resource allocation in Spark allows you to actually grow or shrink the number of executor pods at runtime based on the load. We're also working on Kerberos support — for example, to access secured HDFS from Spark. We also plan to work on support for local application dependencies that live on your local submission client machine. Right now you usually have to find a way to either, you know, bake those into images or upload them to somewhere like a remote HDFS cluster, or GCS or S3, whatever.
E
Then you have Spark download those remote dependencies locally, so we want to get better support for these local dependencies. There has also been some discussion around providing reliability for driver pods for streaming applications. This is really critical because, for streaming applications, if for whatever reason the driver restarts, it needs to restore from some checkpointed data so it knows where it left off and where to start from. This is very important for streaming applications.
E
The other thing is that, for streaming applications, you need to make sure that the new driver pod has exactly the same name as the old one, so that, from a user's perspective, they don't see it as a completely new submission, but rather, you know, something that restores from some checkpoint. Yeah, that covers the Spark aspect. We also collaborate with other open source projects like Airflow. I think some of these have already been covered, maybe in a previous update, but just to be sure that everyone knows what's going on here.
E
So in Airflow, in 1.10, we have the new Kubernetes pod operator and the Kubernetes executor for running tasks in arbitrary pods. There's also a Kubernetes Airflow Operator in the, you know, Kubernetes sense — a custom controller with a CRD for managing the lifecycle of Airflow deployments. This is also open source. We also have a project called the Spark Operator; this is also a custom controller, with a CRD, for managing the lifecycle of Spark applications. This project has received some new features as well.
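For context, here's a minimal sketch of the Airflow 1.10 Kubernetes integration mentioned above: running a single task in its own pod with the KubernetesPodOperator (the DAG id, image, and schedule are made-up placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

dag = DAG(
    dag_id="k8s_pod_example",          # placeholder DAG id
    start_date=datetime(2018, 9, 1),
    schedule_interval="@daily",
)

# Each run of this task executes as its own pod in the target namespace.
hello = KubernetesPodOperator(
    task_id="say-hello",
    name="say-hello",
    namespace="default",
    image="python:3.6-slim",
    cmds=["python", "-c"],
    arguments=["print('hello from a pod')"],
    get_logs=True,
    dag=dag,
)
```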
B
Well, I think that's plenty — wow, okay, well, thank you very much. If you have questions you can ask, you know, in chat. In the meantime, we're going to get a SIG update from Shyam about SIG Scalability, who have been busy this week with the release. Oh yeah, please paste that slides link into the notes so we can get it from him. So — Shyam.
F
Can you hear me? Good, okay. So I think the last update for SIG Scalability was given sometime in early August, and since then we've been working on a bunch of different things.
F
Firstly, in governance, I think the biggest milestone was that we finally have a scalability charter, and this is work that's been happening for a while now. I think this is a really big milestone for SIG Scalability, because we often enter into these discussions close to the release about how we are going to deal with scalability issues — are we going to block the release on them, are we going to neglect them, things like this?
F
So we identified a bunch of things with respect to the testing infrastructure and all the resources that scalability consumes right now, and there's been a plan laid out — it's part of a GitHub issue, I can find that — which basically identifies the areas and the work items to be done before we're ready to send this stuff over to the CNCF.
F
Besides those, we've been working on some newer areas; one of them is networking SLIs. For a long time now — I think more than two years, maybe even three — we've just lived with two SLIs in Kubernetes as the performance SLOs: one of them was the API call latency SLO and the other is the pod startup latency SLO. We're starting to have discussions and propose new SLIs in different areas.
F
The first one is networking, and Wojtek has been working on this; this should have been published in the kubernetes/community repository under sig-scalability. So there are a bunch of new SLIs for networking, if you want to start to monitor those. Besides that, there has been some progress that we made on Cluster Loader v2 — I think I mentioned this before.
F
We haven't yet migrated all the jobs to it, but we have a prototype which shows that it's possible to move those tests onto Cluster Loader, and hopefully this opens up avenues for a lot of other teams and developers wanting to scale-test their features to be able to do it — to be able to easily do it using this tool.
F
It's part of the perf-tests repository and there's also documentation there. With respect to CI health and our tests, I think there have been mainly two things. One of them is speeding up our scalability presubmits, which was kind of a technical debt that we took on ourselves back when we added these presubmits a while ago — I think more than a quarter ago. These presubmits were the slowest ones, so they were kind of the bottleneck for PRs.
F
Besides that, I think the other main area is fixing scalability issues with the release, and we did actually end up — as always — seeing a bunch of scalability issues close to the release. One of them was with the performance of the node controller when writing changes to nodes: there was an inefficiency in the design, or in the way the taint controller was implemented, which made cluster startup time really, really slow.
F
This is because the other scalability issues we had been seeing over the last month kind of masked these kinds of issues. So while we were fixing the other issues — like this one with the node controller, and a bunch of other things, including another regression with the controller manager — this basically ended up masking the regression with CoreDNS until we had fixed the others.
F
So hopefully that should be better now. Yeah, I guess that's what I have on my side.
B
So if you were looking for something to do to contribute to Kubernetes, consider joining them. Okay, well, thank you very much. That's it for SIG updates for this week. Again, no SIG updates next week, but two weeks from now we have several SIGs coming up — please do look at your SIG's own schedule and make sure that your SIG is prepared, or reschedule. So now some announcements. Paris, do you want to do the steering committee election announcements, or shall I?
G
Everyone who is eligible to vote: you officially have less than one week to do so. We close at 6 p.m. Pacific time on October 3rd — one more time, 6 p.m. Pacific, October 3rd is the last time you can vote for the three slots that are up for grabs in the steering committee election. Check your email for the CIVS ballot — that's the Cornell voting service — it's titled "Poll: Kubernetes Steering Committee Election for 2018."
G
It looks like I've got the next one. The next thing is that we do have a contributor experience survey that is circulating around — links are in the agenda. This survey is going to shape this special interest group's direction going forward, at least for the next four to six months. You all are contributors that are listening to this right now; it would be great to have your feedback on things like automation, mentoring, and how we communicate.
G
The next one is also me: SIG chairs, leads, tech leads, keepers of Zoom licenses, people who upload YouTube videos — please check your email for issues relating to those two services. We still have about 40% of SIGs and working groups that are not using the correct Zoom license, and we have quite a few of you who are missing many, many past recordings of your meetings on YouTube, which makes us not as transparent as we should be.
G
So please definitely check those items; we are happy to help in any way, so please reach out to SIG Contributor Experience and we'll help you there as well. I guess the next one's me too: the contributor summit for Seattle. Right now we have a waitlist issue. We are not sold out — I repeat, we are not sold out. There is a registration technical issue that we're working through right now; we may be pulling out of the KubeCon co-located registration process completely.
B
But we can finish up with some shout outs. A shout out from Dims to Jonas for launching Cloud Native Boston; a shout out from Tim — Timothy St. Clair — to the whole release team and everyone else who put in a crazy effort to make the rc.2 release, and actually the final release, today.
B
Presumably Aish gives a shout out to Stephen Augustus for helping staff the 1.13 release team, and Stephen Augustus gives a shout out to everybody who volunteered — we actually have quite a few volunteers for the 1.13 release team. I'll tell you, as a lead, one of my difficult tasks is going to be turning a couple of those people down, which I haven't quite figured out how to do yet, but there's a limit to how many people I can mentor.
B
So that's awesome. I almost think we should create a position called Release Team HR and assign it to Stephen, because that seems to be his de facto position. And then a shout out from Aaron to Cole Wagner for all of the work that he's done to kill off the mungers and make Tide live for merges, instead of using the mungers — so, yay.
B
So thanks so much. And as a reminder to everybody: we do these shout outs every week, so if you have a shout out for somebody else, post it in the #shoutouts channel on Slack, or email Contributor Experience, I guess, and we will include those, because lots of people do amazing stuff for the project. And that is all for this week's community meeting. The recording will be on YouTube, and next week we will have the retrospective for the 1.12 release. So see you next week.