From YouTube: SIG Cluster Lifecycle 2020-05-05
A: Alright, so, like I said earlier, I'm going to skip my topics that require attention from Tim St. Clair, who is not present today. So the next topic we have is by Nick from Tilt. Take it away, please.
B: Hi, my name is Nick. I work on a tool called Tilt, which lets people develop against Kubernetes clusters. A pattern that we've seen among clusters a lot recently, particularly test clusters and local test-and-debugging clusters, is that they all have some sort of local registry.
B: A local, insecure registry that you can push directly to. But we on the Tilt side have to do a fair amount of work to try to figure out where the registry is. What I would like to propose, and I'm trying to kind of coordinate among all the different local cluster developers, is some sort of way to communicate to other tools where this registry is. The initial proposal, which is thankfully (thank you for putting that on the screen) just to add some annotations to the kube-system namespace that just say:
B: This is where the registry is, and if you haven't set up a registry, here's how you set up a registry for users of the cluster. I think what I kind of would like some feedback on is just, you know: does this seem like a reasonable idea? I'm not totally clear on what the naming conventions for annotations should be, and then, if I wanted to standardize this, what's the right way to do that?
A: Right, so I commented on this issue that you created. I believe you created it in multiple repos, multiple projects; kind is one of them. In general, the way we tackle things in the Kubernetes ecosystem is, like you mentioned, we create a KEP. But some of these projects actually do not belong to the Kubernetes organizations.
A: Formally speaking, they are not part of the kubernetes and kubernetes-sigs organizations on GitHub, but they are essentially part of the ecosystem. So even if you create a KEP, we cannot guarantee that they will adopt the same namespace annotations. But a KEP is definitely the right process we follow, so once you create it, I guess we can start reviewing it. I added some comments here; I guess we could discuss this topic more in the meeting.
A: In particular, I have a certain concern about permissions, because the main question here is: who is going to have access to the registry annotations? Is it going to be the cluster administrator only, or are, you know, kubectl clients going to be able to access the namespace?
B: Yes, that's a really good question. I haven't thought through those questions, because this is mostly for kind of local, private clusters, and I'm not totally sure how these play out for, say, a big shared cluster. Certainly for this use case you want the individual app devs to be able to access this information, but I'm not totally sure if there's a standard way to publish that kind of information about who should have access to it.
B: I'm not sure I understand what the alternative suggestion is. Just to create a new namespace?
D: Yes, yeah, I was thinking this is like a classic use case for a CRD. I think you did comment in the thread about going to a CRD. I feel like we have historically had challenges with annotations, both from a sort of security point of view and a versioning point of view, all these sorts of things, and CRDs are now fairly lightweight.
D: It sort of sidesteps a lot of the security issues, and we can have versioning. To me, yeah, it sounds like you were agreeing with that. I think, to me, a CRD is an easier process, and you then have different options. You can create one that is unique to Tilt, so, you know, in your own namespace, in your own project, and then you can say: look, this works great.
D: Can we promote it to be a generic one that other tools with a similar use case would have? I can imagine that there are other use cases it would match, and then we could come up with a more generic name, so it doesn't have the word Tilt in it or something like that. Or you could just go straight to creating the more generic form on some more nominally neutral ground.
D: I don't know what your thoughts are on a CRD, and why you went to annotations, I guess.
B: I think I responded to that a little bit further down the thread. A lot of what we're trying to do is document the things that already exist in the cluster that are not Tilt-specific; things people already have some setup instructions for somewhere that say: set this up, and then just plug this into a configuration somewhere.
B: CRDs didn't seem like the right way to do that kind of configuration, since it's communicating the state of the cluster as it is, rather than kind of a desired state. But I'm not sure; I guess I would look for prior art for this kind of a thing.
D: That's fair. I don't know of any prior art for, like, instructions on how to interact with a namespace, for example. OpenShift might have something like this; I don't know if anyone's familiar with anything.
A: If it becomes a CRD, then it becomes a resource that you are going to maintain yourself, and it kind of feels like it doesn't belong in a Kubernetes KEP anymore, I think.
D: I think it would be nice to have a KEP if we're going to put it in k8s.io, or if we're going to collaborate on a shared CRD, but I feel like it might not be required to do that as step one. The versioning of CRDs is pretty rich, so you could, you know, literally define a Tilt CRD today with those three fields, like registry, registry-from-cluster, and registry-help, or structure it how you will, and then evolve it from there. The versioning facilities are reasonable, and then later we can change it.
E: I don't think the problem is being fully communicated, because for Tilt to produce a CRD, in order for them to get the behavior that's desired here: the problem is that we're trying to create a service discovery endpoint for Tilt to discover information, right?
E: So this is trying, if I'm understanding correctly, to get the user to a zero-config experience on the tilt.dev side, right? A dev points Tilt at a cluster and they don't have to configure what registry is supposed to be used; it's automatically determined from the cluster infrastructure, from what it's exposing about itself. So if Tilt were to maintain the CRD, then we would need to convince all of these other projects to adopt it, and that doesn't seem like the right maintenance model, which is, I mean, fundamentally the same thing.
E: It's just that this is different from a lot of CRD applications, where you have full control over the user installing the API and implementing, like, controllers or whatever. In this example, what you're actually trying to do is get a bunch of people to adopt an API, in the way that we've used annotations before, which is like extending core.
F: Oh yeah, this reminded me of a desire I had, which was to fetch the cluster CIDR. I think it's similar, because it's kind of a thing that the person who set the cluster up chose and put that information somewhere.
F: So I filed that issue three years ago, and it hasn't really gone anywhere, except there were a few comments pointing at ComponentConfig. So I guess that's my concrete suggestion: is ComponentConfig something that we can throw into the ring?
E: The more, you know, direct solution is to just put the right config value in Tilt and then also in containerd, and not have Kubernetes involved at all. But it gets a little bit confusing, because then Tilt is putting objects into Kubernetes that rely on this value. So the flow of the data, like picking the right arrows, is part of the problem, and the same thing goes for the cluster CIDR.
A: So, to back up a bit: this is a proposal that goes beyond the scope of Tilt. It's basically a proposal for adopting a common pattern for defining this local registry; it's not only about the tool Tilt here. On the ComponentConfig point: if we are going to make a proposal for local registries, it has to apply to these multiple deployers.
B: Yeah, that's correct. I expect that lots of other tools would want to use something like this, and it's purely just the cluster communicating to those tools how they want to interact with it.
E: Sorry, Nick, I haven't read the proposal, but do you picture this being a namespace-specific configuration?
B: I'm not sure. I mean, I think the problem is that all the prior art on this is basically just random docs and shell scripts that the individual clusters have written for how to configure them, and because it involves containerd patches on startup, that tends to be a little bit hairy.
A: So, how do people think about going with the Kubernetes KEP process? Nick, please note that it can involve quite a bit of bikeshedding, unfortunately; people are going to drop a lot of comments there, people are opinionated. So if you'd like to, we can go with this approach.
A: Of course, everybody is also going to raise their concerns, like what happens if, you know... I started adding my concerns here, like: after these values are added in the namespace, who is responsible for maintaining them, and things like that. So I can show you the KEP template, I guess. Just a second.
A: So it's a pretty detailed document. Basically, the motivation for this document is to cover, as much as possible, all the corner cases; it kind of makes the author think about the feature, the proposal. And at the bottom we have something like alternatives: alternative proposals to this particular proposal. So you can, I guess, also enumerate what we could do with CRDs instead of using the namespace, or something like that. So I think it's appropriate to go through a KEP process.
D: I think it might be helpful. I would also encourage you to pursue a Tilt-specific CRD in parallel and see if you can get someone to adopt it. And, as a point of order, don't call it cluster registry, because that has been something else in the past, so think of a different name. I was thinking container registry, which is why I slipped there. But cluster registry in the past has been a sort of database of clusters, so that will cause confusion.
D: I suggested container registry accidentally, but honestly I just want to make the point that calling it cluster registry might confuse some people, right?
A: So, in terms of where to submit the KEP: I'm sure we have some documentation on how to work on KEPs. Just a second.
A: Okay, so this page has the instructions. In terms of where to submit the KEP, you should send it to the kubernetes enhancements repository, in the folder that is called sig-cluster-lifecycle, and from there maybe we should create a new folder called generic or something. My idea for having a KEP like that is for the purpose of standardization. I'm personally not advocating that much for it; making it clear that I'm not advocating too much for this feature.
E: Is anyone else interested in the points that Brian brought up about the similarity of this to the pod and service CIDRs, and other cluster infrastructure details that are currently not exposed?
A: So is that really the problem? We don't have a way to get the controller manager configuration as well; I guess we don't have a way to get what the configuration is from an endpoint. Is that the problem?
F: Well, yeah, that's my argument: that the configuration of the container registry and the configuration of a few other things are basically the same problem.
E: How you have to do it today is: you have to configure Tilt, and then you have to configure containerd, right? And so if you change that instead to: you have to configure containerd and you have to configure Kubernetes, and Tilt just changes where it looks; the topology of how the information is being stored is not fundamentally different. It's just more complicated.
D: Yeah, I just think, Lee, you brought up the idea of the multi-tenant case, like the idea of a broader application to a multi-tenant cluster in the cloud. So I think when we are running a local cluster where we have configured containerd, then yes, it is one-to-one with ComponentConfig. But when we have a cluster in the cloud that we're all sharing, we just want to know, like: when I'm targeting this namespace...
D: Yeah, exactly, yes, I'm excited by that. And so I think that in the broader use case it won't map one-to-one with ComponentConfig, or one-to-one with a containerd setting.
D: Perhaps, but I don't know that they'll end up that different, other than that one of them will have the in-cluster name set and one of them will have the cluster name the same, and so it doesn't necessarily need to be set.
B: Yeah, no, actually, this discussion has been super helpful, because I think we've definitely had this problem on Tilt of trying to figure out how the cluster was configured, to, you know, see which things it would support, and some generic way to support that kind of use case would certainly really help us. Speaking just to the next action items, beyond filing a KEP, I think what we mainly need to move forward with this is the providers.
B: The people who develop these clusters, the kind team, the k3d team, or the MicroK8s team, really just want the tires to be kicked before they implement anything. And so a KEP process where we kick some tires is totally great, and if this is the right SIG to kick those tires, then I am all for it, and I can do all that.
E: Oh yeah, this was kind of related. I didn't realize Timothy wasn't going to be able to be here today, but he did mention, the last time we met, that we were going to try and schedule a cluster add-ons presentation and discussion on kind of the direction of the group.
E: We have several work streams, lots of people doing great work in different organizations with different incentives, and we would be happy to kind of collate that all together and present it to the group to get some more feedback, as well as see if anyone else wants to jump in on the project and help implement things. But yeah, I'll have to try and ping Timothy again; I understand everyone's busy right now with all kinds of things.
A: Okay. If you think that the next meeting is an appropriate time, you can prepare your, you know, presentation slides, and we can present it next meeting. If you think you can book a separate time, you can ping Timothy in advance or after the meeting.
E: Yeah, I'll ping him, but we'll get everything prepared anyway.
A: But just so you know, his calendar is very booked. That's why I'm saying that maybe the meeting is the more appropriate time, even if he actually cannot join today.
A: Okay, let's go for the subproject readouts. Cluster add-ons?
E: Yeah, we're just happy to welcome our Google Summer of Code mentees to the project, and we've got all of our ducks in a row there. And then for WG Component Standard, we postponed our meeting today. Last week we had a discussion about features and where the definition of features should exist.
E: The meeting recording is quite nuanced, but feel free to pop into the Slack channel if you are interested in the breakout of features from kubernetes/kubernetes: mainly, like, how we maintain our feature gates, and how people could extend the feature gates object from a third-party controller, and that kind of stuff. I'll probably list that in my bullet points; that was our main discussion from last week, and we have some action items, including a repository to create. So, take it away, Justin.
A: Are you getting enough interest for the Google Summer of Code items you prepared for cluster add-ons?
A: All right. Do you think that you're going to use the cluster add-ons meeting for questions from the mentees?
E: It's kind of already that way, in that it's pretty collaborative. But there is a minimum one-to-one mapping from mentor to mentee, where the recommended time commitment is about five hours a week of, like, personal mentorship in some way. So that kind of happens out of band.
A: All right. WG Component Standard?
E: Yeah, sorry, I already gave this update; it was just about the feature breakout. Oh yeah, all right, so I'll leave the bullet point there.
D: Yes, I think the main thing is that I messed up cert rotation on our etcd-manager, one of our components, about a year ago. So we have a one-year validity on the cert, and we're thinking through how to update that, backport it, and post an advisory. Basically, we have to do an update and we have to make sure everyone notices, and that's a little challenging. I think the thing of interest for other projects is to think about:
D: Like, you know, if you have something like this, how do you communicate to your users, and how do you try to get them to update, and make sure they update before they encounter whatever disaster is about to befall them, as it were. So yeah, we're going to be doing that this week, I guess, but we still have a little bit of time left on the year from when we actually shipped it. So hopefully we'll get there in time, and hopefully, like, some people have encountered it.
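The arithmetic behind that advisory window is simple but worth sketching: given a cert's notAfter timestamp, compute how long remains and decide whether to start warning users. The 90-day threshold below is an arbitrary placeholder for illustration, not anything kOps actually uses:

```python
from datetime import datetime, timedelta

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate's notAfter timestamp."""
    now = now or datetime.utcnow()
    return (not_after - now).days

def needs_advisory(not_after, now=None, threshold_days=90):
    """True once the cert is close enough to expiry to warrant warning users."""
    return days_until_expiry(not_after, now) <= threshold_days

# A cert issued with one-year validity, checked roughly ten months later:
issued = datetime(2019, 7, 1)
not_after = issued + timedelta(days=365)   # expires 2020-06-30
check_time = datetime(2020, 5, 5)
```

In practice the notAfter value would come from parsing the actual certificate (e.g. with a TLS library); the sketch only covers the decision logic.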
A: Yeah, the way we communicate stuff like that in kubeadm is with an action-required section in the release notes, and you utilize the action-required label in Kubernetes. I guess projects that don't have that should maybe have an action-required section in the release notes anyway, for such items.
D: Yeah, I mean, the other thing is: do the release notes reach enough people? That would be the way I would put it. For something of high consequence, like a security vulnerability, or a "your cluster will stop working on this date" type thing, those sorts of things are important, and we don't really have a great channel. We actually have a channel in kOps right now, where we can:
D: Every time you run it, we check the latest version, to advise you whether there's a new version of Kubernetes available or something like that. And what we're going to do is essentially mark the old versions of kOps as obsolete, which we can do. So if you run kOps, like the CLI tool, which you don't have to do, but if you do, then you will get notified. That's good, but it's still not everyone, because it relies on someone actively running kOps.
D: Yeah, I think if we had a kOps users group, that would be ideal. Like, a kOps advisories group would have been ideal, because then it would be a low-bandwidth mailing list that people wouldn't send to their spam folder.
A: Yeah, people nowadays disable their notifications completely, all notifications, so it's really hard to reach people who are, you know, actively not wanting to be reached. I guess kubernetes-dev is one of the ways, plus Slack channels, even Twitter. I mean, after sending multiple messages, if people still have this issue, it's really, you know, their fault that they are not following the project.
D: But I think we're going to look at whether it's possible to have a very low-bandwidth list explicitly for things like this.
A: All right. I think Dario dropped from the call; he says, for etcdadm, that he is on the way to adding repeatable binary releases. Let me check this, actually. By the way, I saw...
A: Oh, I'm going to ping Dario about this. Hopefully, with his administrator privileges, he can just select them and remove the label.
A: Here we have a new KEP for potentially replacing the kustomize support with raw patches. We are basically doing something that kind did already: they decided to use regular JSON patches, merge patches, and strategic merge patches, and I added a KEP for that.
A: What else? We have some failing e2e tests that I have to attend to later today. Any questions for kubeadm?
A: Yeah, this is only part of the puzzle. We have an issue; actually, Fabrizio created a number of issues related to flakes during kubeadm join, and this is only one of them. I think we have something like five different topics that we have to cover this cycle, but if you are interested, the PR number is 90645; I'm going to add it to the list.
A: All right. Minikube, Sharif?
H: Yeah, so we don't really have anything that interesting. We have a release coming out this week, but there's nothing huge there. So: my manager, who used to be the primary developer on minikube, has been working hard to open source a couple of his side projects, and finally, like ten months later, they are actually open source and ready for consumption.
H: So, Triage Party. I know Tim knows about Triage Party, but it's basically a way for a group of people to triage larger GitHub projects, and also to triage across several GitHub projects. I know a lot of you guys own several different projects, so it's a good way to do that, just with YAML, and then you can deploy it anywhere and have an application up and running. And then slowjam is a performance tool for Go applications, to measure:
H: If something in your application is taking a lot of time to complete without taking a lot of CPU, or any other measurable resource on your actual computer, then this is a good way to figure out what package is actually waiting or causing the slowdown. So those are the two. I think they've been really useful for us in terms of figuring out performance and keeping our open bugs under, like, you know, a thousand or whatever. So I just wanted to let you guys know that those things exist now.
A: I mean, congratulations on open sourcing these. I'm grateful that Thomas was able to persist through the open sourcing process. I personally didn't know about slowjam, but I already knew about Triage Party. I guess I'm going to try this out.
H: Yes, so we have Azure VMs set up for our CI now. That's all; the Windows folks have been really helpful.
A: Okay, any questions for minikube?