From YouTube: Kubernetes SIG Apps 20190513
A
Welcome to Kubernetes SIG Apps for Monday, May 13, 2019. I'm Matt Farina, and we have all three co-chairs here — Adnan and Ken are with us — so I'll start by sharing the meeting minutes and agenda into the chat here so we can get on the same page. The first thing up is that we actually have a bunch of announcements.
A
The first one is that Janet is going to be joining us — we're planning on her joining as a new co-chair for SIG Apps. She has been one of the people leading on the workloads APIs for some time, and she's been co-chairing the last year's worth of KubeCons and CloudNativeCons, with her last one, I believe, coming up in a couple of weeks — or next week, actually. So we've invited her to come in. Welcome, Janet.
A
The next one is that our next meeting falls during KubeCon + CloudNativeCon in Europe, so we're going to go ahead and cancel it, because a number of folks can't make it and they're going to be running around busy. The one after that falls on Memorial Day in the United States, and a number of us won't be able to make it that day. So the next two meetings are cancelled, which means the next meeting we'll have is, I believe, June 3rd or somewhere around there.
A
We also have some important dates coming up before the next time we meet. May 28th is when 1.15 beta 1 is slated to be released. Right after that, May 30th is when we're expected to have code freeze for 1.15, and then May 31st is the docs deadline, where our open placeholder PRs need to be in for any docs changes that are going in. We have a couple of little things that we've been talking about doing in this release cycle, so documentation, for example, and code freeze affect those things. And the last date we have is, if you're going to KubeCon EU next week, at 11:00 a.m.
A
Sure — all right, then. The next thing we have is two discussion topics. The first one we're going to get into is the tight loops and cascading failures that can impact a cluster, and then we're going to go into bug triage stuff. The tight loops and cascading failures first came up in SIG Apps — or SIG Architecture — last Thursday as one of the talking points. Ken, did you want to kind of describe what was going on, since it's in workloads and I believe you know what's going on there? Yeah.
B
So if you remember the bug that was relatively recent about the scheduler placing pods on nodes where the kubelet can't run them at runtime: security context would be one; another one if you had a RuntimeClass that was not appropriate; or if you set up Windows and you didn't set your taints and tolerations correctly. Basically anywhere where Deployments — in particular ReplicaSets — actually end up placing pods on a node where they can be scheduled but can never run.
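One of the misconfigurations Ken describes — Windows workloads landing on nodes that can't run them — can be sketched with a Deployment like this (the names, labels, and taint here are illustrative assumptions, not from the meeting):

```yaml
# Illustrative sketch: a Deployment pinned to Windows nodes.
# If the nodeSelector (and, on tainted clusters, a matching toleration)
# is missing, the scheduler can bind pods to nodes whose kubelet can
# never start them — the tight-loop scenario under discussion.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels: {app: win-app}
  template:
    metadata:
      labels: {app: win-app}
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # keep pods off Linux nodes
      tolerations:
        - key: "os"                 # assumed cluster-specific taint
          operator: "Equal"
          value: "windows"
          effect: "NoSchedule"
      containers:
        - name: app
          image: mcr.microsoft.com/windows/servercore:ltsc2019
          command: ["cmd", "/c", "ping -t localhost"]
```

Omitting the `nodeSelector` and toleration would reproduce the failure mode: the pods bind, the kubelet rejects them, and the ReplicaSet keeps replacing them.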
B
The other idea would be — kind of my thesis is: the scheduler is trying to place pods on a location where they can never run, and I think the correct thing to do is to solve the interaction between the scheduler and the kubelet. The properties that prohibit a pod from ever running on a node should be surfaced to the default scheduler, and the default scheduler should make a more intelligent decision about placement. Effectively, that leaves the pod pending.
B
Then you don't have the problem. That said, we're still being asked to kind of look at all of the issues globally, because not all of them are identical — there's more than one issue here — and look at them and come up with a list of suggestions for a smaller group. It probably crosses SIGs to address them individually. It's probably a longer-term thing; this isn't something that's going to get fixed in, like, 1.15 or 1.16, but that's the kind of thinking.
B
The kind of thing I'm thinking, though, is that I don't want to make it harder for people who are writing custom controllers to write custom controllers. The two main kits right now are either Operator SDK or Kubebuilder, and both of them are using controller-runtime and controller-tools at this point. There are certain things we can do to make—
B
—you know, to add joint libraries there that make it easier for people to consume back-off logic. But I don't want to make it rocket science to produce a controller that works well by mandating that all controllers implement very complicated business logic in order to try to determine whether the pods they're going to create are actually going to be schedulable. It's really not a concern of the controllers; that domain is really the domain of the scheduler itself.
C
Yes — regarding the scheduler, I think with the move of DaemonSets to the actual scheduler being the one deciding which node to place the pod on, we don't have any controller that actually chooses a node anymore. That was the one interaction of that kind between the scheduler and the kubelet, so I think we are pretty safe there, if I recall.
B
The only one that I'm aware of that still does this would be — well, there are some things there, right: if you start mutating the StatefulSet pod template, that can be problematic, and likewise the DaemonSet pod template. The kind of agreement that we have with API Machinery is that the correct way to do injection is to inject only on the pod itself. So you don't implement mutating admission control for StatefulSets; you implement mutating admission control for pods, and that's safe to do.
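The injection guidance described here — mutate pods, not the workload templates — corresponds to registering the webhook against the `pods` resource only. A minimal sketch (the names, namespace, and service are illustrative; the `caBundle` is omitted):

```yaml
# Illustrative: a mutating webhook registered against pods only, so the
# pod templates stored in Deployments/StatefulSets are left untouched.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector            # hypothetical name
webhooks:
  - name: inject.pods.example.com   # hypothetical name
    rules:
      - apiGroups: [""]             # core group, where Pod lives
        apiVersions: ["v1"]
        operations: ["CREATE"]      # mutate only at pod creation
        resources: ["pods"]         # NOT statefulsets or deployments
    clientConfig:
      service:
        namespace: injector-system  # hypothetical
        name: sidecar-injector
        path: /mutate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Because the webhook only sees pod CREATE requests, the controllers' stored templates stay unmodified and their template comparisons keep working.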
B
On whether it's blanket safe — I don't think that happens in DaemonSets; if it does, it's another issue, and we don't do the comparison there. We do the comparison between the ReplicaSet pod template spec and the Deployment pod template spec, that's for sure. So the one thing we know is unsafe — and the issue that arose originally — was when someone implemented a mutating webhook that modified the ReplicaSet pod template spec and caused it to no longer coincide with the one inside their Deployment.
B
There was another one people reported where they implemented a mutating webhook on a StatefulSet, but the issue got closed and it's not clear to me exactly what the person was doing. It didn't destroy their entire workload, but it caused a ridiculous number of ControllerRevisions. So there was a tight loop there; I just wasn't able to figure out what the tight loop was.
B
But in general we should consider this category of things as tight loops, I guess, and I signed up to go through and try to look through all of the issues and figure out — at least do a cross-correlation of them. Then the idea is to sit down and categorize them, and then go talk to SIG Node and presumably SIG Scheduling and figure out some remediation strategies.
B
Okay — probably in a Google Doc to start with, and then, if there are some recommendations, we'd probably put them in the public documentation. So, for instance, it's not clear to me that the safe utilization of mutating webhooks in interaction with Kubernetes workload objects is captured anywhere. That's one thing we know we need to update; we need to go update the docs to indicate it, Daniel.
B
The StatefulSet one was an open-source issue that was eventually closed because the person kind of disappeared and didn't give more information — if I can't reproduce it and I can't see what you're doing, there's not much I can do for you. The ReplicaSet one, where we were tight-looping due to pod security context, is an open issue presently that I think is labeled—
B
—sig/apps, sig/scheduling, and sig/node at this point. I think Craig has commented, I know I commented, and Lee commented. I think the general consensus there was that the scheduler should figure out how to do something more intelligent than scheduling pods to a place where they immediately fail: either leave the pod pending because you can't schedule it, or don't start the pod.
B
If you know the security context isn't going to happen — because usually the tight loop would only occur in the event that the pod actually gets terminated and we have to delete it and recreate it. That's when you would tight-loop. If the pod gets stuck pending, or it never gets started, we'll never try to recreate the pod in any of the other controller cases.
B
So looking at each of them individually is kind of what SIG Architecture wants to do, and then categorizing them, and then coming up with a long-term strategy across SIGs to remediate it — in terms of looking at how severe each one is. There's more than one open issue around this general topic, but it's not, I think, the most problematic thing for users that I've seen lately. It's something that SIG Architecture just wants to address, or start addressing, now, because we've lately started talking about it.
A
Yeah, so we really just need to understand each of the individual things and then tackle them appropriately, which I think was the plan that we'd come to in SIG Architecture: just kind of dig in, understand them, get it figured out, and then deal with it case by case. It's part of — what was it — back in December at the Kubernetes contributor summit, one of the things that Brian Grant asked everybody to focus on for this year was kind of reliability.
B
We do it — in Deployments, that's the only one still using the template comparison, and it's not doing it at pod spec; it's only doing it between the ReplicaSet and the Deployment spec. We could still probably have problems, but I think you should go find Janet's doc, because we considered using generation, but there's a problem — we considered a bunch of things. So we should at least share out what we already thought about doing, and then go from there and see what's still left to look for.
A
The next section we actually have is a bug triage session. We were going to do a little bit of walking through bug triages, but I was hit up on the side at the end of last week about a question that went along with this: should we have a separate bug triage session, or how should we incorporate it into this meeting — because some of the other SIGs do that. So I thought, before we jump into the triaging itself, we'd take a minute and talk about when we would do that as a group.
F
Yeah, so I made this suggestion last week. I actively follow a lot of the SIG Apps issues and PRs, and I saw a repeated pattern where a lot of issues come up which are similar in pattern or behavior, asking the same questions. I feel like it might help the community, as well as the maintainers specifically, to have an active session — because this SIG Apps meeting is probably bi-weekly, right? I think so. I mean, it may—
B
One of the things we're concerned about is that we do have a lot of active participants that are in the EU — you know, like them being on this call — and trying to sync up at a time that's good for both the US and the EU can be tough. We put the idea out there that we could do a second one, but I think there was not a consensus that we would do it. Now, I am not particularly opinionated either way, but—
A
So I think we have time in these meetings. If we spend maybe 20 minutes every week, or 10 minutes every week — I'm not sure if we need more than that — maybe the easy way would be: we take a little bit of time in these meetings and do it, and then we see if we outgrow it. If we naturally do outgrow it, then we add some additional time. How does that sound?
B
You could triage bugs all day long — the rate at which they're produced on Kubernetes would actually still be greater than the amount of effort we can apply to triaging them, so timeboxing it is probably the only strategy that can possibly be effective. I mean, that's not to say that we won't do any triage outside of the meeting, but, to say it safely, you could spend an infinite amount of time triaging bugs on Kubernetes. The whole reason they added the lifecycle/stale stuff was because, even with everyone in the project contributing, it's a monotonically increasing queue.
B
It doesn't work, even with all the triage we're doing — technically we're not actually keeping up; issues are produced at a rate, and there's a reason the lifecycle/stale thing exists. The best we can do is usually pick out the ones that are high priority or burning — or something that we're building — and the ones that sound extreme, and then go from there.
B
For whatever reason, people get very confused about the difference between issues and other channels of communication, like Slack or Stack Overflow, and they'll file issues that are effectively questions, which really aren't appropriate as an issue, right? There's an effort put into trying to get people to the appropriate channel for the information they need, but it still is what it is.
B
All right, so we can go top to bottom. Some of these have already been triaged or have people assigned to them. This one is kind of interesting and worth bringing up: effectively, when someone's creating a Job with the OnFailure restart policy, they're seeing no backoff, and the reason they're not seeing backoff is because of an init container failure. When an init container fails, it's actually going to cause the pod to go back into — well, it never gets to Running, right?
B
So if it's never Running, it's recorded at phase Pending. I think this person actually issued a PR with a proposed fix for it. It's currently assigned to Maureen and [unclear] — we're down to a solution, Janet. So looking at the PR — I mean, I think the issue is valid, given that the expected behavior of the Job is modified by the init container, and there's no way to actually get the backoff limit to work when the init container fails and the pod goes to Pending.
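The failure mode under triage can be sketched with a Job along these lines (an illustrative manifest, not the reporter's):

```yaml
# Illustrative reproduction: a Job whose init container always fails.
# The pod never reaches Running, so the job controller's backoffLimit
# accounting (which counts failed pods) may never take effect.
apiVersion: batch/v1
kind: Job
metadata:
  name: init-fail-demo     # hypothetical name
spec:
  backoffLimit: 3          # expected to stop retries after 3 failures
  template:
    spec:
      restartPolicy: OnFailure
      initContainers:
        - name: init
          image: busybox
          command: ["sh", "-c", "exit 1"]   # always fails
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "echo done"]
```

Because the failing init container keeps the pod Pending (the kubelet restarts it in place under `OnFailure`), the pod is never counted as failed, matching the behavior described above.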
C
Yeah, I think we do now. I recall — I was there — I know we had some evicted pods and it wasn't restarting them, I think. But nowadays, evicted pods that have failed shouldn't be counted toward the ReplicaSet. When—
B
This is really old — out of support. So then I guess the question is: what do we do with it? Generally, something that's 1.5-era is so far out of support that we're not likely — we're not cherry-picking to it. So I guess one thing we could do is just ask if it's reproducible on something recent. Yeah — if it is reproducible.
B
Generally, if the issue is no longer useful, then we just close it. The other thing that'll happen is that it'll automatically go stale after some period of time and then close, right? So, ideally, you resolve the issue; and if it's just something that's not reproducible on the current stuff and it's over, just close it. In the event that there's no response for some period of time, it'll close itself anyway. I personally prefer to let people close their own issues that they open; that's not always possible.
G
You cannot just restart a Job. The only reasonable approach for that one would be duplicating a Job — or cloning it, whatever — because if you were to clear its current status, you would lose the data from the previous run. So I'm guessing the easiest approach would be to have a create command, or some kind of duplicate command, which would replicate the same Job. I can't think of anything else. Yeah.
G
You're basically stamping out the idea of a Job — it ran, it completed — so I'm guessing you want to run something different, or maybe you want to have a CronJob and then spin up Jobs on demand, which is feasible as well. What's that kubectl CLI tooling that we have? There's a create-job-from-CronJob, yeah.
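The CronJob-as-template idea mentioned here can be sketched like this (the names are illustrative): a suspended CronJob never fires on its own schedule, but `kubectl create job <job-name> --from=cronjob/<cronjob-name>` stamps a fresh Job out of it on demand.

```yaml
# Illustrative: a suspended CronJob used purely as a job template.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report-template    # hypothetical name
spec:
  schedule: "0 0 1 1 *"    # irrelevant while suspended
  suspend: true            # never runs on its own schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: busybox
              command: ["sh", "-c", "echo generating report"]
```

For example, `kubectl create job report-manual --from=cronjob/report-template` creates a new Job from the template; rerunning with a new name is effectively the "duplicate" command being discussed.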
A
And it may not have anything to do with CronJobs, right? This is kind of just a user-experience thing, where somebody says: we fired this job off, and now, at some other random point, I just want to run again the one that was already run. It's kind of a user-experience thing more than anything else, and I don't think they're talking technically about wanting to rerun the same thing — they kind of want that characteristic of "grab that job"—
A
—that was run. Sure — setting aside its status, effectively you're copying its template and then just rerunning it: running that template and creating a new Job out of it. But it's the user experience around that — that simplification — that's probably what they're asking for.
B
Yeah, yeah — but I mean, the honest truth is we're probably not going to modify the v1 Jobs API in order to do this. There are valid other ways to do it. What I usually do — I was just saying — is I don't want to tell somebody this is a thing we're going to consider doing when it's very likely we're not. So we're unlikely to change the Job API; I mean, you just go and copy a completed Job to create a new one, and I—
G
—haven't looked into that kubectl plugin, because apparently somebody created a plugin which does exactly that: it gets and recreates the same Job, maybe renaming it. And that's perfectly reasonable and viable, right? Yeah, I'm not seeing even us doing, in this CLI, an option for duplicating a Job. I know—
G
—especially when it comes to create and, I would say, dumb simple commands such as create, set, and all of those, we would rather prefer going towards server-side dynamic commands, because we currently maintain way too many commands which are just filling in the details about a resource. And I'm talking about real life. So—
A
Could I suggest that maybe what we tell them, then, is that this is the kind of thing that's not going to happen inside of the API — at least not anytime soon — so the suggestion would be that they create their own code, or a custom kubectl plugin if one doesn't already exist, that lets you retrigger a Job. I just did a search for one and didn't find it, well—
B
I would say no — it's, like, a question of well-defined behavior for something that's GA. It's something we should look at, but probably we want to catalog it and address it in a uniform way across all the controllers. Barring a suggestion otherwise, I would say anything CronJob-GA-related should be CronJob-specific, and likewise the dependencies on the Job controller — that's already been GA'd, yeah.
B
We should probably take a look at what we can do there. But honestly, if you're killing your cluster because you launched a job with an image that doesn't exist and you're not monitoring anything... But yes, we want safety rails to help our users have a good time and to protect our control plane. Honestly, having resource quotas—
B
—for pods might help this. But using the system appropriately — and, you know, being able to monitor the fact that you're having massive numbers of image-pull failures for a CronJob — would probably be the answer if you're launching a massive workload on an image that doesn't exist, or many of them. So even if we put guardrails in there, that person might be giving themselves a slightly bad time.
B
The next step here is to implement a KEP, and if they want to open the KEP, you know, I feel like we should take it, at least — well, we can discuss the KEP and see if we think we want to do it, or if it's something we want to accept. But if it's going to be a built-in, we'd probably have to take it to SIG Architecture to approve adding a new built-in API, and generally they're trying to not do that.
B
Yeah, I'm saying: have you considered implementing this as an extension? We might be willing to sponsor it as a kubernetes-sigs project — please consider submitting a KEP against SIG Apps, and—
F
We added an initial KEP, which was provisional, and then I updated it with some suggestions on what the options might look like for StatefulSets. I wanted somebody to look at it — especially people who are very familiar with the stateful code paths — to comment on whether those suggestions make sense and which one should be the right way to go. I have some pending comments I'll update it with, but if somebody could review it—
F
So basically, I think my main question was: has SIG Apps in the past thought about doing sessions like code walkthroughs, or some deep dives into certain areas of SIG Apps? My main thinking around that is: how do we expand the reviewer and approver pool so that we can get more people to actively look into things and maybe help out? I've seen, for example, that SIG Storage has done some code walkthroughs in the past, and I attended deep-dive stuff like this. What do people think — would that be helpful?
A
This is actually also one of those things that I think Paris has brought up with SIGs to help expand the knowledge base: for somebody to do code walkthroughs — and I want to say maybe API Machinery has done the same thing. Just to add more context to this: would those of you who work on the workloads controllers think about doing some code walkthroughs of the workloads controllers?
B
So I think — Mike, who's talking right now — we did some walkthroughs; he got up to speed, and he's going for reviewer right now, likely to get approved for it, and he got there primarily by doing active contribution and getting shadowed. So, similarly, you know, as we triage bugs, if you want to go deeper, either assign yourself and then one of us will shadow you, help you figure it out, and potentially help you fix it — that's one way to start getting contributions and other work in.
B
Was it — yes: if you look at, like, Joseph, his contributions are primarily around getting sidecars to work. So you know there are different ways to grow reviewership. I'm more a fan of doing it via contribution, but I'm not opposed — like, if we want to spend some time doing walkthroughs of a particular controller — I just don't know if that will get you there, right? Like—
B
If you're looking to be a more active contributor, you can be walked through code all day, but if you don't get the requisite contributions in, you're not going to get to contributorship or reviewership yet. So it's not everything. It's not clear to me that walkthroughs clearly lead toward it, based on how we assess contributorship and reviewers; shadowing seems to be the way that actually gets the contributions or the reviews done, and done better, yeah.
A
I think the reason that it's been talked about elsewhere is that it helps folks learn about what's going on in a controller — especially all of the tribal knowledge that's not documented in there, that's outside of that. And I know, from a video standpoint, some of the code walkthroughs have been some of the more popular videos that people are interested in looking at and watching to try to understand what's happening — so they have actually shown to be fairly popular.
G
There are two possible approaches to code tours for the workloads. There can be one general one, which would just go through the pkg/controller directory, pointing out: well, these are the many controllers that we have — just naming them — which, for people who do not know where everything is, would be useful. And then the other approach would be picking one of the controllers and slicing and dicing it, to explain more or less how it works, down to the letter.