From YouTube: Kubernetes SIG Apps 20220307
A: Good morning, good evening, good afternoon, depending on where you are. Today is March 7th, and this is another of our SIG Apps bi-weekly calls. My name is Maciej and I'll be your host. Today, from what I'm looking at, our agenda is pretty packed, so I'm quickly skimming through announcements. I don't see any topics that we need to share. Probably the most important for all of us is that the code freeze for Kubernetes 1.24 is in the last week of March. Let me quickly go through my notes: it's the 29th, so March 29. That's roughly three and a half weeks from today, so make sure that all your PRs are reviewed by that date.
A: Our report covers the majority of the work that happened in the span of the 1.21 through 1.23 releases, including all the stable, beta, and alpha features. I hope that I didn't skip any of those, since I had to do it manually, although there was discussion that this will be auto-generated next time.

A: I did cover topics such as project health: how many people are contributing to SIG Apps, how many people are present in the SIG Apps meetings on a bi-weekly cadence, how many folks are present in the Slack channel, as well as our subprojects.

A: Does anyone have any questions, topics, or something that wasn't clear?
A: I remember that Aldo was asking questions about approvers in the controllers. I did answer, but I think I will repeat it here. The question was basically: are we expecting anyone to be an approver for all the controllers?

A: The answer is no. It's fine, and it's perfectly viable, for anyone to become an approver in a single controller, or in a group of controllers that they work on most. He was asking specifically about the batch controllers, so the CronJob and Job controllers: yes, it's perfectly valid to be an approver just for those two controllers. We're not saying you have to be an approver for everything or nothing.

A: I think the set of controllers that we own is broad enough that it might be harder for us to expect any single person to grasp all the controllers at once.

A: If there are no questions, we are hoping to merge the report as it is sometime today or tomorrow.
A: Okay, the next topic was from, I believe, Abdullah or Aldo. There is a discussion about Kubeflow donating the MPI operator to Kubernetes, and there is a link to this discussion. Aldo, Abdullah, do you want to speak about it?
B: Yes, sorry, I forgot to put my name there. We proposed this a long time ago in Kubeflow: that the MPI operator is a common enough application that it goes beyond the scope of Kubeflow, which is AI and ML.
B: So we thought back then that it was useful to have this application, this operator, in a different location, and over the course of the months this has become more understood in Kubeflow, and they are now in favor of donating it; or at least some of the folks are in favor of donating the operator to Kubernetes. The technical motivation is that they are currently working on a kind of KCM: they are building a single controller manager that hosts all the controllers, and the MPI operator doesn't really fit; it's a little bit different. Also, since Kubeflow is geared towards AI and ML, if someone who comes from an HPC background wants to contribute, they might get lost in the code base, because it has all these things they might not care about.

B: So that's the history of what's been happening here, and of course, if it comes to Kubernetes, the most natural fit is SIG Apps. Obviously not as a main controller, not as a core controller, but it could be done as a subproject. So that's my proposal.
A: Can you quickly describe what the MPI operator does, for those folks that might not necessarily be working with Kubeflow, and what the purpose of the MPI operator is? Then we can follow up with other questions that people might have.
B: MPI is the Message Passing Interface. It's a framework, a library, for building distributed applications that are highly cohesive, so that they work together towards computing one single thing.
A: Yeah, I hope so. Does anyone have any questions about what MPI is? If not, we can jump over to...
B: I can describe a little bit of what it does. In an MPI application you have a driver and you have workers, and all these workers are indexed by a number. What you need is a communication channel between all the pods, all the workers, which would be pods. For that we have to set up some services, and then there is more setup that needs to be done, like SSH keys and whatnot.
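The driver-plus-indexed-workers model described above can be sketched with ordinary code. This is a toy illustration only: it uses Python threads in place of real MPI processes on separate machines, and the slice-per-rank arithmetic is an invented example, not anything from the MPI operator.

```python
from queue import Queue
from threading import Thread

def worker(rank, outbox):
    # Each indexed worker (a "rank" in MPI terms) owns one slice of the
    # input and computes a partial result, as MPI ranks do before a
    # reduce step.
    partial = sum(range(rank * 10, (rank + 1) * 10))
    outbox.put((rank, partial))

def run(world_size=4):
    # The driver starts the workers, then gathers the partial results
    # over a shared channel, analogous to MPI_Reduce with MPI_SUM.
    outbox = Queue()
    threads = [Thread(target=worker, args=(r, outbox)) for r in range(world_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    partials = dict(outbox.get() for _ in threads)
    return sum(partials.values())

print(run())  # sum(range(40)) == 780
```

Real MPI replaces the in-process queue with network transports and runs the ranks on separate machines, which is exactly the wiring (services, SSH keys) that the operator automates on Kubernetes.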
C: Can I ask a question? (Yeah, go ahead.) Sorry, it's my first time here; my name is Suraj. The question I had was... actually it's not a question, more of a confirmation, since this is completely new to me. If I'm understanding this right, MPI is a protocol that is used to communicate between the machines, and the machines are doing some sort of... sorry, I don't know much about distributed algorithms, but something like map-reduce?
D: MPI is more like a framework that allows for distributed computation, particularly in the realm of high-performance computing. It does have multiple protocols supported under the hood for machine-to-machine communication (at low performance it'll do TCP; for higher performance you can get it to do InfiniBand directly), but the idea behind it is: I want to perform this computation. Usually, let's just say it's numerical linear algebra, because those operations are the ones that people really care about optimizing with MPI.
D: That's what MPI does, and the operator makes it possible to do it on Kubernetes, because traditionally, to set up an MPI cluster, you would literally be installing one of the MPI daemons (MPICH's, or something else) across all the machines inside of the cluster, and then allowing the controller to, kind of...

D: A machine was something you wrapped and stacked. It was a physical SKU that you had to purchase, plug in, provision, and add to the cluster, and then it lived out its lifetime until you decommissioned it or it broke. Whereas with Kubernetes, resources are ephemeral, and not only are they ephemeral, they're elastic, which presents some challenges in how you would take a traditional MPI-type scheduling problem and, you know, make it work inside of a Kubernetes cluster.
D: MPI has a specification and multiple implementations, and from a programming perspective you write a C or C++ (or another language) program, using the MPI primitives; my experience is C or C++. The libraries that you're using help you distribute your compute under the hood.

D: You could think of it, if you're familiar with CUDA or any of the things that you would use for GPUs, or OpenMP, which is a multi-threading framework: MPI is like that, except for parallel processing across multiple computers.
A: Which brings us to the issue that you pointed out in the notes: what the current implementation of the MPI operator looks like, and how much of it has to be made generic enough to be moved over to the Kubernetes SIG. Because I assume that currently, as it is, it's tightly coupled with Kubeflow. So how much effort would be required to make it less tied to Kubeflow, but rather generic, so that we can easily move it over?
B: Actually, it is not tightly coupled. There are no cross-dependencies, for one, and there are no API fields that refer specifically to other Kubeflow features; it always was kind of decoupled already. The only thing, of course, is that the Kubeflow project defines a set of common APIs, kind of like the meta APIs that we have in Kubernetes, and all the projects inherit those. But they are just APIs.

B: They are just a few names, just to have them in common, but they don't dictate any behavior, so just duplicating that code would be good enough, and perhaps doing some cleanups, because they didn't really adhere to the API guidelines that we have in Kubernetes. And yeah, minor things like that.
A: Okay. I assume you and Abdullah will be working on this topic?
B: Well, actually, no. We might in the future, but for now Carlos, who is here on the call, has a high interest in the project and is willing to do the migration.
E: Now? Yeah, much better. Okay. I've been working on that portion that Aldo just described, the API. So under Kubeflow there is a GitHub project that is called common, and the MPI operator inherits that. But if you really go and see it, it's kind of like a wrapper around StatefulSets and Jobs that inserts some metadata for Kubeflow. So duplicating that inside the MPI operator itself...

E: ...wouldn't be much work, and it will really help the project itself move to Kubebuilder and use the controller generator.

E: So it's really good for the project itself, because I was trying to get the controller, the Kubebuilder, to set some defaults, and it's really hard when you are importing an API. So that is even good for the project, and I will be working with Aldo on that, if we get the approval to donate it.
D: My only thing was: mainly, the idea is that you want to donate this so that people don't keep re-implementing it, and we have kind of a uniform solution that's the community-supported version that everyone can leverage and build upon, and I get that aim. Maintainership is my only concern: would it be the current maintainers who would continue to contribute and maintain the project?
B: Sorry, yes. In terms of future development, at the moment there is not much going on, although there might be: Carlos is working on PMIx, which is a protocol to do communication between the workers... sorry, the driver and the workers.

B: So I guess what I'm trying to say is that I don't expect a huge maintenance burden, but I am willing to maintain, to review, and all those kinds of things, and I'm pretty sure Carlos will be able to do that as well.
E: That's a good thing, and as Aldo said, a milestone they have for two or three years down the road is called PMIx. So not MPI but PMIx, the same letters but in a different order; it's kind of like a next version of the core itself of what we call MPI. I'm already working with them, the proofs of concept are working, and they are interested in that, to the point that they are writing C code internally in the MPI project to be Kubernetes-aware.
B: Was that a question for me? Not yet. I think the major problem, the major limitation, is that the workers don't really run to completion, right? They are more like servers, and so it kind of behaves more like a StatefulSet in that sense.

B: But I guess the discussion is open, to decide what to do: what's the best lower-level workload API that we can use in the long term, because currently the MPI operator is creating raw pods, which might be problematic.
E: Yeah, for sure. But making the case on maintaining the project: I will be there, and I will be driving the community into bringing more contributors to it as well. That's one of the things I'm committing to.
A: Okay, that sounds reasonable to us, I guess.
G: Yeah. So, Aldo or Abdullah, do you see that the direction you are thinking of at this point in time is to bring MPI under the Batch Working Group? Is that what you are thinking in the long term?
B: Most important is how it would fit in SIG Apps today.
A: So for this particular bit, being a sort of controller, it seems natural to fit more under SIG Apps rather than the other two SIGs that are part of, or are sponsoring, the Batch Working Group.
G: No, I completely agree, but I'm thinking more in terms of what the end goal is here, because some of the goals that we had set for the working group are to improve the workload API and identify the gaps in it. So I'm wondering if this is started as part of that effort, or whether you think of this as a separate effort.

G: The other question that I have is: at this point in time, apart from the Kubeflow project, is there any other project that is using this, or is there any interest that this will be used in the future, so that we can arrive at a consensus that yes, this is the pattern, or this is the library, that we want to agree upon?
E: Well, not the library, but yeah: MPI, like Kubernetes an open-source project, has many implementations. There is Open MPI, which is kind of the standard, but you will find Intel MPI; there is a Cray MPI.

E: There are many implementations, so the MPI operator under Kubernetes should serve as a generic implementation, so that if these other companies wish to provide Kubernetes support, they can contribute and add their specific needs for their libraries. But the basics of distributing pods and connecting the pods, the very basics of the MPI implementation, should be the focus of the MPI operator. As for users aside from Kubeflow: yeah, I can provide a list of companies already using the MPI operator.
E: So far it's just at the proof-of-concept level, because, as you can even see, the project is kind of at a beta, like pre-beta, alpha, kind of stage. But companies like NVIDIA, Red Hat itself (I work at Red Hat), Intel...

E: I have friends there and they are already running some internal proofs of concept with the MPI operator. So that's why I think there's a big interest from upstream contributing companies in having this under a SIG, because they are planning on having future products that will be based on the MPI operator.
G: Got it. So there are two pieces here: one is the generic API; the other is the various implementations that can be done by various companies. But at this point in time you think that the MPI operator is perhaps going to be the implementation that most of the companies, or vendors, would conform to?
E: With MPI, the differences between the implementations are mostly on the hardware side of things: whether you're buying a networking interface from this vendor or the other vendor. That's where the differences come into play, but the overall standard, when you are writing C code or Fortran code to do MPI, is the same.
B: Sorry, maybe you're getting confused: MPI is already a standard. And I'm so sorry, I need to take a call.
D: MPI is a standard and there are multiple vendor implementations. Popular ones are Open MPI and MPICH; there are proprietary implementations and such, but you wouldn't necessarily need to include that here. From what I've seen of how the architecture would exist, you could use a different MPI implementation, in terms of the libraries and so forth, by providing a different container, and then you're fine: the operator is vendor-agnostic with respect to the implementation of MPI that's used inside of the cluster.
F: Right. You basically compile your application with the MPI library, put this application in your container, and then the MPI operator would deploy this container for you so that the different instances of the workers can communicate with each other. So when you launch the operator, you say "I want 10 workers", and it will basically place 10 instances of this binary running in pods on ten different nodes. It's basically launching it using mpirun, which basically SSHes to the nodes.
F: Yeah, that's the MPI operator. The MPI operator is a CRD and a controller that basically reconciles the CR. You describe your job (it's called an MPIJob), saying this is the container, I want this number of workers, and this is the driver; and then the controller basically creates the pods that represent the workers and makes sure that they are able to communicate with each other and whatnot.
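Spelled out, an MPIJob is a small manifest. The sketch below writes it as a Python dict (the YAML you would apply has the same structure); the field names follow the MPI operator's v2beta1 API as I understand it, and the image name and command are made up for illustration.

```python
# Hypothetical MPIJob: one launcher (the "driver", which runs mpirun)
# and two workers. The controller turns this into pods and wires them up.
mpijob = {
    "apiVersion": "kubeflow.org/v2beta1",  # assumed API version
    "kind": "MPIJob",
    "metadata": {"name": "pi"},
    "spec": {
        "slotsPerWorker": 1,
        "mpiReplicaSpecs": {
            "Launcher": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "launcher",
                    "image": "example.com/mpi-pi:latest",          # made-up image
                    "command": ["mpirun", "-np", "2", "/opt/pi"],  # made-up binary
                }]}},
            },
            "Worker": {
                "replicas": 2,
                "template": {"spec": {"containers": [{
                    "name": "worker",
                    "image": "example.com/mpi-pi:latest",
                }]}},
            },
        },
    },
}

print(mpijob["spec"]["mpiReplicaSpecs"]["Worker"]["replicas"])  # 2
```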
E: Yeah, to provide more background there: when you are going to deploy an MPI application (and the nice thing is that the hello world is hostname; every Linux machine has hostname), you need to provide two things to the mpi binary on the launcher host. You provide a host file (the MPI operator controller is doing that for you and injects it in a ConfigMap, but let's not get there), and you will also pass to the mpi binary...
E: ...how many ranks. In the MPI ecosystem they're called ranks; for Kubernetes it would be more like pods. So: how many pods are you going to run for this specific application? You pass the -np flag for the number of ranks, and a flag for the host file; the host file will provide the IPs, or, since we're using Kubernetes, we can use DNS. So you would provide the host list and how many ranks, or processes, you want to run.

E: For Kubernetes this gets translated into how many pods you want to run. That is what the MPI operator is doing: it is providing that list of DNS names. It will run X number of pods, and then it will communicate with Kubernetes, create the list, and inject that list as a ConfigMap. And you as a user must know how many ranks you will run, and on the CRD of the MPI operator you define: run it with 10, or with 20.
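The ranks-to-pods translation described above amounts to generating a host file with one line per worker pod. A minimal sketch, assuming a headless-Service style DNS naming convention of the form `<job>-worker-<i>.<job>-worker.<namespace>.svc` (illustrative, not the operator's literal output):

```python
def build_hostfile(job_name, workers, slots_per_worker=1, namespace="default"):
    # One line per worker pod: the DNS name the pod is reachable at,
    # plus how many MPI slots (ranks) it contributes. The controller
    # injects content like this into a ConfigMap for mpirun's host file.
    lines = []
    for i in range(workers):
        host = f"{job_name}-worker-{i}.{job_name}-worker.{namespace}.svc"
        lines.append(f"{host} slots={slots_per_worker}")
    return "\n".join(lines)

print(build_hostfile("pi", 2))
# pi-worker-0.pi-worker.default.svc slots=1
# pi-worker-1.pi-worker.default.svc slots=1
```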
G: Yeah, I understood that part. I think what I was asking more about is: is the CRD going to be generic enough to be used by all the MPI use cases, or does it not matter at this point in time?
E: Right, that's what Abdullah was explaining. What really changes is how you compile your container, but the CRD, and what the MPI operator is providing to the user, is generic enough that it doesn't matter which compiler you put inside your container; it will work, because what you need is the scaffolding that the MPI operator is providing to you: running the StatefulSets, creating the ConfigMap with the host list. This is the scaffolding that the MPI operator is creating for you.
A: Okay, I'm hearing no other questions, so I guess the overall sentiment is that we are fine with sponsoring this project to be one of our subprojects.

A: Okay, and that being the case, we can move over to the next topic: Ryan, conformance testing.
H: Good morning. Yes, we have two topics. The first one I can talk to: this test has been created by Stephen (he's the creator of it), and it's been sitting for a while. We would just like some eyes on it, because code freeze is approaching, and we'd really like to get it onto the testgrid to run its two weeks, so that we can merge it in as part of conformance before the end of this release.
H: Yes, give me a moment: I'm going to quickly share a link with you. So in the apps group we have basically only ControllerRevision left. If you have a look at the chat, I'll share this link here: there's APISnoop, and this is what's left to cover for apps. So right here, only this group. We really took a hard swing at it a few releases back, to try and cover everything except ControllerRevision. But that's the next topic that we'll discuss; we have some questions. Stephen's done a lot of work on ControllerRevision.
H: Yeah, so not everything is under apps; this one is for batch. So if we get this to merge, we will cover just about all of that. Okay, and there's another Job test that we're busy working on that will cover the rest of Jobs, so we're really trying hard. And also, if you look... let me zoom out, I'll take you there.
A: Okay. Are there any volunteers?
I: The main issue around ControllerRevision is just... I've already found, and already gone through, some of the stuff where there's an example of DaemonSet driving ControllerRevision behind the scenes.
I: Give me a couple of pointers to do with ControllerRevision, so I can just make a little bit more progress, so that I can get closer to a running test for someone to finally approve.
I: It just seems everyone's got documentation about the standard resources, but ControllerRevision seems to be left behind, because I found, when you do a kubectl explain, that there were still references saying it's a beta resource and it's still going to have changes made to it. So it seems to have not had as much love as the other resources in SIG Apps.
A: Ken, do you know the story behind ControllerRevisions? I don't recall being part of the work around ControllerRevisions.
D: So, in the same way that a ReplicaSet kind of is the way that you can roll back a Deployment, right (you keep a reference to the object around, and it allows you to detect whether a prior revision matches the declared state right now, like if the images match and so on, and it'll just roll back to, or scale up, the existing ReplicaSet), the same thing is true for ControllerRevision. It's just an intermediate that StatefulSet and DaemonSet use to save their state.
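The analogy above can be sketched as a toy revision history: snapshot each template the controller acts on, and recognize a rollback when a new template hashes to a revision already on record. This is an in-memory sketch of the pattern only; StatefulSet and DaemonSet persist the snapshot in ControllerRevision objects rather than a dict, and compute their hashes differently.

```python
import hashlib
import json

def template_hash(template):
    # Deterministic fingerprint of a pod template, standing in for the
    # hash the workload controllers compute.
    blob = json.dumps(template, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:10]

class RevisionHistory:
    def __init__(self):
        self.revisions = {}  # template hash -> revision number
        self.latest = 0

    def record(self, template):
        h = template_hash(template)
        if h in self.revisions:
            # Seen before: this update is a rollback to a saved revision.
            return self.revisions[h], True
        self.latest += 1
        self.revisions[h] = self.latest
        return self.latest, False

history = RevisionHistory()
v1 = {"containers": [{"image": "nginx:1.20"}]}
v2 = {"containers": [{"image": "nginx:1.21"}]}
print(history.record(v1))  # (1, False)
print(history.record(v2))  # (2, False)
print(history.record(v1))  # (1, True): rolling back reuses revision 1
```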
I: I found that there's a history controller that ControllerRevision is interacting with, but I've sort of lost the thread of where the code path runs, to find out who's actually really driving the ControllerRevision creation process behind the scenes, so that I know where all the tripwires are when I try to do patching and replace.
I: Because, eventually, if there are any flakes or anything like that that come up during our two weeks before we promote, I generally end up having to deal with some of the other people, like Clayton, or potentially some of the other people in SIG Architecture, who are sort of poking on some of the flakes.

I: Basically, I want to have it so that it's reasonably bulletproof before it goes to conformance.
A: Most likely, for the majority of the API, specifically the API testing that Ryan and folks were working on, they were adding just the tests that verify all the endpoints.

A: So, especially those endpoints that you mentioned that are not being used: the fact is that we are exposing them. So listing across all namespaces, or deleting across all namespaces, if I remember correctly, or replacing: we expose that API even though it's not being used, and probably historically it should never have been used. But given that it's out there, we are missing those tests, and by how we are testing those, we just have, you know, a single test that just iterates over all the endpoints to ensure that they are...
I: That's it; a lot of the endpoints are covered already with integration tests, but they've not been exercised through an actual conformance test. That's probably it, yeah.
D: So what I'm saying is: with a StatefulSet, if you're testing a rolling update, or you're just testing creating it, a ControllerRevision would be created (a namespaced ControllerRevision would be created as part of that conformance test). And I'd have to go look at the exact conformance markings, but as of the last time I checked, the conformance for StatefulSet did have StatefulSet creation and so forth as part of conformance. So the API calls, some of these API calls for ControllerRevision...
H: I mean, for ControllerRevision... all the other resources in apps, actually you're right, they're all tested; it's only the one outlier, ControllerRevision, that's not being seen by APISnoop. And it could be that there is an actual test that we just haven't found (that would also be good, and there may be a reason it doesn't get picked up). But we'd appreciate some eyes.
I: Okay. It's probably the way that the conformance generally works around StatefulSets and DaemonSets. DaemonSets seem to be the resource that the ControllerRevision integration tests are using predominantly at the moment, and in those conformance tests it's generally a case of just creating or replacing the DaemonSet directly, which doesn't take it through doing a rollback. And I think there was an earlier initial test that's in there, but because it hasn't been promoted to conformance, we don't track it.
I: And some of the original integration tests around that that do exercise ControllerRevision are using what's called a fake history, where I believe it should be dealing with real history objects; from the way I see it, all the conformance endpoint stuff is dealing with real information.
I: The general idea is that each particular SIG that we work with ends up basically just validating that we're exercising the resources the way they should be used. So if there's something, like for the Job status one, where something's not been exercised completely correctly, then give us some feedback and we'll update the test. I went through about two or three minor revisions.
H: It's the SIG that decides what is acceptable for conformance. It does then go to the conformance group, and John Belamaric and Clayton and those folks have to decide whether it's allowed in for conformance, so there's a further review; but the SIG is normally the authority that makes the decision, and we'd really like to help you finish off these apps endpoints as soon as possible.
I: Generally, with the SIG Apps endpoints, we don't run into too many issues where potentially an endpoint can't be tested. We have to make sure conformance tests are, of course, testing just the API, and we've got to make sure that everything's good if it needs to be run on any other platform, whether it's Windows or s390x.
H: Another scenario that does happen (and you said that you suspect some of these endpoints should probably not even be available): what we do do with endpoints that are, as Stephen just said, vendor-specific, or related to storage... there are criteria for which endpoints qualify for conformance. So if they are disqualified for any reason, and you make the decision in the SIG that, for this reason, it's disqualified, then we can make it ineligible. Then it comes off the list and it goes on the back burner for later review.
H: But then there need to be valid reasons why it should not be conformance-eligible, like it being an optional feature, or...
D: When a resource is a built-in resource, by default there's a certain set of operations that you're going to enable just by declaring that resource inside of the tree, right? So we get a lot more than we actually use; more than the actual utilization of the resource.
D: My opinion is that it's really covered by the existing tests that we have today. Looking at the conformance tests that we have for the workload controllers: the fact that you can run those tests implies the existence of the resource, and implies that the controller can exercise it in the way that it needs to. The API surface for ControllerRevision, as used by the workload controllers, hasn't changed in years, right? It's not something that's in flux or in flight on a regular basis.
D: It's super stable. So the other APIs that we expose, we're probably never going to use them. Now, the downside to just saying "okay, well, we expose them, so let's go build tests for them and make them part of conformance" is that now we have APIs that we're not actually internally using that we're committed to supporting indefinitely as part of conformance, right? And that's the thing where I'm like: is this really a value-add?

D: We're going to put in code that we own, and commit to supporting a bunch of APIs that the controllers really don't use. And my intuition, for which I don't actually have evidence in this domain, is that they're not going to be valuable for other people who are writing controllers, and it's going to be code that we have to own, on top of a promise to the community that we're committed to at a v1 level fairly indefinitely. That being said, because they're basically not utilized, I don't...
H: And from the last discussions about deprecation of APIs, what I understood is that there's a decision that they would likely never deprecate anything that's in production; maybe no maintenance will be done on it anymore, but it won't go away. So from that point of view, it might be useful for us to write the tests. That is what we're doing, so if we get the right help, we can make it.
A: Yeah, I think what Ken was trying to say is that we're probably not going to deprecate the APIs, or remove them for that matter, but it's about not covering them with conformance testing.
A: If we are certain, at this point in time, that they're not being used and most likely will not be used (you did mention a couple of those as well), then we will just say that they are not covered by conformance. I would do just that for those few that are not being exercised, and we can return to this particular decision in the next round of reviews.
A: But for now, if we are certain, and from what Kenneth is saying we are, then I'm perfectly fine with making an exception for those few for the time being, and we can revisit that decision during the next round of reviews.
I: I can look at doing a run through all the current conformance tests for both DaemonSet and StatefulSet, and show all the endpoints that those tests are actually hitting, making sure that there's a clear story of whether any ControllerRevision endpoints are being hit at all as part of the current process. Then we can look at doing some feedback and getting some more clarity on which ones you want to potentially make ineligible.
I: This is about understanding what the current tests run, and whether they actually force a ControllerRevision action as part of the current test run, or whether we need to look at doing some action, either doing a roll-up or a rollback, that would then force the ControllerRevision actions.
A: Yeah, that's reasonable. When you have that report ready, ping myself and Ken on the SIG Apps Slack or on the mailing list, and we can revisit, and we will say yes or no. All the evidence shows that it should be just those two, most likely. If we notice, or we learn, that we are missing something else, it's possible... although I would prefer, it's possible, that we should probably promote one of the existing tests, because most of the... I was quickly skimming through the DaemonSet tests as well.
I: I think, Ryan, there was an issue where, because the DaemonSet tests run serially, they're not picked up by APISnoop until they're made into conformance tests; I think that was an issue that we found historically.
A: Yeah, that's definitely the preferred option, because it clearly shows that there is a gap in our testing that we should improve, rather than pushing just the API-bits testing.
A: We're at the top of the hour; we ran over by a minute, so not that bad. Thank you very much, all; enjoy the rest of your day. Take care, bye. Thank you. Thank you.