From YouTube: SIG Apps Meeting 20200111
A
Okay, I'm recording, so welcome everyone to SIG Apps. Today is the 11th of January; it's the first meeting back of the new year, so happy 2021, everyone. I'm pasting the link to the doc here. So, first, a follow-on.
B
Yeah, so we talked briefly about this last meeting, on December 14th. I think we tried to propose something a lot more generic, but we decided to take a step back and really propose the lower-level primitives that we would like to have in the Job API, to allow the job as a whole to be controlled by an external controller. And so one of the things that we wanted to add is a suspend, or stop, flag.
B
Basically, when we create a job, we want to be able to create it in a stopped state, so that the job controller doesn't create the pods right away. That will allow an external controller to make sure that the cluster is ready to get that job executed.
B
And by that we mean, for example, checking quota for the whole job rather than for individual pods, or making sure that the resources are provisioned by somehow communicating with an autoscaler. This is extremely useful for cases where you want all-or-nothing scheduling, and we thought that this simple primitive would allow us to do such a thing. So yeah, we proposed this. Hopefully it is clear enough.
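For reference, here is a minimal sketch of what a Job created in the proposed stopped state could look like. The field name `suspend` is an assumption of this sketch, not settled API; as discussed below, whether the flag means "suspend" or "stop" was still an open question at this point.

```yaml
# Hypothetical sketch of the proposed flag (field name assumed, not final API):
# while the flag is true, the job controller creates no pods, leaving room for
# an external controller to check quota and capacity for the job as a whole.
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-training-job
spec:
  suspend: true          # assumed field name; job starts with no pods created
  parallelism: 4
  completions: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working && sleep 30"]
```

An external queueing or quota controller would then flip the flag (for example, patch the suspend field to false) once the cluster can accommodate the whole job, which is what enables the all-or-nothing pattern described above.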
B
We have a KEP up by Addie as well, and I think Maciej has been assigned to it, but please let us know who else should look at it.
C
Hi, this is Maciej. Sorry, I was hoping that I'd be able to look through it over the holiday break, but I was completely offline for most of the time, so I'm hoping to look at it within the next week, and as soon as I do, I'll ping you on Slack.
C
That might be a contentious issue, but other than that, the overall idea is perfectly legit. We just need to agree on how to approach it: whether "stop" means a suspend in execution, or rather a stop that removes everything and starts from scratch. But I'll have a look at the PR, at the KEP, and I'll leave comments there. Thank you.
A
Thank you, sounds great. Thanks a lot, both of you. I put a link to the KEP here, so if anyone else is interested in taking a look as well, go ahead and review. There's also this array job feature request.
B
Yeah, I opened an issue, and Aldo is working on a KEP as well. I'll let Aldo introduce you to this. I mean, I'm pretty sure everybody knows about it, but we are basically restarting...
D
...this effort. Yeah, this is a long-standing request of having a completion index in the job pods, with the idea of this enabling simple static partitioning.
D
So I added the KEP. The basic usage is to have the index as an annotation that can be pulled through the downward API, and then the pods can use that to partition, or to select their task. Now, there are a couple of open questions.
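For reference, a minimal sketch of the mechanism being described: the completion index surfaced as a pod annotation and pulled into the container through the downward API. The annotation key used here is an illustrative assumption, not a name fixed by the KEP at this point.

```yaml
# Sketch: a pod of an indexed job reading its (assumed) completion-index
# annotation via the downward API and using it to pick its partition.
apiVersion: v1
kind: Pod
metadata:
  name: indexed-worker-3
  annotations:
    batch.kubernetes.io/job-completion-index: "3"   # would be set by the job controller
spec:
  restartPolicy: Never
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo processing partition $JOB_COMPLETION_INDEX"]
    env:
    - name: JOB_COMPLETION_INDEX
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```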
D
Such as: MPI and other similar types of jobs want stable hostnames, or hostnames that they can predict. So that's one reason, but another reason is that a stable name makes it possible to avoid duplication.
D
So if you have a stable name, you cannot replace a pod that failed without removing the failed pod first, so that's going to be useful for cases where you absolutely need a single running pod per index. But that's subject to discussion; I think we can discuss that in the KEP. The problem this opens is how we keep track of failed tasks, or sorry, failed pods, because that's the information we keep...
D
We use the failed pods to calculate the status of the job, the number of failures, and to do the backoff and eventually enforce the backoff limit. So we need to figure out either how to calculate the failed pods with some other mechanism, or drop this requirement.
D
So I would like to have some discussion on that topic. But on the topic of failed pods, I think there's an open issue for tracking failures and successes in a different manner. I honestly didn't look at this yet, but I would be willing to take that as well, because it will also solve this problem in the array job.
C
So your array jobs proposal reminds me of Eric's idea of indexed jobs. I'm not sure if you got a chance to look through that particular issue before, and I wonder how different the idea is. I think I would probably go with the "indexed job" name, since that probably better expresses the idea of ordering. And if you look at the issue, it goes all the way back to shortly after we created jobs.
C
Yes, so it's definitely something that we do want to see happening. It is possible that some of the ideas are already written down in those issues, so it will be good to go through them and double-check whether all those ideas from back then still apply, or whether we want to do certain things differently, knowing a little bit more than before.
D
Yeah, go ahead. Yes, so I entirely based my proposal on this; it's like an iteration and a continuation. The doc is still available, I believe in the community repo, so I ported it, but I also simplified it drastically, because a lot of the proposal was focused on ease of use in terms of kubectl run. Yeah, that's all right. I think the community is now more used to YAML, so I don't think we need to give all that power to kubectl run.
C
kubectl run was simplified to create pods and nothing more; if you want to create something else, you need to use a specific kubectl create command. So I wouldn't worry about that at all, and it's definitely not a concern for the indexed jobs.
C
Regarding the controller counting the failed ones: yes, we do want to maintain the information about how many pods failed, because one way or the other you want to know the progress of your current job, and there is an outstanding issue about how we should calculate those. I had some ideas, and I would really want to see this issue solved, because currently the fact that we need to keep all those pods around just for calculating the status might seem a little bit wasteful, especially if you think about running a job with, I don't know, a hundred or a thousand executions, or any kind of long-running task with this many pods.
C
That basically means you'll have to reserve, and you will be wasting, this many resources for some period of time. So solving that particular one will be key to the problems you've mentioned before as well.
D
One more thing I noticed: another problem that could happen is that the garbage collection, the pod garbage collection, could trigger once we hit 2,000, sorry, 12,000 pods, so you would lose information anyway.
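For reference: the collector being referred to is the terminated-pod garbage collector in kube-controller-manager, whose `--terminated-pod-gc-threshold` flag defaults to 12500 terminated pods cluster-wide. An illustrative fragment of a controller-manager static pod manifest showing where it is set:

```yaml
# Once more terminated pods than this threshold exist, the oldest ones are
# deleted, and any status the job controller derives from them is lost.
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.20.0
    command:
    - kube-controller-manager
    - --terminated-pod-gc-threshold=12500
```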
C
Exactly, that's the other thing that is possible, so we definitely need to figure out a reasonable approach forward.
D
Okay, yeah, I can sync with you, perhaps on Slack, or...
C
Sure, to figure something out we can probably give it a try and see what's possible.
D
...every job. So yes, one of the main reasons, well, there are two reasons: one is to avoid splitting the ecosystem.
D
The problem with leaving this to third parties is that third parties do pod management differently, or could potentially do pod management differently, and that could confuse users. And the other reason relates to this other KEP, the stop flag.
D
We basically want the Job to provide the basic primitives so that the Job can be used as a unit across different controllers. So that's kind of the rationale.
C
I mean, going back to the very early days: indexed jobs were definitely something that we, myself and Eric, saw as something we want to have in the core. It just seems a natural extension to the current functionality of jobs, without actually changing the API that greatly. So yeah, it's probably possible to do it outside of the core, but I would probably be pushing towards in-tree, because that will be the simplest, and the API...
E
I mean, sometimes I think there are definitely extensions that should be done on top of the workloads APIs where it makes sense, but I think there are definitely workload-specific controllers that may do things differently, and those can be done out of tree. The original intention when the APIs were developed, and this is in the documentation from SIG Architecture, is that they should be forkable: for specific use cases, it should be easy to fork an existing workload API and modify it to suit your use case.
E
But if you think that this is a natural extension to the existing Jobs API that can be implemented in a backward-compatible way on what we have, I feel like that's a strong argument to do it in-tree, versus if we're going to end up writing a second controller for it anyway, like if it's going to be completely orthogonal to the existing job controller...
D
Yeah, for alpha, definitely 1.21, and beta hopefully the next release, but yeah, that can take a little bit longer depending on feedback, I guess.
C
Okay, so as part of the alpha we would leave the problem of counting failed pods and completed pods, and we would focus entirely on the functionality of the indexed job; and then, as part of the further evolution, we would be solving the pre-existing issues. Is that more or less what you were thinking?
D
About that: I don't know how complex it could be to solve the other problem. If it's not too much, I would be fine also having it in the alpha.
D
That remains to be seen. So I think it all comes down to having the index in the pod name or not: if we don't have it there, we don't need the tracking; if we add it, then yeah, we will need tracking. Does that make sense?
E
Another question: when we say the semantics are similar to StatefulSet, is there any notion of what guarantees we would provide? I'm not clear on what the fencing guarantees are, or if there are any fencing guarantees at all. StatefulSet provides ordering, based on storage used for fencing, so as you're scaling up there is some reasonable guarantee that you're really going to start pod 1 after pod 0, right? For Job, are we trying to guarantee that, or are we just trying to provide best-effort ordering?
C
Yeah, that's a good approach. Maybe we can leave it off as a potential improvement; it would definitely be backwards compatible, just an additional restriction if you wanted to have one.
E
Also potentially doing the counting as well: you want to get the array job semantics in place; the counting would be done separately, potentially thereafter, but we're not going to block alpha on that being present. And then the stop flag, if that gets done at all, would be done as a completely separate enhancement.
E
The release stuff, where is that section?
F
I mean, the array jobs: are they meant to be retried if they fail or something? Because if you choose identity for those pods just because of indexing, that's going to make it hard, right? Say a node goes down and won't come back up: you can't create that pod again until that node comes back, right? If you have an identity for a pod, the only one who can delete it is the kubelet on that node; you can't even delete it from the API, normally.
E
You cannot relaunch, say, job-1 to retry it with that name again until... there's a timeout that we have configured where eventually the node would be considered gone and its stuff would be deleted anyway, but it's a fairly long timeout. So you're not going to be able to retry that job again until that pod is explicitly deleted, and that will require either that timeout expiring, or the network partition healing, that node rejoining the cluster, and then, you know, the pod being deleted.
C
So, basically, given that a Job guarantees a certain completion number, that overrules the point of having stable naming in the job.
E
And I think that's the concern for Job with this particular case, if you do go with the deterministic name. With StatefulSet it's kind of an okay choice to make, because typically you have long-lived applications running, and when they do go down, it is often something where you want to intervene, or, with something like Cassandra, you don't care if you lost one node anyway, right? With jobs there's a lot more churn, right; there's higher throughput.
D
Okay, that's all fair. Okay, and then in that case I'll try to order my thoughts on this, but I might be updating the KEP to remove the stable names.
B
There are use cases: in machine learning training, reinforcement training for example, you have two sets of pods, I think they call them the actors and the learners, and you need stable names so that you would know which actor you're going to be talking to, and both of them are actually just running to completion.
B
I mean, in that case some are using StatefulSets, let's say. The problem statement is that when a pod completes, it can't exit, because it would be created again. The problem with this is that you're going to be holding on to resources that could be used by others, like GPUs, etc.
E
You're competing for potentially scarce resources. So the way I've done stuff like this in the past, and again this is kind of why I brought up the idea of a custom controller for a use case like this: if you had a service per pod, right, the service would give you a unique name that you could use to set up your communication topology, and that would be completely independent of how your compute is laid out.
E
So if you had an individual job failure, and you had, you know, a job-zero service, all the other jobs could communicate directly through that service to talk to it. But that's definitely, probably, not something we want to do in-tree, so the orchestration around that would be something that you would have to do on top of it.
E
So, like, if you had this array job concept, and then you wanted to build a higher-level controller on top of it that could do something like manage the network topology, without using stable names, but using an indirection like a Service to do the network communication...
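For reference, a minimal sketch of the service-per-pod indirection being discussed: one Service per index, selecting whichever pod currently carries that index, so peers get a stable DNS name without the pod itself needing a stable name. The label keys and names here are illustrative assumptions.

```yaml
# Sketch: a Service that fronts exactly one pod of the array job, selected by
# an assumed per-index label; peers address "worker-0" no matter which pod
# (or pod name) currently backs that index.
apiVersion: v1
kind: Service
metadata:
  name: worker-0
spec:
  selector:
    app: my-array-job      # assumed label on all pods of the job
    job-index: "0"         # assumed per-index label stamped by a controller
  ports:
  - port: 2222             # e.g. an MPI or gRPC port
    targetPort: 2222
```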
B
We thought about this. Basically you're trying to have, like, a virtual IP over each and every pod. I mean, there's a lot of complexity in achieving that, and the other thing is scalability: when you introduce services, we're talking about kube-proxy and syncing and all that jazz, and when we talk about massive jobs, that's a lot of burden on the whole system, just syncing these endpoints across all the nodes.
E
I would think, though, that if you're creating a massive number of pods, you still have to update the pod CIDRs respectively on the individual nodes, and you still have to update kube-proxy for all the pod IP addresses. You would be going, like, 2x if you did a service per pod, so you would have double, and the number of endpoints would proliferate, but I mean, you already have that problem, is kind of the way I think about it. I could be wrong there, though.
B
You know the job name, and then there's a pattern, right?
B
For the pod names, like StatefulSets do, okay. But that's fair: I don't think it's a slam dunk that, as you all mentioned, it's going to be easy to introduce the stable names. But I guess we can, yeah.
E
StatefulSet also generally uses a headless service for DNS, to explicitly allow that discovery. Were you planning on using a headless service or not?
B
Exactly, so it's deterministic. Basically your application, let's say it contains two jobs: as I mentioned, with reinforcement learning you have the actors and the learners, and they know each other's prefix name, which is, for example, "learner", and they just add indices to it. It's the application's problem. Again, with MPI it's the same thing: rank zero...
B
For example, they assume that it's the driver, and then all the other ranks are just one, two, three, up to the number of ranks that you have in that job. So yeah, it's not a service, that's the difference. Exactly: it's a job that knows its own names, basically.
G
Cool. Anything else on these two announcements?
C
I have a quick favor, or a follow-up to-do: when you will be removing the...
C
...index from the name, don't just remove it; write it down as something that was considered and why it wasn't picked, and ideally put in the arguments that were laid out during today's call. I think it will be valuable for readers to know what has been discussed.

D
Yes, absolutely.
G
Awesome, okay! So, moving on to the next item on the list: graduating Job TTL to beta.
B
Yeah, so I updated the KEP; I sent a PR to update it, basically. What I'm proposing here is to graduate to beta without having this feature in pods; if you want to have it in pods, it will be future work under a separate feature flag.
B
I did sync offline with Janet; unfortunately she's not able to attend these meetings because she's in a completely different time zone, but she was okay with this, and now I'm bringing it to this SIG to get consensus on this one. So yeah, I have the KEP open, please take a look, and I'm wondering if anyone has any objections to moving the pod TTL-after-finished to future work, rather than it being coupled with this specific feature flag.
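For reference, the Job-level feature under discussion is the `ttlSecondsAfterFinished` field, at this point an alpha feature behind the `TTLAfterFinished` feature gate. A minimal example:

```yaml
# The TTL-after-finished controller deletes this Job (and its pods, via
# cascading deletion) 100 seconds after it completes or fails.
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
```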
C
If I remember correctly, the TTL was an annotation on a job, and I was wondering whether we are considering promoting this as a regular job field in that case, or...
C
Oh, if it's... I might be thinking about something else.
E
Right, but I mean, right now the whole lifecycle, isn't it still one giant spec, or one giant kind of feature, for pod and job as well?
E
So right now it's only implemented for jobs, right? I know it's only implemented for jobs, but the initial proposal that Janet brought forth a while ago was for both pod and job, and I believe, if my understanding is correct, that we're continuing to track both of them as one feature. Maybe we should, if we haven't, just consider splitting them entirely, so this is a completely separate feature. Yeah.
B
Yeah, exactly, that's what I'm proposing, and that's why I moved it into future work. But do you suggest something different? Like, should I create a different issue as well for the pods, and just...
E
I'm asking what the SIG wants, if that makes sense, because it does look like we're going to be able to promote this more rapidly toward general availability, and we haven't made any progress on the pod TTL. There was a lot of pushback from API Machinery about the semantics around that anyway, so this seems to have life where the other does not. So, I mean, splitting it seems reasonable to me.
B
It's a different field: the job has a field at the job level; the pod has a field at the pod level.
B
I mean, it could have been implemented in the same controller, but the thing is, yeah, as Ken mentioned, the pushback was actually on the implementation of the pod TTL-after-finished; it was never about the Job one, right?
E
What their concerns were? Yeah, you'd have to go... I can go find it; it's back on the original KEP anyway, the entire conversation is captured there. I forget exactly what the concern was, but I believe Daniel was the one who was primarily worried that it would be unreliable, or that it would conflict with the existing pod GC.
B
Right, and this also goes to the point where we should split them, because they really are different kinds of features at a higher level, in terms of features and semantics. So why should we tie them under one flag and one feature?
C
I was just checking; in a pod I'm not seeing that. I think, yeah, I think I'm okay with calling out that, temporarily, we will progress the job TTL part, and probably point to the discussion about why pod is not; and let's try to figure out what we want to do with pod before eventual GA. I would put that as a requirement before we decide what to GA.
E
It merged. This was only on here just to make sure; we'll just let Mike ask if he needed anything else in order to get it through, but it looks like it merged.
G
Cool, great to see that coming through. And then, finally: PDB GA.
E
Sweet. Is there anything else you need from SIG Apps, or is it just a production readiness review from Wojciech?
G
Excellent, okay. So the last thing on the list here is conformance testing. I think we were trying to get an idea of whether folks were interested in looking into this. Ken, I think you had a bit more context here, right?
E
So I think my concern is kind of this, right: looking at what it seems like they want, they seem to want a per-resource, per-verb kind of flow of coverage for conformance, and I'm not entirely sure that makes sense for the workloads controllers. I mean, we can do it, but I don't know if there's a lot of value in it. The point of conformance testing is to make sure that we have a fairly strong definition of what it means to correctly implement the workloads APIs.
E
So that if you're, say, k3s instead of k8s and you pass conformance, people have a reasonable expectation that when they run their workloads on top of you, you implement the same semantics as the standard open-source distribution.
C
The question is: maybe they are just not part of the conformance, and if that's the case, maybe it's about, I don't know, including those bits into the conformance, or promoting some of the tests into conformance.
C
So here he explicitly calls out all the APIs.
C
...sense. I mean, the question is basically whether we want to have one hundred percent test coverage because someone figured out that that would be the best approach, or whether it's better to have consistent and reasonable ninety percent coverage of the endpoints that do make sense, and leave out those less obvious ones. I'm just thinking out loud.
C
So
probably
worth
writing
to
him
and
including
this
sick
mailing
list
and
see
where
we
go
from
here.
With
regards
to
that
conversation,.
G
Sounds
good
to
me
looks
like
we're
at
time
so
before
we
wrap
up.
If
there's
anything,
everyone
wants
to
quickly
discuss
and
give
a
minute
or
two.
Otherwise
we
can
stand.
G
Okay, it sounds like there's no burning desire to discuss anything, so great, we'll end here and we'll see everyone again in two weeks. Thanks again for the great discussion today, everyone. Yeah, thanks again.