From YouTube: Kubernetes SIG Apps 20230109
A: Okay, welcome everyone. Today is January 9th, and this is another of our bi-weekly SIG Apps calls. My name is Nache and I'll be your host today; let me put my name right over here. I have one quick announcement: the 1.27 schedule is out. If you have not seen it, it is linked in the agenda, which I also linked in the chat.
A: The key dates for you to remember: February 10th is when the enhancements freeze is due, and the code and test freezes are set in March, on the 15th and 22nd respectively.
A: I don't think there are any other announcements, so we can move on to the main topic. Abdullah, you added the "allow creating indexed jobs with completions equal to nil" item. Do you want to walk us through it?
B: Yeah, can you open the issue, please? Yeah. So, as you all know, we introduced indexed jobs a few releases ago.
B: One thing we did for indexed jobs was to validate that completions must be set, because we wanted to use it as the way to define the range of indices that will be created for the job. I'm suggesting here that we relax this validation, with the semantics that parallelism is used as the defining limit of the range for the indexed job, and the idea is that we want to maintain the current behavior while allowing completions equal to nil. Let me explain how it works right now: in the normal job (the legacy one, not the indexed mode),
B: those workers are basically waiting for work to be assigned to them by the manager, and in many cases the manager needs to communicate with these workers, so you need to set up stable DNS endpoints for each of them. So if we use an indexed job, we will get these
B: stable DNS names, but you would basically want to know in advance the number of workers that you will have, because completions is immutable, and that is the only way you can create an indexed job now. If we relax this assumption, then we can treat workers as basically a bunch of pods that can all be autoscaled, with the semantics that we are going to get from a job: failures, retries, and all these goodies that we have in the job.
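The stable-DNS point above can be illustrated with a short sketch. This is a hypothetical helper, not code from the job controller; the job name, service name, namespace, and cluster domain below are assumptions. With Indexed completion mode, each pod's hostname is derived from the job name and the completion index, and pointing the pod template's subdomain at a headless Service yields a stable per-worker DNS name:

```go
package main

import "fmt"

// workerFQDN sketches how an indexed job pod gets a stable DNS name when
// the pod template's subdomain points at a headless Service (all names here
// are illustrative): the pod hostname is "<job-name>-<index>", and the FQDN
// is "<hostname>.<service>.<namespace>.svc.<cluster-domain>".
func workerFQDN(jobName string, index int, service, namespace string) string {
	hostname := fmt.Sprintf("%s-%d", jobName, index)
	return fmt.Sprintf("%s.%s.%s.svc.cluster.local", hostname, service, namespace)
}

func main() {
	// Worker with index 2 of a hypothetical "workers" job:
	fmt.Println(workerFQDN("workers", 2, "workers-svc", "default"))
	// workers-2.workers-svc.default.svc.cluster.local
}
```

Whether completions or parallelism defines how many such names exist is exactly what the proposal changes.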
B: So in a sense we can use it in a similar way to how we use a StatefulSet, but with the advantage of having the new job-specific features, like retriable and non-retriable failures: how it fails, how we retry, and so on. Any questions?
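The proposed range semantics can be sketched as a tiny helper. This is hypothetical, not actual apiserver or controller code: when completions is set, it defines the index range as it does today; when it is nil (the relaxed validation being proposed), parallelism takes over:

```go
package main

import "fmt"

// effectiveIndexRange is a hypothetical sketch of the proposal's semantics:
// a set .spec.completions means indices run over [0, completions); a nil
// completions falls back to [0, parallelism).
func effectiveIndexRange(completions *int32, parallelism int32) int32 {
	if completions != nil {
		return *completions
	}
	return parallelism
}

func main() {
	five := int32(5)
	fmt.Println(effectiveIndexRange(&five, 3)) // completions defines the range: 5
	fmt.Println(effectiveIndexRange(nil, 3))   // nil completions: parallelism defines it: 3
}
```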
A: The only question that I had: I noticed that Tim was asking about the completions, how this will work and so forth, but I guess you just explained that with regards to parallelism.
A: That basically answers all the questions that I had. There was something that I... just give me a moment to see if I recall what I was thinking of. I'll let anyone else ask questions if they have any.
B: Let me quickly restate what the proposal is. Right now, an indexed job is only allowed to be created when completions is set. The proposal is to allow an indexed job to be created when completions is equal to nil, and to rely on the parallelism parameter to define the range. Parallelism is a mutable field, so we can use it in a similar way to how we use StatefulSets: you can autoscale it, with the advantage of using a job. The difference between an indexed job with completions equal to nil and a StatefulSet is that with a job you get...
A: ...at least one completed one, right?
B: Exactly: there needs to be at least one successful pod to declare the job as successful; otherwise you're going to rely on the backoff limit to declare when the job is failed.
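The success and failure semantics just described can be sketched as a small function. This is a hypothetical simplification of the controller's bookkeeping, not its real logic:

```go
package main

import "fmt"

// jobPhase sketches the semantics discussed above (hypothetical helper):
// the job is declared succeeded once at least one pod succeeds, and failed
// once the number of pod failures exceeds the backoff limit.
func jobPhase(succeeded, failed, backoffLimit int32) string {
	switch {
	case succeeded >= 1:
		return "Succeeded"
	case failed > backoffLimit:
		return "Failed"
	default:
		return "Running"
	}
}

func main() {
	fmt.Println(jobPhase(1, 0, 6)) // Succeeded
	fmt.Println(jobPhase(0, 7, 6)) // Failed: backoff limit exceeded
	fmt.Println(jobPhase(0, 3, 6)) // Running
}
```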
A: How does changing the parallelism affect the workers, given that this is the only field that you can always modify within the job? We would always just update the currently running pods to match indices zero through parallelism? In that case, if you're scaling up we'll just be adding additional ones, and on a scale-down of the parallelism we'll just be killing the last ones.
B: Correct, yeah. That's the other change that we have to make to enable this. Right now, when we scale (when parallelism is reduced), the way it currently works is that we remove pods based on some heuristic, like the recently created ones or something like that.
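The scale-down rule being discussed, removing the highest indices rather than using the current heuristic, can be sketched as follows (hypothetical helper, not controller code):

```go
package main

import "fmt"

// indicesToRemove sketches the proposed scale-down behavior: with
// parallelism defining the index range, reducing parallelism removes the
// pods with the highest indices, i.e. every index >= newParallelism,
// instead of the current heuristic (e.g. most recently created pods).
func indicesToRemove(activeIndices []int, newParallelism int) []int {
	var removed []int
	for _, idx := range activeIndices {
		if idx >= newParallelism {
			removed = append(removed, idx)
		}
	}
	return removed
}

func main() {
	// Scaling a 5-worker job down to parallelism 3 kills indices 3 and 4:
	fmt.Println(indicesToRemove([]int{0, 1, 2, 3, 4}, 3)) // [3 4]
}
```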
A: Yeah, I don't see any objections. I remember, when I was scrolling through the issue, that someone raised some concerns with regards to introducing another completion mode, which is definitely not something that I would want to see. You and Aldo, as the authors... it's totally valid to relax the validation. That's one of the reasons why we keep the validation stricter initially: to be able to relax it later.
A: The opposite is a little bit harder, because you can get into a state where you will be running a cluster with invalid data (in this particular case that's not a problem), and on top of that, users have to explicitly set completions to nil to be able to build on top of this. I'm assuming that you will be going through the regular alpha and beta stages for this, or how are you thinking of rolling it out? I think Aldo was asking with regards to rolling out the API changes first, and only the controller changes in the next release.
B: ...and then relax the validation. Because right now we always look at completions, and we assume that completions is never nil when the job is in indexed mode, so we need to change that first and roll it out (that's basically a knob), and then relax the validation in the next release.
B: I will create a new one then, but I don't expect that we will have a feature flag; there is no feature flag associated with it.
B: For the API change, we're not... yeah, we're changing the validation. I don't know what a feature flag would give us, because it's not introducing a new field. Sure.
B: And so it would just delay the whole thing. I don't know if we gain anything in terms of stability or reliability.
D: When we are doing validation for completions, we currently deny completions equal to nil. So what we have to do in the first release is allow this during updates.
D: If the field is already nil, it can continue to be nil; and then in the next release we can say the field can be nil from the creation point.
D: Right, it's necessary for upgrades, because you could have an n+1 API server that accepts the change, and then you roll back to an n API server, and then it doesn't know what to do, or it tries to deny the object.
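D's two-release ratcheting can be sketched as a small predicate. This is hypothetical, not real apiserver validation code: the first release tolerates nil completions only on updates of objects that already have it nil, and only the release after that accepts it at creation time:

```go
package main

import "fmt"

// nilCompletionsAllowed sketches the two-release rollout described above
// (hypothetical helper). Release 1 only ratchets: an update may keep
// completions nil if the stored object already has it nil. Release 2 also
// allows nil at creation, once every API server in a version-skewed
// cluster understands the relaxed validation.
func nilCompletionsAllowed(release int, isUpdate, oldCompletionsNil bool) bool {
	switch {
	case release >= 2:
		return true
	case release == 1:
		return isUpdate && oldCompletionsNil
	default:
		return false
	}
}

func main() {
	fmt.Println(nilCompletionsAllowed(1, false, false)) // create in release 1: false
	fmt.Println(nilCompletionsAllowed(1, true, true))   // ratcheted update: true
	fmt.Println(nilCompletionsAllowed(2, false, false)) // create in release 2: true
}
```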
A: The original question should probably be: do we want to allow modifying existing indexed jobs to use this feature, or would we rather force everyone to create a brand-new job in which it will be allowed? Personally, I'm inclined to say the latter, because you can't change the completions normally (the only field that you can modify in a job is parallelism), and the fact that we will be relaxing the validation should not break the rule of "oh, I want to update this particular field."
A: There's an explicit check which only verifies that you are modifying the parallelism, and anything else should be disallowed, but I might be wrong. So that's probably something that we would have to look at.
D: I can fill in one detail here: yes, we do check immutability, but in addition to immutability, during update we check the same things we check for create, so that needs to be relaxed.
A: Yeah, okay, I'll have to double-check; I don't remember the code off the top of my head. It's something that we can figure out during either the KEP or the implementation phase, and I'm perfectly fine with that.
F: Yeah, so I just wanted to align with SIG Apps about the priority for any PRs that need to be reviewed during this release cycle. I can help as much as I can with reviewing PRs, so please let me know if there is any priority I need to take into consideration, or whether I should just pick whatever PRs are open for the current release. Thanks.
A: Right. So in general, I would look at what kind of features we will be accepting for 1.27. I'm currently going through the list of what we have, and if I remember correctly: there will be a PR from Abdullah with regards to changing indexed jobs, which will be explicitly impacting the job controller; and there will be the PDB unhealthy pod eviction policy, which we will be promoting to beta, from what I remember.
A
Is
you
crunch
up
time
zone
is
used
for
GA
in
127
I,
don't
recall
if
there's
anything
else.
Yet,
although
there's
a
couple,
a
couple
of
the
PRS
coming
from
folks
from
instrumentation,
which
are
adding
the
context
over
to
logging
and
Goldfield
and
I,
are
looking
at
those
PRS.
If
you
can,
if
you
want
to
help
us,
you
can
be
I
can
definitely.
A: I pinged you in the Slack thread that we have in the sig-apps channel, where Patrick Ohly was asking what it should look like. So that would be the primary one, and the next priority would be basically any other PRs fixing the controllers.
A: Those are roughly the priorities that we're trying to follow for 1.27.
F: Yeah, sounds good. I will continue this conversation with you, take a look, and I will see which ones I pick and let you know. Sure.
A: Does anyone else have any other topics for the group?
C: Hello, hey. So I'm new to this meeting. I've opened an issue, so should I present it now? I can share my screen and present it now, or... I don't know.
C: Yes, okay. So the basic idea of this feature is that sometimes you just want to generate some environment variables on the fly. For example, a CronJob that runs every day and needs to claim a JWT that is only valid for 10 minutes or something like that; today we don't have a reliable way to inject this as an environment variable.
C: So basically, the thing we do today is override the entry point of the script by sourcing an environment file that was generated beforehand by the init container. This is a problem when you are not sure about the entry point in your Dockerfile: if it's a third-party app, you don't really know what the entry point is, and it can change sometimes.
C: So the second idea to deal with this is: inside the init container, you install a kubectl, and you can edit the pod to add some other environment variables to the container on the side. But it's not the best way, and sometimes, for example with HashiCorp Vault...
C: ...where my storage is, Vault for example, it just says to you: add this file; and at the end you will add it.
C: And you have an env file at the end that you need to source before adding your own entry point, so it's not really good. So the basic idea of my issue is to add a way to allow the user to just say to Kubernetes: before running this Docker container, check this file, parse it, and add all of these environment variables to the container before running it.
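The entrypoint-override workaround C describes, an init container writing a KEY=VALUE file to a shared volume that the main container sources before running its real command, boils down to parsing such a file. A minimal sketch of that parsing step, assuming a simple KEY=VALUE format (the helper name and format are assumptions, not part of any proposal):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseEnvFile parses a hypothetical KEY=VALUE file, such as one an init
// container might write to a shared volume for the main container to
// source. Blank lines and '#' comments are skipped.
func parseEnvFile(contents string) map[string]string {
	env := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	// A file like the short-lived-JWT example above might look like:
	file := "# written by the init container\nJWT=eyJhbGciOi...\nTTL_MINUTES=10\n"
	fmt.Println(parseEnvFile(file)["TTL_MINUTES"]) // 10
}
```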
A: So, first of all, because you're trying to modify pod-level resources, SIG Apps will not be of much help for you. It will be more a question for SIG Node, whether they are interested in having the ability to inject any kind of environment variables, or rather to be able to modify the environment as a whole, without going into specifics of whether it would be injection of specific environment variables or some additional modification to the...
A
To
the
image
of
running
so
I'll
probably
start
with
a
Sig
node
of
whether
it
be
online
yeah
go
ahead
that
code's
in
kubility
yeah.
That's
why
I'm
saying
Sig
note
is:
is
the
best
the
best
place
to
ask
about
something
like
that?
I'm,
not
sure
if
they
are
considering
anything
around
this
problem
this
area?
Maybe
they
already
have
something
which
might
be
addressing
your
problem
or
they
will
be
able
to
guide
you
a
little
bit
better,
but
I
can't
think
of
anything
on
top
of
my
head.
C: Okay, thank you. So it's not... yes.
E: What you're asking for is: when the kubelet creates a pod, it goes down, pulls the images, creates the mirror pod, and when it starts actually bringing up containers... I don't know how this would actually work, because what you're asking for is for it to run until the init container is complete, then look at the file system that's shared between them, and after any init container completes, populate the environment variables for the main container in the pod. And I'm assuming this would only work between init containers and the pod's main containers, right? You couldn't do it otherwise, because there's no defined launch order for containers (I mean, there is an actual launch order).
A: Okay, does anyone else have any other topics that they want to discuss with the group?