From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2022-06-08
A: So in 1.25 we just had to do some cleanup of the flags and some of the things that we did in 1.24, but basically the work is complete at this point, and kubeadm 1.25 is now compliant with the changes we did around the dockershim removal. We didn't even have to update the documentation in this release; we only did cleanups following these Kubernetes changes.
A: This is the first item, the dockershim changes, so that's pretty much complete here. In 1.27 there's apparently a removal of a couple of flags that we basically want to clean up from the kubeadm setup, but the 1.25 work is pretty much complete, I guess.
B: Sorry, if we get Cluster API to work now, then we're pretty much good for 1.25 from a kubeadm perspective. Well, sure, yeah, and we only have that one change that we might get to later or not, with the registry, so that looks good.
A: Yeah, the removal of these flags is going to be rather disruptive. I think they are used a lot. Especially --container-runtime is now implicitly becoming a single-possible-value flag: it used to be docker and remote, but now it's going to be only remote, so the flag has no meaning anymore, but if it's present in a configuration it's going to fail.

A: So if somebody was overriding that, it's a problem. For different container images it's a mess, because it also currently acts as a prevention against the kubelet garbage collector removing a certain image. That's like a CRI problem. I don't know what they're going to do, but this is the tracking issue.
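For illustration, a minimal sketch of where this shows up on a kubeadm node; the exact contents of the kubelet env file vary by version, so treat the values below as approximations:

    # /var/lib/kubelet/kubeadm-flags.env as written by older kubeadm versions
    # (contents approximate; the deprecated flag is the point here)
    KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.8"

    # Once a future kubelet removes --container-runtime entirely, a unit that
    # still passes this flag will fail to start, which is why the flag has to
    # be cleaned out of existing setups.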
A: This means that new versions of kubeadm will always use this name for the ConfigMap; it's no longer going to have the suffix, and users can no longer control the value of the feature gate. Okay.
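For context, a minimal sketch of the rename being discussed; this is the kubelet-config ConfigMap in the kube-system namespace:

    # Old, version-suffixed name written by earlier kubeadm releases
    kubectl -n kube-system get configmap kubelet-config-1.24

    # New, unversioned name used once UnversionedKubeletConfigMap is locked to true
    kubectl -n kube-system get configmap kubelet-config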
A: So in 1.25 it went GA: a bunch of PRs, end-to-end tests, cleanups. The only remaining thing is the docs. I think it's already LGTM'd; it's a matter of approval. And to highlight something here about our docs: the Kubernetes docs don't include GA feature gates. Once a feature gate goes GA, we just remove it.
A
That's
not
ideal,
I
think,
but
if
it's
locked
to
true
users
cannot
really
do
much
about
it.
So
we
had
a
bit
of
a
debate
whether
we
should
do
this,
but
I
don't
know,
potentially
the
documentation
of
prowess
might
decide.
Okay,
we
shouldn't
do
this
only
once
the
feature
gate
is
removed.
A
Only
then
we
should
remove.
B: Yeah, I just didn't know about the behavior once it's locked and what happens if a user sets it, but I'm aware of the mechanism. I think we copied it, or are we using the same one, I mean.
A: I think it errors, last time I checked.
B: Right, yep, we didn't set the feature gate, so users can set it or not, but we ourselves don't set it, and we renamed the ConfigMap, I think, when upgrading to 1.24.
A: We also added a release note that, because the feature gate is now locked to true, if you happened to be using the feature gate with a value of false before, you just have to manually go and change your ConfigMaps, because you are already diverging from the upgrade path, which is to essentially start using the new ConfigMap.

A: My hope is that users migrated their infrastructure, and to be clear, it's not that much of a surface area; there might be some users that patch the ConfigMap manually in scripts, but I don't think that's a lot of users, really. So yeah, 1.26 is the next release, where we're going to remove the feature gate, pretty much.
A
Okay
cry
sockets,
and
this
is
something.
A
Yeah
this
so
what
we
did
in
124,
we
started
showing
warnings.
If
the
cri
circuit
endpoints
do
not
include
url
schemas
in
125,
we
did
some
cleanups.
We
are
continuing
to
show
warnings
if
the
user
explicitly
does
not
have
a
url
scheme
in
the
the
endpoint.
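For illustration, a minimal sketch of the socket field in question; the containerd path is just an example:

    # nodeRegistration in the kubeadm config (v1beta3). Without the unix://
    # scheme kubeadm warns; with it, the endpoint is unambiguous.
    cat <<'EOF' > join-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: unix:///var/run/containerd/containerd.sock
    EOF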
A
But
the
big
question
here
is:
should
we
turn
warnings
to
errors?
From
my
perspective,
probably
we
shouldn't
until
the
kubullet
does
it.
The
kublet
correctly
has
the
warnings.
If
it's
missing
the
the
schema,
the
kubrick
shows
a
warning.
I
think
cubed
m
should
do
the
same
until
the
couplet,
which
is
you
know,
the
lower
level
component
decides
that
they
should
show
warnings.
A: Okay, so this is part of the upgrade code. During upgrade we were basically getting the node object where we applied the endpoint socket (kubeadm stores the endpoint inside the node object), and we were actually patching this object to have the URL scheme. So during upgrade we modified the node object. The other thing that we did was we also updated the dynamic environment file, which is the kubeadm env file with the flags we pass to the kubelet via systemd, basically.
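For illustration, a minimal sketch of the two places being touched during upgrade; the annotation name and file path are the standard kubeadm ones, and the values shown are examples:

    # 1) The CRI socket kubeadm records on the Node object as an annotation
    #    (kubeadm.alpha.kubernetes.io/cri-socket):
    kubectl get node <node-name> -o yaml | grep cri-socket
    #    old form:  /var/run/containerd/containerd.sock
    #    new form:  unix:///var/run/containerd/containerd.sock

    # 2) The dynamic kubelet environment file that upgrade rewrites:
    cat /var/lib/kubelet/kubeadm-flags.env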
B
Upgrades,
but
I'm
not
sure
if
you
want
to
if
you
will
worry
about
users
using
the
wrong
schemas,
so
what
we
at
what
we
at
least
did
with
with
the
copy
release
which
supports
124.
Is
we
documented
that
I
should
use
the
schema
for
zero
socket?
B
So
I
think
probably
everyone
else
is
just
on
their
own.
If
we
get
to
in
place
upgrades
at
some
point.
A: One problem would be if that same user, with mutable nodes, upgrades to a new version of the kubelet that in the future errors out on this; in that case Cluster API has to have a mechanism that in-place modifies these kubeadm-managed settings.
B: I think we're very early regarding in-place upgrades.
B: Probably, definitely; it's just probably a matter for us of how much we care.
B: Not high on the priority list, even though it's not ideal, I mean. Ideally, you would do something like: controller-runtime supports webhook warnings, and then we just produce warnings whenever someone creates Cluster API objects where the scheme is not there. But that support is in controller-runtime and we didn't have time to add it, I mean to even produce those warnings, yeah.
A: Yeah, if, for example, downstream customers complain about "hey, I got this error from the future kubelet that no longer tolerates these sockets", the documentation is fixed, and potentially some sort of script that we execute, even outside of Cluster API, could fix a particular node.
A: This is something that I'm not sure will be that useful to all users, but if a particular kubeadm user decides to skip the addon manifests, which is by using kubeadm init --skip-phases for the addon phase, they skip the addons completely.
A
They
might
deploy
their
own
q,
proxy
and
core
dns,
but
to
do
that
they
have
to
prepare
their
own
manifest.
Maybe
using
like
a
some
sort
of
an
external
package
manager
for
us
it
could
be
anything
but
what
what
we're
doing
with
this
feature?
That
is
already
closed
because
we
implemented.
A: ...is we are adding a new flag to the addon phases, so you can do kubeadm init phase addon kube-proxy --print-manifest, and based on a given kubeadm configuration it will dump the manifest that kubeadm would otherwise apply.
A: If you have skipped them, it dumps the YAMLs, so the user could take those and potentially apply patches to them using kustomize or something else. So it's like an addon customization layer that I think is really nice.
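For illustration, a minimal sketch of that workflow; the --print-manifest flag name follows the discussion here, so double-check it against the kubeadm reference for your version:

    # Bring up a control plane without the built-in addons
    kubeadm init --skip-phases=addon/kube-proxy,addon/coredns

    # Dump the manifests kubeadm would otherwise have applied, then
    # customize them (kustomize, patches, ...) and apply them yourself
    kubeadm init phase addon kube-proxy --print-manifest > kube-proxy.yaml
    kubeadm init phase addon coredns --print-manifest > coredns.yaml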
A: You know, if users want to do it. I saw a few thumbs up when we had a discussion on Slack about it, so we might have some users. In general, I think the users who skip the kubeadm addons are a very small percentage, maybe something like 10%, even less than that, I don't know, but it's a nice feature and I just wanted to mention it here.
A
The
master
label
and
change
changes-
this
is
probably
the
longest
running
change
in
kubernetes.
It
started
in
120,
multiple
phases.
A
And
the
third
stage
at
this
point
removes
the
master
paint
from
nodes
from
control,
plane,
nodes,
there's
no
third
stage
for
the
mode
label
or
in
this
third
stage
you
only
tackle
the
the
taint,
though
we
removed
the
trains.
Now,
once
you
upgrade
to
kubernetes
125
the
control
plane
nodes
will
no
longer
have
the
legacy
master
things.
They
only
have
the
controlling
things,
but
there's
also
fourth
stage
here
when
we
are
discussing
the
design
and
it's
a
big
document
with
a
wall
of
comets.
A: The CoreDNS deployment of kubeadm only has the two tolerations; everything else... kube-proxy doesn't have them, kube-proxy is more tolerant. So, potentially, in this final stage we are going to tell users to remove the old tolerations for the legacy master taint. And actually on the Kubernetes website there are so many instances of this master toleration; we also have to clean it up there, but it's going to be like an action-required item for the next release, I mean. But yeah, for 1.25.
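For illustration, a minimal sketch of the toleration cleanup being described for user workloads; the taint keys are the standard ones, and the pod-spec fragment around them is only an example:

    # Old and new control plane taints, for reference:
    #   node-role.kubernetes.io/master:NoSchedule         (legacy, removed by kubeadm in 1.25)
    #   node-role.kubernetes.io/control-plane:NoSchedule  (current)

    # A workload that previously tolerated both can drop the legacy entry:
    cat <<'EOF' > tolerations-snippet.yaml
    tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    EOF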
A: And you know, after 1.26 we'll finally be complete with this migration. It was a long effort, but it's worth it. And this is interesting: this is a new feature, it hasn't been discussed anywhere. Now, back in 2019 we had a discussion with Fabrizio, like how can we support customizing the kubelet configuration on joining nodes? Because during init we pass a kubelet configuration and it's global to all nodes, but how do you apply a node-specific configuration for kubelets on particular nodes?
A
You
cannot,
but
the
only
way
to
do
it
is
with
flags.
You
know
the
kubelet
extra
rx
that
a
lot
of
people
use
and
all
these
flags
are
deprecated
by
the
way,
most
of
them.
So
it's
we
are
kind
of
stuck
like
how
do
we
enable
this
customization
for
the
users
like
what?
What
do
we
do
about
it
fabrizio
had
a
really
nice
design
of
basically
allowing
during
cube
a
dam
init.
A
But
I
said
that
maybe
we
can
do
this
in
the
future.
In
the
meantime,
what
we
can
do
is
we
can
utilize
our
patches
functionality,
which
is
the
init
configuration
joint
configuration
patches
to
potentially
just
pass
patches
to
for
the
couplet
config.
So
what
happens
is
that
you
pass
a
single
cubelet
configuration
during
emit
and
that's
considered
global,
and
then,
if
you
pass
patches
between
it,
like
a
no
specific
patch
on
the
init
node
will
apply,
you
know
maybe
changing
a
little
field
somewhere
and
then
on
joining
modes.
A: During an upgrade you can also preserve your patches, because the upgrade commands in kubeadm also support patches. But yeah, the KEP update merged, and the code PR is merged as well. It is a pretty straightforward enhancement, because we have the boilerplate for the static pod control plane manifests, and extending it to the kubelet configuration was pretty easy.
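For illustration, a minimal sketch of how such a node-specific kubelet patch could look; the kubeletconfiguration patch target is the one added by this enhancement, and the field being patched is just an example:

    # A patches directory with one patch file; the file name encodes the
    # target ("kubeletconfiguration") and the patch type ("strategic").
    mkdir -p /etc/kubeadm/patches
    cat <<'EOF' > /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 150
    EOF

    # Point kubeadm at it, either via the flag or via the
    # InitConfiguration/JoinConfiguration patches.directory field:
    kubeadm join ... --patches /etc/kubeadm/patches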
A: Having said that, we have failing end-to-end tests. I was actually looking at them, but that's like a kinder problem, so it's a problem with the tests, it's probably not kubeadm. But this feature is really nice, I think, for the Cluster API side, if you get a request from users like "how do I customize my machines in a way that these machines are GPU machines, or something with different resource consumption?"
A
I
mean
we
already.
Costa
repair
already
supports
the
purchase
functionality,
and
so
the
users
can
just
write
patches
using.
I
forgot
the
label,
the
quest
api
static
files
that
can
be
written
on
a
machine.
A
It
was
like
a
part
of
the
config.
You
can
write
a
file
yeah.
B
Yeah,
we
have
the
cloud
in
its
stuff,
so
you
can
write
files
and
then
you
can
use
you
can
just
configure
the
in
in
the
kubernetes
config
part
the
patches
directory
to
point
there,
and
then
they
should
be
applied
and
there
now
we
can
do
it
for
kubernetes
too.
That
sounds
great.
A
Like
that,
but
if
you
have
a
joint
configuration
per
machine,
you
know
they
can
do
it.
I
think
yeah.
A: Yeah, this is super extensible. If Fabrizio... I think that, given we have the patches functionality, we are basically going to have multiple ways to do the same thing, but if people prefer his design, we can potentially do it. I have no objections.
B
Yeah
for
us
it
could
come
down
to
from
a
classic
api
perspective
that
we
can
just
do
it
on
our
level.
So
we
have
a
separate,
join
configuration
kind
of
per
machine,
but
I
mean
it's
same
for
the
entire
machine
deployment,
but
if
you
say
I
have
that
certain
machine
deployment,
it
should
use
that
cube
configuration
and
I
want
to
use
a
different
configuration.
I
just
have
a
different
machine
deployment.
Then
you
have
that
grouping
again.
B: Yeah, that's what I'm saying. Essentially, I think we don't need a grouping on the kubeadm level, because we can have it on our machine deployment level. And I mean, one machine deployment also has the same infrastructure machine config, which means, I guess, the same flavor or whatever, and usually that ends up being the same for the same kind of machine.
B
Yes,
yes,
yes,
essentially
in
the
machine
deployment,
we
are
referencing
a
cube,
adam
conflict
and
inside
our
cube
admin
config.
We
have
the
let's
say
the
configuration
for
kubernetes
detroit
configuration
and
we
also
have
that
file
stuff
where
we
just
can
configure
files
and
then
they
are
injected
via
cloud
in
it
or
ignition
yeah.
So
that
should
work.
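For illustration, a rough sketch of how that wiring could look in a KubeadmConfigTemplate; this assumes the template exposes both the files list and the JoinConfiguration patches directory as described here, so treat the exact field layout as an approximation:

    cat <<'EOF' > kubeadmconfigtemplate.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: md-gpu-workers
    spec:
      template:
        spec:
          files:
          - path: /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml
            content: |
              apiVersion: kubelet.config.k8s.io/v1beta1
              kind: KubeletConfiguration
              maxPods: 150
          joinConfiguration:
            patches:
              directory: /etc/kubeadm/patches
    EOF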
A: Yeah, what is remaining here is to fix the e2e tests, and also we have to include a specific new end-to-end test that verifies that we actually patch a file. We already have tests for that for the static pods, but we have to do a test for the kubelet configuration, and also docs are missing, but that's a to-do for me later in the cycle.
A: We faced a bunch of problems around the kubeadm tests and also inside Kubernetes itself, because we realized that, okay, we can migrate the config defaults, which is, you know, v1beta2 and v1beta3 in kubeadm, to now default to registry.k8s.io. But we realized that during upgrade we also have to migrate a potentially defaulted k8s.gcr.io inside the ClusterConfiguration to become registry.k8s.io. This means the case where the user does not care about the repo, but it's actually defaulted to the old one.
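For illustration, a minimal sketch of the field in question; the old and new defaults are the ones discussed here:

    # ClusterConfiguration as stored in the kubeadm-config ConfigMap.
    # Old clusters carry the old default; new defaulting (and the upgrade
    # migration described here) moves it to the new registry.
    cat <<'EOF' > cluster-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    imageRepository: registry.k8s.io   # previously defaulted to k8s.gcr.io
    EOF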
A: Yeah, what we did is basically, when we extract a tarball, we decide whether we want to patch the registry inside the tarball. So we take the tarball, we build an image out of it, but if the kubeadm version in it is 1.25 like this, we basically replace it.
A: So it's like a skew between kubeadm and the control plane; it's a supported skew, but the images ended up with... so kubeadm is 1.25, but the images ended up with k8s.gcr.io, and kubeadm 1.25 assumes that its images have to have the new registry. And we are actually hard-coding the images in a way that we don't have to pull anything from the internet; they're basically on the node, right, so as to avoid pulling images from the internet.
A
We
just
had
to
apply
this
fix,
which
is
it
has
to
reside
in
kinder
until
we
no
longer
test
this
particular
skew,
which
which
is
going
to
be
three
releases
from
now.
I
think
yes,
but
it's
not
a
pretty
hack,
we're
basically
modifying
global
variables
to
then
have
a
function
that
determines
versions
and
stuff
yeah.
It's
it
was
a
kind
of
problem.
So
how
can
I
help
somehow
with
your
coaster,
api
capd
problem.
B
I
think
we
are
fine,
we
only
have
so.
Our
current
problem
is
mostly
just
we
having
it.
We
have
a
test
which
creates
a
cluster
with
124
and
then
upgrades
it
to
125.
B
If
a
user
didn't
pin
it
to
some
other
registry,
and
that
will
mean
that
every
new
node
that
comes
up
with
125
will
use
the
new
registry
and
why
that's
even
a
problem
with
kind
is
we
essentially
have
kind,
124
images?
We
have
kind,
125
images
and
each
of
those
have
the
right
registry,
the
right
images
already
pre-pulled
into
those
images,
so
old
and
new
everything
great.
B
The
only
problem
is
that
we
are
now
trying
to
bootstrap
new
125
notes
with
the
old
registry
and
the
images
are
baked
in
with
the
new
registry,
but
that
will
will
be
fixed
once
we
automatically
change
the
registry.
B: Exactly, yeah, that was my impression too. I didn't have time to talk to Fabrizio yet, because we're trying to get out our latest Cluster API release, which doesn't support 1.25 yet, but yeah, I think that's what we will do then.
B
I
tried
this
manually,
so
I
ran
around
for
a
test
on
my
local
machine.
Then
I
just
manually
changed
stuff
and
then
a
new
node
was
coming
up,
so
I'm
pretty
confident
that
it
worked.
Okay,
okay,.
A
Well,
I
mean:
what
could
what
other
concerns
do
we
have
here
if.
B
Maybe
one
question:
do
you
have
an
idea
when
they
want
to
drop
the
the
redirects
from
the
old
registry
to
the
new
one,
so
kate's
a
gcrio
when
they
want
to
essentially
drop
them.
A
Yeah
yeah,
so
so
currently
registry
case
is
the
new
one
it
redirects
to
the
broad
one.
Eventually
they
want
to
flip
it,
such
as
that
case.
Gcr.O
starts
reacting
to
the
new
domain
and,
to
my
knowledge,
the
redirecting
from
the
old
to
the
new
is
going
to
be
at
least
for
one
year,
but
like
what
happens
after
they
dropped
this
completely.
I
bet
that
there
will
still
be
users
who
are
somewhere.
A
You
know
in
the
world
that
are
broken
by
this,
but
you
know
they
decided
to
have
a
blog
post,
we're
going
to
do
announcements.
Some.
B
So
I
I
suppose,
once
we
are
at
that
point,
it's
totally
fine
that
they
dropped
that
at
some
point,
but
once
we
are
there
in
copy,
I
guess
we're
at
a
point
where
either
user
have
to
really
use
newer
versions
or
they
have
to
pin
to
the
new
registry,
and
I
think
that's
totally
fine.
We
should
we
shouldn't
wait
longer
for
something
like
that.
We
are
already
supporting
way
too
much
versions,
but
it's
just
something
that
we
will
probably
have
to
put
in
our
release
notes
at
that
time,
but
we
will
notice
it.
A
I
mean
yeah.
First
of
all,
118
is
a
really
really
out
of
support
to
this
point,
but
I
can
see
that
costa
api
users
might
want
customers
or
family
or
tools
can
potentially
want
this
support
window.
To
be
much
bigger
problem
is
that
this
this
might
become
like
a
communication
issue.
So
if,
if
we
are
able
to
manifest
the
problem
in
front
of
the
people
who
are
going
to
remove
this
dna
solution,
the
domain
we
can
tell
them.
A
B
Yeah
so
so
far
is
mostly
so
when
I
say
we're
supporting
1.18,
then
trust
means
we
have
exactly
one
end
to
enter,
which
installs
118,
updates,
119
and
then
runs
conformance
test,
and
we
have
that
for
all
our
versions,
I'm
not
sure
if
we
actually
care
about
or
if
we
actually
have
users
on
118.
It's
just
that
we
always
kept
that
test
and
we
didn't
really
start
dropping
tests,
which
you.
B
Do
at
some
point
so
I
can't
really
say
that
we
have
a
concrete
use
case,
but
I
will
definitely
talk
to
for
pizza
just
to
make
sure
that
he's
aware
and
that
he
can
ask
some
product
people
or
whoever
might
be
or
in
the
community
yeah,
and
that
they
know
that
there
is
potentially
a
deadline
coming
up
and
that
they
should
talk
to
the
relevant
people.
If
that
phase
should
be
longer.
A
Yeah,
but
in
question
api,
when
when
we
we
are
talking
about
versions
of
kubernetes
going
out
of
support
like
do
you
follow
the
kubernetes
sort
of
support.
B
We
we
currently
just
keep
our
entrance
because
it's
basically
almost
no
effort
to
just
keep
the
entrances
running.
It's
not
like
users
are
frequently
asking
us
to.
I
mean
it's
not
like.
We
have
to
support
the
kubernetes
version.
It's
just
that.
We
still
know
that
cluster
api
works.
Our
current
cluster
api
version
still
works
with
the
old
kubernetes
version
and
we're
basically
not
investing
any
effort
there
and
set
except
run
keeping
the
test
running
and
it
yeah
just
worked
over
the
years.
B: That's just upstream and you don't have any support there, but right now it just doesn't really matter to us. I would say we don't really have a reason to drop it, so we just keep it running. Yeah, I mean, it would be better; it would be some kind of forcing function for users to actually migrate, but the current situation is just that we have those tests and we didn't decide to drop them. I think we should, sooner or later, rather sooner than later.
A
Yeah
in
cuba,
dm
we
have
a
is
very
familiar
with
this.
We
have
a
tool
that
basically
rotates
around
the
support
window.
So
once
a
new
release
comes
up,
we
can
actually
increase
the
support
window
for
a
period
of
time
to
have
four
versions.
Eventually,
when
a
version
goes
out
of
support,
the
same
script
can
actually
trim
the
the
orders
version
so
that
that's
how
we
we
drop
120
in
this
case,
not
121,
is
the
the
oldest
supported
version.
A
B
I
think
what
what
the
surveillance
for
me
is.
I
think
I
think
it's
not
sustainable
for
users
in
any
way
to
just
not
upgrade
and
with
every
I
guess,
four
months
that
they
don't
upgrade.
They
fall
further
behind.
So
there's
just
absolutely
no
way
to
continuously
not
upgrade
you
actually
have
to
when
you're
that
far
behind
you
have
to
upgrade
more
often
than
upstream
kubernetes,
because
otherwise
just
falling
back.
B
So
it's
not
like
sustainable
to
tell
people
hey,
you
can
just
stay
on
118
even
longer
and
then
on
19,
because
it's
not
getting
any
better,
so
they
have
to
learn.
In
my
opinion,
there's
no
way
around
it
to
upgrade
clusters.
A
Yeah
yeah
as
well
yeah,
go
through
a
look
at
it
later
I
don't
know
what's
happening
exactly,
but
it's
a
kinder
problem.
It's
not
a
cubanian
problem.
I'm
going
to
check
the
check
later
but
to
buy
tobacco
is
basically
the
biggest
challenge
which
is
actually
was
approved
by
a
survey
that
was
done
at
some
point.
Was
that
the
users
stuck
on
old
versions
because
they
because
of
other
api
changes
in
kubernetes
like
certain
api,
is
removed
and
they
don't
have
the
time
to
migrate,
there's
no
automatic
tooling
to
migrate
them.
A: But we don't handle API changes in Kubernetes core, so users are on their own.