From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2022-03-16
A: Okay, this is the Kubernetes office hours. Today is the 16th of March 2022. I'm going to share my screen, and we have one topic to discuss today, which is the Kubernetes image registry change.
A: So, what's happening here? We want to give the TL;DR of this change.
B: Yeah, absolutely. So I'm with SIG K8s Infra, and we're basically working on setting up a new proxy in front of, basically, the container images produced by the community. In 2020 we did a domain flip from Google infrastructure to the community infrastructure, but k8s.gcr.io is still owned by Google; it's basically a proxy in front of the container registry, and it's owned by Google. So now we want to introduce a new proxy, called registry.k8s.io.
B: We want to be able to define that endpoint inside Kubernetes itself, so that later in the future, if there are improvements or changes to how we distribute the container images, it's not impactful for the upstream community and the different downstream distributions and projects consuming the community project.
A: Yeah, we had a conversation with Dims on Slack. Basically, my first reaction to his proposal to switch Kubernetes testing to this registry was: why don't we have a simple networking test which just tests the proxy redirect, instead of having to generate traffic through the registry? Because, you know, doing a docker pull on all the images will generate traffic, since we are pulling the layer tarballs through the proxy redirect going to the original k8s.gcr.io. Instead, we could have a simple networking test. So do you know why we want to actually generate traffic with, you know, thousands of gigabytes?
B: I'm not sure that's feasible, because basically we test one assumption: we test the pulling aspect. We're not testing pushing; the proxy will handle only one action, which is pulling. We're not doing pushing through it, and we don't do other operations. So from a network perspective we're testing only two HTTP actions, GET and HEAD, and that's it. So it's kind of difficult to generate traffic from that, because it's basically just read traffic, you know.
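To make the two verbs concrete, a pull against the registry API can be sketched like this. This is a minimal sketch: the image name `pause` and tag `3.6` are just illustrative, and the actual curl calls are shown commented out so the sketch stays offline.

```shell
# A registry pull only ever issues two HTTP verbs against the /v2/ API.
# Illustrative values; any community-hosted image would do.
REGISTRY="https://registry.k8s.io"
IMAGE="pause"
TAG="3.6"

# 1) HEAD the manifest: a cheap existence probe, no body transferred.
echo "HEAD ${REGISTRY}/v2/${IMAGE}/manifests/${TAG}"
# curl -sI "${REGISTRY}/v2/${IMAGE}/manifests/${TAG}" \
#      -H "Accept: application/vnd.docker.distribution.manifest.v2+json"

# 2) GET the manifest JSON, then GET each layer blob it references.
echo "GET  ${REGISTRY}/v2/${IMAGE}/manifests/${TAG}"
echo "GET  ${REGISTRY}/v2/${IMAGE}/blobs/<layer-digest>"
```

That is the whole surface the proxy has to serve, which is why pulling alone produces little variety of traffic.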
B: We need to be able to generate egress traffic, which is something that is not that easy. So right now we want to test the scalability of that new proxy. That's why we want to first switch all the test jobs to that endpoint, if possible, and see how it goes. I'm fairly confident we won't have a problem with the proxy, because it's serverless; it's running on a serverless platform, so auto-scaling is already handled.
A: Yes, I see. It's actually not so simple, to my knowledge, to enable this in kinder, because (and you can confirm this) we actually pull tarballs of images from a GCS bucket and construct our own images. There's still some traffic to GCR, but I don't think we use it that much. So we actually construct our own images in kinder.
C: Yeah, and what I'm starting to wonder is if we can help from the Cluster API side to do this test. In Cluster API, for instance, we have a bunch of tests which are, I don't know, testing Cluster API against a matrix of supported Kubernetes releases. So eventually we could start changing these tests to use the new registry instead of the one they get by default.
B: Okay, so one thing I forgot, and maybe Dims mentioned it: for the moment we don't handle the pulling ourselves. Basically we do HTTP redirecting, which means that when a Docker client asks to pull from registry.k8s.io, the request is redirected to k8s.gcr.io. So that's why it's basically not really useful to stress this network path, because right now we just do HTTP redirects.
A: Oh yes, it's also sufficient in a way. I mean, the example I showed Dims is just curling one of the manifest files, and when you curl this manifest file, it will give you the resulting manifest YAML (sorry, JSON). But yeah, it's not exactly pulling; you're just probing to see if the target registry has this manifest file. I'm not convinced that just pulling, or just using these kops or kubeadm tools, will give us any additional signal on whether the proxy works.
A: But if that is your decision, that's okay. Like I said, it's kind of difficult to do this change in kinder. So until we actually flip kubeadm to use this registry by default...
A: ...I don't think we should exercise it in kubeadm testing. But we have another area where we actually test manifests, and maybe that will help you. I can actually show you what we have.
A: Okay, but if we change this domain to registry.k8s.io, it will work, and it will start testing whether all the images kubeadm cares about are properly configured on the new domain.
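For context, flipping the domain on the user side is already possible today without touching kubeadm's defaults. The sketch below uses `imageRepository`, a real field of kubeadm's ClusterConfiguration; the file name and the comparison of image lists are just illustrative.

```shell
# Point kubeadm at the community proxy instead of its built-in default.
# imageRepository is an existing ClusterConfiguration field; the file
# name here is arbitrary.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.k8s.io
EOF

# The same override also works per invocation, e.g. to list or
# pre-pull the control-plane images from the new endpoint:
#   kubeadm config images list --image-repository registry.k8s.io
#   kubeadm config images pull --image-repository registry.k8s.io
grep "imageRepository" kubeadm-config.yaml
```

Changing the compiled-in default is the separate, bigger step discussed next.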
A: Basically, if this is something that you want to do, I think a PR for this is acceptable, but kinder, I think, is more complicated.
A: Yeah, that's a related question that I have: when should we change kubeadm? Like, you know, the actual production constant pointing at k8s.gcr.io.
A: Yeah, I agree. The code freeze is also in, I think, less than 10 days or something like that. Yeah, makes sense, 1.25. By the way, I didn't see a KEP or anything like that for this change; maybe it was just under the k8s infra umbrella.
B: Because, by definition, it's not a KEP; it's more like a change of the endpoint, of where we pull the images from. It doesn't bring value to Kubernetes itself, because we don't make an improvement; we basically say to the community: we own the entire infrastructure for content and image distribution. So after a conversation we realized it's not really a KEP per se. We could have a document; we could submit a KEP and go to the production readiness team and basically say: oh, we are doing this.
A: Yeah, I see. It's like an infrastructure change that doesn't really fit the KEP template at all. I'm just adding some comments here quickly.
C: That makes sense. And we just have to advertise it, so that people can start as soon as we are certain that it works, because we have to start advertising, I don't know, on the Kubernetes lists: hey folks, start planning for this cycle. And we obviously have to advertise periodically, so that it's more likely someone picks up the change.
C: Okay, it makes sense. We will stay tuned and try to help. If you need us to point the Cluster API jobs at this endpoint in advance, to start generating the workload, let us know.
A: As for how it would be advertised: yeah, I mean, locally at the SIG level we can use our mailing list, but usually for big changes, such as, you know, removing the built-in Docker support in the kubelet, which is happening in this release...
A: ...yeah, so this is something that can be coordinated between, you know, test infra and SIG Docs. They will happily accept a short blog post explaining such a big change.
B: I just have to basically do a last check with K8s Infra and also SIG Testing about this. So I definitely have to talk to SIG Docs and the marketing team next week.
A: Yeah, that sounds good. Also, the switch that Dims explained, which is, you know, that we currently have this redirect, but in the future we can have the backwards redirect, from the old domain to the new one, is also going to help, I think. That backwards switcheroo that we want to do is going to help a lot of users, so I don't see any big issues around this. Overall, I think it's going to be fine.
A: Yeah, thank you. Thank you very much for that; appreciate it. If you want to send the PR for the verify-manifests check, just go ahead and ping me; I will try to merge it.
A: I honestly don't have anything; I was on PTO. We have some pull requests that we have to look at, but other than that the issue tracker has been mostly quiet the past couple of weeks, which is good. Yeah, I don't have any updates. Maybe in a couple of weeks we can discuss what's happening after code freeze. So yeah, that's all we have for today. Thank you, everybody, and see you again in a couple of weeks.