From YouTube: Kubernetes SIG K8s Infra - 20220817
A
I think we can start with the billing, so compared to last time we were here, I don't see anything specific changing. Compared to... if I go to the GCP breakdown.
C
Yeah, they opened a ticket in the oci-proxy repo for renaming the repo to that .io name, because I think it would suit it way better, and from there I also created a ticket in the kubernetes org repo, and this is the formal process, I believe, for making this happen.
A
Okay, I think your question is what Kristoff said: renaming is okay, but migrating to a different organization needs a conversation with SIG Architecture.
A
Because there's a policy issue by SIG Arch, and I think Tim pointed that out somewhere in the community repo: we don't allow new repositories in this organization.
A
So in order to make that happen, you need to get approval from SIG Arch. Okay, where is that policy? A different branch? Yeah, the community repository, sig-arch.
A
Okay, so next: evaluate platform for artifact distribution. I'm assuming it's Tim's item, or anyone else's?
B
I didn't mention it. I saw the agenda item and thought, let's just stick it in there and then discuss, because I remember last time it was quite the robust discussion. So I thought I'd take that.
E
Yeah, I wanted to mention a few things. So there is something we can do with Cloudflare today to cut some costs with serving anything out of GCS: not the images, but the blobs.
E
Okay, yeah, okay. I thought it was a plan. How far along are we? Are we in a place where we can turn this on, or is it quite a way to go?
A
They gave us an account with an enterprise plan, but it's not official. The condition to make that official is to finish the PoC, go back to Cloudflare, and CNCF needs to formalize the platform.
E
That should always be encrypted, shouldn't it?
A
Because the CDN interconnect thing is, you need to have your IP addresses coming from Compute Engine or storage, and the CDN, and once you do that, the 75 percent discount applies to that.
A
Oh no, we don't want to do that, because it's not ready at the moment. When we talked to them it was in beta, so it was not ready at the moment. Okay.
A
So at the moment we talked to them, it was not part of that, so we need to go back to them. I think with Cloudflare we want to finish migrating the few domains we have currently, and once we finish that migration, we can talk about using Cloudflare R2. But I don't think we need to move to that, because there is no specific value gained from it.
A
Basically it's the same thing, because, I think, the difference is that we have an enterprise plan, so we have unlimited bandwidth; we are not limited by bandwidth. Okay, so moving to R2 is more like: oh, we want to leverage R2 features. But we already use the Cloudflare network with the enterprise plan, so there's no... okay.
A
Yep, and egress is almost free. I think the minor difference is the egress costs coming from Cloudflare: if we use Cloudflare, we're going to pay for the traffic between the origin and Cloudflare, that's what we're going to pay, and we're going to have a 75 percent reduction on that.
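To make that trade-off concrete, here is a rough back-of-the-envelope calculation; the per-GB rate and monthly traffic volume below are made-up placeholder numbers, not real project figures, and only the 75 percent reduction comes from the discussion above:

```python
# Hypothetical back-of-the-envelope: what origin-to-Cloudflare egress
# might cost with a 75% reduction applied. All numbers are placeholders.
EGRESS_RATE_PER_GB = 0.08    # assumed $/GB from the origin (not a real quote)
MONTHLY_TRAFFIC_GB = 10_000  # assumed monthly origin-to-CDN traffic
REDUCTION = 0.75             # the 75 percent reduction mentioned above

full_cost = MONTHLY_TRAFFIC_GB * EGRESS_RATE_PER_GB
reduced_cost = full_cost * (1 - REDUCTION)
print(full_cost)     # 800.0
print(reduced_cost)  # 200.0
```

Under these assumed numbers, the bill for origin-to-CDN traffic drops from $800 to $200 a month; the actual savings depend entirely on the real traffic volume and negotiated rates.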
F
Yeah, and the benefit of R2: I mean, it makes sense to store the blobs in R2. But the thing I was worried about, and why I never suggested looking into it, is that we store lots and lots of historical blobs. If R2 charges us for those images, which are simply there for historical reasons, because we never purge them, we would be paying for nothing really, right?
A
Yeah, I think if we basically use R2, we don't need an enterprise plan. To be honest, we went to them because we want to leverage that enterprise plan and say: okay, we have infinite bandwidth on the entire account.
A
So it's like you don't need to do anything. You just need to shield your points of distribution, like dl.k8s.io and the packages endpoint, the endpoint for installing the system packages, I don't remember; you shield that with Cloudflare. There's no point to basically add extra logic to push to S3.
A
Or to push to any object storage. And right now only SIG Release is doing that, pushing to S3; there's no other. Only SIG Release's krel is doing that.
A
What I'm saying is, for 2022 we should basically focus on finishing the Cloudflare setup, finishing with S3 and stuff like that, and when we start 2023 we can come back and decide, also based on the conversation we have with Cloudflare, because, like I said, at the moment we reached out to them, R2 was not a thing and was not part of the enterprise plan.
F
Yeah, so, I think I'm kind of seeing the benefit of it. Maybe it's the same thing Mohammed is also contemplating. So the plan as it is: we are currently improving tooling on the promoter to push to Artifact Registry, and we're also building the AWS infrastructure mirroring system, so those are going to be built anyway, and the path that we are working on right now is adding complexity and time to the promotion process.
A
That's what I was proposing when I reached out to Cloudflare: the entire plan for 2022 is focused on absorbing the cost of what exists right now, so mostly the binaries and the system packages. And, to be honest, R2 was something we wanted to look into, but it was in private beta and I never came back to them to say: oh, let's try it out. So we can talk about that next year if you want.
B
Just summarize the thoughts in the issue, in answer to Dims, and mark that as 2023, so if it does come up in later cycles, everybody remembers: oh yes, that's where we are.
A
2023 is more to be defined, because we still don't know what's going to happen in the next six months with image promotion. So even for 2023 it's to be defined; like, I don't know what's happening with image promotion in December. I know we've been talking about image promotion and basically getting rid of a lot of system packages, but I saw it's a different issue. So on that, I think we're clear about this.
A
The second issue is about the conversation Ben, Brienne, and Caleb had two weeks ago, because I talked to him before this week and he told me we need to ensure we have an automated sync between GCS and S3.
C
Yeah, yeah; can you clarify the question? Sorry.
C
I had given a few days to trying to figure out how to do the notifying for authentication, or any kind of changes regarding access to the accounts and that role, and I don't currently know how to do that. It's very, very complicated.
C
But yeah, I'm not sure, but I will be revisiting that. I have been working on this from the start and am currently running into Terraform bottlenecking how I can implement this. Those are some IAM things that I'm trying to get right, but pretty much what this PR does is add a resource that does the syncing between all of the regions.
C
An object would just be put into the us-east-2 one, and then it would figure its way out to the rest of them. But yeah, sometimes Terraform can make things a little more difficult, and I'm running into something with IAM that I need to figure out as well. It's been on my mind since the start; still figuring it out.
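The S3-side replication being described here, putting an object into one bucket and letting AWS fan it out to the other regions, is driven by a replication configuration attached to the source bucket. The sketch below builds such a configuration in the boto3 dict shape; the bucket names and the IAM role ARN are hypothetical placeholders, not the project's actual resources:

```python
# Sketch of an S3 cross-region replication configuration, of the kind
# the PR discussed above would manage via Terraform. The role ARN and
# bucket ARNs are hypothetical; this only builds the config dict.
def build_replication_config(role_arn, dest_bucket_arns):
    """One rule per destination bucket, replicating every object."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": f"sync-{i}",
                "Priority": i,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": arn},
            }
            for i, arn in enumerate(dest_bucket_arns)
        ],
    }

config = build_replication_config(
    "arn:aws:iam::111111111111:role/example-replication",  # hypothetical
    [
        "arn:aws:s3:::example-artifacts-us-west-2",  # hypothetical
        "arn:aws:s3:::example-artifacts-eu-west-1",  # hypothetical
    ],
)
# This dict is what would be passed to
# s3.put_bucket_replication(Bucket=..., ReplicationConfiguration=config),
# which also requires versioning enabled on source and destinations.
```

The IAM friction mentioned in the meeting is real in this setup: the replication role must be assumable by S3 and allowed to read the source and write the destinations, which is typically where such a PR gets stuck.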
A
Okay, I don't want to assume your plan, but can we basically do that using rclone? Like have multiple rclone processes, in maybe an EC2 instance, rclone stuff, to do that?
C
My thought was that AWS can take care of it for us, which means that there's no management around running a job somewhere, and it also would mean that the traffic is likely free, kind of thing. If we want to do any syncing and make it definitely more on the cost-saving side, then we'll need to make sure it's running in AWS somewhere.
A
Because the one thing is, we basically can't promote oci-proxy if the buckets are not in sync. That's my main concern here, because I think last week I had to basically sync from GCS, because there were some blobs missing; for example, the blobs of the pause images were missing.
C
If I may, I don't believe it's particularly urgent to have everything immediately up to date, because regardless they will get the assets, and then we can just update it as we need. When you did run the sync command...
C
However, I think you might have only done it to one region, but I have some scripts to do it manually, for GCS to S3 and S3 to the remainder of the buckets, which would in fact do the minimal cost saving of however many blobs would be transferred over there. But what I'm trying to say is: yes, I think automated sync is great; I don't think it...
C
No, I don't think it's particularly urgent to have it syncing constantly, because regardless they're gonna get the content that they need. Those are my thoughts.
B
No, I think Ben would prefer if it's up and running, but what I understood, exactly the same, from Ben is: basically he keeps a log of what is synced. He checks, is it there, and then there's a locally stored log of the things that he did find there, and if he does not find it, it's not in the log, it just doesn't redirect. So it's just a loss of cost saving that we have, but the end user will not miss the artifact.
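The fallback described above can be sketched schematically: redirect a request to the cheaper mirror only when the blob is known to be synced, and otherwise serve it from the origin, so a sync gap costs money but never availability. The host names, digests, and the in-memory set standing in for the synced-blob log are all illustrative, not the project's real implementation:

```python
# Schematic sketch of the redirect-or-fallback logic described above.
# SYNCED_BLOBS stands in for the locally stored log of synced content;
# the URLs and digests are illustrative placeholders.
SYNCED_BLOBS = {"sha256:aaa", "sha256:bbb"}

MIRROR = "https://mirror.example.com"  # hypothetical cheap mirror
ORIGIN = "https://origin.example.com"  # hypothetical origin

def blob_url(digest: str) -> str:
    """Mirror URL when the blob is known to be synced, else the origin.

    A miss only loses the cost saving; the end user still gets the
    artifact from the origin."""
    base = MIRROR if digest in SYNCED_BLOBS else ORIGIN
    return f"{base}/blobs/{digest}"

print(blob_url("sha256:aaa"))  # https://mirror.example.com/blobs/sha256:aaa
print(blob_url("sha256:zzz"))  # https://origin.example.com/blobs/sha256:zzz
```

The key design property is the one B calls out: an out-of-date log degrades gracefully into extra origin traffic instead of a missing artifact.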
B
Absolutely, absolutely; that's why this issue is important and is a priority, but it's not a blocker. Okay, that is the thing that we're working on now. So I don't think it's blocking us, but you're absolutely right, it is a priority to get it up sooner rather than later and also get the syncing automated. On that side, we've been talking quite a bit with AWS to try and get...
C
I'm not sure about an ETA for it, but there will be cost saving if we go to production for the new assets downloaded using the domain. Of course, when new images become available and the blobs need to be accessed, the cost saving won't be immediate for those until they're synced, but that is quite a minimal amount of data to transfer compared to all of the...
C
I think, as of like two weeks ago or something, we have all of the buckets synced, which is pretty cool, and I was gonna say that would mean a great lot of cost saving for people who are pulling from the new domain, and then we can just pull across the new blobs as needed, and that will mean cost saving for those new ones when needed, aside from the automated sync.
B
Now, from our last discussion with Ben, he said he would love the automated sync, but if we can work out some method to at least ensure that there is syncing at some stage, even if it has to be manual for now, it's not a blocker. So yeah, we can work out what we could do on our side. If we're really blocked, we can say: okay, we're going to have a...
B
It would be obviously important to sync them and figure out what intervals we'll sync manually until we can get the automation figured out. The automation is obviously important, but we can work it another way in the immediate term.
B
Yes, that's the one plan, and the other plan is to preemptively send out a notification on the mailing list. Let's say: be aware, this is what's going to happen within the next 30 days; we are working on it, but we won't swap out before the next 30 days. So to give people a heads up, because what we're concerned about is that it could be a breaking change for people. So probably just after a release has been cut would be a good time to do this.
A
I think with Ben the communication was that the target would be end of September to basically shift traffic. It's going to be transparent for anyone, but we still need to send a notification for this.
E
It would help; like, for example, if I know this is going live and I'm expecting a ton of traffic next week, the Terraform work takes priority over other stuff.
A
Okay, I think we should have an issue with a priority list saying what needs to be done in order to go to production, and have a date established. Yeah, that's a good one.
A
Okay, anyone has a question? I'm just going...
B
Action: the action is on Arnaud. You're gonna make an issue and share it, with the priorities that must be done before we can go to production.
F
I have one. So I was talking to the Snyk people this morning, and just in passing I heard something from them: apparently some of the redirection that happens now, with the new oci-proxy and the new domain...
F
Something; I'm not sure of the specifics, but I can get them if you need me to. Apparently the new redirection scheme broke the Snyk client. So Snyk, the security tool: you can point it at an image reference and it'll scan it for you. The new redirection broke all the scanning of the Kubernetes images. I don't know why exactly, and they already fixed it, but I was curious if you've heard about other tools breaking because they can't handle the redirection.
A
We never heard about that, after all the communication we made. I think we need to enhance the communication, and that's why we flagged that change as a breaking change, because it is going to be a breaking change. Once we do that, we should basically see people reach out about things breaking. But if the Snyk people can reach out to us and share with us what happened, what kind of issue they faced, that would be great, so we can basically address that now.
A
Yep. So if they can open an issue anywhere, we might be able to investigate that and see what's happening. Because there are too many options here: maybe they used the v1 client, or the docker client is doing requests using the v1 version of the API, and that's why there's an issue, or it must be something in the blobs. Many options.
F
Yeah, so this is, I mean, basically what you were saying: we let people know, and when it broke, they knew the reason and fixed it. But if in the future we see other tools breaking, it would be helpful to know that a certain pattern of consumption breaks clients, in order to more easily advise people.
F
Yeah, I mean, I'll ask them if they can give me some technical details, and I'll come back here and we can decide whether we should open an issue or capture it in a document.
A
Yeah, I already did that; it's already in the changelog. If you check the changelog for rc1 or rc2, you will see that I already took care of that.
E
Yeah, I just wanted one quick comment on that report: we're probably gonna hear more about it, because people often forget to configure their clients to redirect things properly, so expect to hear more of that in...
E
Sorry, my internet is a bit bad, but yeah, I would expect to hear more reports about broken clients, because people haven't configured their clients to follow redirects properly.
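The failure mode being predicted here can be illustrated schematically: a client that stops at the first response sees a 302 instead of the blob, while a client that follows the Location chain reaches the content. The response table below is a fake stand-in for real HTTP, and the host names are hypothetical:

```python
# Minimal illustration of why clients must follow redirects. RESPONSES
# fakes an HTTP layer: (status, payload), where for 3xx responses the
# payload plays the role of the Location header. Endpoints are made up.
RESPONSES = {
    "https://registry.example.com/blob": (302, "https://cdn.example.com/blob"),
    "https://cdn.example.com/blob": (200, b"blob-bytes"),
}

def fetch_following_redirects(url, max_hops=5):
    """Follow 3xx hops until a final response, or give up."""
    for _ in range(max_hops):
        status, payload = RESPONSES[url]
        if status in (301, 302, 307, 308):
            url = payload  # hop to the Location target
            continue
        return status, payload
    raise RuntimeError("too many redirects")

# A naive client stops at the 302 and never sees the blob:
print(RESPONSES["https://registry.example.com/blob"][0])  # 302
# A redirect-aware client reaches the content:
status, body = fetch_following_redirects("https://registry.example.com/blob")
print(status)  # 200
```

Tools that treated the old endpoint as a direct file server break exactly at the first 302, which matches the Snyk report discussed earlier.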
A
Oh, so this is affecting clusters starting with 1.25, or a custom installer deploying a kubeadm-based cluster with the 1.25 version?
D
Can you hear me? Hello. Yeah, thanks for putting me up there at the front; first attendance here. I'm working on fixing flaky tests; I'm kind of new to the whole project. I've been having some discussion about getting a test artifact stored in a, what did I see, a GCS bucket.
A
This is on me; I was supposed to get back about this. The first concern I expressed is that we should not do a public bucket, because we're gonna basically be the people handling the traffic for other people's interest in the data you want to pull. I think Danielle suggested we use a private bucket; I'm fine with that.
A
Basically, if you have a GCP project, it's fine, even your personal account; I can add you and see what's happening. Because we are not specific about uses of GCP: all the instances have dynamic public IP addresses. So I'm wondering why I can, using two different GCP projects, pull the data with a curl, but in that image it's failing. I think that's my concern. At work I use a GCP project and I can pull that data using conventional infrastructure.
A
Okay, I will have to go back on that. We can...
A
Next week, because I'm a little focused on release 1.25; it's really happening Tuesday, yeah. I have a lot of things to do before that. So, for anyone interested in the conversation: there is a job pulling that information, which is a public data set hosted by an American university. So we have an image pulling that data, and the approach of using that image failed to download; we basically got a connection timeout.
D
Yeah, I bundled up a minimized set of the external dependencies into a zip file, and I can easily pull down the zip file and unpack it myself in the test, as long as I can get access to the...
A
In k/k it's not going to happen, that's for sure, because I think SIG API Machinery is spending a lot of time trying to reduce the size of that repository, to the point where it's just handling Golang logic.
A
That's why it's not a great idea, and yeah, that's why there's a policy banning it. Okay, the policy banning it is more like: I know in the past we had issues with that. For example, the cluster sub-repo had a lot of binaries, and we got rid of a lot of stuff in that repository.
A
Okay, so we have five minutes to talk about the GCR to Artifact Registry migration.
E
Yeah, so, as you know, the infrastructure is up. Kpromo has a bug; I know that it torpedoed the 1.25 release the other day. Unexpected things. I'm investigating kpromo's logic; I know what the problem is, just gonna work out how to fix it. And the other thing is, I've got a couple of jobs running to sync buckets, sorry, to sync the Artifact Registry repositories, and I'm running into quota problems.
E
It's just how it is. It will probably take longer than expected to get this bucket synced properly; sorry, to get...
A
Oh, oh, you don't... sorry, you don't have access to the...
A
So we are already full...
A
So yeah, there's a thing I need to do for you: Asia is almost full, so we have multiple regions almost full. So I'm surprised you'd say that.
E
Yeah, okay, that's interesting; I wasn't able to see this, so I have no idea how well it's progressing. If we're expecting a terabyte, it's actually pretty good. The jobs are running every two hours; some of the quota resets every two hours, some resets once a day. But okay, at this rate I think it'll be done by the end of the week.
A
Yeah, but also for Artifact Registry we don't have great expectations; that's what the story was, because it's not a cost saving to use it. We don't save money by using it.
A
Oh, there's no difference, because egress cost for Artifact Registry and GCR is the same for everything outside Google Cloud.
A
Okay, so that change is only for, like, inside Google, but even on October 1 it's gonna be the same pricing. Let me see the pricing stuff; I think I saw that somewhere. Where is the egress... network egress... yeah, internet egress is based on premium tier, yeah. So for everything outside of Google Cloud there's no change, because it's going to be the same price. Yeah, that's what I'm saying: moving to Artifact Registry is only a benefit for us on the infrastructure side.