From YouTube: Kubernetes SIG K8s Infra - 20220706
Description
D
We, I had initiated running the BigQuery datasets being created through the public log ASN matching. I can drop a link to the folder; you can see the datasets created through there.
E
I think there was just some research and development on it, and so we needed to go through and keep it up to date. Some of it was related to looking at some new data; Rion might be able to speak to that, or Taylor, if you have more details. But I know there was more work on our side, which might have resulted in more traffic.
D
I've just dropped a link. It appears that in the last 30 days we've had four new datasets come up, so I must have run it about four times in the last month, and if this is considered problematic, we will begin...
D
We do have a plan to begin pruning them anyway, but we will talk about them more, if that is something that needs to be discussed further.
F
Its primary purpose is to enable setting per-service configuration, so things like variables and any other parameters that you'd look to override. There's more to be done, but that's the starting point I wanted. It would be great to have this merged and deployed.
A
About this, I think we need to change the approach for deploying the OCI proxy in the sandbox environment, because basically what happened is that when I set this up, I didn't consider that we need to deploy every container image built for the OCI proxy. So I will suggest we close this issue, migrate this Terraform code to the OCI proxy repo, and work on it there.
F
Okay, yeah, okay, I like that idea. So the Terraform code: we can move it there, and from that we can...
F
...work out how to auto-bump the image; it's not that difficult. We can get Terraform to read from YAML, for example, yep, and there's a couple of other clever hacks we can do, yeah.
F
Can do; it's very easy, yeah. I can demo how that works, including the ability to bump the image.
A
We basically keep the rest, everything related to the load balancer, the DNS, the network endpoint group; we leave that as is, but the Cloud Run services are auto-deployed using bash or whatever tooling you want to use. Okay, it's tricky because there's a relationship between the load balancer and the Cloud Run services, so we have to...
D
I have a suggestion: Cloud Run is compatible with the Knative spec. Would it be possible to just use a Knative Serving Service specification to define the service, and then have a few of them per region and such? That way we're writing it as YAML, and then we can template it out, and then perhaps run the CI on tags and do a build and deploy by applying it through gcloud.
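The suggestion above can be sketched roughly as follows. The service name, image, environment variable, and regions are all placeholders for illustration, not real project configuration; Cloud Run accepts a Knative Serving Service manifest, which gcloud can apply per region.

```shell
# Hypothetical Knative Serving Service manifest for a Cloud Run service.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: oci-proxy                       # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example-project/oci-proxy:v0.1.0   # placeholder image
          env:
            - name: UPSTREAM_REGISTRY                      # placeholder variable
              value: https://registry.example.com
EOF

# Templated deploy of the same manifest to each region (regions are examples).
for region in us-central1 europe-west1 asia-east1; do
  gcloud run services replace service.yaml --region "${region}"
done
```

Running this on tags in CI would give the build-and-deploy flow described, while the load balancer wiring stays managed separately.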
A
I don't have a problem with that approach if we were in a standard Knative deployment. The problem with that is that Cloud Run, in our context, in our context we use Cloud Run plus the Google load balancer, and there is a relationship between both. So using the Knative service specification is fine, but you need to be able to tie the Cloud Run services we currently have in the sandbox environment to the GCLB, and that's why it's tricky. That's why.
D
To that, then: what if we split up the management of it, so the actual services, how they're configured and deployed with multiple Knative services, live over here in the oci-proxy repo, and then over in k8s.io, where it is currently, we have the linking of the load balancers and such to those Knative services?
A
...basically naming, at some point, so we have to be careful about this. I mean, what you're proposing could be, and it's doable; the thing is, I didn't have time to investigate that. So if you can work with Martin about this and see how we can basically do that, I don't have a problem with that. But my main goal is to basically move the logic related to the infrastructure of the OCI proxy to a different repo, so we have one single source for everything related to the sandbox environment.
D
I would say yeah, yeah, I think that sounds great to me, having only just thought about it, and not for longer than five minutes. I think it sounds like a fun idea to explore, and we can be in correspondence about that. That'd be cool.
A
Okay, next is this configuration here, for the presubmit and postsubmit. Quickly: you have my plus-one on this.
F
So we need to start running things in Terraform. Basically, we need to create the principals and the right Prow jobs to run all this work.
A
Yeah? Why is that?
A
Okay, so we have a presubmit to basically validate the Terraform code, but it doesn't run the terraform plan. It's more...
F
terraform validate isn't very sufficient, in my opinion. More importantly, you actually want to see, when terraform plan runs, what the infra would actually look like, because most of the people that are going to submit code will not have access to the infrastructure. They can't even see what it looks like, much less...
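A presubmit along these lines might look like the sketch below; the module path is an assumption, and the plan runs read-only so reviewers without infra access can still see the proposed changes without anything being applied.

```shell
# Hypothetical presubmit script: validate catches syntax and type errors only,
# while a speculative plan shows reviewers the actual infra diff.
set -euo pipefail

cd infra/aws    # assumed Terraform module path

terraform init -input=false
terraform validate                        # syntax/type checks only
terraform plan -input=false -lock=false   # read-only diff for reviewers
```

The plan output would be surfaced in the job log, which is what makes it more useful than validate alone.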
F
Okay, that can work. If you want to limit the blast radius per service account, with privileges scoped to the project and job it's responsible for, I'm happy to do that, if it will allow us to run Terraform.
F
A single highly privileged service account to run everything: I mean, that'll still work, but it might make a few people a bit scared.
A
Basically, we can use that issue to clarify what we're doing and what we want to do here, because there's already an issue. I don't really see value in spreading multiple issues to address this problem, because I think we already had a conversation about this; it's just that there was no one to commit to it.
F
Yeah, the other problem is that the bootstrapping has to be done by somebody with privileges, so I'll need Arnaud, Dims, or somebody to take an hour of their time and get this bootstrapped properly, and...
A
Okay, I don't want to say ping me, because I'm on vacation next week, so ping Dims if things are going wrong. If there's an issue, I will, I will jump in.
F
...complicated for what it needs to be. If you're implementing the 307, half the concerns disappear. Okay, I think...
A
Yeah, my question is more general, like: if I want to do that with Azure and Equinix, how is this happening?
F
...a bit easier. How OIDC works for Azure, I can't comment, because I haven't done something like that before; Equinix, I don't know.
F
Yeah, exactly; that's already in place for Google Cloud and AWS. Other clouds, I can't comment; we'll have to see what they've got.
F
One second, I need to step away for about 10 minutes; I'll be right back. I pushed the other item, for the OCI proxy, last. Meanwhile, you can talk about the S3 buckets.
B
Well, we tried to do all of the regions, all of those nine regions, but we ran into session...
B
Not sure; I think my internet is messed up. Anyway, we got through 60 minutes or so and ran into timeouts. So Caleb, maybe you can say where we are now and what our plan is to get the base image catalog synced, or at least the layers synced, into...
D
Yep. So, apparently, trying to sync the first one... So what we're gonna do is go from GCP to S3, and then once we've got the stuff on S3, we'll just sync that first synced bucket with the remainder of the buckets, and that way we can save on bandwidth while getting all of the buckets populated initially. And there's... something else I was going to say.
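The fan-out described here can be sketched as one expensive cross-cloud copy followed by cheaper same-cloud copies. Bucket names, regions, and the configured rclone remotes below are placeholders, not the real resources.

```shell
# Step 1: one cross-cloud copy from GCS into a first S3 bucket.
# "gcs:" and "s3:" are assumed pre-configured rclone remotes.
rclone sync gcs:example-registry-layers s3:example-layers-us-east-1

# Step 2: fan out from the already-synced S3 bucket to the remaining
# regional buckets, saving cross-cloud bandwidth.
for region in us-west-2 eu-west-1 ap-southeast-1; do
  aws s3 sync "s3://example-layers-us-east-1" \
              "s3://example-layers-${region}" \
              --source-region us-east-1
done
```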
D
Yes, that's what you always want to say; I'll come back to it. Yeah, I've got a document that outlines the manual sync and how I have started initiating it, and there's a really clear outline of the steps for how we're going to sync it. And we've got a new AWS account inside of the OU inside of the CNCF root account, kind of like how infrastructure is done elsewhere, and so what we're doing is making it so that inside of the k8s-infra accounts account, it contains...
D
The buckets are in a different OU, but are still in the same structure under the CNCF root account, and so there's an account for the registry, registry.k8s.io underscore admin, which is the one that actually contains the buckets, and you can find that in k8s.io. That's right. So yeah, have I missed anything, Jay?
D
Nope. Okay, great, so that's the idea; that's where I believe we're at right now: trying as hard as possible to get everything happening all at once. We'll be tackling the timeout and then doing the initial sync of the buckets. Yeah, I'm very excited to get this all across, which would be great. Yeah, it's great.
D
Oh, sorry, I was muted. When you say the private, are you talking about the GCS or the S3 buckets?
D
Yeah, there's a bucket policy defined there to make them all public-read, and that is in effect. There could be a chance that the URL is wrong; I'm not sure how you were testing it, because you can only run against the HTTPS endpoint. It's set to HTTPS only, so you can't hit the HTTP URL. Or you can go aws s3 ls and check it out. But I can send some commands if it's helpful.
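The kinds of commands being offered here might look like the following. The bucket name, region, and object key are placeholders; the point is that anonymous listing and anonymous object fetches are governed by different permissions.

```shell
# Unauthenticated listing: only works if s3:ListBucket is granted publicly.
aws s3 ls s3://example-registry-bucket/ --no-sign-request

# Fetching one known object over the HTTPS endpoint: works whenever
# s3:GetObject is public, even if listing is denied.
curl -sSfO \
  "https://example-registry-bucket.s3.us-east-1.amazonaws.com/blobs/sha256/example-digest"
```

This distinction matches the behavior discussed just below: gets succeed while lists fail.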
D
I'm sorry, my headphones just played up because my phone was apparently paired to them. Would you mind repeating the thing that I missed, about 20 seconds?
A
So
tldr,
I
was
not
able
to
basically
see
the
content
of
the
bucket
using
that
command.
If
anyone
basically
wants
to
run
that
and
tell
me
I'm
crazy,
I'm
fine.
D
Yeah, okay, I see what's happening here. So it appears that... I'm just looking at it; I think you might already have this up, but I'll just drop a link for everyone to the piece of code. If you try to make a request to a particular layer, it will succeed; if you try to list it, it won't succeed. That's what I'm reading from how either myself or Jay, I forget who at which point in time, set this particular piece up.
D
I'm
gonna
say
it's
me
and
this
part
is
saying
you
can
you're
allowed
to
get
any
of
the
objects
but
listing
it
is
not
configured,
and
I
don't
know.
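A bucket policy shaped like the behavior described (anonymous GetObject on objects, no public ListBucket on the bucket itself) could look like this sketch; the bucket name is a placeholder.

```shell
# Hypothetical policy: public read of individual objects only.
# Because no s3:ListBucket statement exists, anonymous listing is denied.
cat > public-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-registry-bucket/*"
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket example-registry-bucket \
  --policy file://public-read-policy.json
```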
A
I don't know how downstream, whatever downstream distribution is, is pulling from us. Like, I have had some conversations with some folks interested in doing a full copy of the registry, so they might at some point need to be able to basically list what's inside the bucket, whether it's a sub-repo or a portion. So we should be careful about this. I...
A
Three or four: it's because the OCI proxy is not an OCI registry. We really try to be a transparent proxy between the container registry, acting as the origin, and a Docker client. So the Docker client will hit us, and we just forward the request to the container registry, and the container registry...
A
Wouldn't it... When they get the manifest, they will get the manifest; they will basically get the list of the blobs, slash image layers, but the routes to those blobs will be defined by the OCI proxy, because we basically look at the source IP of the request to say where you are pulling the blob from.
D
If I may, yeah, going back to earlier: everything is happening as configured and expected, though it is unexpected to not be able to list the blobs in the bucket. So it would not break things.
A
Yeah,
I
think
my
concern
is,
if
we
say
behave
as
expected,
we
don't
we
don't
know
how
particular
case
happening
when
when
it's
come
to
image
polling,
we
can
basically
list
off
the
use
case
about
this.
I
think
that's
my
concern
like
I,
you
may
have
people
basically
pulling
the
identify,
the
I
would
say
the
object,
storage
services
and
try
to
pull
directly
from
that
for
whatever
reason.
D
Yep, okay: this is fine. This is pretty trivial to do, and yeah, that sounds fine. Let's do that. Creating a ticket. Okay.
D
So we've got a user that someone's able to authenticate as, and then they can call STS to assume a role in another account in which a role exists. And so what we've done, both Jay and I and the others who use the CNCF accounts, is sign in as IAM users, and then from there they're able to...
D
And
so
that's
how
we've
got
the
s3
writer
role
inside
of
the
registry.case.io
admin
account.
Okay,
and
would
you
be
able
to
please
reiterate
the
second
part
of
the
question.
I
think
it's
my.
D
Yes,
so
the
the
accounts
account
and
the
a
user,
an
im
user
in
the
accounts
account
and
the
role
inside
of
the
sorry,
the
account
for
where
the
registry
buckets
are
do
in
fact
have
a
trust
relationship
and
then
there's
a
role.
Inside
of
the
registry
case.
I
o
account
in
which
we're
able
to
assume,
as.
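The cross-account flow being described can be sketched as below: authenticate as an IAM user in one account, then call STS to assume the writer role in the account holding the buckets. The account ID, role name, and session name are placeholders.

```shell
# Assume the writer role in the bucket-holding account (ARN is hypothetical).
# Works only if that role's trust policy names the caller's account/user.
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/example-s3-writer \
  --role-session-name registry-maintenance \
  --output json
# The JSON response carries temporary AccessKeyId, SecretAccessKey, and
# SessionToken values to export before running s3 commands against the buckets.
```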
A
Yeah, I mean, what I'm saying is like: can we, manually... We don't have federation between the Google Groups inside GCP and AWS, because we don't have identity federation. I'm interested in creating an IAM user for each member, for each SIG chair and technical lead of SIG Release, so we basically give them access to that and they can basically try to do stuff like that. I don't know if that's okay for you at that point.
A
So I think my question is about emergency cases, where we don't necessarily need to escalate to AWS about this. There are gonna be situations where we might need to authenticate to that authentication account to do something. Like, I mean, if, for example, the registry has an issue and we need to restore it from a backup.
F
Yeah, so I just wanted... There's a little comment around access to AWS in general, right? So this usually isn't a problem with companies, because you've got one primary identity provider and everybody that needs access is on there.
F
You
leverage
something
like
it
so
now,
in
our
case,
that's
not
the
case,
because
it's
not
guaranteed
that
we
have
a
single
identity
source
right
I
mean
we
could
make
that
happen.
If
you
wanted
to
ensure
that
everybody
that
wanted
access
to
aws
had
access
to
a
community,
I
o
google
account
and
we
federated
directly
to
aws,
but
it's
a
decision
that
you
need
to
make.
Otherwise
we
have
this
funky
looking
cross
account
address
access,
which
is
really
horrible.
F
So that's a decision that the project has to make. It's not too difficult, because you can issue Cloud Identity accounts, people don't cost anything, and then we attach that to AWS access. Google's got instructions for how to set it up, so it's not that hard; you can make that happen.
A
Yeah, I think the blocker with that is that we, k8s, are not admins of the Google Workspace right now.
F
Well, we don't need to be, because all we need to do is ensure that the relevant people get that account and they join the right groups, and that's the end of it. Somebody has to do a one-time setup of the applications in Google Workspace, and that's it.
A
So what you're saying is we can hide the AWS OIDC provider behind Google Workspace, yeah?
F
Something like that, so we can leverage it with SSO and use SAML to log in, have the groups presented properly, and then we can make it so that different groups get access to different accounts with different permissions, and it's that easy. I can put together a demo for next week, for the next meeting; I've got access to... okay.
F
Because
once
we
do
that,
all
most
of
the
complications
that
caleb
and
jay
drew
disappear
because
we
have
a
clear
way
for
humans
to
access
and
for
robot
access
is
even
more
simpler
because
we're
just
going
to
use
theories
or
adc
to
do
it.
A
So, once... The moment when we know we're gonna have basically all the buckets fully synced...
D
Well, I would like for that to be some kind of thing running on a periodic, regularly; that's my current thought, some CI job. But at the same time, Jay had also made a PR, which I think we might have seen in Slack or something, for release-engineering-related stuff around the promo tools, and I think an idea was that, after an initial sync, everything else would continue to just sync across through some kind of thing. I can't speak to that, because I'm not completely familiar with it.
D
I think that is totally possible; we can do that. Do we have the issue about... We have multiple issues about syncing the bucket; in those issues, is there a discussion about the automation of the continuous syncing? We can...
F
Yeah, we can stitch together a basic Prow job, a continuous one; a periodic one, sorry, that basically just runs rclone and syncs the buckets. The differential diffs shouldn't consume too much compute, because it's not much, and we run that every X number of hours.
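What that periodic job might boil down to is a single differential rclone pass over each bucket; once the buckets are pre-populated, each run only moves the diff. Remote and bucket names are placeholders for configured rclone remotes.

```shell
# Hypothetical periodic sync job body: checksum-based rclone sync from the
# GCS source to each regional S3 bucket. Differential runs are cheap because
# rclone skips objects that already match.
set -euo pipefail

for region in us-east-1 us-west-2 eu-west-1; do
  rclone sync --checksum --transfers 32 \
    "gcs:example-registry-layers" \
    "s3:example-layers-${region}"
done
```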
F
In
any
case,
there's
logic
in
we
see
a
proxy
to
serve
push
to
forward
the
request
upstream.
If
it's
not
available
list
three,
so
we're
fine.
A
I think that's not what, what I... but...
F
It's not hard, so we're gonna have a Prow job run periodically and sync the buckets.
A
The one concern I have is: I feel like doing that inside Prow is gonna be slow, because it's a lot of objects we need to sync.
F
That's the best way to do it. Yeah, Caleb or Jay can pre-populate the buckets first, and as soon as they do that...
A
You have it wrong... Yeah, you have an issue with the timeout, basically related to the assume-role. Am I correct?
D
Sorry,
I'm
meaning
the
github
issue
that
your
browser
is
currently
on.
If
you
could
refresh
the
page
and
go
to
the
bottom,
I've
just
added
a
new
comment:
oh
okay,.
A
I
will
debug
this
sync
backed
up.
It's
a
full
sync:
it's
basically
a
copy
from
jesus
to
s3.
It's
the
first
thing
every
day.
K
And as also mentioned in a previous call: if there is anything missing, it will check the bucket, and if it's not there, it will just not do the redirect to the bucket. So there's no risk.
K
We need to sync it, but there's no risk if an object is not available in the bucket, because the tool is working to protect against that, yeah.
K
There's also going to be logging to show what has been missed, so we can also look at that every now and then.
D
Yeah, I think we've got some good discussion going here, and I think we can keep the discussion going. I don't think I have any questions about what we've just discussed. I'm not sure what the solution is; I know we have a few ideas for it, but I think the conversation might land a bit closer to a decision when we also have Jay to discuss it.
A
I'm not sure where else to go. Yeah, okay, so what I'm saying is you can break that code into parts. The first part is mostly run manually, as a human, and basically you don't even need to put that here; you can basically have another bucket, like basically a blob cache, something acting as a...
A
So, sure, basically there's a configuration to be done; you might need to tweak the bash code for that, but mostly that's the core idea of the periodic: to do that. Now, you're gonna have two problems, as I've signaled. It's a Prow job, so you need a Docker image with rclone, and you need to mount AWS credentials in that Prow job.
F
Oh,
so
the
edius
credentials
is
answered
in
a
separate
issue.
If
you
leverage
that
you
don't
have
to
worry
about
the
credentials
for
our
clone,
it's
a
double
binary,
so
you
can
just
grab
off
the
internet
on
like
a
base,
ubuntu
image
and
just
run
it
directly.
A
Yeah,
but
you
still
need
an
image
for
that,
so
I
prefer
to
have
like
a
I'd
prefer
to
have
a
dedicated
image,
so
you
don't
have
to
basically
pull
iclone
a
different
version
of
iclone
each
time
you
want
to
run
that
job,
it's
better!
It's
better.
We
fix
the
version
of
our
cloud,
we
use
and
have
a
docker
image.
For
that
I
mean
that's.
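Pinning rclone as suggested could look like the sketch below: bake one specific release into a dedicated image instead of downloading it on every run. The rclone version, base image, and image tag are assumptions.

```shell
# Hypothetical dedicated image with a pinned rclone release.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
ARG RCLONE_VERSION=1.59.0
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl unzip ca-certificates \
 && curl -fsSLO "https://downloads.rclone.org/v${RCLONE_VERSION}/rclone-v${RCLONE_VERSION}-linux-amd64.zip" \
 && unzip "rclone-v${RCLONE_VERSION}-linux-amd64.zip" \
 && install -m 0755 "rclone-v${RCLONE_VERSION}-linux-amd64/rclone" /usr/local/bin/rclone \
 && rm -rf rclone-v${RCLONE_VERSION}-linux-amd64* \
 && apt-get clean
EOF

docker build -t example/rclone-sync:pinned .
```

Bumping the version then becomes an explicit, reviewable change to the build arg rather than whatever the internet serves at job time.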
D
We
can
continue
the
conversation
on
slack
I'll.
I
will
keep
thinking
about
this
and
we'll
find
a
good
solution
that
answers
the
issue.