From YouTube: Kubernetes WG K8s Infra - 2021-03-03
A: As in all the places, we're all going to adhere to the Kubernetes code of conduct in this meeting; basically, being our very best selves and not being jerks to each other. If you have a problem with the conduct at this meeting, please email conduct@kubernetes.io, or you're also free to reach out to me privately.
A: As this meeting is being publicly recorded, it will be posted to YouTube later. Okay, I think that's my little preamble there, so welcome. I feel like I know you, Caleb, but I feel like I'm seeing some faces I haven't seen in a while.
B: Yeah, I'll just say hey again; good to see you, it's been a little while. I'm excited to be involved in some of the things that you're involved in again. And, to the others: I'm from ii, so I work with Hippie Hacker and Brian.
A: So, next up, we always check out the billing report, so I'm going to pull up Data Studio here. I don't know if Tim wants to pull up something equivalent on his side to compare on a shared screen, but this is basically what the billing report looks like.
A: I can't say that I see anything wildly out of line here; this lines up roughly with our spend over the previous 28 days, as I recall, and we're seeing, you know, rises and falls. This jump up here is certainly interesting, but it's not something I feel compelled to act on immediately.
A: So it looks like there's a sharp up-and-to-the-right here on this side. It might be frozen; sorry, yeah.
A: Okay, I don't know; I feel like I have to blame something about my Big Sur setup or something. Well, essentially, I see a tick up into the... if I can't share my screen, this is gonna be annoying. I see a tick up. Does somebody else want to share their screen? Maybe; that's fine.
A: Sure, that would be super helpful. Or not; I trust you'll be willing to share whatever you feel is relevant. I'm going to make you co-host, so you can actually share your screen, so maybe we'll have something more entertaining to look at than my face as I talk. So I think that the rise up and to the right for CI since March 1st may be worth looking at. My suspicion is that I recently changed the prow build cluster's capacity.
A: I bumped that up to 240 nodes, because when autoscaling was hitting that limit, jobs weren't getting scheduled, and so there would be waves of frustration through the Kubernetes community during, like, high-traffic periods, where their jobs would just randomly fail or get a pod-pending timeout error, and they didn't really understand why. So I raised it up. I haven't really changed our quota that much to account for that.
A: But that's one potential thing. I can certainly open up a follow-up issue to dig more into this later; I'm certainly open to that.
D: Even just downloads, not just the... yeah, even just downloads, worldwide destinations excluding Asia and Australia: let's pick on March 1st in particular, 25..., and on Feb 22 it's 2100, so, you know, 35, 40 percent higher. Okay.
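The comparison being read off the billing dashboard is just a percent-change computation. A trivial sketch (the Feb 22 figure of 2100 comes from the recording; the March 1st figure is unclear in the audio, so the second number below is a placeholder chosen only for illustration):

```python
def percent_change(old: float, new: float) -> float:
    """Percent increase from old to new."""
    return (new - old) / old * 100.0

# 2100 is the Feb 22 value read out in the meeting; 2900 is a hypothetical
# stand-in for the unclear March 1st value, picked to land in the 35-40% range.
print(round(percent_change(2100, 2900), 1))  # 38.1
```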
A: It's on the agenda a bit anyway. Yeah, I think I'm gonna file a little issue for me to take a look at this; I'm not gonna move urgently on it.
A: The bump in prow build resources makes me a little curious. I'm looking at just the build cluster itself, with everything else filtered out here, and that's still a pretty hefty bump.
A: So I'd like to understand if that's, you know, things we did recently. We upgraded the build clusters to 1.18, so, for the first time in a while, the Kubernetes jobs were running on a supported version of Kubernetes. And I know we switched... it's possible that we are spending more compute, or slightly longer building, using the make build, because it currently doesn't do a whole lot of caching and stuff.
A: It's true, but, ultimately, I also feel like this level of CI here still isn't really anything to write home about, compared to the rest of CI that hasn't even been migrated over here.
A: Yes; I mean, don't get me wrong, the presubmits here are pretty much the vast majority of the traffic that the project receives, but we still have a lot. So this trend is something I would definitely want to look at, but the absolute value here doesn't make it super urgent for me right now. Sure, okay. Any other questions?
A: Okay, action-item review: I did not have time to go through and scrape the AIs, but I see a bunch of people have put stuff in the open discussion section, so let's just move on to that. So, Ricardo, can you get us up to date on what's going on with certificates? Sure, yeah; so, I have just finished my lunch.
E: Yeah, so I was talking with James about that, and he corrected the problem that occurred again this week. I think Tim raised an alert about the certificates being expired again, so James has put in a controller that runs, actually, a workaround on the ingress objects, and it's this pull request. But he asked me to put this on the agenda, so you could take a look into this.
E: But we want some sort of, like, a CNAME for the wildcard issues, and then a non-privileged DNS name, so cert-manager can at least provide the challenges for Let's Encrypt, and maybe we have, like, one certificate that can get renewed automatically. Also, I saw that Tim left some reviews on the certificate expiration monitoring; I just didn't have enough time this week to take a look, but I'm also running against the code freeze and the things from network policy.
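The kind of expiration check behind the certificate monitoring being discussed can be sketched roughly like this (a minimal, hypothetical example, not the actual k8s.io monitoring code): given a certificate's notAfter timestamp, compute how many days remain and flag anything inside the renewal window.

```python
from datetime import datetime, timezone

# Hypothetical sketch of a cert-expiry check, not the real k8s.io tooling.
# notAfter uses the OpenSSL text format, e.g. "Mar 30 12:00:00 2021 GMT".
FMT = "%b %d %H:%M:%S %Y GMT"

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before the certificate expires (negative if expired)."""
    expires = datetime.strptime(not_after, FMT).replace(tzinfo=timezone.utc)
    return (expires - now).days

def needs_renewal(not_after: str, now: datetime, threshold_days: int = 21) -> bool:
    """Flag certs inside the renewal window, like an alert would."""
    return days_until_expiry(not_after, now) < threshold_days

now = datetime(2021, 3, 3, tzinfo=timezone.utc)  # date of this meeting
print(days_until_expiry("Mar 30 12:00:00 2021 GMT", now))  # 27
print(needs_renewal("Mar 10 12:00:00 2021 GMT", now))      # True
```

In practice, cert-manager renews well before notAfter, so a monitor firing on this condition usually means renewal is stuck, which is the failure mode described above.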
A: That's awesome. I guess, just speaking for myself, the details stuff is not exactly my wheelhouse, especially the security implications of wildcard stuff. I guess my default instinct is to defer to Tim, who, I think, has a better security perspective on this. I also know that the controller that's supposed to automate some of this stuff was something Ben Elder chatted with others about.
A: So, if Tim's review bandwidth is constrained, I'd be willing to trust that Ben can take a closer look at it. But the wildcard thing, to me, feels like a policy question that I'd like to hear from Tim on.
D: Yeah, the wildcard is worrisome, mostly because, if it gets exfiltrated, then somebody else can claim to be something.kubernetes.io. Not that I don't trust all the lovely people here, but it's a risk we should take only if we really don't have a better choice. I think the controller doesn't sound that bad, until the real issue is fixed, whether that's a cert-manager fix or something else.
D: I think it's a cert-manager fix, from what James was saying. So I would lean more towards the controller direction, if we can.
A: Just since I have this voice inside my own head at times: it's totally fine that you haven't immediately responded to Tim's review. I think the latency is all a little higher for us at the moment.
D: Just for context, in addition to being code freeze week, it's also Google's perf review week, so many of us at Google are slammed with doing reviews and other things. So, apologies to people who are waiting for reviews, if they're blocked on me. And, Aaron, I've actually got your PR in front of me. People who are waiting on me for stuff: feel free to ping me and let me know that you're waiting; that's okay.
A: Okay; thank you, Ricardo. So, Caleb, you want to talk about an updated infra diagram?
A: Thank you for doing this, because it sounds like I haven't done it in forever.
A: At the moment, I feel like this is more appropriate for SIG Testing, since it talks a lot more about the testing stuff. So, this is based off of the slides I did back in, like, 2018. The next big chunk of technical debt that needs to be reconciled at some point is, like, showing how much of this now runs in k8s infra. I'm trying to work, sort of, with the test-infra team, but also Arnaud's been helping me out, trying to, like, better document and put together a playbook and stuff for the prow build clusters. And I'd love to... I'm trying to kind of get us to recognize that there's, like, a service cluster.
A: Then there are build clusters, and they are hosted in different places, and they have different levels of access, and so I feel like that kind of diagram could be useful for k8s infra. I think, like, something that showed where the aaa cluster is, and the idea that we have different namespaces and stuff, could be an awesome diagram. I'm not scoping; I'm just saying, if you are so supercharged by doing a diagram for test-infra that you want to do more, we would really love to have you.
B: Yeah, I'm not particularly sure on the scope of this; but, from what the diagram is right now, does that reflect, more or less, what the current test infra is, aside from what you're saying about the separation between the service and build clusters?
A: Okay, so, next, I think you want to talk about trying to move to a plan of, like, registry.k8s.io, or artifacts.k8s.io, or something; that's a different context. This is us talking about, like: we want to empower multiple members of the CNCF to mirror and/or host artifacts, so that it's not just GCR that's serving all these things; and it's unclear to us whether we're gonna have to rename from k8s.gcr.io to do so.
B: Yeah, so, just to read through my sub-points there: there's a PoC for a thing called artifact server, which we discovered, which appears to be unused. So, something like that, where it's able to be run in a cluster somewhere, across any of the providers, able to share a bucket somewhere; yeah, that looks like it could be useful, because it's not just container artifacts that we want to be able to host.
D: And so, I don't know... I wouldn't say... excuse me: I would not say it is a requirement that the two have the same solution, but, if they do, awesome; I'm trying to give you some freedom of implementation there. Yeah, I suspect we will ultimately have to change the DNS name that we use, because I doubt very much that we'll be given authority to do anything with a gcr.io suffix; that's Google's, and Google does not like to share.
D: So, if we're gonna do it, let's figure out how we're gonna do it one more time. Having done it once already: it was very tedious, but not hard. Yeah, I have no particular knowledge of what the right way to do this is. This is where I run into the brick wall of, like, ideas: what is the right way to do this, so that customers are happy with the security that we're applying to it, so that they know they're getting the real deal?
F: I have two thoughts here. One of them... well, thank you, Tim, for the information; and also, even though Justin, I don't think, is on the call: thanks heaps for those initial PoCs and all the work.
F: One is a caching solution, where the authority, and how we're publishing now, doesn't change; and the caching, when a provider is dialing into their instance of the thing, or maybe the 302 redirects, points them to a local cache, where we don't worry about complete mirroring: the artifacts are pulled at the first request and cached. And then the other is, and, Tim, I think you touched on this, ensuring that the artifacts delivered are secured. Harbor is one of the pieces of software we're looking into for that. I'm also reaching out to Microsoft, as they... I don't know what they did, but inside their solution, their version of hosted Kubernetes...
F: They came up with a deployment; but I want to find out what other people have done, and get a little more feedback, before we, you know, present a few options.
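The 302-redirect caching idea described above can be sketched as follows; the canonical registry stays the publishing authority, and a pull gets redirected to a nearby cache when one exists for the client's provider/region. All hostnames and region names here are made up for illustration, not real k8s infrastructure:

```python
from typing import Optional, Tuple

# Hypothetical sketch of the 302-redirect idea: the canonical host remains the
# source of truth, and requests are bounced to a per-provider/per-region cache.
CANONICAL = "https://registry.example.io"
MIRRORS = {
    "aws-us-east-1": "https://cache-aws-use1.example.io",
    "azure-westeurope": "https://cache-az-weu.example.io",
}

def redirect_target(path: str, client_region: Optional[str]) -> Tuple[int, str]:
    """Return (status, url): 302 to a nearby cache when we have one for the
    client's region, otherwise serve directly from the canonical host."""
    mirror = MIRRORS.get(client_region or "")
    if mirror:
        return (302, mirror + path)
    return (200, CANONICAL + path)

print(redirect_target("/v2/pause/manifests/3.2", "aws-us-east-1"))
# (302, 'https://cache-aws-use1.example.io/v2/pause/manifests/3.2')
```

A pull-through cache like this never needs the full mirror to be complete: a miss at the cache just falls back to fetching from the canonical host, which matches the "pulled at the first request and cached" behavior described above.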
A: Yeah; my impression is, I thought, like, Azure, or ACR, already mirrored a number of repos, to be friendly for their customers and users in Asia and China.
D: Where we are most lacking is the transparency, right? As a customer, you have to switch to k8s-dot-acr-dot-something, or whatever the name is; I don't actually know, right? Which is sort of obnoxious from a UX point of view.
D: I think, as a community, we can do better for all those users who are around the world. The thing I'm terrified of, and we may not be able to avoid it, but I'm still scared of it, is putting binaries that we now have to be on call for in the data path for users.
A: Yeah, that's the same thing; I guess I was going to say it a slightly different way. One of the benefits we have is the k8s.gcr.io setup, and, I think, even artifacts.k8s.io, because we just lean on Google Cloud's load-balancing solution; and, for GCR, we probably lean on whatever Google's internal load-balancing solution is. And, you know, I'd rather the thing that handles all the traffic up front be infrastructure-level, and not something that we feel compelled to run on our own Kubernetes clusters anywhere.
D: Right; strictly speaking, it doesn't... sorry to ask the question of Google: I don't know, like, is there a CDN for container images? Is that a thing?
D: Maybe; I honestly don't know enough about it. I don't think there's a way to put the CDN in front of GCR, because they're two separate verticals; but I honestly don't know enough about how the CDN is configured here. And I don't know that that will actually, ultimately, solve the problem of, you know, users in Amazon staying within Amazon's network, right? Unless we put a mirror inside of Amazon's network.
D: So, like I said, I'm totally open to all possibilities here. I would say, in the, like, priority stack of solutions: configuring an automated, cloud load balancer, or something else, is strictly better in every way than running our own nginx, for example.
A: I agree with that, I guess.
A: So I sort of agree with him that, like, we have underserved the whole promotion of binary artifacts, and everything else. I don't know that I would want to put completion of that on the critical path to "let's find a hosting solution", or "let's find a mirroring solution". Like, again, I think it's a good idea to look at it, and figure out how, you know...
A: Let's make sure we don't, like, go through a one-way door, and force ourselves to have to reinvent the wheel again when we start looking at artifacts. But I feel like we kind of need a lot more involvement from, like, the release engineering folks, and a bunch of other people, to actually make artifact promotion, or promotion of arbitrary binaries, a reality. I might be wrong; Justin's not here, he might say different, but yeah.
A: Okay; having said that out loud, I'll just flip around for a second and say what I do feel, like, artifact promotion or not, could conceivably happen in the next month: I move all the Kubernetes releases, so I migrate dl.k8s.io traffic to the community; and we could find out that that's an order of magnitude larger than what k8s.gcr.io is. I don't honestly know.
D: ...distinction, because, like, one of the proposals for doing the registry stuff was: well, you can actually beat an nginx config into submission, such that it pretends to be a registry and actually fronts another registry, right? And it was a really cool demonstration; and all the data from that registry would stream back through that nginx, and there's just no way I want to manage that.
B: Sounds really fascinating, nonetheless. Do you have any links to that? Because I'm very curious, for some reason.
D: I might be able to dig up an nginx config. It was an interesting config; let me see what I can find.
A: Thanks, Tim. So, then, I guess the other comment I had is, I think... yeah, secure delivery of these artifacts sounds good; using something like Harbor might be cool. My little engineer sense was just, like: is this scope creep, if I was looking at it purely from, like, "let's make sure we shed the load appropriately"? But I might be wrong; it could be that, as soon as we introduce mirrors into the mix, you actually really do need to validate that the images are what you think they are.
D: Well, so, I'm not an expert in the image subsystem, but my understanding is, the protocol says: first, you resolve a name to a hash, and the hash is independently verifiable, right? So you can resolve the name to a hash at a central place; that's a low-bandwidth operation; and then fetch the hash from any one of N places, and the hash is independently verifiable. So it feels like the vulnerability is on that first resolution.
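The resolve-then-verify flow Tim describes is exactly how container image digests behave: the name resolves, at a trusted place, to a sha256 digest, and the bytes fetched from any mirror can then be checked against that digest independently. A minimal sketch (the manifest bytes here are a hypothetical stand-in, not a real registry client):

```python
import hashlib

def digest_of(blob: bytes) -> str:
    """Content address of a blob, in the registry's sha256:<hex> form."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, expected_digest: str) -> bool:
    """Any mirror can serve the bytes; the client re-hashes and compares,
    so only the name-to-digest resolution has to be trusted."""
    return digest_of(blob) == expected_digest

manifest = b'{"schemaVersion": 2}'        # stand-in for bytes fetched from a mirror
pinned = digest_of(manifest)              # resolved once at the central, trusted place
print(verify(manifest, pinned))           # True
print(verify(b"tampered bytes", pinned))  # False
```

This is why the untrusted-mirror model works for image layers and manifests: tampering anywhere downstream of the name-to-digest resolution is detectable by the client.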
A: So, I feel like... I'm super excited y'all are looking into this. Do you feel like you have sufficient rambly opinions from Tim and me to make progress? Or what would...
B: ...you like from us? Good question. So, I'm still waking up a little bit. So, okay, I'm trying to keep all of this up. Oh, we have a notetaker, whoever that is; I appreciate it.
B: I'm not sure, Chris.
A: Same thing, yeah. I'm happy... I'll put an AI; I'll make an issue. Or, actually, maybe I'll put it on you guys... sorry, you folks... to make an issue, and we'll work through it. So that seems totally fair.
A: ...form of IP addresses and stuff; and I think we're close to having that resolved, such that Hippie and Caleb and other folks from ii are clear for this. So, my plan was to set up a group that is specifically for individuals who are allowed to have access to this, and then we could try flipping access logs back on. But I'm also in no rush to... I don't know; I guess I'm trying to gauge Tim's...
D: ...comfort level with that. I suspect that it's blocked, in part, on me, because I saw a bunch of email in my mailbox, which I haven't been able to get through this week, with subjects that refer to this space. So, I'm totally game; Priyanka's thumbs-upped the ii folks to have access here. I'm still waiting on her to give us sort of an official PII policy, and to set up sort of a protocol for anybody who wants to join this group.
D: Like, do you have to sign something, or go through some mandatory training or something? Right? Like, at Google, in order to get access to any PII, there's training that you have to take, and you have to ask your lead. So I don't know what CNCF wants to establish.
A: Yeah, okay; so, yeah. I just wanted to check in on where we were; refresh where we're at lately. Okay, I want to move us along, 'cause we've got a couple other things on the agenda. So, another reason y'all are here so early in your local time: Hippie, you want to talk about audit updates?
F: Sure. One of the things we did early on at ii was to help... I think Tim was heavily involved as well... with the creation of our audit trails for changes. And when we stepped back, it was a lot of load, for a lot of reasons, but we would create that PR before the meetings. We now have a new audit update.
F: That was after a slew of about 16 to 18 PRs that I tried to make into small chunks; although I think Aaron did a beautiful job of making a single PR with lots of commits, which was a little harder to automate with the bot. But I think the ones from the bot have merged, and this audit is post those merges. It's still pretty large, but the blast radius is definitely much smaller, and easier to review.
F: If you want to see exactly how we did it: we finally have merged the k8s.io audit image in CI, currently running every six hours. We can change that, but let's not run it more often than every two hours, because it takes about two hours to run.
F: Yeah, this resolved that, I feel.
D: So, I don't find the reviews to be that hard, but I automated some parts of it, to make it easier to see. I wonder if we can do something like... have a sort of canonical version of each different category of stuff, and then diff the incoming audits against those things.
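The canonical-version-plus-diff idea could look something like this: keep a hand-maintained spec of what each category should contain, and reduce an incoming audit dump to just the deviations. (A hypothetical sketch; the real audits are exported Google Cloud resource dumps, and the keys below are made up.)

```python
# Hypothetical sketch of diffing an incoming audit dump against a canonical
# spec, so reviewers only see deviations instead of the whole export.
def diff_audit(canonical: dict, audit: dict) -> dict:
    """Return {key: (expected, actual)} for anything that deviates;
    a missing key shows up as None on the relevant side."""
    deviations = {}
    for key in sorted(set(canonical) | set(audit)):
        expected, actual = canonical.get(key), audit.get(key)
        if expected != actual:
            deviations[key] = (expected, actual)
    return deviations

canonical = {"roles/viewer": ["group:auditors"], "roles/admin": ["group:wg-leads"]}
audit = {"roles/viewer": ["group:auditors", "user:someone"],
         "roles/admin": ["group:wg-leads"]}
print(diff_audit(canonical, audit))
# {'roles/viewer': (['group:auditors'], ['group:auditors', 'user:someone'])}
```

An empty result would mean "nothing standing out to review", which is the state the group says it wants to reach before changing the tooling further.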
A: I saw your script; it looked, yeah, kind of like mine, when I was comparing some staging projects. There's just lots of sed; it's just amazing.
A: Yeah. So, yeah: thank you so much for actually pushing to get the job to open up PRs automatically. I feel like Tim and I have done many lengthy reviews on my massive PRs, and then I did a bunch of reviews on the PRs you opened up. I have been trying to open up follow-up issues for all of these things, prefix them with the words "audit follow-up", and give them the infra auditing label.
A: So we know what we have to clean up, and I've been going through to try to reconcile these. I personally might be interested in... we'll say it depends on my time... I would be interested in more frequent than six hours, to see me resolve this stuff. But, so, to Tim's point: this is the direction I kind of started heading with managing custom roles on the organization. I'm trying to, even if it's our horrible bash for now...
A: ...I would like to see us start to converge on a pattern of having, like, the spec for these common patterns defined in YAML, and then some way of, like, dumping that to that canonical "this is what it should look like" thing. Because then, I feel like, it might be easier to compare the audit stuff against the YAML stuff. So, I forget... I swear...
A: I thought I saw you mention this somewhere, Tim, but this is why I threw yq into some scripts instead of jq: because YAML can have comments, and I think that being able to use something that is commented, for, like, our source of truth, would be really cool. And so I could see, like, making some PRs to the audit scripts to move everything over to YAML. But I want to get us to, kind of, "there are no changes standing out there that we need to review", before I start doing that.
A: Okay. And, Tim: I'll probably put together some horrible amalgamation of your or my hacky scripts. The k8s.io repo has a hack directory now, so why not?
A: Yeah; on that note: Arnaud and I have been starting to take a look at trying to get tests for this repo up and running. Because, like, we have a couple of specific tests, like for the groups YAML, or for the container image promoter when we do pulls to manifests; but, like, we have a lot of YAML, and we have a lot of bash, in this repo, and it's not tested all the time.
F: Sure. One of the reasons you're seeing a lot of ii folks in here is: Priyanka has asked us to step forward, and to invest our time heavily in helping move things out of the Google-owned infrastructure...
F: ...and not just to the community-owned Google project, but beyond that. And part of that is part of a project she's putting together, for the cloud credits, and how to best steward those. We've done a really good job at pioneering the k8s infra working group: taking the community resources, and finding these ways where heaps of people, who are not at Google, are helping to manage that. And I think there are iterations to be made beyond; so, taking some of the templates of success here, and applying them beyond the Kubernetes community.
F: And that means it's still gonna be 90% of our time, I think, inside of the k8s infra working group; but I want to start putting down those guardrails for a larger-scope thing. And, to that extent, we're probably going to need a TOC sponsor, because that's my understanding of how CNCF project things work. And I just want to make sure: any other interested parties or cloud providers, let me know, on a commitment level.
F: I want to make sure that we are focusing on the right things as we're showing up, particularly for Caleb and I. What I hear the group saying is that the big priority is: please find a way for us not to spend so much money giving away the things that we produce as a community, because it's a large part of the bill.
F: So, that's our first focus there. And the next focus is... this is where I'm not sure: there's a couple of umbrella issues from Aaron, around migrating things from inside Google to the community infrastructure; and, if that's the right next focus for our attention, I want to make sure we are doing the right things, prioritized from this group.
F: For the CNCF infra working group, it's going to be both. It's definitely bolstering the number of... I think part of the filter there, that they're trying to encourage, is that the infra group is there to definitely allow contribution from other cloud vendors to the k8s infra working group, but to do so under the, like...
D: Okay. For the record, I was about to ask: if other people are interested, why don't they just show up? But, actually, I understand the need to sort of formalize what we've done with the credits, from an organization management point of view. So, okay; thanks.
A: Yeah, I'm interested in this, longer term. I feel like there's a lot that I want to resolve, or land, here first. I don't know; I always get trapped in this, like: I would love for what we're doing to be a pattern that others can extend, or, you know, rubber-stamp, or even just use straight up; if only it weren't so bad right now, with just so much bash and stuff.
A: But, you know, if it'll help get more people into the pool, I think that would be great. So, one other comment there: I feel like I saw a comment from Stephen Augustus, who's been trying to sort of stand up a bunch of infrastructure for the Inclusive Naming project, which, I think, is adjacent to the CNCF, or related. So he might have some feedback, or input, or some contributions to provide to that effort; and he also already participates in the Contributor Strategy...
A: ...CNCF SIG, so there's that. As far as... I opened up a bunch of issues to migrate stuff out of Google; that is the mandate of this working group, and, until all of those are done, this working group has to stick around. Or, who knows: maybe one day we'll admit it needs to be a SIG. But I consider those to be important, and I will start pinging on those, like, by next meeting.
A: I don't think this represents that bump, but, somewhere along the way, I did that for all of the CI binaries; so that's all coming from the community now. But I will poke on those. And, generally speaking: if there are issues on our project board, and they're in either of the backlog columns, and they have the 1.21 milestone, I'm still pretending like this is something we, as a group, have committed to accomplish by the time 1.21 goes out the door.
F: We'll look at both of those, between Caleb and I, to put some definite engineering time and hours into the umbrella issues and, obviously, reducing the cost.
A: Thanks so much. I'm gonna move us forward to Linus's agenda item.
A: And we cannot hear you. Oh no; you're not muted, we just can't hear you, if you're using the web client.
D: I mean, there are multiple aspects of what we're doing here. The one half is, like: just get everything out of Google. The other half is: oh my god, make sanity out of the mess that we've now created for ourselves. Even as careful as we have been in moving things over, you know, I'm still staring at a big tangle of bash code right now, trying to figure out what exactly it does.
A: Okay; why don't I just do my best shot, and, Linus, you can, like, ping me in chat if I'm messing this up. So: Linus is the individual responsible for the container image promoter, as most of you probably know, which is how we have k8s.gcr.io today. He worked with an intern a little while ago to augment that, to get automated vulnerability scanning...
A: ...on all of our images, which was really cool. He's looking to host an intern this summer to help remove the Bazel dependency from how the image promoter is built, which sounds cool. And, yes: we've been talking about how we kind of want to get artifact promotion to look a lot like container promotion; and, if we could unify these tools together, that would be amazing. It would probably involve a bit of a rewrite of the image promoter, but that sounds like a really cool thing.
A: I hope I announced that well. Is there anything you want people to provide you? Let us know. Okay; yeah, the main thing is to sync with release engineering, and see, sort of, what their priorities are. I agree with that; so, thank you. I'm gonna use that to just ramble on a bit, tacking on to Tim's thing of, like...
A: There are multiple parts to what we do here. So, making our bash less awful, and getting everything, like, ported over, and then making the management of it less messy and awful, are great. But I would also say that it's partially because it's awful, and maybe partially because it's really heavy infrastructure, that people want it to just work. I feel like the release...
A: ...engineering team has done a good job of really formalizing roles, and documenting stuff, and making other people feel like they are empowered to contribute to this, because some of the release stuff kind of has a routine; it's locked in, and there's regularity to it. I think we're still kind of figuring those roles out here, but we're sort of beginning to converge on some common patterns and stuff.
H: I think I can hear you... yeah; sorry, I just realized that the little... yeah, I just learned how to use Zoom; I haven't used it in a while. Yeah, so, thanks, Aaron, for the overview. I guess you kind of took away my thunder there; but, yeah, there have been some efforts to do some, like, CLI refactoring. The way I see it, there's, like, a new tool called kpromo that is trying to combine Justin's, like, efforts in the image promoter codebase...
H: ...that Stephen Augustus is writing, or working on. There's a pull request to add image functionality there as well; but I just need to take it up with them, because, one, my gut instinct tells me that it seems like it's doing a lot of work, and I don't know what the gain is there; like, I don't know what the need for a rewrite is. But maybe I'm wrong, so I'll leave it there. The other thing is, like... I do recall...
H: I didn't add it in the notes, but there's a big thing about the Artifact Registry thing, and how that's going to impact image promotion, or artifact promotion, and stuff. I don't think it was... I was doing something else while other people were talking, earlier in the meeting. But that needs to be addressed as well, I think; but I don't know what the status...
A: ...is. Like, they had a big scary announcement, before Google Artifact Registry went GA, that said, "we're gonna support Container Registry for six months and then you've gotta move"; and that scary language got removed once they went to GA. So, to the question: Google Container Registry isn't really going anywhere, which is great. Yes.
A: That said, I have noticed that, like... so, Claudiu, who's here, has been working on a lot of getting multi-arch images, and Windows images, for the testing of Kubernetes, pushed into GCR in our community infrastructure, and has bumped into some...
A: ...so, there's, like, a bug between containerd and certain registry implementations. It affects GCR; it affects quay.io; it affected GitHub Container Registry, and a number of other things. Eventually, the bug will get fixed and make it into BuildKit, and then everything should be good. But other registry implementations have, like, worked around this, including Google Artifact Registry; Google Container Registry has not, which could be a sign that, I don't know, maybe, like, it could be on a slower support cycle.
A: But, again: until I feel like there's a really strong, urgent reason to move, I'm not planning on investing effort in that. I will defer to your knowledge and wisdom on that, once you dig into it.
G: Yeah, I guess that's it; we're over time, so I won't say any more.
A: We are. So I'll just say: thank you all for showing up. Something I want to talk about next time, or maybe I'll chat with some folks offline, is: as I look at us resolving the audit stuff, one thing I would really be interested in doing is removing humans from the path of actually running some of these scripts; and so, trying to understand, you know, what would it take? Where does our comfort level need to be, and what can we do to get there, to, like...
A: ...have some of the common patterns, like "add a staging project", be something that's automatically pushed? Like, I feel like we've reached that pattern for "add a Google group", or "update group membership", or "add images to the k8s.gcr.io registry"; and I would love to figure out how we can do more of that. But I will save that for next time.
A: Okay; it was really great to see you all. I hope you have a happy Wednesday. I hope the run-up to code freeze is not too crazy for some of you, and performance reviews are not too grueling for others of you, and that you get some sleep.