From YouTube: Kubernetes WG K8s Infra - 2019-09-18
Description
GMT20190918 153402 k8s infra 1920x1050
D
There's like two or three different ways you can actually hook it up, whether it's through GitHub webhooks or GitHub Actions, or this GCB app. So we're looking at that, just to see if we can cut the need for Prow out, and if we can, how does that affect the user experience here with respect to logs and GitHub visibility and those sorts of things? All right? No.
D
I don't think Justin's here; I haven't seen him. He was in the building earlier this week, but I haven't seen him since, and I don't see him on the list here. So I didn't look at his billing report, but I can report back on our billing, which maps. Actually, he sent us a separate email saying that his billing report picked up the new usage that I was playing with, so that's a good thing. I'm looking at a whopping total for the last... sorry, I need to update the dates.
D
Sorry, I'm looking at the cloud console dashboard, not the report; I don't have the report in front of me. I was hoping Justin would be here. I can find that, though, and cross-check it, or I can just cross-check with him offline, but the last five or so times we've cross-checked it's been within pennies, once we figured out why my view was wrong. So I'm pretty confident in it at this point, and looking just real quickly at the numbers, there's nothing that is surprising to me.
D
Great. Oh, sorry, there's an interesting line item: $7, download, worldwide destination. So I wonder if that's the Cluster API stuff. It's hard to tell; it doesn't break it down further yet. So we now at least have one datum where we can start to dig into it and see if we can break it down further.
B
The big one was Terraform and getting it spun up in production. I can report that we've made significant progress and we have what we believe to be the final configuration checked into the repo. And are you comfortable with pushing the button to get that up and running in production? Yeah.
D
So I was all set to do that. I was really hoping that I'd be able to say I ported over the publisher bot, but it looks like Google has changed our internal auth mechanism to force me to re-auth often, and that has broken Terraform. And of course I got opted into it two days ago, so I've filed myself an opt-out ticket, and hopefully I'll be able to do that later today.
D
Okay, great. But yes, we're locked and loaded: everything's merged, I don't have any pending PRs, and I don't have anything that I want to change about it. I want to turn it on, I want to port the workloads that are currently running in the other clusters into the new cluster, and I want to burn down the cluster turn-up project.
B
Great. Any questions or concerns from anyone? Kind of a last call.
E
Yeah, so this will happen, I think, in the next weeks, certainly not months. I think it should be done probably around the early October time frame, because I don't really know all the hurdles, because there are thousands of images in the old prod registry. I just did a test on a subset of those images, and it turns out that the promoter cannot currently handle Docker schema 1 images, because I've been using the go-containerregistry library.
E
So that will happen this week, but the gist of it is that we want to copy all the images from the existing one, which I call the legacy prod google-containers, to the new one that Tim set up a while ago, k8s-artifacts-prod. That's the one that we've been promoting the new staging subprojects' images to. So I imagine that at the end of it we'll basically have a single manifest, like yet another folder in that directory, next to the others.
E
But that's the way I see it happening, because I think it would be nice to at least have a record of what exists there, like, officially, going forward, just so that we know, okay, these images were copied in from the old one; they are technically legacy. So that's going to happen soon. I'm currently working on the disaster recovery scripts.
E
Yeah, so just to recap, there are two things. What is disaster recovery? That is basically a backup solution, you know, for the new prod registry. So after it's all done, we have all these images migrated over, and there are new ones just coming in, and we just wanted to have some process that backs it up, because we don't have that currently for google-containers. So we're trying to make it less terrifying, less...
B
So, just taking a step back to the legacy piece, just for my own edification: the process that we're looking at doing is taking a copy, basically using the promoter as it is, to take a copy of the things that are currently in google-containers, putting it into a legacy-prod bucket that will become like a staging bucket, and then using the actual promoter to promote those into our actual prod, like our new production bucket, ahead of cutting over DNS to the new registry.
C
Given there's so much usage of these URLs, effectively it could be argued they represent an API. So if you don't do what Linus is saying, you're still going to have to duplicate and go through a deprecation, I would think, for the old ones, because so many things reference those. So it's just saving...
D
So leaving them in place means that if there's anybody who is using the sort of naked URL, not the vanity URL, then they won't break. We can leave that there for a year or whatever, right? We expect the traffic to almost immediately die off, because most traffic is going through the vanity URL, and so I think it's simplest to just leave it there and then schedule an action item for a year from now to decommission that bucket entirely, because that is now the staging bucket.
B
Okay, so that piece of information, I think, kind of clarifies it for me. So let me try to explain what's happening again. We have all of our current stuff in google-containers, and k8s.gcr.io is a vanity URL that points directly at google-containers, so whatever's in google-containers gets served by k8s.gcr.io.
B
Well, what we're going to do now is configure all the manifests, like the specific SHAs of the images that are currently tagged in google-containers, and we're going to use the promoter script to basically promote them from google-containers, so there's a copy of that particular SHA in the GCR registry for k8s-artifacts-prod, which is the new prod registry.
B
Once we have all of that stuff promoted, then we can start testing that particular registry, make sure it works the way we want, and then schedule to cut over the vanity URL to the new one. That will then allow promotion not only from google-containers, but if we want to move the process for any of those google-containers images, like, you know, cross over to a proper staging bucket, we would just then change the hash in the promoter script and it would just consume it as a different source.
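For context, the promoter manifests being discussed here are roughly of the following shape. This is only an illustrative sketch: the registry names, image name, digest, and tags are placeholders rather than real entries.

```yaml
# Sketch of a promoter manifest: copy a specific digest from a source
# registry into the prod registry under the tags listed for it.
registries:
- name: gcr.io/k8s-staging-example            # placeholder source registry
  src: true
- name: us.gcr.io/k8s-artifacts-prod/example  # placeholder destination (sub-project folder)
  service-account: <promoter-service-account> # placeholder
images:
- name: example-image
  dmap:
    "sha256:<digest>": ["v1.0.0"]              # digest-to-tags mapping
```

Changing the source registry (for example, from the legacy bucket to a proper staging bucket) is what is meant above by having the promoter consume a different source.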
D
I'm saying we would actually change the ultimate URL so that it would include a subdirectory in the middle. Is that a problem? I would hope that it would make everything cleaner and simpler in the long term, while still leaving the old in place, but it would be a user-visible change. Like an evolution: if version one was in the root directory, version two would be in a subdirectory.
D
The restriction, and stop me if I'm stepping on you, was mostly there because there's no good way to detect conflicts between promoter manifests without a human actually saying, oh, don't use that name, because somebody else is using that name. Whereas if we put everything in a subdirectory, we can say, well, their subdirectory corresponds to their staging, so therefore you're by definition clean.
B
Can I make a different suggestion? I don't know how much work this would be to change, but if we said that every staging repo was a directory, that's step one, but then have a flag in the promoter that says, oh, this should be root. You know, like there should be a root-image flag that then drops it to the root, as opposed to the subdirectory, or both, so that it goes to both the directory and the root.
E
This is getting a little bit too into the details of the implementation, but basically, if you still want to promote to the root directory, as we're calling it, you can still do that; you just need a separate manifest for it. So there's nothing stopping us from creating a new manifest that says this is like a legacy or root-only, you know, image manifest, and any image in here will land in the root. That's already doable today.
E
It's just that we don't have that model, or we don't have that practice, set in place, because currently we have, you know, subfolders named after each staging sub-project, and under that is the manifest YAML. So that's been the pattern so far. What you're basically saying is, okay, we'll introduce another manifest, yeah, that just says the destination is at the root. It's doable, but yeah, I mean, I hadn't really discussed this idea. Maybe.
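To make that suggestion concrete, such a root-only manifest might differ from the per-subproject sketch above only in its destination registry. Again, this is a hypothetical example, not an existing file:

```yaml
# Sketch of a "root-only" manifest: the destination has no sub-project
# suffix, so promoted images land at the top level of the prod registry.
registries:
- name: gcr.io/k8s-staging-legacy             # placeholder source registry
  src: true
- name: us.gcr.io/k8s-artifacts-prod          # destination is the registry root
  service-account: <promoter-service-account> # placeholder
images:
- name: example-legacy-image
  dmap:
    "sha256:<digest>": ["v1.2.3"]
```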
A
I have one thing: I do have a pending PR that I need to update for the email support. I just wanted to give a little bit of history for the PR itself. So, for the existing G Suite mailing lists, right, I wanted to preserve the metadata exactly as is; that was number one. Number two was that some of them...
A
...you know, are open to subscription, so I don't want the behavior of those to change. So I added code to make sure that if they're open to subscription, and we have documentation saying please subscribe to this Google group, then I don't want to mess with the members of that Google group using the automation that we have. And we also have to keep the existing Google groups that we are using for ACLs, you know; they have to be backed by the groups YAML. So there is this conflicting stuff.
B
Just to go... I was just going to mention that, yeah, that had been on my review backlog, and then I got swamped with things that were release-pending. So yeah, what I could do as well, and I should, is go through it at a very high level and be like, hey, these are the design-blocking things, or things that we need to rework or change, if there are any, before you go and take the time to go and update it.
D
So I was confused whether the intention was to run this periodically to keep the GitHub side in sync, or just to know that it's going to be incorrect because subscriptions are not handled through GitHub, or whether to sort of go hybrid and say all owners have to be listed in GitHub but other members can be added manually, or something else. All right. So...
A
We should still be able to update ACLs by making changes to the groups YAML, and also, if we decide, you know, if the mailing list metadata needs to change, whether it is subscription-based or ACL-based, either way we should be able to do it using the automation. So that was the intention there.
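As a rough illustration of the kind of per-group metadata this automation manages (the field names and values here are hypothetical, since the PR defining the schema was still under review at the time):

```yaml
# Hypothetical entry in the groups YAML managed by the automation.
groups:
  - email-id: example-list@kubernetes.io   # placeholder list address
    name: example-list
    description: Illustrative mailing list entry
    settings:
      WhoCanJoin: CAN_REQUEST_TO_JOIN      # open-to-subscription lists keep their behavior
      ReconcileMembers: "false"            # members managed by subscription, not by this file
    owners:
      - owner@example.com
    members:
      - member@example.com
```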
A
Right now... well, the other use case that I was thinking about was, like, if we want to see the growth of, you know, how popular one of the mailing lists is, then we can run it every few months to see, you know, if there's more people or less people, that kind of stuff. But you know, that's not like a primary thing right now. Yeah.
A
Correct. For the subscription-based lists, I will prune before I update the groups YAML. That's what I was doing anyway. Thanks.
A
I had to go figure out why I'm not getting it through the API. I don't know if I don't have enough permissions or something is off, so I have to log in through the UI and see whether I can see the group using the UI, and see, you know, if it's just a question of the API that I'm not able to see it through. Okay.
D
Cool, let me know if there's more I can do to help there. Clearly the group does exist, because I'm allowed to turn clusters up with it. In fact, you made me go and try to turn up a cluster listing the security group as a known-bad name, and it did come back and say, hey, this group doesn't exist, but it allowed the gke-security-groups one. So clearly the group does exist and GKE is validating against it, but I can't see it in whatever the tool is producing. So... oh, look.
D
Okay, so actions for next time: I intend to have the prod cluster up and the publisher bot switched over to it. If somebody's familiar with the publisher bot, I would love a volunteer to help, or I can just try to figure it out from the YAML and whatnot. Then we can identify a candidate list of what the next few targets are going to be, moving from the legacy utility cluster into this new cluster, converting over the ingresses and IPs and DNS and those sorts of things, and then we are on our way.
D
I mean, I can do the publisher bot conversion, but the big goal of being able to move this all into community hands is to get small groups of people who volunteer to own individual bits of infrastructure. So if somebody's volunteering for it, I'd be happy to shoulder-surf while they do it instead of me, but I'll do it if I have to. (I can help, if you want.) Awesome, thank you, Bart. I'll ping you as soon as that cluster's up and ready, and we can work out a time.
B
That did it for the stuff that was on the agenda. I looked at our milestone, and much of the current "ready to migrate" milestone, specifically, is "we need a cluster," so I think it'll be way more useful to go over that and spend some time churning through that milestone next meeting, once we have a cluster up and have validated at least one workload in it, because then we can triage the work of migrating things into it. Any concerns or objections to just waiting for next meeting for that one?
D
If we think that this is really a bad thing for usability... sorry, let me add that the decorated vanity URL would be k8s.gcr.io/build-image/build-image:tag (and thank you for driving the screen), which I don't find particularly egregious. It is different from what people are doing today.
B
Discoverability, because you can't just pull down and say, hey, dump me all the tags for this image, because the image is actually moving. So the image itself, when you go to, like, k8s.gcr.io/kube-cross, it will just stop on whatever today's tag is and never update with a new tag, because it's actually a different image location as far as Docker v2 is concerned. Yes.
D
They would have to be updated to get the new name with the subdirectory built into it. If we want to special-case a couple of things, I guess I'm okay with that; I don't want to special-case all of them, because that gets to be a mess. Right now, when you look at the listing of it, there are thousands of images there and nobody can find anything, so having them at least lumped by which staging they came from gives you a clue as to who owns it and who's responsible for it.
D
This is a... for that example, this is a bad example. If you pop up two directories and look at the Cluster API one, the k8s-staging-cluster-api manifest, oh yeah, you can see in there a real example where they're actually using it.
E
This one? Yes, it's green, right? So, well, I guess, yeah, thinking about it... well, the duplicate check for image names, I guess... I think I do that already, though. It checks, or it should check, for consistency. So if you have two items in the images array that both have name equals foo, it should just error out and say no.
B
Right, and that would make sense, that we would need to do those extra checks to make sure that, within the images array, each object in there has a unique name as far as the images array is concerned, and that in the tag array you don't have two different digests that are pointing at the same tag. I'm...
E
...pretty sure I do those checks. I mean, I have to look at that source code again, but yeah, I tried to add as many checks as I could when I first wrote the implementation. But just to go back, or loop back, to what Tim was saying: the whole idea of having it broken up... like, in the end we could have had one manifest file with all the images in the world, but, you know, we just...
E
...decided that wasn't a good idea, because then everybody has access to write to everybody else's image, potentially. So we were like, well, why not just give each sub-project their own manifest, and you can do whatever you want there. You can give access to your staging repo to a select few people that you, you know, trust. Now you don't have this issue of, like, okay, I'm going to create an image in a PR and I'm going to write an image that I don't really maintain, or something, right?
E
If you have, like, one root directory with 30 images that just sit there, and I make a PR to two of them, how do you know that I'm not adding a new image for, you know, foo and bar and all these other ones in it? So a human would have to check. We do that, but then it's one less piece of automation, I guess, that we could automate if we just use subfolders for everything. So that was the main... so then, yeah.
D
Or we could just special-case it. Like, if it was just important for the build stuff, right, we could have either a separate staging or a manifest YAML, if the promoter tool knew to look for it, that had no subdirectory specified at the end of the registry name, right, and just call it a one-off special case. I will push back hard against doing that for more than a single-digit number of special cases.
B
The thing is, when we moved from google-containers to the vanity URL, making that move, that's the last time we did a big move of a whole bunch of images and stuff. When we did that move, we were maintaining the entire back history of tags and such, so you could interchangeably use...
B
...google-containers or k8s.gcr.io, and you'd still have the entire history, going forward and going backwards, with both. I'm just resistant to the idea of locking us in and telling anybody who has a current image in google-containers: you are no longer allowed to update this image, you need to move its location, you need to change your code to look at a completely different location, and that location will not have your back history.
E
I mean, for that last part, if you really want, you could say, oh well, you know, let's say for image foo, you could just create a new staging repo and say, okay, well, we're going to backfill all the old existing foo images there. So if you look at the new location, at least you'll have the whole history moving forward. That hasn't been brought up as an idea, I don't... yeah.
D
I mean, if you want to see the list, go to gcr.io/google-containers (google_containers) and you can see the list. We could go through every single image and figure out which staging it should go to, and then, you know, move things around so that it appears as if it came from that staging, and then we retain the back history, but the name is different.
B
Which of those are still active, right? Because things that are currently inactive, by definition, I don't care about, because as long as we maintain the back history, which we are going to maintain, that's no problem. It's things that we expect are actively used, and that we expect will get updated at some point in the future; that's where it gets tricky. So...
D
Are you more concerned... so there's really two avenues here. There's one, which is the non-tag portion of the name: the first half, or first 90%, changes if you have the subdirectory. And then there's the history of all the tags given that same prefix. If we were to promote all of these into the individual subdirectories, we still change the prefix, but you at least have a history of tags, versus not wanting to change the prefix at all for new tags.
B
It does, as long as the prefix gets updated, right? So it depends on where you're coming from, on the stance on the human factor here: if you put in the old prefix, you're still going to get the images up to a point, and then it will just be stale. That's...
C
One thing I'd throw out there is that until about a year ago, most of these were made fairly sloppily, and we've been improving that a lot over the last year. So I feel like there's already been a transition; there's some sort of inflection point there, prior to which any of that history didn't really matter much from a tag...
C
I can tell you, from a deep inspection this summer: for most of our artifacts, I couldn't really tell you what was in them very closely, aside from a top level. Like, if it's Kubernetes 1.13.11, I know what that is based on GitHub, but the peripheral stuff in the container can be hard, and then some of them weren't even built so closely from a release that you could map that, you know, it came from around this time period, from this repo, so it would have been one of these commits, ish. So...
C
The history there, like, I really appreciate it, and, I mean, I was the one who brought up, hey, what about a deprecation policy, even just on the URLs. But for some portion of this, if you go very far back at all, even on the order of three months for some of these, what's there for history doesn't give the user much anyway.
D
Let me argue Christoph's side of it, though. I imagine that there are a lot of repos out there that build up an image name by concatenating strings, and it would be clumsy for them to say: if you're asking for a tag less than X, where X is an arbitrary string and not necessarily numerically comparable, then use this prefix, otherwise use that prefix. I guess that would be clunky.
B
It does, and I think I'm kind of coming around to it; I think that that is a fair compromise. It's still not perfect, but I think that may strike the right balance between, you know, our investment into this and how much work we're willing to do to maintain both forward and back and all of that kind of stuff, as well as, from an image maintainer perspective: okay, here's a path if you want all your old tags and one source-of-truth location.
D
So we can publish, we can put out, a how-to for folks like the CoreDNS team, who have their own staging but also have a legacy image that's just in the root, or the kube-build folks, or whatever, and say: here's how you take an old image from the old repository, or even from the new root, I guess, and also tag it under your subdirectory. And it's actually a really easy procedure, right? You really just need to copy the SHA into your own manifest and give it the appropriate tag.
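As a sketch of what that how-to might boil down to (the image name, digests, and tags below are placeholders, not real CoreDNS entries), the legacy image's digests would simply be added to the subproject's existing manifest along with their historical tags:

```yaml
# Hypothetical addition to a subproject manifest: re-tag existing legacy
# digests so they also appear under the new subdirectory.
images:
- name: coredns                                 # placeholder image name
  dmap:
    "sha256:<digest-of-old-1.6.2>": ["1.6.2"]   # copied from the legacy registry
    "sha256:<digest-of-old-1.6.3>": ["1.6.3"]
```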
B
I wonder if there's a way to... like, can we have two registries in the registries bit, but both have source true, and have one be the staging bucket and one be google-containers? Does that work? Or is that another way for us to tag a specific SHA, like, hey, this is... if you're coming from google-containers...
B
It may make that protection a little bit more difficult if we have, like, you know... because they would maybe have, like, a legacy manifest that would then never change, because that would just be a one-time thing: your tags go in here, this is the stuff that's pulling from google-containers. Exactly.
D
But at that point, I think we can rely on humans to just do the right thing, because it's broken down to a small enough scope: like, hey, Cluster API people, or kube-build people, or CoreDNS people, just don't screw this up. The legacy one is sort of a one-time thing for keeping history, and honestly, after you do that with the first commit of the legacy manifest, it should probably never, ever change, right?