From YouTube: Kubernetes WG K8s Infra - 2021-05-26
A
Hi everybody, today is Wednesday, May 26th, and you are at the Kubernetes WG K8s Infra bi-weekly meeting. I am your host, Aaron of SIG Beard, also known as Aaron Crickenberger, also known as spiffxp at all the places. We're all going to adhere to the Kubernetes code of conduct during this meeting, which means we're going to be our very best selves, and as this is a publicly recorded meeting, you will be able to see this posted on YouTube later.
A
Okay, with that, I'm going to go through our recurring topics first and ask if there are any new members or attendees who would like to introduce themselves, so we can get to know you.
B
So, hello all. I am, I'm sorry, new to this. My name is Tyler and I am interning at Google under Linus, so Linus is my host. Basically I am looking to probably do a few projects, but my first on the list is removing Bazel from the image promoter, the container image promoter. We'll see how long that takes, and from there on I'm just looking to help out and be a part of this stuff.
D
Also new here: Jimmy Casino. I work for Synopsys, specifically the Black Duck software part of Synopsys. I'm just looking for a way to give back to the community, and I saw Jim's tweet about this group, so I'm here to help. We run all on Google Cloud, so hopefully I can help in some ways with what I do today already.
A
Yeah, welcome Jim, really appreciate you showing up. So I'm going to just move to our next recurring thing, which is to go take a look at our billing report and see if anything looks wildly surprising in it. Since I don't have Tim here, we're not going to compare it against the actual console.
A
But at this point I basically trust these numbers just as much. So over the past 28 days we spent, you know, about a hundred and change dollars, and as usual I don't have a ton to say here. Typically during this part of the meeting we might have a discussion if things are suddenly going up and to the right very unexpectedly.
A
But this to me looks pretty periodic. People are busy getting Kubernetes artifacts from us and using our CI to contribute to Kubernetes.
A
All right, cool, on that. I unfortunately didn't have time to go through and do action item scrubbing of the last meeting. Was there anything from last meeting that anybody wanted to follow up on today, or shall we just go to open discussion?
A
Okay, so the first item here is about migrating to managed certificates for k8s.io. Is somebody here who put that topic on? Okay, I thought that might be the case, so I'll just show it. So Tim Hockin and Arnaud Meukam have sort of reacted to our, yet again, we've gotten big scary emails from Let's Encrypt about some of our certificates expiring.
A
Both IPv6 and IPv4 addresses for all of our ingresses. So Tim set up an experiment to use GKE's managed certificates CRD, basically, and that seems to have worked. At some point I noticed he and Arnaud were just migrating things whole hog, so I've updated this issue to indicate which apps running on the aaa cluster, where we run most of our public stuff, have been migrated over to use these and which apps remain.
A
I
believe
the
intent
once
we're
done
with
this
is
to
remove
cert
manager
from
the
cluster
there's
a
part
of
me,
that's
a
little
sad
about
not
using
like
a
fully
open
source
thing
and
instead
leaning
on
sort
of
a
proprietary
hosted
solution,
but
at
the
same
time,
if
we
don't
ever
have
to
like
jump
on
certificate
renewal
stuff
again,
I'm
totally
fine
with
that
this,
and
then
I
think
that
maybe
just
since
ricardo
is
here,
I
wasn't
sure
if
this
means
that
the
work
that
ricardo
you
have
been
doing
on
like
monitoring
and
providing
potential
alerting
on
the
the
freshness
or
staleness
of
our
certificates,
who's
going
to
be
operated
by
this
or
if
it
was
still
relevant.
F
Yeah, no, it's not relevant anymore. Actually I have three PRs: two of them are about cert-manager and one is related to actually using Google Cloud Monitoring for our alerting. I guess that at least the Terraform manifest for monitoring is interesting, as it can be used to monitor any certificate, not only the ones from the cluster. That was something I was discussing with Tim in the past in our channel, yeah.
F
Yeah, because it can use the uptime check or something like that from GCP to monitor the certificate expiration. So at least if something went wrong, even with the managed one from Google, we are going to get alerted as well. I just need to review; I am seeing that Arnaud left some comments. I'm just trying to fix my schedule again.
A
Okay, that's totally fine, yeah. Looking at this now, I remember I had a nit there: at some point here, if we think we're going to start using Terraform to provision more and more things, I want us to have a chat about how we're going to organize everything, because right now things have kind of organically grown, and I've been okay with the fact that most of the Terraform is constrained to one directory that manages our clusters.
A
But okay, thanks for that update, I appreciate it. I will close the other ones as we are moving to the managed certificates. Okay, that's fine! I've pinged them all from that issue, and I figured I'm going to treat this issue as the umbrella issue; I'll go through and close out stuff that's linked from it once we're all wrapped up. Yeah, cool, that's cool too.
F
If we really want to, we could add the keys that can manage our DNS entries, because actually what I suggested was: hey, why don't we just issue a wildcard and use it for everything? And that's the same case, right, because you can issue a wildcard with DNS. But the problem is that we don't trust too much having these keys inside our production cluster. So this was the main discussion; I guess that can be revisited later, but for now it's mostly because of this.
A
Yeah, thanks. Yeah, I'd say if you want to dig more into the details of it, Ricardo definitely knows a fair amount, and then Tim and Arnaud and others are also good people to talk to on Slack. All right, so I stopped sharing my screen because I think we're basically good to go there. Next up, I wanted to hand over to Claudiu to talk about updates on making sure that the registry used as part of Kubernetes CI, our k8s-infra registries, is basically community owned instead of Google owned.
G
Yeah, it's been something that I've been working on, on and off again. So for the first item there, for the Kubernetes e2e test images, there was only one left remaining, which is the cuda-vector-add image 1.0. Basically, for that I had to go back in git history to fetch whatever that image was beforehand, and I've just added it as it was back then, so it's pretty straightforward.
G
I think I also mentioned from which git SHA I got the image, and basically we're using the same mechanisms as the other httpd and nginx images to basically have the code, the new image, and the good old image. So that's the only image left for that registry.
G
I've already added a job for it. We only have to wait for this pull request to merge; basically, after that it's going to be promoted, and then we simply replace it in the Kubernetes test suite's image manifest.
G
But yeah, as for the other one, that's a bit more interesting. Apparently this got closed; I think it's because of the main branch rename, it might be. I didn't close it myself, but it might have been. It also says it's closed, it also says it's merged, but that's obviously not true anyway.
G
A
So I can recap real quick. Let's see, where did I put it? Here, yeah. So basically right now, with the way that we have k8s.gcr.io set up, the service that backs it is incapable of simultaneously serving some images publicly and some images privately.
A
It's sort of a requirements-gathering thing for the ii team: does it even make sense to try and make one single registry that supports both public and private images, or should we consider having two cross-cloud registries, one that's for public and one that's for private? Or do we do something like just take the hit and use a private registry and leave it hosted in GCR, so that people who are running conformance tests, which use whatever the hard-coded key is in Kubernetes source code to verify that, yes, their Kubernetes clusters can pull from a private registry.
A
They'll just hit Google; they'll hit GCR. I don't have a strong opinion on it right now, but I feel like it's something that needs to be untangled or looked at. Does that make sense?
G
A
But yeah, I think that's what I was getting at: we could make a project that just makes this bucket private, and then just this registry is private, and that's great. But then that means, like, not...
A
I
I
want
us
to
get
to
a
world
where,
like
all
of
the
images
that
are
used
as
part
of
kubernetes
ci,
are
in
the
cross
cloud
registry
that
you
know
we're
trying
to
to
work
towards
so
that
if
somebody
needs
to
get
all
the
artifacts,
they
need
to
effectively
stand
up
a
kubernetes
cluster
and
then
make
sure
it
passes
conformance.
A
They
can
do
that
using
like
a
single
registry,
but
since
some
of
the
conformance
tests
use
a
private
registry,
maybe
that's
not
possible,
and
so
then
it's
like,
if,
if
we
just
have
okay,
we'll
just
have
one
private
bucket
on
private
registry
like
should
we
talk
about
making
that
private
registry
also
a
cross-cloud
thing
that
the
cncf
hosts
and
all
of
the
different
mirrors
and
cloud
providers
host?
A
That,
or
do
we
just
say
we'll
just
leave
it
here
in
our
our
google
organization
right
and
I'm
fine
just
using
the
the
one-off
project
approach
for
now.
If
we
do
that
doesn't
seem
like
it's
a
staging
project,
because
it's
not
so
much
temporary.
E
Trying to coordinate the multi-cloud solution, for example for the Amazon registry and the Microsoft registry and other ones, to have the same authentication method for the same threads, I think would add an undue burden to the community. As far as running our conformance tests, yeah, it's probably okay to have this one-off, to have a bucket that is private, and we have the way.
E
What might make sense is to have one of the registries and test the different ways; it's a little bit of testing the cloud providers, but making sure that Kubernetes can pull with authentication in three different ways, maybe, and just making sure that the three different providers that we communicate with use the most popular ways of pulling private images, because it's not just authentication, it's the ways of authenticating.
A
Yeah, anyway, it's just been low on my priority list to come back and just say, well, we're doing this. So if somebody else has a really strong opinion on what we should do, you're welcome to post it on there and just drive us forward; I don't think I'm going to object too much, yeah. I don't know, the only other headspace I was in was from an air-gapping perspective.
A
G
Yeah, pretty much the only one that's a bit more is the one that also adds the Windows nanoserver image, and also the alpine one, I guess that could be removed. The idea is that if we do not have this private registry, we basically have to maintain our own registry just for the Windows CIs.
A
All right, I'm commenting on these just to kind of bump them up in my history. I've been behind on sweeping reviews, so I'll try to take a look at these by the end of the week.
A
Awesome, thank you for continuing to push on this, yeah. At some point I do plan on coming back and trying to get us to push to close out all this stuff before we start getting anywhere near code freeze for v1.22.
A
Okay, I'll stop sharing, and I'm going to hand off to Caleb now. Caleb, since you're running a demo, I'm guessing you might want the ability to share your screen, is that correct?
H
Normally I might, but not today, and I'll show you why. Okay, if you all wouldn't mind jumping on that demo.ii sandbox, we'll all bring up the presentation, and just don't touch it and you'll see why.
H
Very magic. And can I just get a round of thumbs up? Oh, thank you for sharing as well, that's also good. So I'll start the presentation now, if everyone is all good with that. Everyone got the page? Cool, sweet, all right. Yes, I've done some stuff on registry.k8s.io, preparing a few implementations.
H
So we have two candidate implementations right now: we've got one using Envoy and one using artifact-server. So yeah, let's start off. These are my ingresses for today. You're currently on the demo; artifact-server is where artifact-server will be; there's also a distribution instance; and then there's Envoy, and we won't worry about the last one.
H
But
these
are
the
urls
we'll
be
looking
at.
Oh,
that
didn't
display
at
all.
If
you'd
like
to
jump
into
the
teammate
session
in
the
back
end,
please
copy
and
paste
this
link
in
the
terminal,
and
you
will
join
to
be
able
to
see
what
I
can
also
see
without
the
web
and
it'll
be
hard
to
see.
Yeah.
That's
a
bit
annoying.
H
I think we have three people joined at the moment. I'll also put this in the channel; I'll just continue on from the slides. Here's the SSH one. Cool, let's go through and take a look at the rest of the presentation. So, ooh, that's fun, Envoy. So let's talk about the Envoy implementation.
H
So here is some listener discovery service. This is some basic bootstrap; it's a partial configuration. This allows us to do things like this: here we have some blue code that handles the rewrite. Of the first three lines you see here, the first one is the default host, the second one is the secondary host, and then the third line is the IP address that we're going to change the host based on, on every request.
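The host-rewrite rule described here (a default host, a secondary host, and a source address that flips the choice) can be sketched roughly as follows. This is only an illustration of the decision logic, not the demo's actual Envoy config; the hosts and the network below are made-up placeholders.

```python
import ipaddress

# Placeholder values, not the real demo configuration.
DEFAULT_HOST = "k8s.gcr.io"
SECONDARY_HOST = "distribution.internal"
SPECIAL_NETWORK = ipaddress.ip_network("10.0.0.0/8")

def rewrite_host(source_ip: str) -> str:
    """Pick the backend host to rewrite a request to, based on its source IP."""
    if ipaddress.ip_address(source_ip) in SPECIAL_NETWORK:
        return SECONDARY_HOST
    return DEFAULT_HOST
```

In the real setup this check happens on every request inside Envoy's listener filter chain rather than in application code.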
H
I realize I'm going through this quickly, and if anybody has any questions I've got the chat open and I can answer them. Thank you, Ricardo; this is a fun demo to put together. Yes, so what you can also do is you can take this request, and we can take a look at what the response will be. So if you would like to copy this, you're welcome to; I'll give you maybe 10 seconds.
H
There we go, there's the call, come on. So what I see is, oh, it's kind of cut off on your screen there, Aaron. So in my environment, this pairing environment that you've joined with a tmate session, we get distribution, and then, if you're anywhere else, you will get the k8s.gcr.io URL redirected.
H
So that's what you see there. I'm presuming it is giving you the second one, for everyone; if it's giving you the first one I'm very scared, but I'm pretty sure it will give you the second one, as per the logic. Okay, you can even pull a container image; go ahead and pull pause, if you like. So yeah, it just does the standard redirect thing; it works as you would expect. It won't reduce cost today, because it's all hosted inside the ii sandbox.
H
That's
right,
so
I've
pasted
the
link
for
docker
pull
in
the
chat,
so
feel
free
to
run
that
and
I'll
move
on
to
the
next
slide.
If
you'd
like
to
learn
more
about
the
implementation,
please
check
out
this
blog
post.
H
This
isn't
a
new
one,
but
this
goes
through
all
of
the
details
on
the
implementation,
a
bit
more
than
I'm
going
through
right
now,
so
yep,
that's
where
you
can
find
that
let's
talk
about
artifact
server,
I
like
it.
Let's,
let's
take
a
look
at
it,
who
came
up
with
artifacts
server.
I
am
glad
you
asked
that's
our
friend
justin,
who
is
also
on
the
call.
H
So I've currently got a PR, which I'll talk about later on after I've gone through discussing the implementation. So yeah, I've got a proof-of-concept config. We're going to actually start at the third rule: what it says is, if there are no conditions, we're going to default to falling back on k8s.gcr.io.
H
Otherwise, if the path is /kops, we're going to take you to the kOps bucket. This last one is especially important for handling the artifacts, and not just the way that registries work. So that is the configuration, and then here's the logic. This is not thorough logic, it works for the demo: we're selecting a backend.
H
So,
on
every
request,
we
don't
have
a
back
end
decided
and
so
we're
going
to
pick
the
first
one
and
then
we're
going
to
pick
the
last
header
that
matches
if
there
is
one
that
matches
and
we're
going
to
pick
the
last
backhand
that
matches
if
there's
one
that
matches.
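The selection order just described, default to the first backend, then let the last matching header rule win, then the last matching path rule, can be sketched like this. The backend names and rules below are illustrative placeholders, not the proof-of-concept's real config.

```python
# Made-up backend table, in configuration order: a default, a header rule,
# and a path rule. Later matches override earlier ones.
BACKENDS = [
    {"name": "k8s.gcr.io"},                                     # default
    {"name": "distribution", "header": ("X-Demo", "pairing")},  # header rule
    {"name": "kops-bucket", "path_prefix": "/kops"},            # path rule
]

def select_backend(headers: dict, path: str) -> str:
    """Start with the first backend, then apply header rules, then path rules."""
    chosen = BACKENDS[0]["name"]
    for backend in BACKENDS:
        rule = backend.get("header")
        if rule and headers.get(rule[0]) == rule[1]:
            chosen = backend["name"]
    for backend in BACKENDS:
        prefix = backend.get("path_prefix")
        if prefix and path.startswith(prefix):
            chosen = backend["name"]
    return chosen
```

With this ordering, a request with no matching rules falls back to the default, and a path match takes precedence over a header match.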
So if you were to bring this up using this set of commands, then that might work. Yeah, it works for the demo, yeah. I was saying, if you...
H
...were to hit against this, it will go in the order of the configuration: we'll go k8s.gcr.io, then, if we match the header, distribution, and then the path /kops. So that is the implementation, and I have brought up artifact-server. I'm currently in the tmate session; I believe Bono has dropped a link to that, the web link. If you wouldn't mind connecting as well, Aaron, I...
H
So what you can do is curl against it using the following command in the presentation that I've got up right now, which I'll also drop in the chat. For you it will give you k8s.gcr.io, but for me, because I ran this inside of this pairing environment, it's given me distribution.
H
So one person hit it; we have someone who has hit it, so that's good. Pulling from somewhere, yeah, it's resolving things. That's cool.
H
That's pretty cool. More details on pause later today, yeah, the foreshadowing. So that's cool: we've just made a request through artifact-server, redirecting to a particular address which is specified in the configuration. And we can also, yeah, this didn't... I changed one thing a few moments ago and then I don't know what happened with my configuration, but by hitting /kops it's meant to...
H
If I run it manually, this artifact-server won't have kOps; artifact-server is only going to have... I know what's happening, why that's giving me a 404: because it's first matching on the IP address. So if I run this in my pairing environment it's not going to work, but if you run this command remotely, which is currently on the presentation and which I will also drop in the channel, it will give you a thing saying access denied. That's a response from Amazon; that means it worked.
H
...thing, for once. What's that? Just something on the channel, I'm just kidding. That's the response, yeah. So, okay, that's interesting!
H
I appreciate your words, Justin, yeah. I would like to take a look at the error that Aaron has gotten later on; I'll just save that for later, because that is really confusing to me. Moving on in the presentation: I have a PR available, and this is the link to the PR. If you would all like to jump on it at once, that would be wonderful. I'll also copy a link into my section of the document.
H
So this is still marked do-not-merge, so keep that in mind. I would appreciate any review, just to say if my implementation is good or not, but the main thing that I would like to know is which path we should go down: should we be using Envoy, or should we be using artifact-server? I have good thoughts on both.
H
A
Go back to the presentation, there we go. Justin, I see you have your hand up, established, yeah.
I
On the Envoy versus artifact-server-or-Go-code question, I imagine that will probably depend on how hard it is to do the CIDR mapping stuff, which I imagine depends on some of the analysis that you're probably also about to talk about later on. But I think that the CIDR mapping is going to be the real pain point here.
C
A
Figure it out, yeah. I don't know; I don't have strong opinions. Your demo made artifact-server look cooler, because it seems like it unifies image and binary serving, and the configuration is a lot easier to understand, and I appreciate how live this was and how much of the details we got to walk through. There's a part of me that still feels like...
A
I mean, I may just not be involved in this enough, but I still feel like I'm kind of missing the big picture of what the trade-offs are that we're talking about here. What is it that we're intending to trade off? Are there any sort of weird authentication mechanisms that we need to be aware of? What are the critical user journeys that we're trying to address here?
A
So, like, what are the test cases that we're going to throw at this, and stuff like that. I feel like this is awesome from a proof-of-concept perspective, and since I'm a biased action guy, I'm mostly going to be like: whichever one it looks like you can get up and running most completely, most quickly. But at some point I feel like I'm kind of lacking something in the scope of a proposal or a KEP or something that's got nice boxes and arrows that helps us understand sort of the flow of traffic and what's actually getting hit; something that puts my mind at ease that we're not talking about running our own infrastructure to handle ridiculous amounts of traffic, that we're doing the thing where we're like, you know.
D
A
So I don't know if any of that random stuff guides your decision on artifact-server or Envoy, but I agree with Justin: it seems like the next big piece would be to figure out how you're going to determine what to hand off to, because if making that decision is going to be in your critical path, you're going to want to figure out how to optimize that.
E
H
Yeah, I think that is very reasonable to consider.
H
In terms of it being in the critical path or not, the main thing would be implementing the ASN-related stuff: which ASN, what source IP, that kind of thing. I think either way it would end up in Go, so it's probably okay, because we can just write Go WASM for Envoy and then create a custom image out of that, and then it does pretty much the same thing. But yeah, it just depends.
E
I don't think the data set is going to be large, and the data structure is going to be fairly small to implement in a library that says: here's an IP, give me the 302 redirect destination. Making that as small as possible, a PoC can plug into both of these fairly easily, written as a Go library.
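The library interface described here, here's an IP, give me the 302 redirect destination, could look something like the sketch below. The CIDR-to-mirror table is entirely made up for illustration; the real mapping would come from the ASN data discussed later in the meeting.

```python
import ipaddress

# Hypothetical CIDR-to-mirror table; placeholder ranges and URLs only.
MIRRORS = {
    ipaddress.ip_network("3.0.0.0/8"): "https://aws-mirror.example.com",
    ipaddress.ip_network("20.0.0.0/8"): "https://azure-mirror.example.com",
}
DEFAULT_DESTINATION = "https://k8s.gcr.io"

def redirect_destination(source_ip: str) -> str:
    """Return the 302 redirect destination for a given client source IP."""
    addr = ipaddress.ip_address(source_ip)
    for network, mirror in MIRRORS.items():
        if addr in network:
            return mirror
    return DEFAULT_DESTINATION
```

Keeping the lookup this small is what lets the same logic plug into either Envoy (via WASM) or a Go artifact-server with minimal glue.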
H
A
Right, that makes sense. Like, I don't know, maybe I'm just trying to understand way more than I need to, but I feel like if I was going to explain this to somebody at the CNCF, I would eventually want to be able to provide them a clear, understandable diagram of: here's where the traffic is going, here's where all this stuff is running, here's how we're going to be shifting...
E
...costs or whatever. We're going to show costs in a minute. But I think, if you want the simplest, YAML-y style, that's how we kind of diagram these days: there's a folder that contains the ASN mappings that we're going to try to make sure are accurate, as best we can, with the vendors saying whether this is accurate for their ASNs.
A
But again, I'm just giving you my perspective; somebody like Justin and other folks here, I know, have been more actively involved with implementing, and I definitely agree we still need to get that proof of concept going. I just personally still feel like I lack the: but what's the...
E
...the plan. So the shortest version I can give is: we have cip, the image promoter, and all of our destinations. How those promote from staging, all of that, the staging stays the same, but the image promoter now promotes to prod, and it would also promote to, let's say, the Amazon registry and the Microsoft registry, and pick up whoever else signs up. It's part of our promotion process to ensure that they exist at these locations; that seems like a way. The other way would be to do...
I
Anyway, I want to keep us moving and be respectful, so I just wanted to quickly mention, sorry: there is an early KEP, which I think Brendan and myself worked on. I don't know if it's even a KEP, but it's like a doc. So there is that; the other thing is, it wasn't as detailed as you asked.
I
The other thing I would suggest when looking at the request flow is to mark which ones are security sensitive and which ones are not, because I think very few of them are security sensitive. The first one is security sensitive, and then all the others are basically, could be sort of, completely untrusted.
I
A
All right, over to Riaan to talk about stuff. You have slides to share; are you... am I going to click on another link, or should it die?
J
What we have is: we import the data from the data source, as with this query up here. Then we do a transformation where we basically take the epoch date in the log standard into a real date. We import these tables that we need; the cs.referrer actually has the image name inside, so then we take the cs.referrer, break it up, and get the image name with the resource, and then we put the hash in as well, at the moment.
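The two transforms described, turning the log's epoch timestamp into a real date, and breaking the cs.referrer apart to get the image name and its hash, can be sketched roughly like this. The referrer path shape below is an assumption for illustration (a registry-style `/v2/<image>/manifests/<digest>` path), not necessarily the exact field format in the logs.

```python
from datetime import datetime, timezone
from urllib.parse import urlparse

def epoch_to_date(epoch_seconds: int) -> str:
    """Convert epoch seconds from the log into a real calendar date (UTC)."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime("%Y-%m-%d")

def image_from_referrer(referrer: str):
    """Break a registry-style referrer path into (image name, digest).

    Assumes paths like /v2/kube-proxy/manifests/sha256:abc...; returns
    (None, None) when the path doesn't fit that shape.
    """
    parts = urlparse(referrer).path.strip("/").split("/")
    if len(parts) >= 4 and parts[0] == "v2" and parts[-2] == "manifests":
        return "/".join(parts[1:-2]), parts[-1]
    return None, None
```

In the actual pipeline these steps run as SQL at import time rather than in Python; this is just the logic made explicit.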
J
We might later on. But the main thing is to see the resources; that was the question last time. And here is pause's bar: if you look at the images being pulled, the big blue bar on the left-hand side, that is the amount of times that pause got pulled in the period from 9 April to 25 May, and that is worth, even though it is only 0.1 megabytes per image...
J
...we've got 33,000 gigabytes of data just for pause, and a cost of five thousand eight hundred dollars. So the top graph is the top amount of images being pulled by download volume, and then the second graph that I've got is the highest download cost per image. So this is cost-wise; you'll see costs on the right-hand side. The most expensive thing being pulled is kube-proxy.
J
Basically, I swapped out company names. I've got the top 10 pulling IPs, and Laughing Lion is our top puller of images cost-wise. Basically, everything up to line eight is all one company's IP addresses pulling those. And then, if you look at the bottom, this is a pivot table showing which images; unfortunately, all of them don't fit on the screen, but you'll basically see these other images.
J
So you'll see, for line item one, a thousand seven hundred dollars' worth of pulls, of which almost 800 to 900 dollars is for kube-proxy and another 800 for node...
F
J
...delete. And then the other, smaller ones for minor amounts of currency. Friendly Fox, for instance, you can see pulled 4,000 GBs; 249,000 times they pulled etcd amd64. So it's just interesting, showing the patterns of things being pulled, and particularly, yes, it's...
E
I
Sorry, yes: you're saying there is one IP that pulled this, not, like, for example, the entire of AWS?
J
Unfortunately, I can't match ASNs yet; that would basically make all of lines one to eight go into one line. But I can't match those yet because of the complexities of getting the ASN data to match exactly, because I don't want to put anything here that's not exactly true.
J
Then, yeah, so that's the thing that we've got. Then, just for interest's sake, which is not vitally important: average pulls per day is about 26,000... no, 26 million pulls per day from the registries. And that is all right into the next steps that we do want to do at the moment.
J
We transform the data, and then our data goes into Data Studio so we can display it like that. So our next thing is to figure out exactly how to automate that data. I think you just needed the data for the billing, if I'm not wrong; you helped me a lot with the information about Data Studio. So yeah, automation of...
J
I
Rather than chat-type it, I will say the billing report doesn't need an ingestion transform; it's a just-in-time transform, it happens at query time, because the transform is very simple. I did upload the scripts that I had used in the past to insert this data into BigQuery, or not even scripts, the program, I guess. I think someone tried that out; I thought you tried it out, but yes...
J
A
I can't, I wish... I'm trying to think; it's a selfish thing. What looking at that immediately made me ask was: how much is Kubernetes project CI costing? Like, I don't...
A
I don't know how far it is to get there, and I don't know how important it is for the purposes of, you know, registry and cross-cloud artifact hosting, but I feel like I want to get a handle on how much of this our project causes, like all the pull requests and all the jobs that we run as a result of that: how much are we causing? Or, yeah, to sort of differentiate...
A
The
calls
coming
from
inside
the
house
versus
the
project
serving
its
community.
A
But
I
think
it's
yeah,
it's
just
a
curiosity.
Maybe
I'm.
E
...interested in that as well. I don't know the shape of how to grasp those pieces; it would require a deep understanding of, like, everything even to hold it, because I assume most of our CI is all run within our cluster, but I know there's other pieces as well.
F
A
...all eventually be coming from one repo, more or less, and so, you know, there's a subset of images that we could filter against. And then it's probably a matter of, I don't know... I don't know if, because of the magic of mirror.gcr.io and clusters getting to use that in our CI, requests are going to be looking like they come from those clusters or from other random Google IPs. I don't know, so it's not a super important thing.
A
I
It's, yes, it's very related to what you just said. The thing which we don't know is, well, I mean specifically, like AWS: moving bits to AWS is ten times as expensive as moving from GCP to GCP, or something like that, and so CI is a case of that. But also, you know, people running on GCP are going to be much cheaper, and I think the interesting question from our point of view...
I
...looking at all this mirroring is, I think: how much money will it save us if we set up a mirror on, for the sake of argument, AWS or DigitalOcean or Azure, and how do we prioritize that in terms of money? Like, how much money are we going to save if we do that? We have a mirror on GCP, but we don't have one on any of the rest, and I think that would probably knock that... what would that change?
E
That type of thing we hope to answer in the next two weeks by getting that ASN data. There are multiple sources, and finding something authoritative that covers enough of our target audience to be precise has been troublesome. Once we get there, especially for the top pullers, we're going to know who those top ASNs are, making sure within our community we can create a PR that just says: here are the ASNs; can Amazon, Microsoft and DigitalOcean review?
E
Is there anything missing? Is it inaccurate? Because we would love to just let you know how much this is costing you, and then we can anonymize it and say, you know, all the funny names on the list; we'll try to keep those consistent going forward.
I
That's good, that would be wonderful. The BigQuery uploader actually assigns tags for the cloud provider: AWS and GCP have a JSON resource they publish, and Azure also has one, but they don't publish it, you have to download it by hand. But anyway, there are links; I found them all in a Python project, so I will...
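The tagging step described here, matching a puller's IP against the providers' published IP-range lists, can be sketched as follows. The two sample ranges below are made-up placeholders standing in for the providers' published JSON files, which contain thousands of prefixes.

```python
import ipaddress

# Placeholder stand-in for the parsed, provider-published range files.
PROVIDER_RANGES = {
    "aws": ["3.5.140.0/22"],
    "gcp": ["35.190.0.0/17"],
}

def provider_for_ip(source_ip: str) -> str:
    """Tag a source IP with the cloud provider whose published ranges contain it."""
    addr = ipaddress.ip_address(source_ip)
    for provider, ranges in PROVIDER_RANGES.items():
        if any(addr in ipaddress.ip_network(r) for r in ranges):
            return provider
    return "unknown"
```

In practice the range lists would be refreshed periodically from the providers, since the published files change over time.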
I
K
Ready? Yeah, so I think I said this last time as well, but I do work for AWS, so they pay me to work on the Kubernetes project. So anything that I can do, or projects that I can push, to help get infrastructure costs down, feel free to let me know. Is this something that shipping, like, release images to our public registry would help with at all?
E
I think that's kind of the plan, Eddie. I think it would be good for us to probably have an offline conversation and go through the data with you and give you access to see all the things, so that we can be frank and direct with the precise pieces and get that story, you know, on board, around whatever the diagram of the life cycle is, including deciding when it's coming from Amazon appropriately, and using the enterprise-grade solutions that service your customers well. Cool, yeah, just let me...
K
A
Yeah, that sounds great. I feel like Tim Hockin and myself would probably want to be looped in on that as well, if not the rest of the chairs, but yeah. I appreciate the help. I think it's, I don't know, at the moment I'm not...
A
...treating it with great urgency, but I think, you know, just like we're sort of figuring out how we're going to wire all these bits together, I think we need to figure out sort of what the engagement model looks like for onboarding people into this process, or into the club, yeah, whatever, into the club. I don't, you know, like I...
B
A
...know if we're looking at something like how we have the distributors-announce list, for people who sort of met a certain bar when it comes to security disclosures and things like that, or if this is something that, you know... It's even kind of unclear to me how much of this we want to talk about doing purely at the project level versus purely at the CNCF level, independent of the project. But yeah, I agree.
A
So, oh, I just hot-cornered myself into a black screen. I think we're at time, unfortunately, so I think Arnaud's and my agenda items are going to have to wait until next time, unless folks really feel like sticking around.
A
I don't know that you've gotten a great picture of sort of what our roadmap is, what our plan is, and where we could use help and stuff.
D
Yeah, yeah, I can imagine it will take a little while to get up to speed, but I'm connecting a few dots here and there, right?
A
I am trying to keep things a little bit groomed, but if you have specific questions, feel free to reach out on Slack; otherwise, feel free to keep sort of soaking it in ambiently, yeah. I really appreciate people who have experience running a bunch of stuff on GCP showing up, because, I mean, I know I work for Google, but I still am figuring a lot of this out on the fly.
A
It's
fun
all
right!
Thank
you
so
much
for
your
time,
everybody-
and
I
hope
you
have
a
happy
wednesday.