From YouTube: Kubernetes WG K8s Infra 2019-02-20
Description
A
B
It's paused at the moment, actually. My activity has kind of died down the last couple of days because I had been working on something else, but the meat of the project is, you know, open sourced, and Bart has raised some good points. There have been some active discussions going on on the issue tracker, basically around extending it so that it can support... sorry, repositories other than GCR. I think that's the main thing.
C
B
Just putting the different Docker repos aside, the current implementation today is to just use gcloud add-tag, which in turn runs some API call. But that can be extended, right? I mean, we can just say: oh yeah, copy it to this other registry too. That's mirroring.
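The promotion step described above might look roughly like this. The image names are hypothetical placeholders, and the command is only constructed, not executed, since real promotion needs project credentials:

```python
# Sketch of the tag-based promotion discussed above. The images are
# invented examples; the gcloud invocation is built as a string rather
# than run, so this is a dry-run illustration only.
import shlex

def promote_cmd(src: str, dst: str) -> str:
    """Build the gcloud invocation that re-tags an existing digest."""
    return " ".join(
        ["gcloud", "container", "images", "add-tag",
         shlex.quote(src), shlex.quote(dst), "--quiet"]
    )

print(promote_cmd(
    "gcr.io/example-staging/foo@sha256:abc123",  # hypothetical source
    "gcr.io/example-prod/foo:v1.0",              # hypothetical destination
))
```

Extending this to a non-GCR registry would mean a pull/tag/push copy rather than a server-side re-tag, which is the "mirroring" case mentioned.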
D
F
F
That's the one that I'm constantly manually importing, once every three weeks or something. And do you think it would be an egregious request to say: hey, if you want to publish something, you also have to push to your GCR staging? And they said no, not a really big deal, so I don't think we should block anything on that. The multi-registry thing, like Docker Hub support, seems nice, but I don't think that's really a problem, and in fact it's more complicated, because we don't get to keep track of those things now, right?
A
B
I mean, so gcr.io would be the "registry", quote-unquote, and then there's the repository path. This is terminology that I picked up from other implementations. The path is, you know, gcr.io/, then it could be some arbitrary nested path, / the image name, and then you reference the digest at that point. It's not like you say gcr.io plus the digest immediately.
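The registry / repository-path / digest split described there can be sketched like this; the reference string is an invented example, not a real Kubernetes image:

```python
# Illustrative split of 'registry/nested/path/name@sha256:...' into the
# three parts named above: registry host, repository path, digest.
def split_image_ref(ref: str):
    if "@" in ref:
        path, digest = ref.split("@", 1)
    else:
        path, digest = ref, None
    registry, _, repo_path = path.partition("/")
    return registry, repo_path, digest

print(split_image_ref("gcr.io/some/nested/path/my-image@sha256:abc123"))
# → ('gcr.io', 'some/nested/path/my-image', 'sha256:abc123')
```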
F
B
F
B
F
F
Stupid... go edit the video. But I think that's gravy. I think the most impactful thing we could do on this would be to get it actually working between the staging repos that we demonstrated over the last two weeks and a new prod repo. Once we prove that we can actually have it working on push to the registry, then we have to deal with moving...
F
...the whole contents of the existing registry over, which we can deal with sort of separately, and flipping the name. But I think if we get the process moving, we will be better off, all told, to do this posthaste, and then we can figure out mirroring and other registries and whether we're gonna front it with this artifact server and all the other stuff. That's all secondary noise.
B
Yeah, I mean, I think so. Like I said earlier, the meat of the thing is there, and it's basically almost ready for prime time. It's just that there's a pending pull request making some last-minute changes, mainly around logging and verbosity issues. So, is it running now as a prow plugin?
B
A
Yeah, so that is step one. Step two is, for the manifest files themselves, we can start with the k8s.io repository. We can have a folder in there where people can request uploads, right? And then we set up like three or four GCR staging repositories, so we will enable just those for now. We will add entries in the k8s.io manifests or whatever, and then...
F
...in the next few days we can take the script that we wrote last week, tweak it just a little bit, and we will get our production registry, right? We just have to give it a different name and stuff. Let's decide on that this week. Let's also decide this week where we want to check in the metadata: we should probably have a list of the names of the staging repos that we've created and the groups that own them.
F
Just, you know, in a YAML file, or even a text file, somewhere under the k8s.io repo. And then we could actually create a directory for the promoter and check in sort of the initial YAML file, and then once your prow job goes in, we can actually start testing against that real YAML file. Okay, so...
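A manifest along those lines might look something like this; this is a purely hypothetical sketch, with placeholder names, not the promoter's actual schema:

```yaml
# Hypothetical promoter manifest: staging repos, the groups that own
# them, and the images (by digest) to promote to production.
registries:
  - name: gcr.io/example-staging      # placeholder staging repo
    src: true
    owners: example-staging-owners@googlegroups.com
  - name: gcr.io/example-prod         # placeholder production repo
images:
  - name: my-image
    dmap:
      "sha256:abc123": ["v1.0"]       # digest -> tags to promote
```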
A
G
A
F
No, okay, I can make time. I've been able to make some time on Friday mornings, usually, in order to spend some time on this. Can we agree to get a smaller group, maybe dims, plus myself, and whoever Jeff wants to nominate for the prow side, just to regroup and see what's between us and an actual working demonstration of this? Okay, sounds good. Okay, I don't want to volunteer anyone. No.
B
B
F
We scripted this last week for the staging repos, so I now feel like I know everything we need to do to set up a proper GCR repo from a script, including all the permissions and everything else, and minimal access, right? So it's only a Google group that has access to a certain set of permissions.
F
F
A
B
F
Yeah, no, no, it's okay, I made the edit this week; it's published, yeah. We should talk about whether it wants to live in test-infra or whether it wants to live under the k8s.io repo. When we created the k8s.io repo it was intended to be a bunch of YAML that was running all of our sites, but now there's actually some programs and scripts and tools under there, and I'm fine with that.
B
H
A
B
A
F
I
So actually, I had some good feedback from Tim: maybe I was getting a little bit carried away with the technology, and maybe we should start with the basics, which is serving artifacts that are not images. So I actually put together a doc, which is later on the agenda, about the MVP for serving from a bucket: we can serve from a GCS bucket fronted by GCLB, a Google Cloud load balancer, which can do SSL.
I
It won't do the redirection, and it won't do the mirroring, but it's a first step, and then we can look at the artifact 302 redirector service as a second step, and it shouldn't actually break anything, as it were. So I think there is a PR to do the redirector service, but it implements so little functionality that probably the first step is to get some blob serving and start to gather statistics on: how much does it cost? What is the latency?
I
What is the availability, particularly in regions that might have firewalls, like China? So yeah, I put together a doc, later on the agenda; it's supposed to be a strawman doc which we can discuss later on. But if people like it, I'm happy to turn that into a KEP, and we can start serving artifacts without blocking on the 302 redirector. Okay.
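The 302 redirector idea discussed here is essentially: map a requested artifact path to a mirror URL and answer with a redirect. A minimal sketch, with an invented bucket name standing in for whatever mirror list would actually be used:

```python
# Minimal sketch of the "302 redirector": given an artifact path,
# return the Location header for a 302 response. The mirror URL is a
# made-up placeholder, not a real Kubernetes bucket.
MIRRORS = ["https://storage.googleapis.com/example-artifacts"]

def redirect_url(path: str, mirror_index: int = 0) -> str:
    base = MIRRORS[mirror_index % len(MIRRORS)]
    return f"{base}/{path.lstrip('/')}"

print(redirect_url("/release/v1.13.0/kubectl"))
```

A real service would also pick the mirror per client region and log requests, which is where the cost/latency/availability statistics mentioned above would come from.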
A
A
F
I don't see it as that different from the GCR problem: it has a different sort of front end for push and pull, but otherwise it's, I think, the same problem. So I don't want to distract from this yet, but maybe this is something we want to extend the promoter to handle, or make a fork of the promoter that handles just plain old GCS, or something like that. Right.
A
F
That's the ultimate goal here: to make sure that we have a federated mirror set, right? Basically, anybody who wants to can host a mirror of our artifacts, and we can figure out how to rely on them. That's the sort of cryptographic stuff, and the distribution-of-updates stuff, that I don't have a full comprehension of, although I've learned a little bit more about it in the last few weeks. I think that is entirely separable from just starting to serve stuff from one place.
A
D
I wanted to get a little bit more transparency into how we're setting who has access, and that list, and eventually be able to look at some type of auditing for different actions. In trying to understand where we are and who has access now, it's been difficult: we talk about the changes, they get implemented, but there's no transparency into who did that action, particularly for IAM policies.
D
F
And, for the most part... I mean, the reason it's not written down right now is that it's almost one-for-one: the group with a name corresponds to that permission. There are no individual users on the GCP projects, except for the people who have org-level access; dims, for instance, does not have personal access to any of these projects. But you're right, we should write that down. Ideally, we should have a YAML file that describes which group has which people in it, and we would actually resync those groups based on that.
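Such a group-membership file might look like this; the group names and members are invented placeholders, not real groups:

```yaml
# Hypothetical sketch of a group-membership file of the kind described:
# a tool would resync the actual Google Groups from this source of truth.
groups:
  - name: example-gcr-admins@example.org
    members:
      - alice@example.org
      - bob@example.org
  - name: example-dns-admins@example.org
    members:
      - carol@example.org
```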
I
D
F
D
I'm interested in that, and also interested in the auditing side: trying to provide the info for transparency into what's happening, because we talked about transparency in how we're spending our money. It also might be interesting to have things around, not every single event, but definitely things that change or are involved in a configuration change, or possibly metrics. Sure.
F
K
F
D
L
I
So this might not be the right moment, and this might not be a fully baked request yet. Some of the sub-projects are trying to figure out where to host their docs, like with Firebase. I'm trying to drill into exactly what functionality from Firebase they are using; as far as I can tell, they are using it as basically a static website.
I
So maybe we have a better answer, but the obvious question was: they didn't want to use Firebase because it was a paid thing and they didn't know who would pay for it. That feels like something the CNCF, or this group, should do if we want to use Firebase. But I also don't know if we want to recommend Firebase or some other alternative.
F
I
F
I
This is for hosting docs; I never even realized that Firebase was a docs hosting site. Well, you do firebase deploy and you can deploy any website you want. What does it use as a template? It doesn't do the templating; the templating is done client-side in a build process, and then firebase deploy effectively does a...
M
I was just going to say that right now we would only use Firebase in order to host static files, and those static files are generated using GitBook and npm. The main advantage for us in using Firebase is that it's what the Kubebuilder project also uses, so we like the parity we have for the work we do.
M
You know, what they do. And then I've spent a lot of time trying to use gh-pages, and it looks like it might be possible if we split our docs apart from cluster API, but we think there's value in keeping the docs as part of the source code, because the docs actually embed source code as part of the generation. That's how we keep the docs in sync: we don't cut and paste code quotes, we actually embed code quotes and then automatically generate the static HTML.
M
F
M
My belief is that Netlify, because it's used by the broader community, or the Kubernetes documentation tooling, is a better solution; I think it's also more complicated. So as a longer-term plan, I would like to see Netlify-based documentation include Kubebuilder, and then any projects that are built using Kubebuilder could also inherit that same documentation system.
E
Before we proceed, I have to highlight that our current credits from Google are on GCP, while Firebase is not a part of GCP, so these credits are not covering it. So we'll have to figure out the billing side if we decide to move, or see whether the Firebase free tier works for us; either way we have to also solve that separate billing question.
A
A
A
A
E
A
K
Hi, this is Clay. This is my first time here, so hi to everyone. I probably missed a lot of context, and I'll probably need guidance. Anyway, I'm working on this e2e test for cluster API, and it's non-provider-specific, and this requires a container registry to store those CI images. So my question is whether this group can help to support this; I'm looking for guidance and solutions.
A
D
A
Yeah, we will; I'll show you where it is and how people are using it. I had the same request to the kind folks for a while now, because I wanted to test a conformance image; there is a conformance image I wanted to test. But I don't think they have... kind doesn't have a release yet with this support, so all we need to do is wait for them to make a release. For the other option, creating a local repository, I have a small script for that as well.
A
So I'll share that with you later; ping me on Slack and I'll show you these two things. Okay? For now, I don't think you need a specific repository as such. The main thing we are trying to think about here is how we publish repositories which somebody outside of our community will end up using, and, you know, that doesn't seem to be the use case here. Right, right. Okay, okay, yeah.
K
G
Yeah, so basically just kind of following up to see what we can do to move along here. As I noted here in the notes, since the last meeting I went and ran my experiment one night and created a PR which uses cert-manager and Let's Encrypt to generate a TLS certificate for gcsweb, and basically tested that out, not locally, but on my own cluster. It seems to work, and so that works. I don't know...
G
...if we have some direction on whether we will move forward or not. I know there was also some thought of trying to use the GKE managed certificates, but that doesn't support multi-SAN, multiple subject alternative names, I think, so we can't use it with the k8s.io redirector, because, you know, we have a whole bunch of subdomains there. So I don't know if we want to continue to move forward with cert-manager.
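The multi-SAN case described above is what a cert-manager Certificate resource covers with multiple dnsNames. A rough sketch, with placeholder names and an issuer reference that would depend on the cert-manager version actually installed:

```yaml
# Hypothetical cert-manager Certificate covering several subdomains
# (multi-SAN). Names, namespace, and issuer are placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-redirector-tls
spec:
  secretName: example-redirector-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - gcsweb.example.org
    - dl.example.org
    - mirror.example.org
```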
F
G
F
I
F
G
It sounds like we just need to get this approved, and we'll continue to move forward, and then we'll have everything automated before long, or whatever. Okay.
A
N
F
...say, and, sorry, welcome to everybody who's in that boat. For the sake of the recording: we have put together a list, it's in the notes, of, you know, like 75 things that need to be converted over. A great number of them block on getting a cluster up and running; that's what Justin's been pushing on. I feel like we're pretty close; maybe one more big push and we could have a cluster that we're actually really happy with. What do you think, Justin? Yes.
I
F
I don't want to hand-edit the permissions; I want to script the heck out of it. It was really enlightening to script the GCR staging stuff. We need to go back and script the DNS stuff also, but I feel like we're pretty close to it, and then we could start moving things over piecemeal. And once they're moved over, then there are ample opportunities for people to help, right?
I
F
A
If the publishing bot is down for a day, nobody will notice; it's okay, it's not such a big deal. Only when we are cutting milestones will people notice, and we are not cutting milestones now; we just cut a milestone yesterday morning, so we have about a week, you know, where it can be down. Okay, up or down.
F
A
F
And honestly, from my point of view, I'm gonna start getting super busy as the code reviews start to fly in. I've already started on the big API reviews, and these are all multi-hour code reviews. So my ability to make time for this will probably diminish as we get towards code freeze, right? So.
O
Oh, sorry, I was just gonna say hello. I think I know some of you already. I work at Jetstack, and I work on cert-manager and kind, and again the mirroring, but I'm just here to help if something comes up. I'll keep an eye out on the issue board once you've got that cluster up and running.
I
A
C
So I have an opinion on this billing report. Since we talked earlier about an IAM policy and trying to understand what groups had access to what things, I want to understand spend at that level. If we have different efforts, I want to understand the spend related to those efforts.
I
F
I mean, all the projects are set up under the same billing account, so we have a breakdown per SKU, and I think we can do per SKU per project; I haven't figured all of that out yet. I can read off real quickly some stats if anybody's interested. The DNS, it looks like, for the last 30 days... and I'm trying to figure out if it's the last 30 days or if it's this month.
F
C
F
We should try not to overlap the SKUs too much between projects. So, like, the cluster will probably run in kubernetes-public, which means that it will show up as GCE VMs in kubernetes-public, which I don't think is a problem, because we can say the VMs are the cluster and DNS is DNS. The GCR storage space for each of the staging repos will have its own project, so we can divvy it up by that.
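The per-project, per-SKU breakdown being discussed amounts to grouping billing rows by those two keys. A tiny sketch using made-up billing rows rather than a real billing export:

```python
# Sketch of the per-project/per-SKU cost breakdown discussed above.
# The rows are invented example data, not real billing figures.
from collections import defaultdict

rows = [  # (project, sku, cost_usd)
    ("kubernetes-public", "GCE VM", 45.0),
    ("kubernetes-public", "Load Balancing", 20.0),
    ("example-staging-1", "GCR Storage", 3.5),
]

totals = defaultdict(float)
for project, sku, cost in rows:
    totals[(project, sku)] += cost

for (project, sku), cost in sorted(totals.items()):
    print(f"{project:20s} {sku:15s} ${cost:.2f}")
```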
F
I
I'll make a suggestion: if, for example, we had the kops buckets in here, the numbers would be dominated by one particular line, which is currently zero. So when we have one line that is dominant, why don't we try to break it down? In this case, the cloud storage line in the report: we would split it and try to attribute it to the projects, if we cared, I guess. Yes.
F
I mean, I think that'll be really interesting once the storage stuff shows up. You can already see the dominant factor is the test clusters, right? I can see $20 in load balancing and SSD, and $20 in VMs, and $45 in VMs. So you're already, you know, almost an order of magnitude higher than everything else.
I
F
I'm agreeing with her, and I'm going with both of you: I think we want the transparency, so we don't conflate the costs, as much as possible. I'm just saying that project boundaries are not always the right boundaries. And once we get the cluster up, we can use the per-namespace billing stuff that some of the folks here are working on, so we can actually produce per-namespace breakdowns of our cluster charges.
I
A
F
I had a look at this, and I pulled the old image and looked at the Docker build, and it is a straight build of the upstream source, which is an open source Go program, as far as I can tell. So if we want to, we can pull it back up. I don't know if we want to yet, because, as I understand it, there are some bad actors at...
F
I
A
A
L
C
A
C
A
F
I think we had... yes, did we? I can't remember what happened eight minutes ago. Did we come to a conclusion on what we want to do with cluster API? It's an interesting project, because it has probably non-trivial test requirements, but it's a SIG's sub-project, right? So we'd be setting a precedent for paying for SIG projects, and what's the plan for coming up with a precedent for how and when we want to pay for those things, through the same process? The...
A
F
You know, I'm saying the same thing. That's right, that's what we decided: do we want to address the topic of whether we should be hosting GCR repositories for SIG projects, and does that change the bar for creating a SIG project, etc.? Okay.
A
F
F
F
A
F
I
F
I think that's right. I think it means we have to figure out what the bar is for this. We can't just... I mean, right now, basically any SIG can approve a SIG project, right, and it'll get a repo, and that's fine, and nobody really cares, because it doesn't cost us anything. As soon as this actually costs money, I think we need principles and guidance for how to...
I
...do it right. I think we should absolutely set numbers, but I think it is gonna be problematic, or it is a pretty big deal, to say that, because there is an organizational structure that we've delegated to the SIGs, and they have sub-projects, and to interfere with that structure is a... I mean.
H
F
F
A
F
A
A
A
G
G
F
A
G
Just kind of one note on the bulk move. I don't know if this came up as discussed in the docs or not, but how does a release interact with that? Like, right now, when we do a release, we have special permissions to push to the repo. Are we going to continue to do that when we do an official release to the new repo? Are we gonna have something that automates the release there, the promoter, or... I don't know how we've thought through that. I mean.
F
L
A
C
A
A
Definitely. So I think you just have to pick which of the areas that we talked about here is of interest to you. There are always things to do, and we do have a boatload of things written down in the documents, especially the older docs. So just tell us what you are interested in, so we can guide you further.
A
So that's a big problem: we don't know who's coming with what background, so that we can suggest things to them. That was one of the starting troubles that we've been having. You know, Bart said something about artifacts, so we pointed him at the Docker support for the image promoter. So just tell us a little bit about what you've already done, or which parts you are interested in, on Slack or in the next call, and then we can go from there. Okay.