From YouTube: Kubernetes WG K8s Infra - 2019-09-04
B: No, it's fine, I do want to get it done. I might have time on Friday... hey, Katherine's here. I do have time on Friday before, well, we'll pick up afterwards and we'll see if we can get time together, and anybody else who wants to join, let us know. And if I can pull it off by then, then we can turn up a real cluster and maybe move something over to it. Sounds great.
B: What we should probably do, dims, if you have thoughts here, or Catherine, if you want to help, if you've got more free time (okay, thank you for your time already), would be to just write up a plan of how to migrate Prow. I don't know if there's a partial migration, like if we can turn Prow up in a new cluster and then start moving some jobs, or if it has to sort of be all at once, right.
E: Prow... you can't run multiple Prows on the same repo, because they will fight each other. But if you want to run your new Prow on a new repo, that would cause no issues. Doing a gradual migration is hard: you can migrate jobs gradually without much trouble, but migrating the services gradually is the hard part, right.
D: Yeah, Catherine, is there any concern? Like, I've had some scattered conversations with a number of the test-infra people about this particular topic. Are there concerns with Prow, like the control plane, still existing in the Google-owned project but then reaching out to a non-Google-owned project to delegate work to? That was a question mark previously; I don't know if it's been settled. Okay, that's cool.
A: Right, just for running the one that I mentioned, the groups.yaml one, you don't need much more than that. But we are talking about minting an image kind of thing, building a Docker image, you know, for the cluster-api repo, for example. So we will probably need clusters that we don't really trust fully, from the main Prow's perspective, I guess. Well...
D: The thing that I'm cautious about is setting things up in the wrong order, right. I want to make sure that when we're migrating... like, I want to migrate Prow, and I would love to migrate Prow very quickly, but I don't necessarily want to move it in the wrong order. So I think we need to write up what we're going to do before we push any buttons to stand it up.
A: Okay, we can table that for now, since nobody's jumping up and down saying yeah, okay. So let's move on to the next item on the agenda, which is Catherine's proposal, and, you know, the counter, or the alternative that we were already looking at, is from Jason. So, Catherine, can I request Jason to give an update on what they have tried so far, and then we can flow to your proposal from there? Sure, okay.
F: Jason? Yep, all right. So we went ahead and started a POC with GitHub Actions, and I linked to the workflow that we defined in the agenda topic. We've tested it against personally owned Google accounts and were able to successfully push images up using that workflow, but we're blocked on a formal service account to be able to push to the actual staging bucket right now.
F: Now, what we do is we have it defined as a secret within GitHub, where we fed in the key that we used for doing the initial testing. And I know at the last meeting, when we talked about this, we talked about potentially having the ability for somebody to set that service account so that it's not known to the other users, but it would require, you know, some type of action from somebody that has the ability to create those service accounts to populate that.
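For concreteness, a minimal sketch of the kind of workflow being discussed, assuming a hypothetical secret name (GCP_SA_KEY), staging project, and make target; the actual POC linked in the agenda may differ:

```yaml
# Hypothetical GitHub Actions sketch: push a staging image using a
# service-account key stored as a repo secret. All names here are
# illustrative, not the real POC's.
name: push-staging-image
on:
  push:
    branches:
      - master
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Authenticate to the registry
        run: |
          # The secret is only exposed to the job, as described above.
          echo '${{ secrets.GCP_SA_KEY }}' > "${HOME}/key.json"
          gcloud auth activate-service-account --key-file="${HOME}/key.json"
          gcloud auth configure-docker --quiet
      - name: Build and push
        run: make push  # the same make target developers run locally
```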
D: So, the stuff that I've done with it before has been very, very basic, but it's basically: you put it in and then you can't pull it out. It's there, it's in the secret store, and then it gets exposed... you can expose it as an environment variable in the container, because a GitHub Action runs as a container.
D: Obviously, if you do something specific, there are ways that you could potentially extract that secret. If you are, like, a malicious repo administrator, there is a potential to extract that secret, where you, like, merge an action that goes and posts it somewhere else. GitHub has protections against it streaming out in the logs.
D: Workflows run based off of what's merged, not based off of what's in the PR branch. All right, so you can't create a PR that would extract the secret. But if a malicious PR got merged, even if there's not a repo admin that does it, if a malicious PR got merged, then that workflow would become what's on the master branch and would be what's executed. Yeah.
A: I had two questions. One is: who has access to the logs of, you know, the postsubmit job in this case? And the other one is: right now, for all the GitHub access, we have owners and maintainers. So what level do you have to be to, you know, do different things with the GitHub Action?
D: The second question is the more straightforward one. So: to add a secret, or to view the available secrets, you need to be a repo admin. To make changes to the workflow file, and what happens inside the GitHub Action, you need merge access; like, anybody that can merge a thing can do that. So that'd be anybody with manual merge access, or anybody that has, like, LGTM/approve and can approve something into that folder, can make the change.
A: Right, I understand that. I'm asking that question because, you know, with the image promoter we did run into this situation where we ran into flakes when we pushed things into production that didn't show up in testing, so we had to, like, re-trigger the Prow job a few times, especially when we were trying to do it from a tag, right, instead of doing it on every commit push.
D: They're not particularly, like, highly performant, and things that would take a very long time would also take a very long time inside of a GitHub Action, but for simple, basic tasks they are, like, decent. And for something like container creation and push, that particular workflow, I have done that workflow a number of times; they work. It's nothing super fancy, but it does, like, functionally do the job, yeah.
D: The capability is there in theory. This was an open question the last time. The GitHub admin team for Kubernetes, we've met with GitHub twice on Actions, and we're planning another meeting soon; we're going to try and wrap some of the test-infra and Prow folks into that conversation. It's possible in theory, but there's some caveats there, and we've never tested anything like it, like triggering off a GitHub Action with a webhook; that flow we have not tested at all. Okay.
E: Okay, so basically this is attempting to solve the same problem, but using GCB and Prow, which are tools we already use to solve these sorts of problems. I have no idea how... the scrolling always confuses me on a new page. Now, yes.
E: So basically, the idea is that we would use Prow to trigger a GCB job, and then GCB builds your container, pushes it somewhere, and reports back to Prow, which you can then access for logs in the usual way. The upshot of this is basically that it's a mechanism we already use and have built tools for, and we know it to work.
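As a rough illustration of that flow (not the exact job definition; repo, project, and job names are placeholders), the Prow postsubmit could be little more than a gcloud call that hands the build off to GCB:

```yaml
# Hypothetical Prow postsubmit sketch: Prow triggers the build; GCB
# does the building and pushing and reports back, so logs land in the
# usual Prow places. All names are placeholders.
postsubmits:
  kubernetes/example-repo:
    - name: post-example-push-images
      branches:
        - master
      decorate: true
      spec:
        containers:
          - image: gcr.io/cloud-builders/gcloud
            command:
              - gcloud
            args:
              - builds
              - submit
              - --config=cloudbuild.yaml
              - --project=k8s-staging-example
              - .
```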
A: So the first question is: say I'm starting a new repository and I have a new image. What do I have to do as a developer to try this out? Other than, like... when Jason spoke, he said, oh, there's a make target, and we were able to run the same make target from a GitHub Action. So what is the equivalent here?
E
So
in
terms
of
building,
you
can
still
run
the
same
make
target
if
you
like,
that,
should
work
fine.
You
will
need
to
have
some
image
to
run
your
make
target
in,
but
there's
a
not
terribly
hard
to
come
across.
How.
E: In order to run it locally, so if you want to test it locally, GCB has tooling that will run your job as if it was in GCB, but on your machine, which is handy for that. Or you can use gcloud commands to run it against your own project as well, and GCB provides you with free time for that sort of testing.
E: We would set up a trusted Prow job that runs postsubmits on your repo automatically: when you merge to master, a trusted Prow job will be set off automatically, as part of the GCR setup process, that will run your job for you. So you don't touch that; instead, you just touch the cloudbuild.yaml in your repo, and that gets run automatically. That would also be free setup, done when your staging repos are created.
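A minimal sketch of what such a per-repo cloudbuild.yaml might look like (image name and substitution are assumptions, not a prescribed layout):

```yaml
# Hypothetical cloudbuild.yaml sketch: GCB builds the image and pushes
# it to the staging registry. _GIT_TAG would be supplied by whatever
# triggers the build; all names are placeholders.
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - --tag=gcr.io/k8s-staging-example/app:$_GIT_TAG
      - .
substitutions:
  _GIT_TAG: latest
images:
  - gcr.io/k8s-staging-example/app:$_GIT_TAG
```

The same file can be exercised against a personal project with `gcloud builds submit --config=cloudbuild.yaml .`, which matches the local testing flow described earlier.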
A: This would be, like, similar to the CI cross jobs. So the CI cross job runs once every day, because it's costly to run it on every PR merge, so we run it every day. So if there are similar jobs where we have to mint a lot of images and we want to do it just once a day, that's...
E: So nothing prevents doing that, in theory, and it would work fine, but it does require configuring a periodic Prow job, which increases the complexity of the Prow job file that we have, but...
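If a once-a-day build were wanted, the postsubmit sketch above would grow a periodic entry along these lines (interval and names are illustrative assumptions):

```yaml
# Hypothetical Prow periodic sketch: run the same GCB hand-off once a
# day instead of on every merge, for expensive image builds.
periodics:
  - name: periodic-example-push-images
    interval: 24h
    decorate: true
    extra_refs:
      - org: kubernetes
        repo: example-repo
        base_ref: master
    spec:
      containers:
        - image: gcr.io/cloud-builders/gcloud
          command:
            - gcloud
          args:
            - builds
            - submit
            - --config=cloudbuild.yaml
            - --project=k8s-staging-example
            - .
```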
B: So I think it's worth pointing out a couple things that I like about this proposal. One is the logs go to a place that's consistent with all the other logs of everything else that we're doing, where we've built our own relatively nice UX around it and we control our destiny. And the other part is, the future for identity and credentials is actually pretty slick, where we wouldn't actually need the JSON file at all.
B: We would just use the newer workload identity feature, which would allow us to specify IAM rules in terms of a GKE cluster. So I can basically say: this GKE cluster, in this identity namespace, is allowed to push to this GCS bucket, and that's it. There's no credential download anywhere. Is this already ready, Tim, or is it the same thing, in the future, someday? And we'd have to move the Prow stuff into a cluster that was contained by an identity namespace or something like that.
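Roughly, the workload identity version of this is an IAM binding instead of a key file: something like the following policy sketch on the push-capable service account, where every name is a placeholder and the exact shape depends on how the clusters get set up:

```yaml
# Hypothetical IAM policy sketch for workload identity: a Kubernetes
# service account in the cluster's identity namespace is allowed to
# act as the GCP service account that can push to staging. No JSON
# key exists or is downloaded anywhere. All names are placeholders.
bindings:
  - role: roles/iam.workloadIdentityUser
    members:
      - serviceAccount:k8s-infra-prow.svc.id.goog[test-pods/gcb-builder]
```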
F: They would always push to staging; the idea is that we would tag it. You know, the tag that we would push on a tag push for the build would be different than what we would do for the postsubmit on master, where we just, like, update latest in the staging bucket. But when we generate the tag, we want that to actually build with the associated tag that we would intend to promote, using the image promoter, for, you know, the release.
D: Yeah, cuz, like, the workflow there that would make sense to me would be: in a postsubmit, you push a unique tag, with the git commit as part of the tag, and then you also retag latest; if it's the latest master, you also retag it. So you have two tags that are pointing at the same image, and then the promoter would be like: okay, this particular commit...
B: So it would be nice for a human to be able to look at the tag and say, okay, this is commit XYZ123, even though its sha is blah blah blah, and I can map that commit back to a tag in, or back to, the git repo. Whether the tag is human-friendly or not doesn't really matter; it's more for humans to be able to identify it. Yeah, that.
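That tagging scheme maps naturally onto the image promoter's manifest format, roughly like this sketch (digest, tags, and registry names are placeholders):

```yaml
# Hypothetical promoter manifest sketch: the image is pinned by
# digest, with both a unique commit-derived tag and a human-readable
# release tag pointing at the same image.
registries:
  - name: gcr.io/k8s-staging-example
    src: true
  - name: gcr.io/k8s-artifacts-prod/example
    service-account: promoter@example-project.iam.gserviceaccount.com
images:
  - name: app
    dmap:
      "sha256:<digest>": ["v1.2.3", "v20190904-abc1234"]
```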
B: I mean, I could activate the GCB, like, for your specific staging repo today, which will automatically create the service account needed for the GCB push. I could hand Catherine, or whoever's on call today, the JSON file by sneakernet, and the Prow and GCB YAML file I guess you can help us with. So we could literally do it today, yeah. That would be awesome. I'm around; I've blocked some time specifically after this meeting to follow up on stuff from today. So if you guys are game, do you have time after this?
B: If we do more than one, then I'll have an excuse to script it. If you wanted... I'd rather, I mean, if we can get by with one, then one's probably better. So I'll one-off GCB-enable the cluster-api staging, and I'll share the JSON with the Prow folks, and I will ask Catherine if she can put together an example of the Prow and GCB YAML; I guess two example files.
D: Yeah, sounds like a plan. I'm really... like, thank you for writing this up and bringing this to us, Catherine, because I really like this option, because it actually could solve the k/k problem too. Cuz, like, the GitHub Action workflow, I think, would work fine for a lot of repos, although it is, like, more finicky in getting each individual repo set up and onboarded; it would work very well for the smaller repos, I think. But having something that we can, like, stamp out from a template and be like...
K: Yeah, I just dropped that one in there just so we can kind of get an idea as to how we can go about approaching getting it done. So I put up a pull request the other day to update cert-manager to version 0.9 from 0.7. This is in response to, I think it's November 1st, Let's Encrypt will no longer serve cert-manager clients older than 0.8, so 0.7 and below, as a result.
K: Obviously we should update, and generally we should update anyway, so I put in a pull request, and I've also documented on there the steps needed to actually apply it. I think it's minimal, but there's a couple of extra steps to take as it goes. I'm happy to do that myself, along with someone; I just don't have any access to that cluster. So, obviously, yeah.
K: Yeah, I'll assign it to you now so you can take a look; it shouldn't take too long to do. It should be, like, a half-hour job maximum. As I say, I'm more than happy, if it's possible, to sit on a call, just screen-sharing, and watch. I don't know how Google-sensitive it is and if I'm even allowed to see it.
D: The thing that I would bring up is, like: it's September, so we have two months until November 1st. Yes, we should be updating anyway. Are we okay, James, if we put this off a couple weeks and just focus... I would rather focus on the teardown and build-up of these services, and getting the proper version of cert-manager running in the real cluster that we're going to build, hopefully on Friday, yeah.
A: The other one that I didn't add to the agenda, which I just remembered: I have a PR in flight for managing all the mailing lists in the GSuite, with all the custom settings for each one of the mailing lists, ready, and I've tried it out and it works fine. Nikita took a quick look. I would like to request Tim and Kristoff to take a look, and then we can merge it. Okay.
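For context, the kind of declarative entry that PR manages looks roughly like this; the list name and settings below are illustrative, not the actual file contents:

```yaml
# Hypothetical groups.yaml-style sketch: each mailing list is declared
# with its custom settings so tooling can reconcile it in GSuite.
groups:
  - email-id: example-list@kubernetes.io
    name: example-list
    description: An example mailing list
    settings:
      WhoCanJoin: CAN_REQUEST_TO_JOIN
      WhoCanViewGroup: ALL_MEMBERS_CAN_VIEW
      WhoCanPostMessage: ALL_MEMBERS_CAN_POST
```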
A: So, Linus, one thing I wanted to mention: I pulled you into one Slack conversation with a person who is trying to do the same thing that we are doing. They have a promotion process for air-gapped scenarios, where they want to move images from a public registry to an air-gapped registry, and they're using, you know, similar libraries to the ones we are using. So that was one thing that I wanted to make sure that you are aware of.
G: One thing that came up when we had the first review was the Go shortener, so go.k8s.io, and we have a few things that are still unclear, mostly related to how we would do things in the new cluster. That, probably, is something that we can, like, push off until we have the new cluster. And one thing that Tim proposed was: if we use the redirector, we need a staging repo for it, to basically build the image and push it to a staging GCR. So that would be the next pull request.
G: If that makes sense: the redirector staging repo. That would also allow me, at the same time, to test the promoter stuff, and I already basically went through the whole review from Tim, so if you have some time, a re-review might be helpful. And what we still have to figure out is basically how we're gonna do the DNS, and then later on the DNS self-service, especially if we only want to test it: do we create it on a separate subdomain?
G
First,
instead
of
hijacking
that
go
to
community
Sorrell
and
go
with
like
go,
/
go
testing.
The
human,
a
CIO
or
something
that
we
basically
can
go
through
the
whole
process
of
pull
requests,
preview,
push
and
the
basically
the
cannery
process,
even
though
it's
manual
without
impacting
the
old
DNS
records.
That
would
be
a
first
step
before
we
automate
everything,
probably
about
the
automation.
I,
have
a
problem
on
hijacking
the
goat,
hates
that
IO
already.
B: Yeah, so I appreciate your patience on that. I have not lost track of it; I just haven't been able to shift back to it. We already have the canary.k8s.io subdomain put aside. So whenever we do a DNS push, we push to the canary zone first, and then we have a script that runs over the input YAML and verifies that all the names we expect to resolve actually do resolve against the canary; then we push to the real zone. Does that satisfy what you need here?
G: The question is how it works with the timeline. The problem with us pushing to the canary and then waiting, let's say, a month or two until we actually deploy: I have a feeling that might collide with any other DNS updates, and I'm not sure if that's something that we want to have lying around in canary, where it might be accidentally pushed to production.
B: Yeah, no, I don't want to push anything to canary that isn't going to production imminently. But, like, the DNS push script that we have today does this process where it pushes the canary, waits, you know, up to like two or three minutes for any propagation, and then, if that fails after a couple of minutes, it aborts. So I feel like, if we write the tests to cover the go redirects, that should be sufficient. I think I just haven't had the time to dig into it by hand. So...
B: Basically... oh, the redirector. I think the way you add a new record is by adding a DNS TXT record, right? Sure. So if that's the mechanism, the thing that's changing, that's the opportunity. Right now it's human-driven, but it will eventually be automation-driven, and the automation will push the canary, run whatever the acceptance test is, and then decide whether or not to push to prod.
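For illustration, with the zones managed as YAML config (as the k8s.io DNS is), adding a redirect could be a record along these lines; the record name and, in particular, the TXT value format the redirector consumes are assumptions here:

```yaml
# Hypothetical zone-config sketch: a TXT record the redirector would
# read to decide where go.k8s.io/example forwards. The value format
# is a placeholder for whatever the redirector actually parses.
example.go:
  type: TXT
  value: "redirect=https://git.k8s.io/community/some-doc.md"
```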
D: I'm just concerned about the mess that we're getting into, like, how complicated are we gonna make this? Like, I'm all for the loop of: okay, we're creating a DNS record; we want to verify, did that DNS record get created as we set it, did the TXT record get created? I don't want to block a DNS change on, like, the redirector not consuming those DNS records appropriately. I'd want that to be its own, separate loop and verification. And we don't necessarily need to verify every single record that we have, if we verify it like: hey...
D
Okay,
we
have
here's
some
like
examples
that
we're
monitoring
and
making
sure
that
the
redirector
is
working.
As
expected.
We
have
this
text
record
that
we
know
is
going
to
redirect
to
this
particular
site.
Do
we
get
that
expected
response
from
the
redirector
as
a
like
a
canary
test
to
make
sure
that
we
didn't
break
the
redirect
or
in
a
change
that
we
made
yeah.
G: I agree. So, as far as the tests are concerned, I would basically check a test record for the TXT record, so whether we actually created it correctly. The second test, which would be on a deploy of the redirector, would be: does the redirector have a specific test record that actually does the redirect as we want it, and does it default to the correct default or fallback domain, such as kubernetes.io, if anything breaks? And the third test would be a general one.
G
If
you
go
with
terraform,
we
have
the
ability
to
go
with
stackdriver
tests
because
they're
implemented
in
terraform,
so
we
could
have
some
monitoring
on
live
production
redirects.
So
that
would
be
something
that
might
make
more
sense
on.
Do
we
break
something
in
the
overall
end-to-end
type
of
production
thing
all
right?
Well,.