From YouTube: Kubernetes WG K8s Infra BI-Weekly Meeting for 20200318
A: Hi everyone, my name is Bart Smykla, and welcome to our bi-weekly K8s Infra working group meeting. At the beginning I want to remind everyone about our code of conduct, which we can summarize as "be excellent to each other." I don't see anyone new, so let's jump in.
A: Our agenda: the billing review. I think this is the week for it. Is there anything in particular you want to discuss right now?
B: Sorry, I was on the wrong window. Do I not have video? I'm sorry, I can't seem to get video. I don't have anything in particular; let me think real quickly. I don't know if Linus... yeah, Linus is here. Linus, since I have to disappear in a few minutes, you can talk about the plans for the domain flip. I think that's really the big one on my mind.
A: An audit, because I did a whole piece of research about the components and resources we are creating and using, and it looks like there are some gaps.
B: There probably are. We should do a full audit. That would actually be a lot of... I won't say fun, but it'll be a lot of work.
B: Okay, that's it from me. And because I've got higher privileges than that: if I've added something to that audit script that the auditors group can't run, let me know and we'll figure out what permissions are needed. Okay, no worries.
A: When I finish this work of mine, I will send a link and an email about it.
B: Okay, all right, let me open that before I disappear. Okay.
C: I was going to try monkeying around with creating a trusted build cluster in the next two weeks. Is that something you have an opinion on, Tim?
C: "Trusted" means some place to run all of the jobs that currently run in Prow's trusted build cluster, which turns out to be the same cluster that Prow runs on. We call it trusted because there are secrets there that we don't want exposed to all of the other presubmit and postsubmit jobs that run code coming in from the public. So this would be things like the container image promoter and any of the image builder jobs.
B: Yeah, I'm sorry, I was being snarky. I think that's fine. Okay, sorry, I'm thinking out loud. That's fine. Do you want to play with it in a playground project, or do you want to play with it in the real project? How much playing do you think you need to do, versus just doing it?
C: I am not sure off the top of my head right now, so I will get back to you on that. I just wanted to make sure you were cool with me starting down that path.
B: Okay, all right, then I'm going to drop off. I'm sorry, today was a bad day for me. I will look out for it on the mailing list.
F: Yes, I can share the spreadsheet. Normally Tim does the cross-comparison, but I can share the report. I have it up in a window which I will try to share with the group.
F: While I scroll through, normal people should be able to receive this; it is a public report. Sorry, I'm just filling time while I scroll through my many, many windows.
A: Don't worry. Maybe when we are doing that there will be a new item, because last week we added a new node pool for the project; we needed it for memory. The first project there was perf-dash, which takes about eight gigs when being initialized. So it's not much yet, but the pool is already there, and there is like one instance per...
F: Shall I go through the report? I don't know if we're ready. Yeah, let's do it. So I think you can see the report. It should have a big blue screen saying we spent 1,500 over the last 28 days, and if I page down, there is the interesting thing.
F: Yeah, it's very possible. It looks like it's going almost to zero, so I'm wondering whether they actually just turned off some of their jobs. But there is a nicer view here which shows the month-over-month changes versus the previous month, and that's where these ones are highlighted. So we are spending 50 percent more on Compute Engine, which could be your RAM thing, and we are spending, so far, about 50 percent less elsewhere. I think that's heavily on Cloud Build.
F: So I think that's heavily skewed: it looks like our primary consumer of Cloud Build may have stopped doing their cloud builds. And we are spending a lot more on Cloud Storage, 600 percent up, but still not really a lot of money, I would say around 450. For Cloud Storage we do have a report, and so we can see that it is there, but we don't yet have any more insight into exactly who is using it.
F: A fairly substantial amount of storage, I would guess.
H: I would guess that the increase, or the general usage, of the prod bucket is due to the legacy images being imported, so we should probably see a bit of a bump in the next report.
F: Great. I don't know whether those container images go into... oh yeah, this is by project, you're absolutely right. So I guess it would be nice to have a breakdown by bucket, so we could tell what was files and what was images. Maybe that's the next breakdown, because I don't think we have that right now.
F: Just looking for that, but yes, that does sound like a likely hypothesis, and I'll try to add a screen that shows a breakdown by bucket, so we can confirm that it is in the bucket that backs GCR rather than the other bucket. This is the biggest surprise of our billing report this week, I'd say. Yeah, biggest negative surprise.
A: Okay, so I think that will be fine for now. The next topic: I think we can ask Linus how things are going with the container image promoter.
D: Yes. So I guess, as Tim mentioned, the most important thing that's happening in two weeks' time is the vanity domain flip. This is the domain switch from the old google-containers prod to the new k8s-artifacts-prod.
D: I forget how much has been communicated to the community, but internally we know that the switch won't happen instantly, because for various reasons we purposely do production rollouts slowly. It will probably take around four days.
D: I think that was the number that was given to us. So really, the kickoff event will be on April 1st (not April Fools, it's for real), but it won't be any dramatic switch where it's instantly flipped. It's more like: have we done all the checks and preparations to make sure that switching to the new prod is okay? That check will happen in that meeting.
D: I think it's 10 o'clock on April 1st; maybe I should add it to the calendar somewhere. A few days before that, probably around March 27th, which I think is the Friday, we will stop making changes to google-containers, to sort of put it in a freeze, a read-only freeze. This really impacts mostly Googlers who also contribute to open source. They have already been notified, but I will probably send out another email or communication, probably today, because it's two weeks from now. So that's the domain flip. We have not gotten any big pushback, or any people screaming or complaining about it.
D: Yet. So I don't know, maybe we'll get some feedback this time around when I send out the email today. But apart from that, I guess the other couple of items are: one, the backup of the new prod, that is k8s-artifacts-prod.
D: Just so everybody knows, there are roughly 30,000 unique images in production, in the new prod, and the way the backup works today is that it backs up all three regions: Europe, Asia, US.
D: So that's actually times three, so 90,000 images. And you saw this, Aaron, well, you probably saw it: I changed the timeout from one hour to 12 hours, and that still has not fixed the problem, apparently, because I checked the logs for the last run and it failed after roughly two hours. So I think I'll have to change how we back up things, because now that I think about it, the problem is probably the way we back up things right now, which is copying all 30,000 images in each region.
D: Maybe that's not the best way to do things. Off the top of my head, I think just doing one region would be enough, if we know that all three regions are identical, which they should be. So we need to make that work without exceeding quota. We might even have to do a sleep of one hour or something in the job to make GCR happy, because I don't know how else you would create the new images. Just so that everybody knows, the way the backup works is that essentially we just do a dumb copy of every image into a new prefix, which is a timestamp: year, month, day, hour. So there will be a slightly new prefix each time. That means that even if, let's say, nothing changed for all 30,000 images, GCR would still need to basically create 30,000 metadata entries, database entries, for each tag or each prefix; that's how their internal backend works, I mean, as far as I know. So we still hit them with 30,000 "you need to do these changes" operations. Even if there's no delta for us, there's still a large delta for GCR, and I think that's why we're exceeding quota.
D: So that's another option. Maybe I need to ask them, "them" as in, I guess, the GCR team. But also I feel like... yeah, go ahead.
D: I mean, the point is, the backup script is very simple: it's literally gcrane cp -r, that's it, and we just give it a new prefix, so that GCR does the deduplication; they do all that work for us. So it's literally one line of code that we wrote. We could optimize and say: okay, look at the last backup entry, look at the delta, and only do an incremental backup.
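The incremental approach sketched here (diff the last backup snapshot against production and copy only what changed) could look roughly like this. This is an illustrative helper, not the actual promoter or backup code; the function name and data shapes are assumptions:

```python
def backup_delta(current, last_backup):
    """Return only the images that actually need copying.

    current and last_backup map image path -> digest, e.g. as produced
    by listing a registry.  An image needs copying if it is new or its
    digest changed; unchanged images are skipped, so the registry sees
    write operations only for the real delta instead of all ~30,000
    entries on every run.
    """
    return {
        path: digest
        for path, digest in current.items()
        if last_backup.get(path) != digest
    }
```

With no changes at all, this returns an empty dict, i.e. zero write operations against GCR for that run, which is exactly the case the full-copy approach handles worst.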
D: That would certainly save us time. So that's another option that we could go for, but we should write down all of these options.
H: We're currently doing the backup job for all the regions. So the smallest fix would be: run it in one region, with a different check that just verifies that all regions are in sync, because that should be doable without the copy operation. Basically just read and then compare hashes, so that we basically say "is the US in sync with everything else?", which it should be. That check should be there anyway, in theory, and then only do the 30,000-image copy once.
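The read-only sync check proposed here, comparing digests across regions without any copy operations, might be sketched like this (hypothetical helper; region names and data shapes are illustrative):

```python
def regions_in_sync(digests_by_region):
    """Check that every region serves identical content.

    digests_by_region maps region -> {image path: digest}.  The first
    region listed is used as the reference.  Returns (True, []) when
    all regions match, otherwise (False, sorted list of image paths
    that differ or are missing somewhere).  Pure reads: listing
    manifests is cheap compared with 30,000 copy operations.
    """
    regions = list(digests_by_region)
    reference = digests_by_region[regions[0]]
    mismatched = set()
    for region in regions[1:]:
        other = digests_by_region[region]
        # Union of paths catches images missing from either side.
        for path in set(reference) | set(other):
            if reference.get(path) != other.get(path):
                mismatched.add(path)
    return (not mismatched, sorted(mismatched))
```

If the check passes, backing up a single region is as good as backing up all three; if it fails, the mismatch list says exactly what to investigate before trusting the backup.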
H: And overall, just thinking here, we could split it by what a realistic backup scenario in general would be: the most recent images, say of the last six months or so, get backed up within an hour, because that's probably the most important part; and then all the legacy images might only get a full backup on a Sunday, maybe over a time frame of like 24 hours. It's not perfect, but at least we don't block our access to GCR.
D: Yeah, I guess we'll have to consider those ideas as well. Maybe separating the different classes of images, because the legacy ones won't ever change, true. On the other hand, the other subprojects will grow once we change the domain name, but that's something to keep in mind. I guess for now we can turn it off, because I have a feeling, like what Ben was suggesting earlier: unless we change it to optimize the number of copies, the backup will probably take over an hour, maybe two or three hours, to do a full copy of even the 30,000 images. So I have a feeling that will exceed quota.
C: Do we know what specific quotas we're dealing with here?
D: Okay, so that's that, I guess, for now. Just to sum it up: I think I can turn off the backups until we get a better-optimized way.
C: This is what motivated me to try and get a build cluster up and running in this group's project, so that you could play around with running the job there. It might end up getting the nodes in that build cluster banned, but that wouldn't disrupt the cluster that Prow runs on. Yeah, correct.
H: Instead of turning it off completely: what's the general attack vector we're trying to avoid here?
C: Whichever node the job runs on runs out of GCR quota, which basically gets that node banned for what seems like longer than an hour, and we are then unable to update Prow if Prow happens to be running components on that node in the cluster.
E: Well, first thing: we're going to have to get it off of that cluster, because it's blocking us from managing all of the rest of the CI infrastructure. When we spin it back up, though, it probably will take a while to figure out, I think, even with some kind of optimized thing.
E: Probably just all the read calls to all those images is a lot, and you could maybe rate-limit it, but then, as we get more images, the backups will be less and less able to keep up to date. It probably needs a different approach, but we should experiment with that somewhere that is not going to get our main infrastructure banned.
E: We can ask, but I'm pretty sure this is a fixed quota, because this is not a thing that you're charged for; it's just for avoiding abuse, and copying 30,000 images by surprise is kind of abusive. Yeah, true.
H: Interesting. So crane itself does an efficient copy, as far as I understood the code: it only writes the index if the image is already there. But it's still the write operations that are the issue. So it's not really the copy in general; even an efficient copy, just because we back it up into an additional prefix instead of overwriting it, still creates the 30,000 index write operations, basically.
D: I think, yeah, that's what I was saying. I don't know the internal details, but when it finishes there will be at least 30,000 rows, or whatever it is, of database updates that have to happen, because the paths are different, the prefixes are different. But I would like to move on to another topic, which is the promoter's image auditing tool: there's a PR that Justin is looking over at this very moment.
D: That is the tool that will run in Cloud Run, to check for changes in GCR and to verify whether that transaction, that mutation of state, is good or bad. The PR fixes a bug in the current implementation, where we do not correctly check the child images of a fat manifest. So if you have a fat manifest with digest A, and it has children B, C, D, E, and we see a change of, say, an inserted tag or digest E...
D: Obviously digest E is not in the promoter manifest, because it's just a child; we only have digest A written in the promoter manifest, as the parent. It needs to check that properly, and I have a PR up; Justin's reviewing it. I would like to get it merged soon, so that we can have it working properly.
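The check being fixed amounts to treating a pushed digest as legitimate if it is either listed directly in the promoter manifest or is a child of a listed fat manifest. A rough sketch of that logic, with hypothetical data shapes rather than the promoter's actual types:

```python
def digest_is_expected(digest, promoter_digests, children_of):
    """Decide whether a digest seen in the registry is legitimate.

    promoter_digests: the digests written in the promoter manifest
    (e.g. the fat manifest A).  children_of: map from a fat-manifest
    digest to its per-architecture child digests (B, C, D, E), which
    in practice would be read from the manifest list in the registry.
    A pushed digest is expected if it is listed directly, or if it is
    a child of any listed parent.
    """
    if digest in promoter_digests:
        return True
    return any(
        digest in children_of.get(parent, ())
        for parent in promoter_digests
    )
```

The bug described in the transcript corresponds to only performing the first check, so child digests like E trigger false alerts even though their parent A is promoted.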
D: I've been getting emails, by the way, for the new k8s-artifacts project. The current auditor, when it runs with that bug, will scream and yell and say: hey, you have all these images being uploaded and they are not verified. They've been mostly around the new PRs, because we get PRs once every day or two; the recent ones have been from, was it, staging cluster-api? So I've been ignoring those alerts. I haven't checked every single alert, every single different digest that rings a bell or creates an alert, because that would be too much work. Instead, two weeks from now, on the day we do the flip, before we do the flip, I will check that the stuff in the promoter manifests aligns with what we have in production, just a quick check. That's actually very quick and easy to do, just reading.
A: Okay, we'll look at it elsewhere. So there is no other topic I can see here, but I want to give an update on what I was doing. I was focusing much more on documentation, as I said before, and I'm doing the analysis of the current infrastructure provisioning tooling. I'm trying to understand all the roles, IAMs, etc. So I will be asking people a lot of questions, because I want, when I finish...
A: The case is that there is no document like that. There are four things which are running; of course, when you dig into the scripts you can find them, but that's one of the things I'm working out. This is gcsweb right now, this is the staging one... one second, I forgot.
A: But my goal, and what I want to push for with this analysis, is to also move us to use Terraform for that purpose, because right now everything which we are deploying to this new cluster is done manually: we are putting the manifests in the repository, but the manifests need to be applied by a human who has permissions at this point. That is what I want to change when we move to Terraform.
H: So basically, all the Kubernetes configuration would then be Terraformified, to also be applied within the aaa cluster, right?
A: If it's possible, because, as I said, right now when you are creating one, you need to run the script called ensure-namespace, and inside you need to put your project name, and when the script is fired, your namespace with the role and everything is created for you. Then you have access to this particular project and namespace as a human, and you need to deploy it there yourself. And there is no connection: you need to put your code and manifests into the k8s.io repository, and then something will apply these changes to the cluster; everything is done manually at this point. So this will be the next step. First we need to put the things into Terraform; then I want to suggest: okay, we have the current infrastructure resources in Terraform, let's move it further.
A: The next step would be to actually suggest using Terraform and automating this, to apply these projects with Terraform. The step after that, and this is a kind of new thing: because we wrote our own tool for the groups, for the Google Groups management, we could also do that with Terraform using the G Suite provider. So instead of having one big groups.yaml file, we could do it in a way that I feel is better and easier to read and maintain in the future using Terraform. It would remove all the bash scripts for us; it would remove the need for having groups.yaml and the scripts for provisioning these groups; it would also remove the need for manually applying the project to the new infrastructure. So we'd have three or four things in one place, done by one solution.
H: Okay, so we'd basically have a reconciliation loop running terraform apply for the whole cluster, including keeping in sync with G Suite.
A: We can do it in multiple ways. This is the thing which we can discuss, because I don't want to have only one configuration which does everything. I would like to have the possibility, for example, to just apply one piece: there is a new project going into the cluster, let's just run the resources for that purpose. And it would also be helpful if, maybe in the future, we decide: okay, it's safe enough, everything is working correctly.
A: So this is exactly what I'm doing. I just sent a link with what I'm analyzing: which scripts are creating which components. When I finish that, it will be a great starting point: these are the components which are being created right now, so what are we doing with them? I'm not saying that we have decided to do everything with Terraform; that will be the point where we discuss it.
A: First, to at least know the state of the world right now; then move to the discussion: okay, we know what we need and what we are doing with it; then we can agree, or not, about Terraform for the things which already exist; and when we finish that part, then we can talk about automating the deployment of the projects, or automating the groups, etc.
H: I agree that Terraform probably cleans up a lot of how everything needs to be written, moving away from the bash part. But we'd just be moving to a different language, or at least that's how I interpreted it: without the reconciliation solution being there, we just shift around the problem of how it's written, not how it's deployed.
C: So I actually kind of meant the opposite, I guess, or my preference is the opposite: as we iterate on some kind of reconciliation loop, I would rather it continue to be done by humans, in case anything goes completely wild or crazy. I am wary of turning on automation right away. It tends to accelerate the cycle with which we can... it tends to accelerate shooting ourselves in the foot, in my experience, if you haven't really had the chance to exercise it.
C: So what I was asking for was... Bart, I am reading through your stuff; it is extremely thorough, but I think I was asking for something a little more like a runbook or handbook for the humans who are involved in the reconciliation loop today: how do they know what to do? Because if we write down what they do, then (a) it might be possible to onboard people to help with this in case those critical folks become unavailable, and (b) it just lets us know what the state of today is, operationally.
A: Yeah, I agree, and I want to have that kind of documentation. I didn't plan to do it partially, just for what we need done right now. I was thinking about first analyzing...
A: ...what the state actually is right now, so which components are there; and then, with that in mind, inside this analysis I'm also putting links to where we are creating these components. The result should show us what is in our infrastructure right now. You can partially check it by going through the audit files, but I was doing that last week and it is kind of tricky: not everything is there, and you need to be a little bit aware of how it's structured, etc.
A: But I agree about the documentation. I will think about quickly doing something for people, but overall my main goal would be to get proper documentation of the components: why these components exist, why these IAMs are provided, etc.
C: Yeah, okay, all right. I will try to help. I'll see if I can reach out to some of the folks who run the things, and see if I can get information out of their heads about how they do what they do.
A: I think that when I was digging into the scripts I kind of got an idea. I didn't run them myself, I don't have permissions, but I have an idea of what needs to be done and when. So if you have direct questions, I can definitely at least point you to which parts of our scripts are doing that kind of thing.
A: But what I know right now: we are already using Terraform to provision the aaa cluster and the two node pools which we have; people are running the auditing script; people are running the bash scripts for ensuring the storage things and the namespaces thing; and everything else is done manually.
H: There's one more thing. You reached out, I think, at the last meeting, for a short update on the redirector. We currently use nginx for the redirects, and the way it's set up, it's basically a single nginx config file. In the long run, especially for enabling SIGs to basically have their own redirects in the future, that just provides a single point of issue. So the idea was to base it on DNS, the specific tool being TXTDirect.
H: I couldn't be here last time, and the last two weeks were taken up, so I'm only giving a short update. Behind the scenes we're basically working on bringing the tool itself up to speed on the edge cases of the nginx configuration. We converted the whole nginx configuration into DNS records, and we're using that configuration to set up our tests end-to-end, so that the cases we already have in the current redirector are tested well, in different flavors, let's say.
H: Then, when we have all the edge cases backed out, in basically the next release, hopefully next month, we should be able to get everything up to speed to try out the first thing, which is not the full nginx file being converted, but only go.kubernetes.io, so the smallest part. We wanted to basically test the whole thing first, to have a good base to build on.
H: So we can basically be sure that the minimal thing works perfectly, and then, with that base, we can keep iterating on what makes sense to pull out of the nginx into something that is a bit more self-contained. With that, we basically have the option to give every SIG their own path, controlled by DNS records. As long as the SIG has access to their own domain or record, they basically have access to their own URL shortener if they want to, and because DNS is hierarchically structured, that maps well to the community and to community groups being self-organized, kind of. So that's the general idea, and hopefully that works out.
H: So basically, how it works is that the domain itself, say go.kubernetes.io, has this ability because there is some mapping between a path and a subdomain, and because subdomains can be delegated via a CNAME.
H: The parent record could CNAME a specific path, basically, to a SIG, and then the SIG has their own DNS zone file where they can basically add new records, and these records map to redirects on the parent zone, on the parent redirect engine, if you want to call it that.
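The mechanism described here, redirect targets stored in DNS TXT records so that a delegated zone gives a SIG control of its own links, can be sketched roughly as follows. This is a simplified illustration: the real record format is defined by the TXTDirect specification, and the zone contents below are made up:

```python
def parse_txt_record(record):
    """Parse a TXTDirect-style TXT record such as
    'v=txtv0;to=https://git.k8s.io;type=host' and return the target URL.
    """
    fields = dict(part.split("=", 1) for part in record.split(";") if part)
    if fields.get("v") != "txtv0":
        raise ValueError("unsupported record version: %r" % fields.get("v"))
    return fields["to"]

def resolve(host, zones):
    """Resolve a redirect for a host.

    zones simulates DNS: it maps '_redirect.<host>' names to TXT record
    strings.  In the real setup, go.k8s.io could CNAME a subdomain into
    a SIG-owned zone, so each SIG edits only its own records while the
    single redirect engine serves everyone.
    """
    record = zones.get("_redirect." + host)
    return parse_txt_record(record) if record else None
```

Because the lookup key is just a DNS name, delegating a subtree to a SIG is the ordinary DNS operation of delegating a zone; the redirect engine itself needs no per-SIG configuration.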
E: Are we actually planning to allow people to make unmoderated, SIG-specific links? Most of the places where we have SIGs doing things on their own, it's sort of namespaced. I think for go-links that would get kind of weird, like go.cluster-lifecycle.k8s.io or something; in most other places there's just a group that manages this.
H: So the first step, and that's the first idea, is just having a better mapping, a non-flat namespace. Instead of everything having to go through basically Tim and Dims, as currently with the nginx configuration, we could have the ability to say: okay, everything that is a specific path for, say, SIG Contributor Experience...
H: Yes, and so it's still the same instance. It's one instance running that is doing the redirects, for now, and that's the short-term goal. It's still going through the same DNS records, but how the mapping works might be nicer split up, even if it's still the same people, because of how we manage the DNS records.
H: Currently we could allow subdirectories to be controlled by their own OWNERS file, so Paris could come in and say: okay, everything that is contributor-related in that redirect could be managed by SIG Contributor Experience instead.
H: True, but we probably have to separate the short term, the midterm, and the long term. The short term is getting better control: not just a single nginx file with a lot of edge cases, but something that might be more contributable to. The midterm part is maybe splitting it out to different groups that can review changes. And then long term, if we ever decide to, we could open up the general option for SIGs to basically push their own redirects: say SIG Release wants a specific redirect for a release or something like that; they could push that and review it on their own, because it's under their namespace in the redirect area. But that's not planned, let's say; that's just an option being thought about while implementing the first two stages.
C: Okay, that helped me understand where we're at and where we're headed. Thank you.
H: Thanks. If there are any questions, I'm happy to go into detail, not here, but ping me or something.