From YouTube: Kubernetes WG K8s Infra - 2019-08-07
GMT20190807 153339 k8s infra 1920x1070
B
With the billing report — I'm furiously typing to bring it up — everyone should have gotten it, or anyone who wanted to subscribe to it. It goes to... which is a very general mailing list whose name I don't recall. Everyone should have gotten it in their mailbox on Monday; it gets sent to that mailing list every Monday. It still uses Comic Sans — I'm debating swapping it out for a guest font of the week — but it should be pretty accurate.
A
Yeah, so link to the mailing list. Yeah, I see it's on the mailing list, so everybody gets it. Papyrus is definitely a great suggestion for a guest font, Tim. Also, they put out a monospace version of Comic Sans recently — I saw a Comic Mono, if you want to go old-school. This is why we need everyone to be able to edit the proportions.
A
Okay, so here's the board. I have it filtered down to the issues in the milestone "ready to migrate", which is everything we want to do before we're ready to migrate stuff. I'm guessing we don't have any updates on per-namespace billing information, which I believe is a feature we need to enable — last update was June 26. Tim, you need to play with this, yeah.
A
Storage analysis — we need an analysis of what the storage is. Honestly, I feel like maybe we should just turn... maybe we should just find out how much storage is going to cost us by turning this on and seeing it live. That seems like the best simulation of storage traffic and artifact download that I can think of.
A
I'm not necessarily calling this closed, but the way I see this playing out is: we really are interested in pushing forward with the artifact promotion stuff, so eventually we'll get to a place where people are downloading those artifacts, and we should presumably see some spikes somewhere. And if it's even not that reasonable — great. But it's not such a spike that we really care about being granular.
A
While I'm doing this — dims, do you mind, excuse me, do you mind taking notes for any of your items that come up here? Okay, thanks! It's super helpful. Okay! Next, things in the area of access management: IAM dump scripts. There's a pull request that's sitting out here that's ready for review. This is going to help us better understand: what do we have? What rights does... what have — oh yeah.
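As context for what such an IAM dump amounts to, here is a minimal sketch (not the actual PR's code — the project names and file layout are illustrative assumptions) that builds the `gcloud` invocations for exporting each project's IAM policy as JSON:

```python
# Minimal sketch of an IAM "dump" pass: for each project, build the
# `gcloud projects get-iam-policy` command that exports its policy as JSON.
# The project name used below is a placeholder, not a real wg-k8s-infra project.

def iam_dump_command(project: str) -> list:
    """Return the gcloud argv that dumps a project's IAM policy as JSON."""
    return ["gcloud", "projects", "get-iam-policy", project, "--format=json"]

def dump_plan(projects: list) -> dict:
    """Map each project to the command that would audit it."""
    return {p: iam_dump_command(p) for p in projects}

if __name__ == "__main__":
    for project, cmd in dump_plan(["k8s-infra-example"]).items():
        print(project, "->", " ".join(cmd))
```

Checking dumps like these into version control is what would let the group answer "what do we have, and who has which rights."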
C
I mean, that's great, and it gives me good confidence, so I will worry less about whether it works or not. I'd like to understand it for myself, yeah, and make sure that all the flags I expect to be set are being set — which is a tall order, because I don't even know what that list is. I have to go back to gcloud and check the list again. Okay, thank you. That's all I'm really looking for.
A
Okay, then we have sort of a separate issue that's about roles — RBAC roles — for access within the cluster, whereas IAM is about access to everything within the project. Again, I'm gonna suggest punting on this for now. Yeah, we can't run a scraping job until we actually have the script merged to use — not the job.
C
If this is — I would say, from my point of view: if we're good with it, we turn it on and we shift over — what's the one that we have running the publisher? — we shift the publisher over to the new repo, and then we start with low-hanging fruit from the old internal cluster, like gcsweb and those sorts of easy things, and merge a few of those over, make sure that we're happy with that, and then I would say we would—
C
We would want to build up our playbook if we're going to start having more people paying attention to it and more people having access to it. I'd like to understand what our monitoring and alerting and reaction to problems is going to be. Right now, you know — like, k8s.io goes out and it pages me. So I'd say the next step, once you have the cluster up, is figuring out production readiness.
A
Okay, so you have an issue to burn down and recreate the k8s-infra cluster now, which is what we would do using Terraform — yep — and migrate all the things over. And then the thought is we would then need to enumerate some follow-up steps around monitoring once we burn things down. Yes — anybody?
A
All right, my two favorite umbrella issues, which I really feel like need to be retitled, which are about setting up a GCS bucket and setting up a GCR bucket. dims, did you even un-assign yourself from this one? I kind of feel like I don't even know what this means anymore. I have used it sort of as the proxy for "we want to do artifact promotion", and I think it also is related to the fact that GCR is like a loose wrapper over GCS, or something like that. So—
D
Sorry, I was doing a bulk un-assign, and this was one of those that got in — I did that inadvertently. So the GCR/GCS seems fine, the process that we have now. I did add an item to open discussion so we can talk about it later, but I don't think we have anything much left to do here. We have a process, and a few of us are able to run the script, and we are able to, you know, allocate ACLs and give people access. So we are—
C
Until we have a generalized promotion process, what I want to be thoughtful about is whether we're going to give out a bunch of individualized GCS buckets, or whether we want to actually have sort of a top-level bucket that we subdivide — you know, subdirectories underneath — which is what the promotion process would provide. And I don't think we've had enough demand for it yet to really say which one we want to do.
B
Doesn't feel like we do — maybe I'm just ignoring it, I think.
D
We are handling like 70% of the cases right now, so I'm happy. We have the automation stuff working, and whoever is coming to us, we can turn around quickly and give them what they need right now. So we are fine. So one example that came up for the one-off was the SIG Release team came and asked for a bucket, and, you know, that got handled outside of this process — which is fine.
A
So, in hearing words like this, it sounds like what's lacking is documentation. Like, it sounds like SIG Release kind of happened to know the right people to poke and ask real nice for a GCS bucket, but I don't know that we have written down that, like, this is a thing you can do — you can ask us for a GCS bucket for special purposes.
A
Let me see if I can enumerate what I think happens right now. Right now, I think, if we give people a GCS bucket, we require that access to that bucket be controlled via a group that is listed in the groups config — yeah — and so we are saying that anybody can get a bucket, but access to it has to be controlled exclusively by our setup.
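The "access only via a group" rule just described boils down to a single IAM binding per bucket. A rough sketch of it (the group name, bucket name, and role here are illustrative assumptions, not the project's actual conventions):

```python
# Sketch of the rule above: access to a community bucket is granted only to
# a google group from the groups config, never to individuals. This builds
# the `gsutil iam ch` argv that would create that one binding.

def group_binding_command(bucket: str, group: str,
                          role: str = "objectAdmin") -> list:
    """Return the gsutil argv that grants `group` `role` on `bucket`."""
    if not group.endswith("@kubernetes.io"):
        # Enforce "controlled exclusively by our setup": groups only.
        raise ValueError("access must be controlled by a kubernetes.io group")
    return ["gsutil", "iam", "ch", f"group:{group}:{role}", f"gs://{bucket}"]
```

Anybody can get a bucket, but the only principal ever bound to it is a managed group, so membership changes happen in one place.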
C
I don't think that's a problem — I think it's good. So staging, I think, is well understood. We have this "enable prod storage" script, which we've sort of overloaded for the GCS side of things, but I don't think that's actually the right answer — or if it is, it needs to be codified a little bit more.
C
Sorry — nothing's wrong with it. I mean, it was built to handle the GCR side of things, then we sort of bolted GCS onto it. It's got a couple of special cases for the test situations and for the prod situations, and I just don't feel like I've thought about it enough to say I'm completely happy with it the way I am with the staging stuff.
A
Groups — yay. Okay, thank you for helping me learn all of that. Okay, so the GCR thing is kind of like the proxy for "how are we doing with container image promotion these days", and 28 days ago I remember Amy rattled off this great list of stuff that we thought we needed to do. I see some of these are closed, some of these are open. Is this the relevant list of issues to walk through our current blockers for image promotion?
E
This one — I keep dropping the ball; I'm sorry about that. It's on my to-do list to figure out, like, a Google Doc for the options that we have. But yeah, let me know if, like, someone else wants to do it, or if it's an absolute blocker, but I'm trying to make sure I get this done. So basically, the idea is for the GCS buckets: if the image promotion process, like, accidentally deletes it—
B
I think we could very quickly build what I would call a very, very bad backup solution, which is: I could loop over them all and docker pull or save, which I know is gonna be horrifically inefficient, but I'm very encouraged. So it's not as expensive — but I mean, because it doesn't... because it wouldn't need to—
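The "very, very bad backup" idea — loop over every image and docker pull/save it — might look like this sketch (the registry and image names are placeholders; this just generates the command pairs rather than running them):

```python
# Dry-run sketch of the naive backup loop described above: for every image
# in a registry listing, emit the `docker pull` and `docker save` commands
# that would snapshot it to a local tarball. Horrifically inefficient, as
# noted in the discussion, but trivially simple.

def backup_commands(registry: str, images: list) -> list:
    """Return pull+save command pairs for each image in the registry."""
    cmds = []
    for image in images:
        ref = f"{registry}/{image}"
        # Derive a filesystem-safe tarball name from the image reference.
        tarball = image.replace("/", "_").replace(":", "_") + ".tar"
        cmds.append(["docker", "pull", ref])
        cmds.append(["docker", "save", "-o", tarball, ref])
    return cmds
```

The robustness questions raised next in the meeting (resumability, partial failure midway through) are exactly what this naive loop does not handle.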
F
Exactly, yes. But like, to do that over — I don't know — to actually have that happen over the real thing, with the kind of volume that we have, with, you know, Google Container Registry having many, many images: how long did it take? How robust is it? Will it, you know — does it recover if you kill it halfway through the thing while it's, you know, doing the snapshot, you know, 70% through? All of those considerations, I think, have to be thought out. But anyway, yes — I am going too—
A
—much into this. I hear your points, so this will require further discussion, but—
F
Let's see how much further we can move this boulder. So just one quick note, though: I would like to re-echo Tim's comments on this originally, which was that we should have something — even just, like, a written set of instructions of "this is how you, you know, do a manual backup", or something like that — as the first step, I think. Short of having an actual solution implemented, just having an official policy — it could be a README or something — is already way more than anything we have. So, okay.
A
Right, okay. Yeah, I'm not gonna play — I'm honestly not super clear what this is about, and suggest we punt on this, because right now I think it's like: have we ever run something container-image-promoter-related in any trusted place?
F
I think the multi-run script that we have — that shell code — is probably not as useful anymore. Even though it does, like, this optimization thing where it only runs against the manifests I have changed, the way the new code works, it does some optimizations as far as, like, making as few read calls against the repository as possible. So I don't think we even need the shell code anymore, so that's gonna be changed. Sorry, I have 2% on my battery meter.
F
The one update I have is: so the other missing piece here is, like, turning on e2e tests and making sure that it works for the actual use cases that we do use it for. The current e2e test that I have in the e2e binary is just a single promotion, I guess — one manifest. I need to add another test case that has, you know, multiple manifests — like, I just need to add more test cases — and to turn that on. I don't know if we have an open issue for that.
F
I mean, the update from Justin is that he merged his code for, like, file promotions into the promoter. So it's a different binary, but I think later on we'll probably put that behind a common interface, like a CLI — maybe we'll use Cobra or something for flag handling — so it's unified under one, like, interface. But that's the only other update as far as the promoter repository.
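The "common interface" idea — one CLI fronting both the image promoter and the file promoter — could be sketched like this. Python's argparse subcommands stand in here for Cobra purely to illustrate the shape; the subcommand and flag names are assumptions, not the real promoter's interface:

```python
import argparse

# Illustration of putting two promoters behind one CLI: a single entry
# point with an `images` and a `files` subcommand, each carrying its own
# flags, the way Cobra subcommands would in the real (Go) tool.

def build_cli() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="promoter")
    sub = parser.add_subparsers(dest="command", required=True)

    images = sub.add_parser("images", help="promote container images")
    images.add_argument("--manifest", required=True)

    files = sub.add_parser("files", help="promote file artifacts")
    files.add_argument("--manifest", required=True)
    return parser

if __name__ == "__main__":
    args = build_cli().parse_args(["images", "--manifest", "m.yaml"])
    print(args.command, args.manifest)
```

One binary, one flag-parsing convention, separate code paths per artifact type — which is the unification being proposed.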
B
We now have an image for the binary promoter in the container image promoter repo. We have a work-in-progress PR to use that for some kops binaries. We actually have the prod bucket set up right now, so it will serve right now on artifacts.k8s.io. I need to follow up with Tim to find out, like, more about why he was unhappy with it — it sounds like he was more unhappy with the structure than... I will confirm with him, but my understanding is—
B
We were going to do that and then see how it goes, and whether we need to then, like, change the structure — and kops is fine with that. We have a sort of waiver: a mirror doesn't have to have the artifacts on it — a secondary mirror doesn't have to have the artifacts on it — so this would be a secondary mirror. So if it goes away, that's not a big deal for anyone, and we can basically start to get traffic and start to understand how it works.
B
So I was helping to do that, but the two big blockers that I know are: we actually need to get, like, the promoter — the file promoter — into a cron job, however we're running the container promoter; and we would need to get billing reports, which we can do more of when we actually have something to report on. Like, we have the aggregated billing reports today on GCS utilization, but once that number grows, we'd probably like to break it down, drill down into it, which we can do once we have traffic.
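The first blocker — getting the file promoter into a cron job like the container promoter — amounts to a small Kubernetes CronJob. A minimal sketch of such a manifest, built as a dict (the image name, schedule, and args are placeholder assumptions, not the project's actual config):

```python
# Sketch of "file promoter in a cron job": a minimal Kubernetes CronJob
# manifest that runs a promoter binary on a schedule, the way the container
# image promoter is already run. All names here are illustrative.

def file_promoter_cronjob(image: str, schedule: str = "0 * * * *") -> dict:
    """Return a minimal CronJob manifest for running a file promoter."""
    return {
        "apiVersion": "batch/v1",
        "kind": "CronJob",
        "metadata": {"name": "file-promoter"},
        "spec": {
            "schedule": schedule,  # standard cron syntax: hourly by default
            "jobTemplate": {"spec": {"template": {"spec": {
                "restartPolicy": "OnFailure",
                "containers": [{
                    "name": "promoter",
                    "image": image,
                    "args": ["--manifest", "/etc/promoter/manifest.yaml"],
                }],
            }}}},
        },
    }
```

Serializing this dict to YAML and applying it to the cluster would be the whole deployment step.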
B
I guess — I guess, actually, we're gonna have to move it into a different directory. I agreed to split the file promoter into a separate binary, so we now have to run a different executable. I am guessing that, therefore, we should split it into a separate directory, which will be good anyway, because currently the directory is called, like, k8s.gcr.io. So now we can have, like, a GCS one, and it will be more logical, and I will repoint my PR to that.
A
My goal here is not to block this, but I'm feeling like what was happening was there was not really that much, like, upfront documentation — description of the design, that sort of stuff — but it sounds cool. Like, I'm very fine with us just turning it on and then seeing what happens billing-wise within a week. I find it difficult to believe that we would really blow through that much of our credits in a week of downloads. I'd look at the week's worth of downloads and be, like—
B
—wildly surprised, right. It will be substantial compared to the current billing — it should be two orders of magnitude higher. That sounds great. Yes — but I will also do a description of how the... So we want a prow job with a README: a description of the architecture for the file promoter — which doesn't have a name yet, so yes, that is up for naming — and, I guess, a description of how to use it, right? Like, when you want to promote a binary.
D
So one of us will pick up the PR, merge it, and then, you know, run the job by hand — so that's working out fine. Then the next one that is working well is the update to the groups for the ACLs — that's working fine as well. There was one hiccup where there was an update to a group, but the ACL didn't get reflected in the bucket for some reason, and Tim had to recreate the bucket for it, actually, you know — and we couldn't figure out exactly why. You know who it was?
D
Somebody who was — yeah, Jason DeTiberus was trying to upload an image and it wasn't getting uploaded, even though he had previously uploaded an image a couple of weeks ago. So the bucket had to be recreated, essentially, and we still don't know why it went bad. Other than that one hiccup, the rest of the uploads to the staging GCR repositories went fine — I haven't heard of any other instances. Then the other one was promotion from the staging repository to production; there were a couple of hiccups.
D
There were at least some code changes from Linus which went in and needed to be promoted — they had an image promotion pending for the prow job to pick up, and it got stuck. So in the meantime, what we had to do: we tried running the prow job, and we had to ping test-infra to rerun the prow job, and, you know, that took care of some of the issues until the changes from Linus landed. So it wasn't too bad.
D
We just have to ping test-infra — whoever is on call — and get them to rerun the prow job. And we knew where to go look for the prow job, so we could go check if the job had succeeded or not. I wish there was a better way to do this, but this is fine for now. At least we have the logs, and Jason could go by himself and look at the logs and see if the job succeeded or not. So it's fine, I will say, just—
A
Real quickly on that point: one of my teammates, Randy Christ, is working on, like, a rerun button that you can push. We're still trialing its rollout, so it's still the same set of people, but we are trying very diligently to click a button instead of run a script — it's really painful for some of us — but then ultimately we can roll that out so that other groups of people can rerun other specific jobs.
D
So yeah — so I talked about the groups, I talked about the staging repositories, I talked about the image promotion. So all these things I'm able to do now. What I want to do is to see if we can get somebody else as well. So right now it's me, Christoph, and Tim who take care of these requests, but over a period of time I want to add more people to it.
D
If anybody else is interested in this kind of stuff, please let me know and we'll try to get you in. So we should have, like, an on-call rotation — similar to the oncall, I guess — and, you know, at some point we'll have to come up with an SLA: how long is it gonna take, who is going to be able to do it, and stuff like that.
A
So elsewhere, I am going to reach out to the product security team to ask them about the OpsGenie instance that they use, to better understand if that is something that folks like us could use for on-call rotations, to also grow the pool of people who can answer the test-infra on-call bat-signal. And my other thought here is handling DNS requests — it certainly sounds like it's a person-as-a-service type of deal, and I'm wondering, like, what is preventing us from having that be—
D
Do you see what I'm saying? — I do. So, so that was one way to do it. The other way to do it is, like — similar to the publishing bot, where all we are running is, like, a deployment or a pod, where the publishing bot is just looping and waiting for incoming requests and kicking things off. That would be the easier way to do it, but then the problem with that is the logs are not available to the people who, you know—
A
That'll be good. With my SIG Testing hat on, I will advertise the fact that our meeting is now an hour long every other week, so we would probably have room to talk about this — Tuesday at 10:00 a.m. Pacific. Okay, I'll be there. Like, I'm not opposed to opening up the pool of people. I know I definitely would like to be one of those people — I keep getting pulled into lots of other stuff — but certainly the group and DNS handling stuff sounds like—
D
—toil that we can automate away, all right. There are some gotchas here too, right? For example, when we get a request for a specific project to be created — a GCR bucket, right — the length of the string of the project name has to be within a specific limit. So essentially what happens is: I try things out, and if I see a failure, I tell people to go fix whatever they submitted in the PR, right?
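The length gotcha just mentioned can be caught before anything runs. GCP project IDs must be 6 to 30 characters, start with a lowercase letter, contain only lowercase letters, digits, and hyphens, and not end with a hyphen — so a presubmit could validate requested names up front. A small sketch of such a check:

```python
import re

# Validate a requested GCP project ID before a PR merges, instead of
# discovering the failure when the job runs: 6-30 characters, lowercase
# letters, digits, and hyphens only, must start with a letter, must not
# end with a hyphen (the documented GCP project ID rules).
PROJECT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")

def valid_project_id(project_id: str) -> bool:
    """Return True if the string is a legal GCP project ID."""
    return PROJECT_ID_RE.fullmatch(project_id) is not None
```

Running this over incoming requests would turn the current try-it-and-report-the-failure loop into a review-time error.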
D
So if it's a postsubmit job, then we will have to, like, create new PRs to fix the previous problem, kind of thing. So it will be, yeah, somewhat of a problem until we stabilize all the input validations. Especially, the problem with the GCR and GCS requests is that people are updating scripts right now — it's not YAML files, right? So we don't even know what changes people are going to make in the scripts.
A
Yeah
so
I
agree,
it's
appropriate
to
grow
the
pool
of
people.
You
can
like
help
figure
this
out
and
the
set
of
people
who
are
showing
up
to
this
meeting.
Maybe
the
most
motivated
people
of
all.
If
you're
interested
in
helping
out
because
I
know,
Tim's
and
Christophe
and
Tim
are
names
that
I
see
everywhere,
so
the
like
less
bottlenecking,
we
can
do
an
incredibly
high
traffic
people,
the
better
off
we
will
all
be
but
tier
2.
A
Your comment about an SLA — I feel like we are in very, very, very early days before we can get that, because we are still, like, experimenting and figuring things out.
A
So I think it's perfectly reasonable and appropriate to say we're still kind of operating on a best-effort basis, and if you — if somebody feels like this project requires an SLA, we would welcome and encourage your participation to help us get to the point where we can articulate and measure, you know, SLIs or SLOs or SLAs — whatever S-L acronym you want to use.
A
Well,
alright,
I,
don't
think
I
have
anything
else
other
than
do
you
say
happy
Wednesday
to
all
of
you
good
to
see
you
all
again.
I'll
see
you
in
my
only
one,
oh
yeah,
the
shirt
by
the
way
I
just
felt
like
I
needed
to
do
it,
because
today
is
my
work
from
home
day
because
of
meetings,
so
I'm
gonna
unite
with
you
all
separately.
In
my
own.