From YouTube: 2019-07-17 SIG Cluster Lifecycle Cluster API office hours
A: So, as always, we have a standard code of conduct policy. Because this meeting is so large, we highly recommend that folks use the raise-hand feature; if you're unfamiliar with that, there's a little bit of documentation at the top, under the participants section. Starting to walk through the agenda: again, if you could add your details here, that would be really useful. Jason, you're up first, yes.
D: We need an agenda, so I think most likely it's going to be what's in and out of scope for v1alpha3, plus, for the things that are in scope, trying to put together a design, or at least an initial design, for those. I'm guessing that will probably need a day and a half to two days, trying to keep travel to a minimum, since budgets are fairly tight. But please fill out the survey.
E: This is obviously not the first time this has come up in the Kubernetes community. There is some support for zones and regions built into core Kubernetes itself, but there is not really a consistent way, in the Cluster API community at least, to provision clusters that are resilient to failures in, you know, single domains.
E: For AWS, I know in the CAPA provider you can specify an AZ and basically get all your resources there, and there's a little bit of something similar for CAPZ: you can get a random AZ, basically. So, there's a lot of words here, but I think the proposal basically boils down to: I would like some way for us to specify the behavior, and for users to get consistent behavior, in terms of the way that their cluster resources, like VMs, are spread across these domains. I think the absolute simplest possible proposal is either there's...
G: What we have said, not very formally, but we have sort of said, is that a machine deployment would be per zone, or whatever your most localized failure domain is, so that it better integrates with things like autoscaling and that sort of thing. So, for example, you would typically have three machine deployments in an HA, three-zonal configuration. I don't know if that helps, and it certainly glosses over the control plane piece, yeah.
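The pattern described above, one MachineDeployment per failure domain, can be sketched roughly as follows. The manifest is illustrative only: the zone field lives in the provider-specific portion of the spec, and the exact schema varies by Cluster API version and provider.

```yaml
# Illustrative sketch: one MachineDeployment per zone, for an HA
# three-zonal worker layout. Names and the availabilityZone field
# are assumptions, not a fixed Cluster API schema.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: workers-us-east-1a
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          availabilityZone: us-east-1a
---
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: workers-us-east-1b
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          availabilityZone: us-east-1b
# ...and a third MachineDeployment pinned to us-east-1c.
```

Scaling then happens per zone, which is what lets tooling such as the cluster autoscaler reason about each failure domain independently.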
E: Jessica, same for the control plane scenario, right: you would potentially need an HA control plane that was still three machine deployments, so you would need to be aware of that information, even if you're saying that that's the way users should go forward, right. And that also makes another point: you need to be aware of that information, in fact, at the level of a machine set or machine deployment, not at the cluster level, to even have a story there, right.
A: So I guess the question here is, like: I agree with the supposition. I think it's highly fragmented; the language, the terms we would want to use, would vary, and there'd also be a question of: what's the importance level of machines versus control plane? I think the control plane, as Justin mentioned earlier, is obviously higher importance for most consumers who care about some level of fault tolerance. They're gonna care about machines too, but then you take away the capabilities of existing solutions like auto-scaling groups. So I don't exactly know how to solve this one, other than, you know, if a provider wants to implement an example and potentially propagate that back to the core, that seems reasonable: show, this is an example, show how it's common across providers, and then go forth from there. Andrew, you have a note here?
F: Yes, thank you. Yeah, I agree with everything that's been said. I wanted to add, you know, CAPV is looking at doing some affinity and anti-affinity rules that might fall under the aegis of fault tolerance; but fault tolerance, while it routinely might imply something like HA through the use of load balancers, I would call that network fault tolerance, right. You could still, with something like vSphere, have VM fault tolerance through something like the DRS feature. So I want to make sure that we recognize that.
E: So, yeah, I hear a couple of concerns, I guess. In terms of a proof of concept, I can definitely take a stab at something and come back. I think, in terms of provider-specific details, my goal is to define how this would work at the Kubernetes node level, right, at the machine deployment level, and then, you know, it's up to the providers to implement what those details look like.
E: But there are some behaviors that a user should expect, in terms of: my control plane is maybe HA, or maybe not; my machine deployments are isolated to an AZ, or, you know, a fault domain, or they're not. Those are things that, as a user of Cluster API, I would need to at least know, or have the information exposed, so I can build higher-level orchestration myself if I need to, I think.
A: There already exist semantics for doing some of this inside of Kubernetes proper, through the use of labels and anti-affinity rules. So long as you have the capability to attach anti-affinity rules, like who was mentioning that earlier, Justin, you can have that capability built in under the hood, behind the scenes, and I think that's a reasonably generic way to approach this problem without inventing new verbs and nouns. Okay.
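For reference, the existing Kubernetes semantics being pointed at look like this at the pod level; the idea in the discussion is to apply an analogous labels-plus-anti-affinity mechanism to machines. The Deployment below is just an illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spread-example
  template:
    metadata:
      labels:
        app: spread-example
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two replicas land in the same zone.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: spread-example
            topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: app
        image: nginx
```

The soft variant, preferredDuringSchedulingIgnoredDuringExecution, is the "soft versus hard" distinction that also comes up in this discussion.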
E: That's fair, I mean. I think part of the reason I was bringing it here is that this is something that, in core, right, with the cloud providers, they're trying to push out of core, and this is very much an infrastructure-aware place that is provisioning these machines. So it seemed like a nice niche that it might actually fit well into, in terms of the provider specificity, in terms of something that we're already dealing with. I definitely understand the concern, though.
A: The current scheduler has a bunch of rules around affinity and anti-affinity, and soft versus hard; we'd have to basically nail down what the set of things is. I think if you do it with that context in mind, you might be able to solve your problem in a reasonable way and get a subset of that total steering apparatus that exists inside of scheduling. That's my hot take; I'm open to other suggestions or ideas as well.
D: I just wanted to bring this up again, since I don't think there are any contentious comments in there. I know we're waiting on a diagram update to PlantUML, and there were maybe one or two other small edits, but if you haven't had a chance to look at this proposal, please do. We're highly motivated to get this approved so that it can go into v1alpha2, because it will allow us to get rid of the actuator interface from the cluster entirely.
H: Nope, nope, I mean, I think that the only internal change left is the one that you made yesterday, and then I will update the diagrams, because right now they look really retro. I think that's all I owe; I think there are no other changes to do, so I think people can take a look at what is there right now, and it will still make sense.
I: Hi, so I'm Cecile, for those of you I haven't met yet. Thank you to everyone who already looked at the doc, and thank you for your comments. I think we were mostly looking for guidance at this point on how we should proceed. There seem to be two things that mostly came up after the doc review.
I: The first one is how this virtual machine scale set implementation differs from Kubernetes autoscaling concepts, and I think we do agree that there's some overlap there; unfortunately, that's just because of the way they implemented, you know, scale sets in Azure. It does do some things like literal scaling, but we do want to use the Kubernetes primitives, for sure, even though there are some features, such as reimaging, for example, that using the VMSS infrastructure allows us to do but that we can't do with loose VMs.
I: So that was the primary motivation for this proposal. And then, in terms of whether we want to do this in the cluster, oh, sorry, the Cluster API Azure provider, and do this specifically for Azure: I would really like to see this be shared across providers, if we can make it generic enough that we can reuse it, and it's not, you know, Azure-specific.
I: The reason that we started this proposal here was that I was at least under the impression, and sorry if I'm missing some context here, that the question of extending machine sets in Cluster API had been brought up in the past, and that that wasn't really something that we wanted to go for.
G: Yes, I have not yet had a chance to review the proposal; I apologize. I was certainly one of the people, and I don't think there's consensus on this, but I am one of the people, I think, in the camp that says we should not back machine sets with, say, ASGs, or, I guess, VMSS, my objection being the idea that we want to define...
G: There are some conflicts with how Kubernetes thinks about these things, like the autoscaler, and like rolling updates, and they sort of battle each other in ways that are sort of surprising. We're starting to see some of these problems already on the AWS side, because of leveraging auto-scaling groups instead of having the machine set directly create Machine CRD objects, which then go and create machines on a one-to-one basis.
J: Great, so I just want to add that I want to make sure, when we talk about Azure and VMSS as it pertains to the provider here: the scale functionality is not something that we'll use, so the automatic scaling, you know, will just be turned off. The primary motivation behind us wanting to back machines with VMSS is really...
J: This is the state of the art in terms of, you know, virtual machine management right now on Azure, and a lot of the new features that are coming out for Azure require, or have, the more desirable functionality when you're using VM scale sets versus standalone VMs, or even some of the older iterations there. So it's not just that; I just want to make sure that we understand that we don't want to have, you know, different autoscaling competing.
J: You know, the goal for what we would try to implement is really that we would defer to Kubernetes to control autoscaling, and we would really just be using the VMSS to allocate the instances. So that's one thing. And then the last thing I'll add is: the document really outlines how the implementation would look given Cluster API as it lives today, and, yeah, we've gotten various feedback on that.
J: You know, we want to help make the developer experience good for, you know, other providers that are facing similar challenges, and, given the state of the art in terms of the patterns you guys are using, that's really what we're looking for here. And really the one special thing in the document, again, is that it demonstrates how we would do it now, given the abstractions that are there; it's not really intended to be the solution.
B: Yep, so the first concern that I would have would be around control plane node management right now, because that can't easily be abstracted away into, like, a set of instances that have the same type of template, specifically around the need to init the first one versus join. It almost seems like that use case, at least today, still needs to be handled by individually backed instances, backed by a Machine, or abstracted by machines, as we start to formalize the management of that.
B: The other thing is: I was originally a proponent of, or, you know, in favor of, extending machine sets to support this type of functionality, but I've been persuaded otherwise for a few different reasons. But I think we should have some type of primitive that we support in Cluster API itself for providers to, you know, be able to back resources with something similar to VMSS or auto-scaling groups or whatnot, and that way the user can decide, you know, how they want their instances backed.
G: That's the one that I think might work, in terms of, like, how you can actually square the circle and be compatible. So, to summarize, it looks like you are saying: we will still create the Kubernetes Machine objects, but we will aggregate those creations, similar creations, into mutations on a VMSS, rather than individually actuating them as create-instance calls or the equivalent. And that, to me, I think, makes sense and can work. I think there are challenges around naming, but to me, if that is what you're proposing...
G: That is the design that can work in Cluster API, and, if that is what you're proposing, I would love to see that; I think that makes sense. It may be that we find it doesn't work, but I think it is the one that, to me, makes sense, if that's what is being proposed and I've understood it correctly.
J: Yeah. So, just to fill in the blanks for your sketch: I think one of the ugly things that comes along with that design, and maybe this is the thing that we address as part of a really tight scope, is: let's say, as part of a machine deployment, you want to create ten machines.
J: I'll somehow have to track that Azure provisioning to make sure that it completes, probably in a goroutine, right, and then I'll get another notification, right, from another one, you know, for this other machine. And really what that means is: I want to be able to say, ahead of time, okay, machine set, give me ten machines. And in this design, what we'll do is have something to kind of track...
J: You know, the bunch of these requests that are coming in at the same time, batch them up together into a single call to VMSS, and then do the bookkeeping around that. But, you know, I think it would be a little bit nicer, it would feel a little cleaner, if we had a way to, ahead of time, say: okay, this machine deployment is requesting ten machines at once, let me go get ahead of that, as part of the machine set's machine creation process.
L: Michael, then Justin. So, the pattern that you just described actually might be a pretty good fit for what we did in the bare-metal actuator. It's probably not worth trying to dig through the weeds here and describe how those might actually be similar, but we might be able to reuse that pattern for your use case. I'd be interested in maybe following up after this call, if you'd be interested, to brainstorm about that.
G: Justin. I'm just gonna say, I guess: I absolutely agree that how you take these independent threads, or goroutines, and merge them into one is indeed a challenge. I think, though, that that's a good challenge, because that's something we can solve once; we can figure out the strategy for doing that, and it is not a user-facing thing. So I think, if that works, then we're in good shape. The ones that I know are hard are things like naming.
G: That is a good problem, because it's one that we actually keep within our scope, and we can solve it together. And, yeah, there may be things from bare metal or, you know, other areas, and we can figure it out, but I don't see that as a real blocker, yeah.
A: I question whether or not we can change some behavior in machine sets so that your batching becomes automatic, so that you don't have to do a separate sort of batching by yourself; that way it could potentially support that capability across providers. But I, as well as others, I think, would have to read through this a little bit more.
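The batching being discussed can be sketched as a small helper. This is purely illustrative Go (the function and names are assumptions, not Cluster API or CAPZ code), showing how N individual machine-creation requests could collapse into a handful of bulk VMSS calls:

```go
package main

import "fmt"

// batch groups machine names into chunks of at most max, so that N
// individual create requests become ceil(N/max) bulk VMSS calls.
// Illustrative only: a real implementation would also coalesce
// requests arriving over a short time window.
func batch(names []string, max int) [][]string {
	var out [][]string
	for len(names) > 0 {
		n := max
		if n > len(names) {
			n = len(names)
		}
		out = append(out, names[:n])
		names = names[n:]
	}
	return out
}

func main() {
	reqs := []string{"m-0", "m-1", "m-2", "m-3", "m-4", "m-5", "m-6", "m-7", "m-8", "m-9"}
	for _, b := range batch(reqs, 4) {
		fmt.Printf("one VMSS call for %d machines: %v\n", len(b), b)
	}
}
```

Collapsing the concurrent goroutine notifications into one such call is the part the speakers identify as the hard, but solvable-once, non-user-facing piece.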
J: I think, for us, a month and a half is a pretty tight timeline, so, you know, we don't want to rush ahead too quickly. I think I'm not totally caught up on what's planned for v1alpha2 versus v1alpha3, but I would prefer we take the approach where we target v1alpha3 with this, and in the meantime, you know, we can implement this...
J: If the proposal is, you know, sound, we can implement it without any of the, you know, goodness, and then, as an exercise for v1alpha3, we can use that as the starting point and really demonstrate how we can delete the code that we think is ugly and start with something nice.
A: Right, you pretty much read my mind there, I think. That's probably the better approach: I don't want to overload v1alpha2 as it currently exists today, because we already just added a bunch of features with Pablo's proposal. That's not to say that people can't work on async POCs, but if you want to extract it back into CAPI, and the possible implementations that would inherit from there, that's a much bigger change. So I love the idea: folks, read the proposal, try to collect feedback and comments.
A: Thank you. So, again, please read over the proposal and try to give feedback; it would be highly useful. Are there any other last questions, comments, complaints, or concerns on the VMSS proposal? All right, I will stop sharing my screen, and I will help Andy go through the new issues.
B: I think we have two options. We can probably relatively easily automate it today using GitHub Actions, but I would rather us wire it into Prow, with the rest of the automation configuration. Right now, what we basically need to do there is sync with somebody, you know, like Catherine or Aaron, and try to get a secret mounted that would have permission to push to the proper staging repo, and then we could automate pushing an image on build.
D: Okay, so, Jason, I know you've done a lot in this area, but you're also fairly heavily loaded. I'm happy to let you take this if you want, but I think this also could be an opportunity for someone else to get some experience with the test infrastructure and secrets, and building and pushing images. So I'll leave it up to you, if you want to take this, or if you want to help pawn it off on someone here.
D: What's your GitHub handle? Codenrhoden, c-o-d-e-n, okay, Travis. I'm just gonna put this in v1alpha2, because, if you're already doing this, you probably can get this done pretty quickly. Okay, next up is from Daniel: bootstrap cluster cleaned up despite failed pivot. So, Daniel, are you on the call? Yeah? Okay, do you want to walk us through this one? Yeah.
N: Sure, the short of it is that I spun up a cluster with CAPA on AWS, with clusterctl, and I had, you know, I think, misconfigured something, maybe the name of the SSH key. In any case, clusterctl created the bootstrap cluster, then proceeded, you know, to create the control plane, proceeded to create the initial cluster nodes on AWS, and then it tried to pivot, and then it failed, I think because it couldn't get...
N: You know, the kubeconfig from the workload cluster, so it couldn't do the pivot, obviously, because it couldn't talk to that API, and then it errored out and quit. But, as part of quitting, it cleaned up the bootstrap cluster, so then I no longer had the objects or the resources that describe that workload cluster anywhere, so that state was gone.
N: Destroying the bootstrap cluster, you know, destroys, sorry, destroys the Kubernetes objects, the CAPI objects, that describe resources that have already been deployed. It's good to have that state around in case you, you know, need to start from the middle, or maybe delete those resources, you know. Okay.
D: We can put it in the milestone, I think. Probably, since you noted that it's in a defer call, we can adjust the logic to check the errors and not clean up the bootstrap cluster if there's an error, so I don't think that would be too controversial.
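A minimal sketch of the defer adjustment being suggested, in Go; the function and messages are illustrative, not actual clusterctl code. The deferred cleanup checks the named return error and skips teardown on failure, so the bootstrap cluster survives for debugging:

```go
package main

import (
	"errors"
	"fmt"
)

// createAndPivot sketches the clusterctl flow: today the deferred cleanup
// always deletes the bootstrap cluster; here it first checks the named
// return error and keeps the bootstrap cluster around when the pivot fails.
func createAndPivot(pivotFails bool) (err error) {
	fmt.Println("bootstrap cluster created")
	defer func() {
		if err != nil {
			fmt.Println("error occurred; keeping bootstrap cluster for debugging")
			return
		}
		fmt.Println("bootstrap cluster cleaned up")
	}()
	if pivotFails {
		return errors.New("pivot failed: could not fetch workload kubeconfig")
	}
	return nil
}

func main() {
	_ = createAndPivot(true)  // keeps the bootstrap cluster
	_ = createAndPivot(false) // cleans up as before
}
```

The named result parameter is what lets the deferred closure observe the error set by the failing pivot path.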
D: Hopefully, okay. The last one we have, let me just refresh real quick, is "update lodash", which I've got two open PRs for: one for master and one for release-0.1. So, I will say that I did it without really knowing what I was doing in terms of npm.
A: A PSA, and, yeah, it's not about this topic, but yes: I normally get asked how the CAPA accounts are coming. I managed to create a first CAPA account, and I think it is all good, so I'm gonna go and create some others. It is a slightly manual process, but I think I have got all our ducks in a row. We have a Google Group that is fairly locked down; we can create the accounts, but, whatever they are, they are accounts that are part of a bigger organization. So I am working through that process.