From YouTube: 20201201 sig cluster lifecycle
Description
Kubernetes SIG Cluster Lifecycle for 12/1/2020
A: Hello, today is December 1st, 2020. It's been a while; sorry, I haven't been able to attend for a while, but this is the standard SIG Cluster Lifecycle call. I'm taking a look at the doc and seeing what the agenda is. There are a couple of things I do know we should talk about, one of which is going to be getting the sub-project...
A: Yes, okay, so let's walk through it.

C: Yeah, so the CNCF is opening the floor for more people to participate in the call-for-papers (CFP) proposal submission review process. This was potentially only sent to the SIG leads mailing list, but we can delegate to others who wish to participate in the review.
C: You don't have to be a lead, exactly, according to this. So if anybody wants to, they can engage with the CNCF; there's a form that you can fill out. You can say that you represent SIG Cluster Lifecycle and be a reviewer for the CFP for the next KubeCon.
A: Is there any...? I should take a look here.
A: Yeah, but the question I have is: this is people's time, you know, and there should be some kickback, like benefits for being in the program. This is the part that I was looking for.
C: So the benefit is, if you do the 100-to-200-proposal review... I think this is okay; this is reasonable.
A: All right, so that all seems legit, fine. Maybe the ask here should be for folks to go talk with their individual sub-projects. The sub-project leads can talk with the different groups and figure out if they're interested; that was the whole purpose of this PSA thing.
A: All right, so all the details are here; I'll send out the information. So fill out the form or whatever, and try to talk with the individual sub-projects. If you're a sub-project lead, please talk with the rest of the people to see if they're interested in engaging, because there are kickbacks. You know, it's always hard when somebody asks you to volunteer for something and there's no incentive to do so, but here there is at least some incentive.
C: I just wanted to mention that reading 100 abstracts may seem like a lot, but it's really not that much. What's not clear to me is what the exact criteria are. Maybe another entity after you will get the submissions that you approved; I don't know the exact details about the process. They link to a separate document, which I found.
A: Maybe we can follow up next time and get some more details. Maybe we can ask Stephen Augustus what the process is, just to get some feedback.
D: Yeah, I also don't think it's purely score-based, because, for example, if the top two talks were identical, they would pick one of them (presumably, hopefully, the higher-ranked one). There is a track session which is not purely algorithmic but aims to produce a good breadth of content as well, and it uses this as the input.
A: All right, so has this horse been sufficiently beaten?
C: Yeah. I think Tim already sent the email to the SIG Cluster Lifecycle mailing list; we should open a discussion there. If somebody has questions, just email the same thread.
A: Yeah, all right. Should we go on to sub-project readouts? Are there any other group topics that folks want to discuss?
A: Once, twice, three times... all right, sub-project readouts then.
C: For kubeadm, we found some bugs in the CoreDNS migration library. I also wanted to raise this because I know Cluster API is using the same code, or at least a similar variant of it. Basically, the issue that I mentioned in the doc unraveled a number of bugs in the CoreDNS migration code in kubeadm, and at least a couple of PRs were already sent; one of them was backported to 1.19 as well.
C: Eventually, we want to move CoreDNS to an external add-on, whether it's an operator or something else. This would let us potentially just bump the version of an image and that's it; but right now we have to go through the whole milestone and backport process because of some of these bugs. Potentially Cluster API can also consume the same add-on, because otherwise we now have to maintain two separate instances of the logic.
C: Yeah, and just: what is the status of the CoreDNS operator integration, by the way?
D: Still in progress. We had some strong feedback that we need to tighten up the operator model, both in terms of the RBAC permissions granted to the operator and in terms of having predictability about what actually gets run in the cluster, so you'll see some changes going into cluster-addons to essentially refactor things.
D: I think we have a better RBAC model for it: refactoring a large manifest into an RBAC component and a deployment (the actual thing itself), which means the operator needs very few RBAC permissions and doesn't significantly change the deployment model. The other piece, I think, is trying to give predictability, and so there's also some attempt to understand what an operator will do, at least for some simple cases; and that relates...
D: There's Evan Cordell from Red Hat; Red Hat has a PR around this, using container images as a delivery mechanism for manifests. I will put that in the notes when I find the link, but that's one approach to how you would discover what is in an operator-type thing.
C: So, potentially, these expanded, undesired RBAC rules: are they an artifact of the operator framework, or something else?
D: Yeah, the core problem was that in order to create a Role or ClusterRole (in order to create RBAC objects, that is), you essentially need all the permissions that you are granting, and so the operator inherited a superset of those permissions.
D: The Google Summer of Code participant this year, Somtochi, basically realized you don't need to do that.
D: Instead, you can pre-create the RBAC roles at the same time as you create the operator, and it doesn't actually change the deployment scenario: if you were to expand the RBAC roles, you would need to redeploy the operator with those expanded roles in order to apply the permissions anyway. So it doesn't change anything, and we might as well just pre-create them, which means that in this model the operator needs very few permissions, or many fewer.
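The permission-superset problem described here stems from RBAC escalation prevention in Kubernetes: a client may only create a Role granting permissions it already holds (unless it holds the escalate verb). A minimal, hypothetical sketch of that subset check; the (verb, resource) tuples are a simplification for illustration, not the real PolicyRule types:

```python
# Sketch of Kubernetes' RBAC escalation-prevention rule: a creator may only
# grant permissions that are a subset of its own. This is why an operator
# that creates RBAC objects ends up needing a superset of those permissions,
# and why pre-creating the roles alongside the operator avoids the problem.
# The (verb, resource) tuples are a simplification of real PolicyRules.

def can_create_role(creator_perms: set, requested_rules: set) -> bool:
    """Return True if every requested rule is already held by the creator."""
    return requested_rules <= creator_perms

operator_perms = {("get", "configmaps"), ("update", "deployments")}
coredns_rules = {("get", "configmaps"), ("list", "endpoints")}

# The operator cannot grant ("list", "endpoints") because it lacks it itself:
print(can_create_role(operator_perms, coredns_rules))  # False

# Pre-creating the role out-of-band (by something that is effectively
# cluster-admin, such as the installer) means the operator itself never
# needs the superset:
admin_perms = operator_perms | coredns_rules | {("create", "roles")}
print(can_create_role(admin_perms, coredns_rules))  # True
```

This is only the subset logic in miniature; the real check lives in the Kubernetes RBAC authorizer.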
D: So in the operator manifest (the manifest for the operator), we would provide both the Deployment or StatefulSet for the operator and the RBAC permissions it would need pre-created. We end up with this notion of slicing up a manifest in these ways, which we haven't done in the past, and I've PR'd a sort of tool that might help with that.
C: Okay, and potentially this requires changes in both the underlying framework and also the CoreDNS operator itself?
D: It actually doesn't require that many changes. It would require changes to the manifests to move stuff from one to the other, from the CoreDNS manifest, as it were, to the operator manifest. But essentially you would give it read access to the cluster roles and it would verify that they exist. There are maybe some changes to the underlying library, but actually there aren't even that many changes required to the kubebuilder declarative pattern.
D: Okay, let's see. But generally, we'd love to figure out... I'm going to look at this issue. We'd love to make progress on the kubeadm integration as well. I feel like people have felt in the past that they got a bit stuck, and so we've even prioritized kOps first; but if we can get it into kubeadm first, that would be wonderful too.
C: Yeah, we just didn't make the decision. The CoreDNS operator was presented some time ago, but I think the main questions were: how do you upgrade the manifest on disk that the user modified to deploy a custom configuration of the operator? We had a discussion with the Cluster API project about that. We just don't know yet how to properly upgrade operators. I guess I also saw this page.
D: I agree. I mean, I think the answer is that it does become the responsibility of kubeadm or kOps to upgrade the operator, and I think the advantage of an operator is that it doesn't have any magic dependencies. In other words, it should purely be a kubectl apply upgrade; there should be no other sequencing. The role of the operator should not require complicated logic.
D: The operator itself should not require complicated logic to upgrade, and you use the permissions you have as the orchestrator of the cluster. For example, if we had an "operator operator", it would need basically cluster-admin; essentially, kubeadm would be the operator operator, because we assume it effectively has cluster-admin. But you could certainly write an operator operator if you wanted to.
D: There are two models. In one model, we essentially execute the operator directly, in a one-shot mode, to output the YAML to standard out or something.
D: The other model is that we gloss over the fact that the operator can make changes to the manifest, and look at where the operator is pulling from. While the initial version of operators only supported manifests on disk in the container image, now we support a variety of sources: HTTPS, Git, and shortly Evan's image distribution mechanism.
D: Assuming you didn't change the image, which I don't think operators are going to do, it would suffice to look at the underlying manifest. But we could also support a one-shot mode; we just haven't pulled on that thread yet.
D: Yeah, the operator should draw from the version specified, and we're going to have a fairly straightforward mapping, so that kubeadm or kOps or any other consumer could themselves do the same mapping to find the underlying manifest, because it's all in the same field and it follows a known format. So we'll have a library or something.
A: I think we should write this down, because it's going to affect so many things, and really, having it written down would be good for our own mental health (at least my own), because as I try to manage twelve thousand things at once, I'll be able to refer to a document and make sure that I'm not insane yet. Getting there, though. Does anyone want to... who should take point on starting to write this proposal?
C: I mean, it feels like me, Justin, and potentially the CoreDNS maintainers should take point on that.
E: I have some curiosity about the responsibility around RBAC, but we can discuss that on the proposal. I only want to point out that there is an issue, which I will link in the document, in kubeadm, where we started to discuss a potential plug-in model to basically allow choosing which DNS or which proxy you want to use.
C: Yeah, this is an idea that Fabrizio brought up: that kubeadm add-ons, at least, can be plugins in the way kubectl does plugins; they can just be binaries that live on the nodes.
C: Instead of using the operator pattern for add-ons. It's a completely different take on add-ons, of course, but it has its own caveats, like: if the add-on is external, how do you distribute it to users? Especially if it's CoreDNS.
D: Sorry, just to riff on that idea: if we had a one-shot mode in the operator, whereby you would specify the image name, maybe we could make that work. It also solves the distribution problem, right, in that we know how to distribute images: you would run the operator in one-shot mode, you'd have some protocol, but you'd get it on standard out, and then you would have your manifest ready to go.
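A rough sketch of what such a one-shot mode could look like: run the operator binary with a flag, and it renders the manifest it would otherwise apply to standard out. The flag name and the render_manifest helper are hypothetical illustrations, not an existing cluster-addons API:

```python
import sys

# Hypothetical sketch of a "one-shot" operator mode: instead of running a
# reconcile loop and applying changes to the cluster, the operator renders
# the manifest it *would* apply and writes it to stdout, so a consumer such
# as kubeadm could capture, inspect, or apply it itself.

def render_manifest(corefile: str) -> str:
    """Render the (simplified) CoreDNS manifest the operator would apply."""
    return (
        "apiVersion: v1\n"
        "kind: ConfigMap\n"
        "metadata:\n"
        "  name: coredns\n"
        "  namespace: kube-system\n"
        "data:\n"
        "  Corefile: |\n"
        + "".join(f"    {line}\n" for line in corefile.splitlines())
    )

def main(argv) -> int:
    if "--one-shot" in argv:
        # One-shot: print the manifest and exit instead of reconciling.
        sys.stdout.write(
            render_manifest(".:53 {\n  forward . /etc/resolv.conf\n}")
        )
        return 0
    # (Normal mode would start the reconcile loop here.)
    return 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

The point of the sketch is the protocol, not the content: a consumer runs the container, reads stdout, and gets a plain manifest with no cluster access required by the operator.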
A: This is basically something that Red Hat has talked about many times in the past: you're basically containing all the metadata about it. It's "I know that I know myself." So when you run it, it's basically saying "this is what I am," and it contains the manifest itself that it would be distributing. So instead of you having to actually deploy a manifest, you can just run a single container that knows what it is; that container then drops the manifest or applies it. But that's very meta.
E: Just to give a bit of background: the issue I'm talking about is tackling another problem, which is that in Kubernetes today users are accustomed to an integrated experience between the control plane and add-ons. You use the same config, for instance, to set up the image repository for the add-ons as well. So there is a kind of integration between what is control plane and what is an add-on, and this issue proposes a model to let add-on authors basically plug into this integrated user experience.
A: All right, why don't we table this? I'm really excited to read a proposal and have the cat fight, where I can read the different ideas, because I think we could just gaze into the abyss on this topic for a while. So why don't we move along? That was basically the kubeadm update, which focused entirely on CoreDNS, then went into operators, then talked about the existential meaning of life (anyway, its purpose is to pass butter). So...
D: Certainly, thank you. Yes, we are trying to do a mini-rebrand to change from "kops" to "kOps", emphasizing the "ops" and de-emphasizing "cops". It feels like a good thing to do given our political climate, and also "ops" is more like what it is. So if you see me saying "kOps", that is why I started saying it. And in other, more technical news:
D: We had caught up a little bit, and now we're a little behind the ball on getting, I think, 1.19 out. We're having some challenges around the authentication things, which we've talked about in previous weeks, but we are basically trying to get 1.19 through the process and trying to pick the least-bad options to make that happen.
D: kOps... thank you. That's tabled for now; whenever we get behind the ball, everything else sort of falls by the wayside, but it is very much something I'm interested in, and yeah, we just haven't had a lot of time for it.
D: We're being relaxed about people using old or new names.
A: Does anybody want to talk about Cluster API? I know they're working on v1alpha4 stuff.
A: All right. I know they have a meeting later on today; I'll ask Vince to drop an update here and poke him about getting re-engaged in this. He might not know about the time switch yet. Is this the first one with the time switch, or the second one? I don't recall; I think it's the first one. All right, so let's reach out to the sub-projects to "re-verify our range to target... one ping only" and see if they respond. My Hunt for Red October humor is lost on people now.
A: So, it's too early... minikube?
H: Mostly, it supports the newest 1.20 beta, and it makes our JSON output more robust so that you can better ingest the output in an embedded environment. It provides better support for Parallels as a VM driver as well, and it adds a new feature called scheduled stop: you can basically ask minikube to stop in 10 minutes while you're deconstructing whatever environment you want, and then it will just automatically, in a cron, clean up everything whenever you want.
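Conceptually, scheduled stop is a deferred-shutdown timer: arm it, walk away, and the cleanup runs later. The sketch below illustrates that pattern in plain Python; it is not minikube's actual implementation, and the ScheduledStop class is a hypothetical name:

```python
import threading

# Sketch of the "scheduled stop" idea: arm a timer that runs a cleanup
# function after a delay, so the environment tears itself down even if the
# user has walked away. This mirrors the concept behind minikube's
# scheduled stop, not its real code.

class ScheduledStop:
    def __init__(self, delay_seconds: float, stop_fn):
        self._timer = threading.Timer(delay_seconds, stop_fn)

    def arm(self):
        self._timer.start()

    def cancel(self):
        # Called if the user decides to keep the cluster running after all.
        self._timer.cancel()

    def wait(self):
        # Block until the timer thread finishes (handy for demos/tests).
        self._timer.join()

events = []
stop = ScheduledStop(0.05, lambda: events.append("stopped"))
stop.arm()
stop.wait()
print(events)  # ['stopped']
```

In the real tool the stop function would shut down the VM or container and clean up state, rather than append to a list.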
H: We haven't really talked to kind that much recently. The code for our Docker driver resembles what kind does less and less over time; we write our own base image now that's not based on kind at all, or it won't be based on kind soon. So if that was the concern... otherwise, we haven't really talked recently.
A
The
reason
I
ask
is
that
the
projects
overlap
in
their
user
stories-
yeah
you
know,
and
ideally,
in
the
fullness
of
time,
I
would
love
to
have
like
a
single
tool
that
takes
the
best
ideas
and
kind
of
melts
them
down,
because
I
see
no
reason
to
have
n
tools
when,
when
the
user
stories
overlap
so
much
now
the
test,
automation
and
the
simplicity
of
some
things,
I
think,
would
be
great
to
sort
of
channel,
because
I
you
know
at
the
time
mini
cube,
didn't
support
it,
so
kind
existed
out
of
necessity
yeah,
but
I
think
that
in
the
fullness
of
time
I
would
love
to
see
these
projects
communicate.
H: Melt into one, yeah. I think we've never focused on testing Kubernetes as a use case for minikube. Now that minikube spins up pretty fast and supports non-VM drivers, there's nothing stopping us; it's just never been a priority for us. But yeah, I don't disagree with that. Also, kind is part of SIG Testing, I think, which makes it a little bit...
C: There's a problem in that, like you mentioned, it was created in a separate SIG that wanted to push things as fast as possible to get something working, and now the projects not only have separate SIGs, they also have separate brands established in the community. minikube is a brand; kind is a brand. So tearing down a brand will require...
C: ...changes. But yeah, I would agree that, ideally, SIG Testing should originally have come to minikube and contributed the Docker driver directly, and we could have used it for testing, yeah.
H: I mean, the Docker driver took like six months to write, just because our existing code base was not ready for it; so I understand why no one took that on externally. But I don't disagree.
A: I'd be very interested in Cluster API drivers for this, because they have a Docker implementation which actually doesn't even use kind; it uses portions of kind's library to be able to do this.
A: On the test automation stuff, I'd really like to see things leveraged across the board, so that all of the SIG Cluster Lifecycle tools have this virtuous cycle, this feedback loop, where they're using tools to test the other tools, which helps to harden the whole story.
G: All right, any other questions, comments, complaints, concerns? So, when you mention tools, what tools are you talking about? Is it the testing of the Kubernetes cluster, or is it the services associated with that?
A: It's the SIG Cluster Lifecycle tooling; out of necessity, different tools use things based upon their history. For example, kOps still has its own bootstrapping mechanism and other aspects of it, right? Ideally, in the fullness of time, I would love to see this melted down into strict layering, where the job of bootstrapping is one tool, the job of provisioning a cluster is one tool, the job of provisioning multiple clusters is another tool, and each uses the layers below it. Instead, we kind of have this weird...
A
I
don't
even
it's
like
a
it's,
a
weird
ball
of
yarn,
built
out
of
necessity
because
of
the
issues
and
timing
of
everything
for
the
most
part.
So
you
know
different
projects
were
at
different
states
in
the
past,
but
I
think
there
we
are
at
a
state
now
where
we
can
do
these
things.
It's
just
a
question
of.
If
do
we
have
the
resourcing
and
the
capability
to
make
that
happen?
A: All right, does anyone want to give any more sub-project updates, or should I try to wrangle other folks for other stuff next time? We still have Cluster API; we have all the other stuff that's not on the list.
A
Nobody
anybody
bueller
all
right
so
I'll,
try
to
reach
out
to
folks
and
try
to
see
if
I
can
get
a
better
attendance
next.
Two
weeks
from
now,
that's
probably
going
to
be
the
last
one
before
the
new
year
and
this
time
frame
works
for
me.
So
I
should
be
able
to
be
here
and
we
can.
Unless
there's
any
other
topics,
we
can
just
end
a
little
bit
early.
C: I added one topic to the group section. This was about the discussion of renaming "master" to "control plane". I'm not sure, Tim, if you know: there was a new working group created to work on this. Yeah, and they were able to produce a so-called architectural design record to make it more official, for renames like "master" to "control plane"; or "master", which can also mean an administrator in terms of Kubernetes RBAC, potentially being renamed to "administrators", and so on. These will have separate architectural records.
C
I
linked
the
master
control
plane
one,
and
so
this
is
kind
of
still
draft,
but
basically
they
gave
us
the
green
light
to
proceed
with
the
cube
adm
changes.
So
I
just
wanted
to
let
the
other
su
projects
here
know
that
it's
kind
of
official
at
this
point.
So
if
you
have
any
flags
fields
and
so
on,
you
should
just
start
the
deprecation.
A: Topics? Once, twice, three times. All right, please follow up on the list. I will try to get some emails out to make sure we get better attendance next time, and if I don't see you in two weeks, have a good holiday. Bye.