From YouTube: Secrets Store CSI Community Meeting - 2020-02-04
A: All right, welcome, everyone. Today is Thursday, February the 4th, and this is our CSI Secrets community call again. We are under the CNCF guidance for code of conduct, so again, if you're kind with everyone, you'll be okay. We've got a decent agenda, a couple of items on it, so without further ado we'll go ahead and jump into it. The first one, I think Rita is going to talk about some disconnect scenarios. This one's interesting; I've got some opinions about it, but Rita, when you're ready, go for it.
A: Yeah, I know, I've seen some chatter, obviously, you know, internally. I guess my concern, and I think I brought this up with Anish, is: are we getting into kind of an anti-pattern of what the solution is meant to provide? And then, would you consider something like a time-to-live for the secret? Like, what would be the acceptable duration of being offline from talking to your cert provider?
C: Yeah, actually, I brought this topic up with Tommy, and we briefly discussed this as well. I think the most obvious solution that stands out when we say offline is maintaining it in memory, right? But having it in memory would also mean we have to cache the access method. So we need to know how long the access method that's used for accessing the external secret store is valid, and then we need to make sure that the pod that's using that particular access method still has access to it.
B: Yeah, and I think, you know, the CSI driver today, if you think about it, is caching the content in a way, right? We don't ever check whether the access got revoked, unless you have the rotation feature turned on.
B: In which case, I guess, if the access were revoked, you wouldn't be able to retrieve the latest content or whatever. But I definitely think, yeah, to your point, that is an anti-pattern. So I'm curious: are other providers or plug-ins also facing similar requests, and as a community, what is our stance around this specifically? Is this a supported scenario?
D: I'll just give my current stance on caching, which has been: once it's, you know, in the application, it's in the application. They can hold it as long as they want; that's not up to me. It's like API middleware. I briefly considered caching, but yeah.
D: I was most concerned about continuing to hold the secret, and whether or not to hold the authorization chain for newly scheduled pods. I also don't have much information on what customer requirements would be, like how long they would expect outages to last, to where caching would be a protection against that versus just an optimization. Right, like if I have multiple pods accessing the same secret, rather than making many calls I can batch those up and only make one call to the secret manager. But yeah, I haven't been thinking much about supporting the disconnected scenario in the CSI driver.
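A minimal sketch of the call-deduplication idea mentioned above, assuming a hypothetical fetchSecret function in place of any real provider SDK; it uses golang.org/x/sync/singleflight so that concurrent mounts asking for the same secret result in a single upstream call:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/singleflight"
)

// fetchSecret is a hypothetical call to the external secret manager.
func fetchSecret(ctx context.Context, name string) (string, error) {
	// A real implementation would call the provider SDK here.
	return "s3cr3t-value-for-" + name, nil
}

var group singleflight.Group

// getSecret collapses concurrent requests for the same secret name into a
// single upstream fetch; callers waiting on the same key share the result.
func getSecret(ctx context.Context, name string) (string, error) {
	v, err, _ := group.Do(name, func() (interface{}, error) {
		s, err := fetchSecret(ctx, name)
		return s, err
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	val, err := getSecret(context.Background(), "db-password")
	fmt.Println(val, err)
}
```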
A: Yeah, I mean, I think it's something to think about. If you just look at a lot of the new architectures, the edge scenarios are coming up a lot more, you know.
A: But I guess, again, my opinion is: how do we keep the integrity of the purpose of the solution, you know?
B: Right, so the current experience is that as long as it can't talk to the key store, it would just fail to mount and the pod would just get stuck. And I think, depending on how long the disconnect is, that may or may not be acceptable to the application workload. Yeah.
D: I was also talking about this behavior: we could also change the behavior of the plug-in or the CSI driver to succeed mounts even if there are failures accessing secrets, and then it puts the responsibility for watching the health of the secret on the application.
D: So it's less good in that way, but it changes something from being a startup-critical dependency to, you know, an optional dependency that can become healthy later through the reconciler. But that does require application changes to gracefully degrade in that case.
D: I think something like that would definitely need to be an opt-in behavior, because I think it's much easier to use and debug if it just, you know, fails.
B: Yeah, so far the ones we've heard are basically, you know, the cluster temporarily losing connectivity to the store, and because of that, pod restarts would just fail to instantiate the pod; that's basically the gist. And yeah, we should definitely clarify how long the disconnect is, and maybe, depending on how long the disconnect is, that is okay. But that's why I'm raising this question, right: what if other people have this requirement as well?
A: You know, secure, or just, you know... but then you have another kind of profile of secrets that can get mounted, where you say: okay, I'm okay if this is not as secure as talking directly to your secret provider, in exchange for the caching functionality. So maybe it's not an all-up type of functionality that all of them get; you define certain secrets and say: okay, these are the ones I want to be available offline if need be.
D: I think we might be able to implement a lot of this through just the provider-specific configuration. I think I could probably add an option to the GCP one that's like, you know, don't error if there are errors accessing secrets, or set an acceptable driver cache time.
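A rough sketch of what such provider-level knobs could look like; the parameter names tolerateAccessErrors and cacheTTL are hypothetical and only meant to show how a plugin might parse them from the per-mount attributes it receives:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// offlineOptions is a hypothetical set of plugin knobs for disconnect handling.
type offlineOptions struct {
	TolerateAccessErrors bool          // succeed the mount even if some secrets fail to fetch
	CacheTTL             time.Duration // how long previously fetched content may be reused
}

// parseOfflineOptions reads the hypothetical keys out of the parameters map a
// provider plugin receives for each mount request.
func parseOfflineOptions(params map[string]string) (offlineOptions, error) {
	var opts offlineOptions
	if v, ok := params["tolerateAccessErrors"]; ok {
		b, err := strconv.ParseBool(v)
		if err != nil {
			return opts, fmt.Errorf("tolerateAccessErrors: %w", err)
		}
		opts.TolerateAccessErrors = b
	}
	if v, ok := params["cacheTTL"]; ok {
		d, err := time.ParseDuration(v)
		if err != nil {
			return opts, fmt.Errorf("cacheTTL: %w", err)
		}
		opts.CacheTTL = d
	}
	return opts, nil
}

func main() {
	opts, err := parseOfflineOptions(map[string]string{
		"tolerateAccessErrors": "true",
		"cacheTTL":             "30m",
	})
	fmt.Println(opts, err)
}
```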
D: So maybe the thing is to not be too prescriptive at the driver and explore this in the plugins. And yeah, like HashiCorp, if they're on the call, they might just recommend, you know, adding replication, because I think their products can have multi-datacenter replication, right?
E: Yeah, I am on the call. I'm sorry I missed the beginning of the meeting, so I don't quite have all the context.
B: Oh, so we're talking about the first item on the agenda, which is to discuss whether folks are hearing requirements to support disconnect scenarios. One is, we need to clarify what the requirements are and then see if there are some potential solutions we can offer.
E: What kind of disconnects are we talking about, all of the possible disconnects or something specific?
B: Yeah, so it could be a brief disconnect, where maybe repeated mount failure and then retry is okay; that's the first one. The second one would be quite a long disconnect, for which, as Tommy mentioned, maybe the providers can then provide a way to say: okay, if the timeout is this long, then return success for the mount and let the application deal with the secret aspect of it later within the application.
E: Yeah, it's an interesting question. I think I'll have to go and think about that one a bit, but yeah, I agree it would be good to get some requirements and guidelines and stuff together.
B: Yeah, and I think we have an issue where people were saying, hey, can you cache the secret in the cluster, right, so basically don't delete the Kubernetes secret once it's created? I think it's somewhat related to this: if that connectivity is gone, at least you have that secret in the cluster and then the pod can continue to use it.
C: Yeah, I think that's a good fallback mechanism, because even with caching the secrets in the provider, the problem is: if the provider restarts in the disconnected cluster, then basically we've lost all the contents. But Kubernetes secrets are persistent, so that way the user can say that in a disconnect we can still rely on the Kubernetes secret as a fallback, and the CSI driver can still succeed the volume mount.
B: And it's sort of an anti-pattern, like once you create it, it's forever on the cluster, which actually brings up another thing I was talking to Anish about: I wish Kubernetes had a TTL for secrets, right, where we can just tell Kubernetes, I want you to stick around for this long, but go delete yourself after this point in time. I think that would be super nice to have, but I don't think that exists today.
C: Again, if we want to handle this in the CSI driver, we could still do that with a TTL on the secret. I mean, if we say that for the Kubernetes secret the driver created, the driver will handle the lifetime, the user can configure a TTL, which we will add as an annotation to the secret, and then after the TTL has elapsed we'll just go delete the secret from the cluster.
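A minimal sketch of that TTL-annotation idea, assuming a made-up annotation key (secrets-store.example/expires-at) holding an RFC 3339 timestamp; nothing like this exists in the driver today:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expiresAtAnnotation is a hypothetical annotation the driver would stamp on
// the Kubernetes Secrets it creates.
const expiresAtAnnotation = "secrets-store.example/expires-at"

// shouldDelete reports whether a driver-created Secret has outlived its TTL.
func shouldDelete(secret *corev1.Secret, now time.Time) (bool, error) {
	raw, ok := secret.Annotations[expiresAtAnnotation]
	if !ok {
		return false, nil // no TTL configured, keep the secret
	}
	expires, err := time.Parse(time.RFC3339, raw)
	if err != nil {
		return false, err
	}
	return now.After(expires), nil
}

func main() {
	s := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{
		Name:        "db-creds",
		Annotations: map[string]string{expiresAtAnnotation: "2021-02-04T00:00:00Z"},
	}}
	del, err := shouldDelete(s, time.Now())
	fmt.Println(del, err)
}
```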
E: I don't know if this has been brought up already, but I guess another option is to have a configurable that the driver consumes that says: don't block pod startup on the secret material being mounted, and then, yeah, we can give that escape valve for applications to handle secrets being missing.
D: In the plugins rather than at the driver level, I think, just because it's easier. I mean, yeah, it could be both, but I think it could be implemented in plug-ins today, kind of treating the plugins as a playground for some of the options before making it driver behavior or making it into the driver.
E: Yeah, fair enough. I guess I was just thinking there was a kind of pod-level configurable, which I kind of see as more the driver's responsibility, but yeah, that's probably right: it's easier to remove features from plugins than from the driver.
B: Okay, this is a really, really good discussion. I think for the next step, let's create an issue for the disconnect scenarios and then ask users what their requirements are. That will at least give us a sense of how many people want this and what the acceptable behaviors might be, and then maybe we can come up with some proposals.
A: Okay, thanks, Rita. All right, so now we're going to move on to the proposal. Tommy, do you want to talk to us about the Secrets Store CSI driver file I/O consolidation? I've got your doc open here as well.
D: Yeah, so I mentioned this, I think briefly, one or two meetings ago. Right now the plugins are responsible for writing the files to the file system, and it turns out writing files to the file system can be trickier than one would hope; it's resulted in something like two file system I/O bugs so far.
D: For new plugins to be created, they will need to be mindful of the same issues that have been fixed here, so the idea is: can we solve this once in the driver to make writing plugins easier?
D: And to do so, my proposal here, or the initial draft of it, is: we currently have the mount response, where we have the object versions, which is kind of the metadata around the rotation of the files, and to extend it to include the actual file contents.
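A rough sketch of the shape of that extension, written as plain Go structs standing in for the provider gRPC messages; the field names are illustrative rather than a final API:

```go
package main

import "fmt"

// ObjectVersion mirrors the metadata the provider already returns today:
// which object was fetched and at what version, used for rotation tracking.
type ObjectVersion struct {
	ID      string
	Version string
}

// File is the proposed addition: the provider returns the file contents and
// the driver, not the plugin, writes them into the pod's volume.
type File struct {
	Path     string // relative path within the volume
	Mode     int32  // desired file permissions
	Contents []byte
}

// MountResponse sketches the extended response: the existing version
// metadata plus the new file payloads.
type MountResponse struct {
	ObjectVersions []ObjectVersion
	Files          []File
}

func main() {
	resp := MountResponse{
		ObjectVersions: []ObjectVersion{{ID: "secret/db-password", Version: "3"}},
		Files:          []File{{Path: "db-password", Mode: 0o400, Contents: []byte("s3cr3t")}},
	}
	fmt.Printf("%+v\n", resp)
}
```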
D: This also, I think there is a bug open about the permissioning of files, would provide a way to solve that. But one of the main drawbacks is that right now the plugins can read a secret, write the value immediately, read the next secret, write it, whereas this would require the entire contents to be in memory and passed over the RPC channel.
D: But to make this work, both the driver and the plug-ins would need to update their deployment limits to support the largest volume that is being mounted. So I just did some napkin math with some assumptions, like that I think it's rare to have more than 10 secrets in one volume, and about how many concurrent requests would be happening.
D: There is some stuff there: we can change the gRPC message size limits, but if you did try to do a mount that was over these limits, you could just repeatedly fail, or you could crash the process, and yeah, that would be difficult to get out of.
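For reference, raising those limits in grpc-go looks roughly like this (received messages are capped at about 4 MB by default); the 16 MiB value and socket path are arbitrary examples:

```go
package main

import (
	"google.golang.org/grpc"
)

const maxMsgSize = 16 * 1024 * 1024 // 16 MiB, an arbitrary example value

func main() {
	// Server side (e.g. the provider plugin serving Mount responses):
	// raise the limit on messages it is willing to send and receive.
	_ = grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)

	// Client side (e.g. the driver dialing the plugin's Unix socket):
	// raise the per-call receive limit so large responses are accepted.
	conn, err := grpc.Dial("unix:///tmp/plugin.sock",
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)),
	)
	if err == nil {
		defer conn.Close()
	}
}
```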
D: One other option would be to change from a unary request/response to streaming, where the response would be streamed to the driver one file at a time, but I think that might be harder to do in a backwards-compatible way, and it depends on the semantics of partial failures that the driver wants to enforce.
B: I think, yeah, the most relevant question is the same one that you just mentioned, which is: how does this approach affect the size of the secret that is supported? Because I don't think we ever really needed to think about that for plug-ins, right, because you just get the data mounted; but now that there is this communication between the two, we have to think about what the gRPC limits are.
D: I guess I'm also not sure, when the driver makes the tmpfs, does that get charged to the system, or who does it get charged to? The memory required by the file system, does that get charged to the driver pod, or is it just system overhead?
E: I would think the driver pod, because, yeah, in general memory volumes are part of the specified pod's memory allocation, and so it seems like it would be the same deal for the driver, but I'm not 100% sure.
B: Yeah, I never thought about this one, because I just assumed that it's mounting on the host and then the plug-in just writes to the path.
D: Yeah, I'll take that as a question to look up or to experiment with, because if it is charged against the pod and we don't see issues, then most of this probably doesn't matter; but if it's not charged against the pod, then it might.
B: Right, I was thinking more that now we have to worry about it because you're sending the file content over to the driver, right? So that is a limit in itself, and we haven't had that kind of restriction yet, so it will be a new limitation that we're introducing, which is fine, but it is just net new. Yes.
D: Yeah, I think if we went to streaming, like streaming the responses back, then we could have the stream size be big enough to carry any of these secret sizes; but doing all files at once in one go, I think, is the limiting case.
B: Yeah, so it sounds like that's the only open question at the moment. I think the failure behavior and all that seems pretty straightforward to me, and I don't see that it's any different from what we do today, right? So it's not like a regression or anything; I just want to confirm my understanding.
D: Yeah, okay, some of that, I think, can be plug-in behavior, like how the plug-in responds to the driver on partial failures; that's just why I bring it up. Well, I guess the plug-in could choose to just give back the files that worked, so yeah, I think there's no change there.
C: Yeah, I mean, Tommy and I discussed this as well. Today, the way we handle partial errors is that we don't do partial errors, right? Today, what all the plugins do is start going through the loop, and the first content they hit an error with, they return an error. So if you have five items and two of them are able to be accessed but the third one has an issue with accessing, we return an error right there.
C: One option is: out of all the contents, if there are errors, we keep consolidating the errors but continue to the rest of the items, write the remaining items, and just return the aggregate of errors. That is one option. The other option is being atomic: if even one item fails, then we say we are not going to write any of this to the file system and we fail the entire mount.
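A small sketch contrasting the two strategies, with a hypothetical fetch function standing in for the provider call; the aggregate version writes whatever succeeded and joins the errors, while the atomic version returns nothing on the first failure:

```go
package main

import (
	"errors"
	"fmt"
)

// fetch is a hypothetical call that retrieves one secret's contents.
func fetch(name string) ([]byte, error) {
	if name == "broken" {
		return nil, fmt.Errorf("access denied for %q", name)
	}
	return []byte("value-of-" + name), nil
}

// fetchAggregate keeps whatever succeeds and returns the combined errors.
func fetchAggregate(names []string) (map[string][]byte, error) {
	out := make(map[string][]byte)
	var errs []error
	for _, n := range names {
		data, err := fetch(n)
		if err != nil {
			errs = append(errs, err)
			continue
		}
		out[n] = data
	}
	return out, errors.Join(errs...)
}

// fetchAtomic returns contents only if every item succeeds; one failure
// means nothing is written at all.
func fetchAtomic(names []string) (map[string][]byte, error) {
	out := make(map[string][]byte)
	for _, n := range names {
		data, err := fetch(n)
		if err != nil {
			return nil, err
		}
		out[n] = data
	}
	return out, nil
}

func main() {
	fmt.Println(fetchAggregate([]string{"username", "broken", "password"}))
	fmt.Println(fetchAtomic([]string{"username", "broken", "password"}))
}
```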
D: Yeah, I just pushed a change to the GCP plugin where I batch read all the secrets at once and hold off on writing them until I know whether there are any failures. I haven't pushed that as a release yet, but I wanted to test out that kind of error behavior, so that you would never get a partial error. Doing that has the same issue, though: now the plug-in needs to hold all of the secret contents in memory.
D: But with, like, 64k secret sizes, I'm pretty confident that it won't be an issue; you'd have to have a pretty large volume.
E: Yeah, agreed, it's not likely to be a problem frequently. If we can come up with a mitigation or some kind of escape hatch for the times when people do want to scale it past the megabytes-per-message limit, that would be ideal. But yeah, there are a lot of really nice properties of this proposal in terms of security and kind of separating concerns between the provider and driver, so I'm really keen on helping to find a way around the limitations that we'll be introducing with gRPC. I think it should be worth it.
D: Sure, yeah, okay; it could also be a matter of documentation and providing good knobs to turn, too. So yeah, take a look, leave comments, and I'll get back to the comments.
B: But also, regarding what you guys were saying about the batch mount thing, I think not having it done in a batch is helpful for the scenario we talked about earlier, where you just mount the things that are successful, and then for anything that failed, we just wait for the timeout, succeed anyway, and then let the application deal with it.
C: Yeah, I think the interesting scenario is during rotation, right? So during rotation, do we want to batch or still allow partial? Because if we allow partial, and our two keys are meant to be in sync, username and password, we don't want to update only one of them; doing it as a batch would be the right way to go, to say either everything will be updated or we'll leave everything at the old value.
A: All right, cool. So yeah, definitely a good discussion. I'll make sure I post this in the Slack so the rest of the community can take a look at it. Okay, let me just... well, are we done? Let me make sure before we move on.
B: So should we, sorry, just, when should we make a decision about this? I guess I just want to make sure you're not blocked, yeah.
D: I would recommend we let this sit for at least another meeting, or through another meeting, like another two weeks, you know, to give people time to have a look.
D: And then I guess after that I would probably just create the GitHub issue, depending on the comments.
C: All right, yeah, I think it makes sense. I think one good point that Rita brought up during the last meeting was: if this change is the way we want to go, then we want to have a release with it at some point so that it could be used for the security audit as well. But right now the security audit is still in the initial process; they haven't found a vendor and all that, so we still have time, but I'm just bringing that up.
B: Yeah, and this is added to the agenda as well. I think it was mentioned in the last meeting that it may be a month or so.
B: So that was like two weeks ago, so maybe in a few weeks they may find a vendor, and so there's some urgency there, I guess.
C: Yeah, so I think the PR for the audit was ready to be merged as of yesterday, so it will probably be merged this week, and then next week is the call where we talk about the vendors. But I'll also update this meeting about what's happening in the other one, so that we know the timeline and everything in terms of the progress we need to make.
A: Let's see, just real quick, we've got six open. Do we want to chat about these?
C: So the first three I've added to the next milestone, and then the fourth one is work in progress. That's basically allowing the e2e tests to run locally and the e2e test framework to be updated. The person who opened the PR said he'll update it, but he hasn't been able to get to it, so I'm also going to ping him and see if he needs any help, so that we can help get it across the line. And the fifth one is interesting: the Helm 3 stuff.
C: It's been there for a while, but I'll take a look at it, and I think everyone can take a look at it. It's basically doing CRDs the correct way with Helm 3, moving the CRDs to the crds directory, and that also means the CRDs won't get deleted during uninstall and won't get updated during the next upgrade.
C: That part is fine, but the second thing that I had seen while trying out the PR was that, in the move from Helm 2 to Helm 3, the charts that get published delete the existing CRDs when they're upgraded rather than freshly installed, which could be a breaking change and data loss. So I'm trying to find a way to get around that.
A: Cool, thanks, Anish. All right, I think that takes us to the end. So, let's see, the next meeting will be Thursday, February 18th.