Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands, from April 17-21, 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Yeah, but were there any talks about, you know, the batch and HPC stuff in the rest of the days as well, in the normal schedule, or was it only in the co-located event? Because we might have gone to those.
B
Yeah, yeah. I know that during the event there were quite a few related to GPUs. The guy from Google was talking about optimizing GPU utilization, I think, and there was another from Volcano, or about Volcano; I can't remember the vendor now. And there were a few others, but me, for example, I focused more on Kubernetes itself, not necessarily only on the batch side.
B
So one of my observations, and the main theme at KubeCon for the remaining days, was eBPF, as a way of improving performance and getting more observability and more control over your network layer in Kubernetes. We spoke to a few vendors at the booths, like Calico and Cilium, both very interesting. That's something we're probably going to try soon, or soonish, as well, to play a bit more with eBPF. Another interesting thing: there was quite a bit around automation and practices.
B
It was my first KubeCon as well, and I was surprised by the number of people attending; it was like 7,000 or so people on site. And sometimes it was quite challenging to get into the room, especially when a particular talk was super interesting and getting a lot of traction. If you were not in the room 15 minutes early, you were not getting in.
B
Luckily, there was a way of watching them virtually, so that was quite cool as well, because there is one talk I haven't watched yet where, I think, Mercedes-Benz explained how they migrated 700 or 7,000 clusters from Terraform and other infrastructure to using Cluster API. So, yeah, quite a few lessons there. And that was another observation: Kubernetes is becoming more and more a framework for doing stuff like that.
B
There were quite a few talks around Crossplane, that is, Crossplane as a way of managing infrastructure beyond just pure Kubernetes stuff, using Kubernetes and the Kubernetes reconciliation loop to enforce the state and so on. That was quite cool as well. And there were a nice few postmortem talks too, so a few talks about postmortems.
B
I don't know, 23 of us at the same time, yeah, cool. So there were also quite a lot of casual talks happening during lunch with various people, and then again with vendors. One of the vendors we're kind of using is the one behind OPA; I don't know whether nlpo is using OPA. OPA is a way of defining policy. Before going to KubeCon we faced a few issues where we tried to restrict some stuff, so we started discussing with them and so on.
A
Yeah, I think I've actually been to the one on improving GPU utilization. Yes, I was there; I think it was from Google. That was quite interesting, but it was kind of a letdown that they didn't actually talk about the implementation details. Still, it was quite interesting on, you know, the theoretical side of things.
A
We also tried to grab something else, because I can even find my schedule from here. There was one on the agenda that I attended that was quite interesting: network-aware scheduling, although it did look like something that requires quite a lot of time before it can be implemented on a production cluster. Let me see if I can find it; it starts over here. Oh yeah, there we go. So this one was quite interesting, but...
B
Yeah, another interesting one was ephemeral containers. I'm not sure whether you're aware of ephemeral containers; that's a new thing in Kubernetes 1.22 and 1.23, and I think it's going to get even more mature in the future. The idea is that when you need to troubleshoot a problem with your pod, or with your application inside the container, rather than using kubectl exec, which many people do, you'd run the command kubectl debug.
B
By doing that, you have access to the same network namespace and so on, and you can use nsenter, namespace enter, to get access to the PID namespace and a few other Linux namespaces as well. That was quite cool, because it allows you to keep a very minimalistic image, you can probably use distroless, and then bring in more tooling and debug inside the container from a separate side container.
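As a sketch of the workflow described here (pod, container, and image names are illustrative, and this assumes a Kubernetes 1.23+ cluster with ephemeral containers enabled):

```shell
# Attach an ephemeral debug container with a full tooling image to a
# running pod, instead of running kubectl exec inside a minimal or
# distroless application container:
kubectl debug -it mypod --image=nicolaka/netshoot --target=app

# The debug container shares the pod's network namespace. If process
# namespace sharing is available, nsenter can join further namespaces
# of a target process (here PID 1 as seen in the shared PID namespace):
nsenter --target 1 --net --pid -- sh
```

The point of the design is that application images stay small: all debugging tools live in the image attached on demand.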
C
So we actually do use that. It became beta in 1.23, but it was alpha before, so you could enable it in the clusters, and we were enabling it. What we do is keep one image that has all the debugging tools we need, for networking, file systems, whatever, and we use that image to debug, attaching it as an ephemeral container when we debug stuff.
B
Going through my notes... oh, another interesting one, kind of interesting depending on what you do, is about KubeVirt. KubeVirt, again, lets you use Kubernetes to manage your VMs, simply, but they added quite a few additional features like live migration and so on. So that's another product which is becoming more and more mature, and in the future it could probably be our way to, rather than directly using, I don't know, OpenStack to spin up your VMs, use KubeVirt to control them and do the VM migration and other stuff. So Kubernetes as a framework, again.
A
It's easier to capture the traffic, right?
A
The
vehicle
to
actually
produce
it
get
the
Test
match
for
an
application.
B
That's a good point. There was an interesting talk, I can't remember the title, I will find it, about a different approach to doing a load test, or a test in general. Let's say you run a web application on your Kubernetes, it's a web service, and you want to deploy a new version of that web service. You can take a few approaches. One: you have, say, a staging cluster or something like that; you deploy there and maybe test somehow, like writing some integration tests. Another one is a canary approach.
B
So, like blue-green, where you redirect a bit of traffic to your new version; that's what people quite often do with service meshes like Linkerd and stuff like that. But the point of the talk was that the problem with that approach is that you don't necessarily have consistent inputs going to that web service. Let's say you're making the change in the middle of the night: you don't have the same traffic or the same number of users, so you don't really know whether the way you're promoting your new application is working at all or not.
B
So the idea, and I think it's using eBPF as well, so another eBPF-related one, is to capture the traffic: you kind of tap the traffic and record it. Once you have the traffic recorded, you can replay it later at any time, and it's going to have the same volume and so on.
C
I just pasted it there. It should be the last link in there.
A
Yeah, there was a recurring theme of eBPF this year. It's quite interesting; I remember hearing about eBPF quite a few times in the past, but this one really was, yeah... you can see already that the tooling is getting more mature, or at least better known around the community. In fact, there was another talk, and I think this is also quite relevant to the scheduling part, about bandwidth management using eBPF. Again, let me just paste the link here. Yes.
A
Quite agree. I think it was something we could see coming already for a while now, but yeah, this one was definitely the confirmation that it's going to get bigger from now on. In fact, the one on bandwidth management was quite interesting, because it was starting to, yeah...
A
They were basically adding the possibility of putting into your deployment the resources around how much bandwidth you want to allocate to a specific pod, and that's quite interesting, as that's only possible through the way eBPF works and how you can get that kind of information out of the kernel.
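For illustration, pod-level bandwidth limits like the ones described here are commonly expressed as annotations; this is how, for example, the classic CNI bandwidth plugin and Cilium's eBPF bandwidth manager pick them up (the pod name is hypothetical):

```shell
# Hypothetical pod capped to 10 Mbit/s egress. Cilium's eBPF bandwidth
# manager reads the kubernetes.io/egress-bandwidth annotation; the CNI
# bandwidth plugin also supports kubernetes.io/ingress-bandwidth.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-demo          # illustrative name
  annotations:
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: app
    image: nginx
EOF
```
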
A
Yeah, I mean, again, very cool. Let me paste the link there... I don't have it right now, I should have... oh yeah, there it was, sorry, I was looking at the information here. There was something they were talking about where they also mentioned how, with this new approach, you could get higher communication speeds as well. Let me just look at this: the scalability limits of the token bucket filter, versus the one where you plug in earliest departure time.
A
Yeah, combined with eBPF. Yeah, something about it being quite cool both for bandwidth management and for getting more speed out of what's available.
C
So one thing we're looking at with eBPF is Cilium, to do a sort of cluster mesh, not only a service mesh but really...
C
...allowing multiple clusters to be meshed together, even at the pod level, and you can easily do load balancing across clusters without having to rely on Services, which, for the batch use case, is actually quite interesting, because we don't really care about the service abstraction; we just care about the workloads. This is something we started prototyping: to mesh multiple clusters and be able to schedule across them from a single plane, basically.
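A minimal sketch of what wiring two clusters into a Cilium cluster mesh looks like with the cilium CLI (context names are illustrative; this assumes Cilium is already installed in both clusters with distinct cluster names and IDs):

```shell
# Enable cluster mesh in each cluster:
cilium clustermesh enable --context cluster-1
cilium clustermesh enable --context cluster-2

# Connect the two clusters; pods can then reach pods in the other
# cluster directly, without going through a Service abstraction:
cilium clustermesh connect --context cluster-1 --destination-context cluster-2

# Verify the mesh status:
cilium clustermesh status --context cluster-1
```
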
E
Well, that's interesting, because when I chatted with them about scheduling across multiple clusters, I thought their response was: oh no, this is really only meshing the networks together, so that pods can speak to other pods in other networks in other clusters, but the scheduling of them you'll still need to do somewhere else, right?
C
Yeah, but it allows you to, like, even if you want to distribute the workloads across clusters, you can rely on having some Services running internally in one cluster without having to replicate them everywhere, for example. And you could have these workload clusters that are really disposable, while you have the service clusters, or the component clusters, in the same mesh. So we've been playing with this as well, and with some tricks you actually can.
C
But there you need some sort of VPN connectivity, I guess, because you need to expose all nodes to all nodes. So this is our dream, which is to burst using a mesh like this, but it's actually trickier than it could be. I guess if you look at other things for service connectivity, they use gateways; here it's really a full mesh between all nodes, at least in my understanding up to now.
C
But
it
is
promising,
sounds
amazing,
but
it
it's
actually
something.
Maybe
maybe
we
should
bring
them
to
present
psyllium
and
ubpf
to
the
group
that
would
be
cool,
yeah,
I'll
I'll.
We
we
we're
getting
at
least
to
come
to
CERN
in
two
weeks,
so
maybe
she
can
also
do
a
talk.
The
same
talk
group-
let's,
let's
put
it
for
for
the
list
here.
A
That would definitely bring a lot more interested parties into this research.
C
Yeah, the other stuff I had here in the summary: I saw a lot of references to batch workloads, not only in the talks but also in the keynotes. In the TOC update it was mentioned that there was a new group formed as part of TAG Runtime; then, in the Kubernetes updates, the Batch Working Group in SIG Scheduling; and then in the keynote from CERN we mentioned the computing use cases.
C
It has been appearing a bit everywhere, also in the other ones, but I think it was clear from the constant references in different keynotes that it's slowly building momentum, and we see the other activities as well. And then there was one session dedicated to the Kubernetes Working Group Batch; the video will also be uploaded.
C
So Aldo gave an overview of the work that has been going on already, and the plans. There were not a lot of different people speaking, but I talked to a few, and it seemed like there were both developers and also end users interested in using these tools, so that was quite nice. And just really quickly: they summarized the motivation, which I think we all know about here, but they also mentioned that their goal comes down to three main tasks.
C
One is to update the Job API to allow new types of workloads that are not just the typical batch job as defined by Kubernetes up to now; then things like queueing and advanced scheduling; and then, I think, the interesting part, which there was a nice talk about in the co-located event, the optimized scheduling on the node itself, to make sure that, like, the...
E
Yep, no, I was there, very jet-lagged, but yes, I was there. It was good. I would just reiterate the number, the amount of batch-scheduling-related talks, between batch day and the other talks. And I was on the panel a day later, and then you spoke in the keynote. We weren't quite at eBPF status, but batch was rising in the ranks of conversation. It's good.
C
And I'll pitch one more talk, which was from some other CERN colleagues. They gave a talk later, I think Thursday; I don't think the video is uploaded yet.
C
But basically, what they've done: we have this large grid computing environment, and they've been playing with making Kubernetes a grid site, and it doesn't matter if it's on premises, on a public cloud, whatever. A nice presentation, where they showed that they could scale a single Kubernetes cluster to 100,000 cores, in the Google cloud in this case, quite easily and fast, and then even scrap it when they don't need it. And they justified that this is an out-of-the-box solution to integrate new resources into our grid infrastructure, and it also gives the ability to request resources that we don't have, GPUs or TPUs. And their dream is to have a Helm chart...
C
...that helps install a grid site, and you just add it to the infrastructure. So they gave some summaries of what they've been doing: integrating heterogeneous resources like Arm and GPUs. And then they actually built an analysis facility on top of this, so they have the Kubernetes layer as kind of the base layer to add the resources, but then they add things like JupyterHub, and they have the ability to deploy, like, Dask clusters dynamically for different users.
C
So I don't think the video is uploaded yet, but for sure it will be, Nathan. I will find the link in the agenda for you, and then there should be a list with the videos. For some reason my computer is blocking a bit, but I'll post the link in a bit. I think it's an interesting talk, because it's a real use case, and a pretty large one, of doing both batch and kind of more interactive analysis.
C
PanDA is a specific scheduler for ATLAS, so they have their own workflow manager on top.
C
So I'll give you one where the actual documentation is.
C
So here's the link to this one, and yeah, the video should appear there. I think they are done with all the co-located events, and they started uploading the main conference videos as well. There's some sort of delay before videos are available: if you have the virtual access you can go to the virtual platform and watch the videos right now; otherwise they will get to YouTube at some point as well.
F
It's pretty awesome. I was really disappointed to miss out, actually, but we're definitely going to try to be there in Detroit.
A
I really felt like it was three years' worth of budget all spent on one KubeCon because of the pandemic. I mean, quite a lot of things going on, I have to say.
A
Definitely. I remember I attended the virtual one the previous year, but then, yeah, you could definitely feel there was... it just felt so much less, so to speak. This one, I think I really enjoyed the part that was not in the virtual one last year, which was the sponsor booths. Basically, you could just go around and find people and just talk to them, which was something that was of course difficult virtually.
A
The walking in between was killing me, to be honest. Three days in, I just couldn't move anymore.
C
Yeah, so I think that's what I had; I'll actually stop here. But one thing that I wanted to ask as well, because there's not a lot of time between now and October, basically: if we organize a new Batch + HPC co-located event, I think it would be nice, because it would help keep the momentum, but we need to be really proactive about reaching out to people to do submissions, to make sure we have enough content.
C
There
were
a
couple
of
talks
that
were
quite
good
that
we
didn't
select
for
this
one,
but
maybe
we
need
to
make
sure
we
advertise
this
as
much
as
possible,
both
in
the
like
new
world,
but
also
in
like
there
are
some
interest
like
Nathan
is
here.
There
was
some
interest
in
like
involving
more
things
like,
like
more
established
components
like
slurm
in
the
HPC
environment
and
and
try
to
kind
of
to
the
bridge
between
the
two
and
see.
E
So, Ricardo reached out and suggested that we submit something around Armada; we'd be happy to do something, of course. I also wondered, on that batch day: do you know how Predibase ended up on batch day? It seemed like it was a weird one to include, especially if we had other good ones. Which one, sorry? There was a whole talk on Predibase during batch day, which seemed, like... Predibase.
E
It was Travis Addair and, you know, the people who did Horovod and Ludwig AI, and it was more ML.
C
Yeah, so I think it was more to get a... yeah, I will have to go back to the notes, but I think it was because they had this idea of a nodeless Kubernetes.
C
That was the reasoning, yeah, I think. Because if you look at the schedule, there are quite a lot of component talks, not really vendor talks, but component-based talks, and not so much end-user talks, and I think for the next one it would be really interesting to have those. But we need to reach out, I think.
E
To people, yeah. But I mean, if we could get end users of any of the schedulers that spoke last time, that might be very interesting.
F
Another thing we probably need to do, what we definitely need to do, is work out the next set of agendas for this; I think we'll run out soon. That tends to work quite well, I think, certainly planning up front.
C
Maybe, if people can add what they would like to hear about... so we just talked about Cilium and eBPF. The ATLAS people could also present, because it's, like, a use case.
E
Is that so germane to this group? I mean, it's interesting, it's good, I don't know. Do people suffer from that in this world of research?
E
Interesting, okay, I can see that a little bit. It just seems like it's so much more directly useful if I have a product and I need different HTTP endpoints going to different places. Okay.
C
And then something about NUMA as well; that would be pretty cool, maybe.
E
So that, and it's coming up in the next Kubernetes release, right?
F
Oh, maybe, I don't know, but yeah, that was at least a big step forward: having what looks like an agreement on it, and on doing it.
E
Yeah, no, that's a super exciting one. Maybe the excitement in my voice is not quite relaying that excitement, but yes, super exciting.
F
Yeah, the enhancement got merged, basically, after about...
C
All right, that sounds amazing, actually. I think Jonathan just put Usernetes and rootless stuff; that would be pretty nice. We can add those. Nathan, would that be okay? Like you just mentioned, you also have some reports from sites on what they want and what they report.
C
We did get a talk about rootless quite a while ago, right? It wasn't specifically about Usernetes, but...
C
Sure, it was him, but he gave a talk about rootless that was more about all the issues, like networking and overlayfs, when you do user-space stuff. Maybe a more focused talk on Usernetes would be cool as well, yeah.
F
I was just trying to see if I could export the text easily, but yeah, that's fine, we'll grab it later. No, nothing else for me; I haven't got a huge amount to contribute this time, unfortunately, because I wasn't there. But it's good to see people got a lot out of it anyway.
C
One thing that we could do, and I forgot because we didn't do it this time: remember that we have the possibility of doing a talk in the maintainer track as well, about the group. Last time it was actually quite nice; we got a few people interested in the group as well. So we can consider, for Detroit, also having a slot for the group, yeah.
C
We need to submit it when the maintainer track opens; we need to submit it there, yeah.
F
Just tap me up; I'll work on it with you. Okay, cool.