A: Yeah, but were there any talks about the Batch + HPC topics during the rest of the days as well, or was it only in the co-located event? Because we might have gone to those; there were quite a few that we attended, given our interests. But I wasn't sure if you listed those, because I was trying to look into my schedule and see what exactly I had pinned, but I can't find my schedule. Okay, right.
C: There were a few others, but, like me for example, I focus more on Kubernetes itself, not necessarily only on the batch side. One of my observations, and the main theme at KubeCon for the remaining days, was eBPF: as a way of improving performance, getting more observability, and getting more control over your network layer in Kubernetes.
C: We spoke to a few vendors at the booths, like Calico and Cilium, and it was very interesting. That's something we're probably going to try soon, or soonish, in ngr as well, to play a bit more with eBPF.
C: Other interesting topics were quite a bit around automation and practices. You can see clearly that Argo CD is getting more mature, so I went to a talk by, how do you call it, let's say the vendors, the team responsible for owning Argo CD, talking about how to extend it, what they plan on the roadmap and things like that. That was quite cool.
B: Yeah, I actually mentioned the Batch + HPC Day, but there was also GitOpsCon as a co-located event before the conference, and then a ton of talks about GitOps during the conference as well, both for Argo and Flux. But maybe, if you find the links in your schedule, drop them in the chat. Yeah, yeah.
C: Also, it was my first KubeCon, and I was surprised by the number of people attending; there were 7,000 or so people on site, and sometimes it was quite challenging to get into the room, especially if the particular talk was super interesting and getting a lot of traction. If you were not in the room 15 minutes early, you were not getting in.
C: There were quite a few talks around Crossplane: Crossplane as a way of managing infrastructure beyond just pure Kubernetes things, using Kubernetes and its reconciliation loop to enforce the state and so on. That was quite cool as well. There were also a few post-mortem talks, about running Kubernetes at scale, which was quite interesting too: I think it was Datadog, running into an issue and spending a few months investigating it on their Amazon infrastructure.
C: Cool, yeah. There were also quite a lot of casual conversations happening during lunch with various people, and again with vendors. One of the vendors we're kind of using is the one behind OPA; I don't know whether any of you is using OPA, Open Policy Agent, which is a way of defining policies. Before going to KubeCon we faced a few issues where we tried to restrict some things, so we started discussing with them and so on. That was again quite a nice thing.
B: Okay, so I tried to collect a few talks and put them in the chat, but feel free to add any I missed, yeah.
A: I think I was actually at the one on improving GPU utilization. Yes, I was there; I think it was from Google. That was quite interesting, but I was kind of let down that they didn't actually talk about the implementation details. Still, it was quite interesting on the theoretical side of things.
A: I'll also try to dig something else out, because I can't find my schedule here. One of the talks I attended that was quite interesting was on network-aware scheduling, although it did look like something that would require quite a lot of time before it could be implemented on a production cluster. Let me see if I can find it.
C: Yeah, another interesting one was ephemeral containers. I'm not sure whether you're aware of ephemeral containers; that's a new thing in Kubernetes, introduced in 1.20 and beta in 1.23, and I think it's going to be even more mature in the future. The idea is that when you need to troubleshoot a problem with your pod, or with your application inside the container, rather than using kubectl exec, which many people do, you'd run the command kubectl debug.
C: You can define a completely different set of tools in that side container, and by being a side container it shares the same namespaces — Linux namespaces, not Kubernetes namespaces. By doing that you have access to the same network namespace and so on, and you can use nsenter to even get access to the PID namespace and a few other Linux namespaces. That was quite cool, because it allows you to keep a very minimalistic image.
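A minimal sketch of the workflow described above (the pod, container, and image names are illustrative; `nicolaka/netshoot` is just one common example of a tooling image):

```shell
# Attach an ephemeral debug container full of tools to a running pod,
# targeting the (minimalistic) application container so they share
# its Linux namespaces.
kubectl debug -it mypod --image=nicolaka/netshoot --target=mycontainer

# From inside the debug container, nsenter can then enter further
# namespaces of a target process, e.g. its mount and PID namespaces:
#   nsenter --target <pid> --mount --pid sh
```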
B: We actually do use that at CERN. It became beta in 1.23, but it was alpha before, so you could enable it in the clusters, and we were enabling it. What we do is keep one image that has all the debugging tools we need — for networking, file systems, whatever — and we use that image, attaching and detaching ephemeral containers, when we debug stuff.
C: Going through my schedule... oh, another interesting one — kind of interesting, it depends what you do — is about KubeVirt. With KubeVirt you simply use Kubernetes to manage your VMs, but they've added quite a few additional features, like live migration and so on. That's another product which is becoming more and more mature, and in the future it could probably be a way to, rather than directly using, I don't know, OpenStack to spin up your VMs, use KubeVirt to control them and do the migration and other things. So, KubeVirt.
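A hedged sketch of that flow with KubeVirt's `virtctl` CLI (the VM name and manifest are placeholders, not from the discussion):

```shell
# Create a VirtualMachine resource (manifest not shown), then use
# virtctl to start it, open a console, and live-migrate it.
kubectl apply -f my-vm.yaml
virtctl start my-vm        # boot the VM
virtctl console my-vm      # attach to the serial console
virtctl migrate my-vm      # live-migrate the VM to another node
```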
A: I think they use eBPF here to capture the traffic, right there. Yeah.
A: To be able to actually reproduce it, to get a test against an application.
C: Yeah, yeah, that's a good point. There was an interesting talk about — I can't remember the title, I'll find it — about a different approach to doing a load test, or a test in general. Let's say you run a web application on your Kubernetes, a web service, and you want to deploy a new version of that web service. You can take a few approaches. One: you have, let's say, a staging cluster or something like that, you deploy there, and maybe test somehow, like writing some integration tests. Another one is having a canary approach.
C
So,
like
blue
green,
where
you
redirect
a
bit
of
traffic
to
your
new
version.
That's
what
people
quite
often
do
with
services,
much
like
linker,
d
and
stuff
like
that.
But
the
talk
was
that
the
problem
with
that
approach
is
you
not
necessarily
has
consistent
inputs
going
to
that
web
service?
Let's
say
if
you're
doing
the
change
in
middle
of
night,
you
don't
have
the
same
traffic
or
the
same
number
of
users,
so
you
don't
really
know
that
whether
the
the
way
you're
promoting
you
know
stuff,
new
application
is
working
at
all
or
not.
C: The idea is — I think it's still using eBPF as well to capture the traffic, so another eBPF-related thing — to kind of tap the traffic and record it. Once you have the traffic recorded, you can replay it later on, anytime, and it's going to have the same volume and so on.
A: It should be the last link in the chat, yeah — the one about reproducing the issue in your CI pipeline. That's the one.
A: In fact, there was another talk, I think also quite relevant to the scheduling part, which is about bandwidth management using eBPF again. Let me just paste the link here. Yeah, it was...
A: They were adding this possibility of adding into your deployment, basically, a resource saying how much bandwidth you want to allocate to a specific pod. That's quite interesting, as it's only possible because of the way eBPF works and how you can get that kind of information out of the kernel. Again, very cool; let me paste the link there... I don't have it right now. Oh yeah, I still have it.
A: Sorry, I was looking at the information here. There was something they were talking about where they were also mentioning how, with this new approach, you could also get higher communication speeds.
A: Let me just look at this: the scalability limits of the token bucket filter, versus the one they would plug in, Earliest Departure Time, combined with eBPF. Something about it being quite cool both for bandwidth management and for getting more speed out of what's available.
B: One thing we're looking at with eBPF is Cilium, to do a sort of cluster mesh: not only a service mesh, but really allowing multiple clusters to be meshed together, even at the pod level. You can easily do load balancing across clusters without having to rely on Services, which for the batch use case is actually quite interesting, because we don't really care about the Service abstraction; we just care about the workloads. This is something we started prototyping: meshing multiple clusters and being able to schedule across them from a single plane, basically.
F
So
so
that's
interesting,
because
when
I
chatted
with
them
about
scheduling
across
multiple
clusters,
I
thought
that
their
response
was
oh
no.
This
is
really
only
meshing.
The
networks
together
so
that
pods
can
speak
to
other
pods
in
other
networks
in
other
clusters,
but
the
scheduling
of
them
you'll
still
need
to
do
somewhere
else
right.
Okay,.
B
Yeah
but
but
it
allows
you
to
to
like,
even
if
you
have,
if
you
want
to
distribute
the
workloads
across
clusters,
you
can
rely
on
having
like
some
services
running
internally
in
one
cluster,
without
having
to
replicate
them
everywhere,
for
example,
and-
and
you
would
just
you
could
have
this
workload
clusters
that
are
really
disposable.
While
you
have
the
service
clusters
in
the
same
mesh
or
just
the
service,
the
component
clusters
in
the
same
mesh,
so
we've.
B: What we've seen is that you still need node-to-node connectivity — Layer 3 connectivity — between all nodes across all clusters, which kind of makes sense, but it's not like a gateway or anything; you need a full mesh between nodes as well.
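A rough sketch of what setting this up looks like with the Cilium CLI (context names are placeholders; this assumes Cilium is already installed in both clusters):

```shell
# Enable cluster mesh on both clusters, then connect them; this is
# what requires node-to-node (L3) reachability across clusters.
cilium clustermesh enable --context cluster-1
cilium clustermesh enable --context cluster-2
cilium clustermesh connect --context cluster-1 --destination-context cluster-2

# Check mesh health from one side.
cilium clustermesh status --context cluster-1
```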
C: I think something like that was mentioned in the Datadog talk about DNS, where they said they are using Cilium exactly to make pod-to-pod routing possible across multiple clusters. I'm not sure whether it's across multiple different cloud providers; maybe they're achieving that.
B: I guess if you look at other things, for service connectivity they use gateways; here it's really a full mesh between all nodes, at least in my understanding up to now. But it is promising, it sounds amazing. It's actually something... maybe we should bring them in to present Cilium and eBPF to the group.
B: We're getting Liz to come to CERN in two weeks, so maybe she can also do a talk for the same group. Let's put it on the list here.
A: They're definitely bringing a lot more interested parties into this.
B: Yeah, the stuff I had here in the summary: I saw a lot of references to batch workloads, not only in the talks but also in the keynotes. In the TOC update it was mentioned that there was a new group formed as part of the TAG Runtime, and then also, in the Kubernetes updates, the Batch Working Group in SIG Scheduling. And then in the keynotes, also from CERN, we mentioned the computing use cases there.
B: Yeah, and then there was one session dedicated to the Kubernetes Working Group Batch; the video will also be uploaded. Aldo gave an overview of the work that has been going on already, and the plans. There were not a lot of different people speaking, but I talked to a few, and it seemed like there were both developers and end users interested in using these tools, so that was quite nice. And just really quickly: they summarized the motivation.
B
I
think
we
all
know
about
it
here,
but
they
also
mentioned
that
their
goal
is
to
it's
three
main
tasks.
One
is
to
update
the
job
api
to
allow
new
types
of
workloads
that
are
not
just
the
typical
batch
job
as
defined
by
kubernetes.
Up
to
now,
then
things
like
queuing
and
advanced
scheduling,
and
then
I
think
the
the
interesting
part
that
there
was
a
nice
talk
in
the
collocated
event
about
was
the
the
optimized
scheduling
on
the
node
itself
to
make
sure
that,
like
the.
F: Yep, I was there — very jet-lagged, but yes, I was there, and it was good. I would just reiterate the amount of batch-scheduling-related talks: they had the batch day, and Aldo's talk, and I was on a panel a day later, and then you spoke in the keynote. We weren't quite at eBPF status, but batch was rising in the ranks of conversation.
B: I don't think the video is uploaded yet, but basically, what they've done: we have this large grid computing environment, and they've been playing with making Kubernetes a grid site — and it doesn't matter if it's on premises, on the public cloud, whatever. Then, in the next presentation, they showed that they could scale a single Kubernetes cluster to 100,000 cores, in the Google cloud in this case, quite easily and fast, and then even scrap it when they don't need it. And they justify it: this is an out-of-the-box solution to integrate new resources into our grid infrastructure, and it also gives the ability to request resources that we don't have, such as GPUs. Their dream is to have a Helm chart where you just do a `helm install` of a grid site and add it to the infrastructure. They gave some summaries here of what they've been doing, integrating heterogeneous resources like ARM and GPUs, and then they actually built an analysis facility.
B: So I don't think the video is uploaded yet, but for sure it will be, Nathan. I will find the link in the agenda for you, and then there should be a link there with the video. For some reason my computer is blocking a bit, but... oh, posted? Yeah, yeah, I'll post the link in a bit.
B: PanDA is a specific scheduler for ATLAS; they have their own workflow manager on top, and that's where all the work goes. So PanDA is their thing.
B: I'll give you a link to where the actual documentation is.
B: It is a generic tool — it's used by other experiments as well — but it was developed within ATLAS. I pasted the link there. Cool, thanks.
B: So here's the link to this one, and yeah, the video should appear there. I think they are done with all the co-located events and have started uploading the main conference videos as well. There's some sort of delay before videos are available: if you have virtual access you can go to the virtual platform and watch the videos right now; otherwise they will get to YouTube at some point as well.
D: That's pretty awesome. I was really disappointed to miss out, actually, but we're definitely going to try to be there in Detroit, which would be...
A: Good. I really felt like it was three years' worth of budget all spent on one KubeCon, because of the pandemic. Quite a lot of things going on, I have to say. It was a good one to be able to attend.
A: Yeah, definitely. I remember I attended the virtual one the previous year, and you could definitely feel the difference; it just felt like so much less, so to speak. With this one, I think the part I really enjoyed — the part that wasn't there in the virtual one last year — was the sponsor booths. Basically you could just go around, find people, and talk to them, which of course is something you can't do virtually, at least not in the way you can in person.
B: Yeah, so I think that's what I had listed up here. But one thing I wanted to ask as well, because there's not a lot of time between now and October: if we organize a new Batch + HPC co-located event, I think it would be nice, because it would help keep the momentum, but we need to be really proactive in reaching out to people to do submissions, to make sure we have enough content.
B
There
were
a
couple
of
talks
that
were
quite
good
that
we
didn't
select
for
this
one,
but
maybe
we
need
to
make
sure
we
advertise
this
as
much
as
possible,
both
in
the
like
new
world,
but
also
in
like
there
are
some
interest
like
nathan
is
here.
There
was
some
interest
in
like
involving
more
things
like,
like
more
established
components
like
slurm
in
the
hpc
environment
and
and
try
to
kind
of
to
the
bridge
between
the
two
and
see
what's
the
way
forward.
F: I think Ricardo reached out and suggested that we submit something around Armada; we'd be happy to do something, of course. I also wanted to ask about batch day: do you know how Predibase ended up on batch day? It seemed like a weird one to include, especially if we had other good ones. Which one, sorry? There was a whole talk on Predibase during batch day. I like Predibase, but...
F
It
was
travis
adair,
and
you
know
the
people
who
did
horrible
and
ludwig
ai
and
it
was
more
ml.
B
Ml
yeah,
so
I
think
it
was
more
to
get
a
yeah.
I
would
have
to
go
back
to
the
notes,
but
I
think
it
was
because
they
had
like
this
idea
of
a
nodeless
kubernetes.
That
is
quite
interesting
and
also
because
they
had
different
use
case
with.
B: Yeah, and the other question will be, depending on how many submissions there are, whether we make it still a half day or a full day.
D: Another thing we probably need to do — what we definitely need to do — is work out the next set of agendas for this; I think we've run out now. Setting them upfront tends to work quite well, I think.
B: Maybe people can add what they would like to hear about; we just talked about Cilium and eBPF.
B: The ATLAS people could also present, because it's a use case. Would that be okay as well?
B: Okay, we mentioned the Gateway API. Did we ever get a presentation on that? Probably not, right?
F: Interesting, okay. I can see that a little bit; it just seems like it's so much more directly useful if I have a product and I need different HTTP endpoints to go to different places. Okay.
B: Dawn has done something about NUMA as well; that would be pretty cool.
F: Yeah, yeah, and it's coming up in the next Kubernetes release, right?
D: Oh, hold on, I've got it here, someone sent it.
D
But
yeah
the
enhancement
got
merged.
Basically,
after
about
six
years,
which
is
good.
B: All right, that sounds amazing, actually. I think Jonathan just put in user namespaces and the rootless stuff; that'll be pretty nice.
B
We
can
add
those
nathan
would
that
be
okay
to
give
like
you
just
mentioned,
also
that
you
have
some
reports
from
sites
on
what
they
want
and
what
they
report
to
be
interesting
to
to
hear
about
that
as
well.
Would
that
be
fine.
B: Yeah, I'll just drop the link here, because it's where all this rootless stuff is being tracked, by Giuseppe and others. That's a good link to have.
D: No, I was just trying to see if I could explore the text easily, but yeah, that's fine, we'll grab it later. Nothing else for me; I haven't got a huge amount to contribute this time, unfortunately, because I wasn't there. It's good to see people got a lot out of it anyway.
B
One
thing
is
that
we
do.
I
forgot
that,
because
we
didn't
do
this
this
time,
but
remember
we
have
the
possibility
of
doing
a
talk
in
the
maintainer
track
as
well
about
the
group
and
this
last
time
it
actually
was
quite
nice.
We
got
a
few
people
interested
in
the
group
as
well,
so
we
can.
We
can
consider
for
detroit
to
also
have
a
slot
for
the
group.